diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Plaxis 2D Viewer A Free Tool for Geotechnical Modeling.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Plaxis 2D Viewer A Free Tool for Geotechnical Modeling.md
deleted file mode 100644
index a8dcbebd31e9b5cb4045dc186ea6c651cab40949..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Plaxis 2D Viewer A Free Tool for Geotechnical Modeling.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
How to Use Plaxis 2D Viewer to Visualize and Analyze Geotechnical Models
-
Plaxis 2D Viewer is a free software tool that allows you to open and view Plaxis 2D projects created with Plaxis 2D software. Plaxis 2D is a powerful and user-friendly finite element package for geotechnical engineering applications, such as slope stability, excavation, foundation, tunneling, embankment, groundwater flow and more.
With Plaxis 2D Viewer, you can easily explore the geometry, boundary conditions, loads, materials and results of any Plaxis 2D project. You can also perform basic calculations and post-processing operations, such as contour plots, cross-sections, graphs and tables. Plaxis 2D Viewer is a great tool for sharing and presenting your geotechnical models with colleagues, clients or students.
-
In this article, we will show you how to use Plaxis 2D Viewer to visualize and analyze geotechnical models in a few simple steps.
-
Step 1: Download and Install Plaxis 2D Viewer
-
To download Plaxis 2D Viewer, you need to register for a free account on the Plaxis website. After logging in, you can find the download link under the Products section. The installation process is straightforward and should not take more than a few minutes.
-
-
Step 2: Open a Plaxis 2D Project
-
To open a Plaxis 2D project, you can either drag and drop the project file (.p2dxlog) into the Plaxis 2D Viewer window or use the File menu to browse for the file. You can also open multiple projects at the same time and switch between them using the tabs at the top of the window.
-
Step 3: View the Model Geometry and Properties
-
Once you open a project, you will see the model geometry in the main window. You can use the mouse to zoom in and out, pan and rotate the view. You can also use the toolbar buttons to change the view mode (wireframe, solid or transparent), toggle the grid and axes on and off, and adjust the lighting and perspective.
-
To view the model properties, such as boundary conditions, loads, materials and phases, you can use the Model Explorer panel on the left side of the window. You can expand or collapse each category by clicking on the arrow next to it. You can also select any item in the Model Explorer to highlight it in the main window and see its details in the Properties panel on the right side of the window.
-
Step 4: View the Calculation Results
-
To view the calculation results, such as displacements, stresses, strains and pore pressures, you can use the Results Explorer panel on the left side of the window. You can expand or collapse each category by clicking on the arrow next to it. You can also select any item in the Results Explorer to display it in the main window as a contour plot.
-
To change the color scale, range and legend of the contour plot, you can use the Contour Plot Settings panel on the right side of the window. You can also use the toolbar buttons to show or hide mesh lines, nodes and labels.
-
Step 5: Perform Post-Processing Operations
-
To perform post-processing operations, such as cross-sections, graphs and tables, you can use the Post-Processing menu at the top of the window. You can choose from various options depending on what type of data you want to analyze and how you want to present it.
-
For example, if you want to create a cross-section along a line or a curve in your model, you can use the Cross-Section option. This will open a new window where you can define the cross-section geometry and select what results you want to display along it. You can then view the cross-section as a plot or a table.
-
If you want to create a graph of any result variable versus any other variable or parameter in your model, you can use the Graph option. This will open a new window where you can select what variables or parameters you want to plot on the x-axis and y-axis. You can then view the graph as a line chart or a scatter plot.
-
If you want to create a table of any result variable or parameter in your model for any selected nodes or elements, you can use the Table option.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FlukeView42and30SoftwareforWindowsSW90Wkeygen A Step-by-Step Tutorial on Using FlukeView Software with Your Fluke ScopeMeter.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FlukeView42and30SoftwareforWindowsSW90Wkeygen A Step-by-Step Tutorial on Using FlukeView Software with Your Fluke ScopeMeter.md
deleted file mode 100644
index 97623ad17dfb5e0fd73d09d95febd125ed114a00..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FlukeView42and30SoftwareforWindowsSW90Wkeygen A Step-by-Step Tutorial on Using FlukeView Software with Your Fluke ScopeMeter.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen: A Comprehensive Guide
-
If you are looking for a powerful and reliable software to manage your Fluke devices, you might want to check out FlukeView42and30SoftwareforWindowsSW90Wkeygen. This software is designed to help you connect, configure, capture, and analyze data from your Fluke devices with ease. In this article, we will give you a comprehensive guide on what FlukeView42and30SoftwareforWindowsSW90Wkeygen is, how to install it, how to use it, and some tips and tricks for getting the most out of it. Let's get started!
-
What is FlukeView42and30SoftwareforWindowsSW90Wkeygen?
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen is a software that allows you to communicate with your Fluke devices via a USB or RS-232 interface. It supports various models of Fluke devices, such as the 430 Series II Power Quality Analyzer, the 1730 Three-Phase Energy Logger, the 1736 and 1738 Power Loggers, the 1740 Series Three-Phase Power Quality Loggers, and the 1750 Three-Phase Power Recorder. With this software, you can:
Configure your Fluke device settings, such as measurement parameters, logging intervals, alarms, triggers, etc.
-
Capture real-time data from your Fluke device and display it on your PC screen in graphical or tabular formats.
-
Analyze data from your Fluke device using various tools, such as trend plots, histograms, statistics, harmonic analysis, etc.
-
Export data from your Fluke device to other formats, such as CSV, XML, PDF, etc.
-
Generate reports from your Fluke device data using customizable templates.
-
Update your Fluke device firmware to the latest version.
-
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen: A Brief Overview
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen is a Windows-based software that runs on Windows XP (SP3), Windows Vista (SP2), Windows 7 (SP1), Windows 8.1, and Windows 10 operating systems. It requires a minimum of 512 MB of RAM and 300 MB of free disk space. It also requires a USB or RS-232 cable to connect your Fluke device to your PC.
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen has a user-friendly interface that consists of four main components:
-
-
The menu bar: This contains various options for accessing different functions of the software.
-
The toolbar: This contains icons for quick access to common functions of the software.
-
The device panel: This shows the status and information of your connected Fluke device.
-
The main window: This shows the data from your connected Fluke device in different views.
-
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen: Features and Benefits
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen offers many features and benefits for users who want to manage their Fluke devices efficiently. Some of these features and benefits are:
-
-
It supports multiple languages, such as English, French, German, Spanish, Italian, Portuguese, Chinese (Simplified), Chinese (Traditional), Japanese, Korean, Russian, Turkish, Polish, Czech, Hungarian, Romanian, Slovakian, Slovenian, Croatian, Serbian (Latin), Bulgarian, Greek, Arabic, Hebrew, Thai, Indonesian, Vietnamese, Malay, Hindi, Bengali, Tamil, Telugu, Marathi, Gujarati, Kannada, Malayalam, Punjabi, Urdu, Persian, Nepali, Sinhala, Burmese, Khmer, Lao, Mongolian, Tibetan, Uyghur, Kazakh, Kyrgyz, Uzbek, Turkmen, Tajik.
-
It allows you to configure your Fluke device settings according to your needs and preferences. You can set up measurement parameters, logging intervals, alarms, triggers, etc. You can also save and load configuration files for different applications.
-
It allows you to capture real-time data from your Fluke device and display it on your PC screen in graphical or tabular formats. You can zoom in or out, pan, scroll, or select data points on the graph. You can also switch between different views, such as waveform, trend, harmonic, etc.
-
It allows you to analyze data from your Fluke device using various tools, such as trend plots, histograms, statistics, harmonic analysis, etc. You can also compare data from different channels or devices, or apply filters or calculations to the data.
-
It allows you to export data from your Fluke device to other formats, such as CSV, XML, PDF, etc. You can also print or email data directly from the software.
-
It allows you to generate reports from your Fluke device data using customizable templates. You can add or edit text, images, tables, charts, etc. to the report. You can also preview or print the report before saving it.
-
It allows you to update your Fluke device firmware to the latest version. You can check for updates online or download them manually from the official website. You can also backup or restore your Fluke device firmware in case of any issues.
-
-
How to Install FlukeView42and30SoftwareforWindowsSW90Wkeygen?
-
To install FlukeView42and30SoftwareforWindowsSW90Wkeygen on your PC, you need to follow these steps:
-
-
Step 1: Download the Software from the Official Website
Step 2: Extract the Zip File and Run the Setup File
-
The second step is to extract the zip file that you downloaded in step 1. You can use any zip extractor software like WinZip or WinRAR. After extracting the zip file, you will find a folder named "Flukesw90w". Inside this folder, you will find a file named "setup.exe". This is the setup file that you need to run to install the software on your PC.
-
Step 3: Enter the Keygen Code and Activate the Software
-
The third step is to enter the keygen code and activate the software on your PC. The keygen code is a series of alphanumeric characters that you need to enter during the installation process. The keygen code is provided in a text file named "Key.txt" inside the zip file that you downloaded in step 1. You need to copy and paste this code when prompted by the setup wizard. After entering the keygen code, to click on "Next" and follow the instructions on the screen to complete the installation process. After the installation is complete, you need to restart your PC to activate the software.
-
How to Use FlukeView42and30SoftwareforWindowsSW90Wkeygen?
-
To use FlukeView42and30SoftwareforWindowsSW90Wkeygen on your PC, you need to follow these steps:
-
How to Connect Your Fluke Device to Your PC
-
The first step is to connect your Fluke device to your PC using a USB or RS-232 cable. You need to make sure that your Fluke device is turned on and that your PC recognizes it. You can check the device panel on the software interface to see if your Fluke device is connected and ready.
-
How to Configure Your Fluke Device Settings
-
The second step is to configure your Fluke device settings according to your needs and preferences. You can access the configuration menu by clicking on the "Configure" icon on the toolbar or by selecting "Configure" from the "Device" menu on the menu bar. You can set up measurement parameters, logging intervals, alarms, triggers, etc. You can also save and load configuration files for different applications.
-
How to Capture and Analyze Data with FlukeView42and30SoftwareforWindowsSW90Wkeygen
-
The third step is to capture and analyze data from your Fluke device with FlukeView42and30SoftwareforWindowsSW90Wkeygen. You can access the capture menu by clicking on the "Capture" icon on the toolbar or by selecting "Capture" from the "Device" menu on the menu bar. You can choose between different modes of capture, such as continuous, triggered, manual, etc. You can also start or stop the capture at any time.
-
Once you have captured some data from your Fluke device, you can analyze it using various tools provided by the software. You can access the analysis menu by clicking on the "Analyze" icon on the toolbar or by selecting "Analyze" from the "View" menu on the menu bar. You can switch between different views of data, such as waveform, trend, harmonic, etc. You can also zoom in or out, pan, scroll, or select data points on the graph. You can also compare data from different channels or devices, or apply filters or calculations to the data.
-
Tips and Tricks for FlukeView42and30SoftwareforWindowsSW90Wkeygen Users
-
To get the most out of FlukeView42and30SoftwareforWindowsSW90Wkeygen, you might want to follow these tips and tricks:
-
How to Troubleshoot Common Problems with FlukeView42and30SoftwareforWindowsSW90Wkeygen
-
If you encounter any problems with FlukeView42and30SoftwareforWindowsSW90Wkeygen, you can try these solutions:
-
-
If your Fluke device is not recognized by your PC or by the software, you can check if the cable is properly connected, if the drivers are installed correctly, if the firmware is updated to the latest version, or if there is any interference from other devices.
-
If your data is corrupted or incomplete, you can check if there is enough memory space on your Fluke device or on your PC, if there is any power interruption during the capture process, or if there is any damage to the cable or the device.
-
If your software is not working properly or crashing frequently, you can check if your PC meets the minimum system requirements, if there is any virus or malware infection on your PC, or if there is any conflict with other software or hardware.
-
-
How to Update FlukeView42and30SoftwareforWindowsSW90Wkeygen to the Latest Version
-
To keep your software up to date and enjoy new features and bug fixes, you can update FlukeView42and30SoftwareforWindowsSW90Wkeygen to the latest version. You can check for updates online or download them manually from the official website of Fluke Corporation. You can access the update menu by clicking on the "Update" icon on the toolbar or by selecting "Update" from the "Help" menu on the menu bar. You can also backup or restore your software settings in case of any issues.
-
How to Get Support and Feedback from FlukeView42and30SoftwareforWindowsSW90Wkeygen Community
-
If you need any support or feedback from other users of FlukeView42and30SoftwareforWindowsSW90Wkeygen, you can join the online community of Fluke Corporation. You can access the community menu by clicking on the "Community" icon on the toolbar or by selecting "Community" from the "Help" menu on the menu bar. You can also contact Fluke Corporation directly via phone, email, or chat.
-
Conclusion
-
In conclusion, FlukeView42and30SoftwareforWindowsSW90Wkeygen is a powerful and reliable software that helps you manage your Fluke devices with ease. It allows you to connect, configure, capture, and analyze data from your Fluke devices with various features and benefits. It also offers tips and tricks for troubleshooting common problems, updating to the latest version, and getting support and feedback from other users. If you are looking for a software that can enhance your productivity and performance with your Fluke devices, you might want to give FlukeView42and30SoftwareforWindowsSW90Wkeygen a try!
-
FAQs
-
Here are some frequently asked questions about FlukeView42and30SoftwareforWindowsSW90Wkeygen:
-
-
What are the system requirements for FlukeView42and30SoftwareforWindowsSW90Wkeygen?
-
The system requirements for FlukeView42and30SoftwareforWindowsSW90Wkeygen are:
-
-
Operating system: Windows XP (SP3), Windows Vista (SP2), Windows 7 (SP1), Windows 8.1, or Windows 10.
-
RAM: 512 MB minimum.
-
Disk space: 300 MB minimum.
-
Cable: USB or RS-232.
-
-
What are the supported models of Fluke devices for FlukeView42and30SoftwareforWindowsSW90Wkeygen?
-
The supported models of Fluke devices for FlukeView42and30SoftwareforWindowsSW90Wkeygen are:
-
-
430 Series II Power Quality Analyzer.
-
1730 Three-Phase Energy Logger.
-
1736 and 1738 Power Loggers.
-
1740 Series Three-Phase Power Quality Loggers.
-
1750 Three-Phase Power Recorder.
-
-
How much does FlukeView42and30SoftwareforWindowsSW90Wkeygen cost?
-
FlukeView42and30SoftwareforWindowsSW90Wkeygen is a free software that you can download from the official website of Fluke Corporation. However, you need a keygen code to activate it on your PC. The keygen code is provided in a text file named "Key.txt" inside the zip file that you downloaded from the website.
-
How do I get help with FlukeView42and30SoftwareforWindowsSW90Wkeygen?
-
You can get help with FlukeView42and30SoftwareforWindowsSW90Wkeygen by joining the online community of Fluke Corporation, or by contacting them directly via phone, email, or chat. You can access their contact information here: https://www.fluke.com/en-us/support/contact.
-
What are some alternatives to FlukeView42and30SoftwareforWindowsSW90Wkeygen?
-
Some alternatives to FlukeView42and30SoftwareforWindowsSW90Wkeygen are:
-
-
Fluke Connect: This is a cloud-based software that allows you to connect and collaborate with other users of Fluke devices via a mobile app or a web browser.
-
Fluke Energy Analyze Plus: This is a desktop software that allows you to analyze energy consumption and power quality data from various sources, such as loggers, meters, or clamps.
-
Flukemaster: This is an online platform that allows you to access various resources and courses related to power quality and energy management.
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fujitsu NetCobol 7 Serial Key Tips and Tricks for NetCOBOL for Windows Users.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fujitsu NetCobol 7 Serial Key Tips and Tricks for NetCOBOL for Windows Users.md
deleted file mode 100644
index 53cee6fdbf2761c5a2fbb9c23be525bd0fff89b1..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fujitsu NetCobol 7 Serial Key Tips and Tricks for NetCOBOL for Windows Users.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
Fujitsu NetCobol 7 Serial Key: What You Need to Know
-
If you are a COBOL developer or a business owner who relies on COBOL applications, you may have heard of Fujitsu NetCobol 7. This is an enterprise-class COBOL compiler and runtime environment that allows you to create and run COBOL programs on Windows and Linux platforms. But before you can use this software, you need a serial key to activate it. In this article, we will tell you what you need to know about Fujitsu NetCobol 7 Serial Key, including how to get one legally, what features and benefits it offers, and what drawbacks it has.
-
Introduction
-
Fujitsu NetCobol 7 is a product of Fujitsu Limited, a Japanese multinational information technology company that provides various products and services for businesses and consumers. Fujitsu NetCobol 7 is part of the NetCOBOL family of products, which also includes NetCOBOL for .NET, NetCOBOL for Linux, and NetCOBOL Studio.
A serial key is a unique code that identifies and authenticates a software product. It is usually required to install or activate the software on a computer or device. A serial key can also be used to verify the legitimacy of the software and prevent piracy or unauthorized use.
-
To get a serial key for Fujitsu NetCobol 7, you have two options:
-
-
Buy a license from Fujitsu or an authorized reseller. This is the legal and recommended way to obtain a serial key. You can choose from different license types depending on your needs and budget. For example, you can buy a single-user license, a multi-user license, or an enterprise license. You can also buy a subscription license that gives you access to updates and support for a certain period of time.
-
Request a trial license from Fujitsu. This is a temporary and free way to try out Fujitsu NetCobol 7 before buying it. You can request a trial license by filling out an online form on the Fujitsu website. You will need to provide some basic information such as your name, email address, company name, and country. You will then receive an email with a link to download the software and a trial serial key that is valid for 30 days.
-
-
However, you should avoid using any illegal or unofficial ways to get a serial key for Fujitsu NetCobol 7, such as:
-
-
Downloading a cracked or pirated version of the software from untrusted sources. This is not only illegal but also risky, as you may expose your computer or device to malware, viruses, or spyware that can harm your data or system.
-
Using a fake or stolen serial key from online forums or websites. This is also illegal and unethical, as you are violating the intellectual property rights of Fujitsu and depriving them of their rightful revenue. Moreover, you may face legal consequences or penalties if you are caught using such a serial key.
-
Using a generator or hack tool that claims to create or bypass serial keys for Fujitsu NetCobol 7. This is also illegal and unreliable, as these tools are often scams or malware that can infect your computer or device.
-
-
Therefore, we advise you to always use a legal and official way to get a serial key for Fujitsu NetCobol 7.
-
-
Features of Fujitsu NetCobol 7
-
Fujitsu NetCobol 7 has many features that make it a powerful and versatile COBOL development tool. Here are some of the main features:
-
-
It supports COBOL development on Windows and Linux platforms. You can create and run COBOL programs on both operating systems without changing your source code or recompiling your programs.
-
It integrates with Microsoft Visual Studio and other tools. You can use Visual Studio as your integrated development environment (IDE) for editing, debugging, testing, and deploying your COBOL programs. You can also use other tools such as Eclipse, Ant, Maven, Jenkins, Git, SVN, etc.
-
It offers high performance, reliability, and compatibility. You can benefit from the fast compilation speed, optimized runtime performance, robust error handling, and comprehensive debugging features of Fujitsu NetCobol 7. You can also ensure the compatibility of your COBOL programs with various databases, web servers, middleware, frameworks, etc.
-
-
Benefits of Fujitsu NetCobol 7
-
Fujitsu NetCobol 7 has many benefits that make it a valuable and cost-effective COBOL solution for businesses and developers. Here are some of the main benefits:
-
-
It reduces COBOL licensing and runtime fees. Unlike some other COBOL products that charge per-user or per-CPU fees for both development and runtime licenses, Fujitsu NetCobol 7 only charges for development licenses. There are no runtime fees for deploying your COBOL programs on any number of computers or devices.
-
It enables modernization and migration of legacy COBOL applications. You can use Fujitsu NetCobol 7 to update your existing COBOL applications with new features and functionalities such as web services, XML processing, database access, etc. You can also migrate your legacy COBOL applications from mainframe or other platforms to Windows or Linux platforms with minimal changes.
-
It supports industry standards and specifications. You can use Fujitsu NetCobol 7 to write standard ANSI COBOL code that conforms to the latest specifications such as COBOL2002 and COBOL2014 (a minimal standard-COBOL example is shown after this list). You can also use some common COBOL syntax extensions that are supported by IBM, Micro Focus, or previous versions of NetCOBOL (formerly called Fujitsu COBOL).
-
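As a quick illustration (not taken from the Fujitsu documentation), here is a minimal sketch of a standard-conforming COBOL program with no vendor extensions; assuming a default fixed-format compile, the same source should build unchanged with a COBOL2002-conformant compiler on either Windows or Linux:

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
      * Minimal standard COBOL: display a message and stop.
       PROCEDURE DIVISION.
           DISPLAY "Hello from standard COBOL".
           STOP RUN.
```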
-
Drawbacks of Fujitsu NetCobol 7
-
Fujitsu NetCobol 7 is not perfect though. It has some drawbacks that you should be aware of before using it. Here are some of the main drawbacks:
-
-
It requires a new serial number for each version upgrade. If you want to upgrade from an older version of Fujitsu NetCobol (such as V9 or V10) to V11 (the latest version), you will need to request a new serial number from Fujitsu or an authorized reseller. You cannot use your existing serial number for the upgrade.
-
It may not support some COBOL syntax extensions or features that are specific to other vendors or platforms. For example, if your COBOL code uses some syntax extensions that are only available in IBM Enterprise COBOL or Micro Focus Visual COBOL, you may encounter compilation errors or runtime errors when using Fujitsu NetCobol 7.
-
It may have compatibility issues with some operating systems or hardware configurations that are not supported by Fujitsu NetCobol 7. For example, if you want to run your COBOL programs on Windows XP or Windows Server 2003 (which are no longer supported by Microsoft), you may face some problems with Fujitsu NetCobol 7.
-
-
Conclusion
-In this article, we have introduced you to Fujitsu NetCobol 7, a powerful and versatile COBOL compiler and runtime environment that allows you to create and run COBOL programs on Windows and Linux platforms. Fujitsu NetCobol 7 has many features and benefits that make it a valuable and cost-effective COBOL solution for businesses and developers. However, it also has some drawbacks that you should be aware of before using it. To get a serial key for Fujitsu NetCobol 7, you should always use a legal and official way, such as buying a license from Fujitsu or an authorized reseller, or requesting a trial license from Fujitsu. You should avoid using any illegal or unofficial ways, such as downloading a cracked or pirated version of the software, using a fake or stolen serial key, or using a generator or hack tool. If you are interested in Fujitsu NetCobol 7 and want to learn more about it, you can visit the Fujitsu website or contact their customer support. You can also download a free trial version of the software and try it out for yourself. We hope this article has helped you understand what you need to know about Fujitsu NetCobol 7 Serial Key. If you have any questions or feedback, please feel free to leave a comment below.
FAQs
-
Here are some common questions and answers about Fujitsu NetCobol 7 Serial Key.
-
-
Q: How much does Fujitsu NetCobol 7 cost?
-A: The price of Fujitsu NetCobol 7 depends on the type and number of licenses you buy. You can check the latest pricing information on the Fujitsu website or contact their sales team.
-
Q: How long does the trial license last?
-A: The trial license for Fujitsu NetCobol 7 is valid for 30 days from the date of activation. You can use all the features and functions of the software during the trial period.
-
Q: How can I renew or extend my trial license?
-A: You cannot renew or extend your trial license for Fujitsu NetCobol 7. If you want to continue using the software after the trial period expires, you need to buy a full license.
-
Q: How can I upgrade from an older version of Fujitsu NetCobol to V11?
-A: To upgrade from an older version of Fujitsu NetCobol (such as V9 or V10) to V11, you need to request a new serial number from Fujitsu or an authorized reseller. You cannot use your existing serial number for the upgrade.
-
Q: How can I get technical support for Fujitsu NetCobol 7?
-A: You can get technical support for Fujitsu NetCobol 7 by opening a support incident on the Fujitsu website or calling their toll-free number. You will need to provide your serial number and other information to get assistance.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (broen Season 2 720p Torrent).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (broen Season 2 720p Torrent).md
deleted file mode 100644
index 9a1fde0811eba2f57dae7eaaa59615aa92efbe3b..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (broen Season 2 720p Torrent).md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-
HD Online Player (broen season 2 720p torrent)
-
If you are a fan of Scandinavian crime drama series, you might have heard of Broen (The Bridge), a critically acclaimed show that follows the investigations of a Swedish-Danish police team on crimes that occur on both sides of the bridge connecting the two countries. The second season of Broen aired in 2013 and received rave reviews from critics and audiences alike for its gripping storyline, complex characters, and realistic portrayal of social issues.
But if you missed watching Broen season 2 when it was broadcasted or you want to rewatch it with better quality, you might be wondering how to find and download Broen season 2 720p torrent. And more importantly, how to watch it in high-definition (HD) online without any hassle. In this article, we will answer these questions and provide you with a comprehensive guide on how to enjoy Broen season 2 in HD online using a torrent client, a torrent site, and an HD online player.
-
What is Broen?
-
Broen is a Scandinavian crime drama series that premiered in 2011 and ran for four seasons until 2018. It was created by Hans Rosenfeldt and co-produced by Sveriges Television (SVT) in Sweden and Danmarks Radio (DR) in Denmark. The series stars Sofia Helin as Saga Norén, a Swedish detective with Asperger's syndrome, and Kim Bodnia as Martin Rohde, a Danish detective with personal problems. Together, they form a cross-border police team that investigates crimes that involve both Sweden and Denmark.
-
The second season of Broen focuses on a series of murders that are linked to an eco-terrorist group called "Truth Terrorist". The group claims to expose the hypocrisy and corruption of the society by killing people who are responsible for environmental pollution, social inequality, war crimes, and human trafficking. Saga and Martin have to work together again to stop the group before they execute their final plan.
-
Broen is widely praised for its intricate plot, realistic dialogue, dark humor, and strong performances. It has won several awards, including the International Emmy Award for Best Drama Series in 2014. It has also inspired several international adaptations, such as The Bridge (USA/Mexico), The Tunnel (UK/France), Bron/Broen (Russia/Estonia), and The Bridge (Malaysia/Singapore).
-
Why watch Broen season 2 in HD online?
-
If you have watched Broen season 2 before or you are planning to watch it for the first time, you might be wondering why you should watch it in HD online instead of SD online or DVD. Here are some reasons why watching Broen season 2 in HD online is better than other options:
-
Better picture quality
-
One of the main advantages of watching Broen season 2 in HD online is that you can enjoy better picture quality than SD online or DVD. HD stands for high-definition, which means that the video has more pixels per inch than SD or DVD. Pixels are tiny dots that make up the image on your screen. The more pixels there are, the sharper and clearer the image will be.
-
Broen season 2 has a resolution of 1280 x 720 pixels, which is also known as 720p. This means that there are 1280 pixels horizontally and 720 pixels vertically on each frame of the video. This is much higher than SD resolution, which is usually around 640 x 480 pixels (480p), or DVD resolution, which is usually around 720 x 480 pixels (480i). By watching Broen season 2 in HD online, you can see more details, colors, and contrast in every scene.
-
Better sound quality
-
Another benefit of watching Broen season 2 in HD online is that you can experience better sound quality than SD online or DVD. HD audio has more bits per sample than SD or DVD audio. Bits are units of information that represent the amplitude or volume of the sound wave. The more bits there are, the more accurate and dynamic the sound will be.
-
Broen season 2 has an audio bitrate of around 192 kbps (kilobits per second), which is also known as MP3 quality. This means that there are around 192 kilobits of information per second in each frame of the audio. This is much higher than SD audio bitrate, which is usually around 128 kbps (MP3 quality), or DVD audio bitrate, which is usually around
season 2 in HD online, you can hear more clarity, depth, and immersion in every dialogue, sound effect, and music score.
-
-
Better streaming experience
-
A third advantage of watching Broen season 2 in HD online is that you can have a better streaming experience than SD online or DVD. HD online streaming means that you can watch Broen season 2 in HD quality without having to download the whole file first. You can simply stream it from a server that hosts the file and watch it on your device as it downloads.
-
This is more convenient and faster than SD online streaming or DVD playback, which can suffer from buffering, lagging, and interruptions due to low bandwidth, slow connection, or physical damage. By watching Broen season 2 in HD online, you can enjoy a smooth and uninterrupted viewing experience.
-
How to find and download Broen season 2 720p torrent?
-
Now that you know why you should watch Broen season 2 in HD online, you might be wondering how to find and download Broen season 2 720p torrent. To do this, you will need two tools: a torrent client and a torrent site. Here is a guide on how to use them:
-
What is a torrent client?
-
A torrent client is a software application that allows you to download files from other users who are sharing them via a peer-to-peer (P2P) network. A P2P network is a decentralized system where users can connect directly with each other without relying on a central server. A torrent client enables you to join this network and download files in small pieces from multiple sources at the same time.
-
A torrent client also allows you to create and share your own files with other users by creating a torrent file. A torrent file is a small file that contains metadata about the file you want to share, such as its name, size, structure, and location of the sources. A torrent file does not contain the actual file itself, but it acts as a pointer that tells your torrent client where to find and download it.
-
What is a torrent site?
-
A torrent site is a website that hosts and indexes torrent files for various types of content, such as movies, TV shows, music, games, software, etc. A torrent site allows you to search for and download torrent files that match your query. A torrent site does not host or provide the actual content itself, but it acts as a directory that links you to the sources where you can find and download it.
-
How to choose a reliable and safe torrent client and site?
-
There are many torrent clients and sites available on the internet, but not all of them are reliable and safe. Some of them may contain malware, viruses, spyware, or adware that can harm your device or compromise your privacy. Some of them may also have low-quality or fake files that can waste your time or bandwidth. Therefore, it is important to choose a trustworthy and secure torrent client and site for your downloading needs.
-
Here are some criteria and tips to help you select a good torrent client and site:
- - Check the reviews and ratings of the torrent client and site from reputable sources such as VPNOverview , TechRadar , Digital Trends , etc. - Choose a torrent client and site that have a large and active user base, as this indicates popularity and reliability. - Choose a torrent client and site that have a high seed-to-leech ratio, as this indicates availability and speed. - Choose a torrent client and site that have a variety of content categories and filters, as this indicates diversity and quality. - Choose a torrent client and site that have minimal or no ads, pop-ups, or redirects, as this indicates safety and convenience. - Choose a torrent client and site that support encryption, proxy, VPN , or other security features, as this indicates privacy and protection.
How to install and use a torrent client?
-
Once you have chosen a reliable and safe torrent client for your device, you need to install and use it to download Broen season 2 720p torrent. Here are the general steps to do so:
- - Download the installer file of the torrent client from its official website or a trusted source. - Run the installer file and follow the instructions to install the torrent client on your device.
but not all of them are compatible and convenient for your device and preferences. Some of them may not support the format or resolution of your video, or may have limited features or functions. Therefore, it is important to choose an HD online player that works well with your device and preferences.
-
Here are some criteria and tips to help you select a good HD online player:
- - Check the compatibility and requirements of the HD online player with your device and operating system. Make sure it can run smoothly and stably on your device without any errors or crashes. - Choose an HD online player that supports various video formats and resolutions, especially MP4 and 720p, which are the format and resolution of Broen season 2. Make sure it can play your video without any loss of quality or performance. - Choose an HD online player that has a simple and user-friendly interface, with easy-to-access controls and settings. Make sure it can play your video without any interruptions or distractions, such as ads, pop-ups, or redirects. - Choose an HD online player that has advanced features and functions, such as subtitles, speed control, playlist, screenshot, etc. Make sure it can enhance your viewing experience and meet your needs. - Choose an HD online player that has a high security and privacy level, such as encryption, password protection, VPN , etc. Make sure it can protect your video and data from unauthorized access or leakage.
How to open and play Broen season 2 720p with an HD online player?
-
Once you have chosen a compatible and convenient HD online player for your device and preferences, you need to open and play Broen season 2 720p with it. Here are the general steps to do so:
- - Download and install the HD online player from its official website or a trusted source. Run the HD online player and configure its settings according to your preferences. You may want to adjust the volume, brightness, aspect ratio, subtitles, etc. of your video. - Locate the Broen season 2 720p file on your device and drag and drop it into the HD online player. Alternatively, you can click on the open file button in the HD online player and browse for the file on your device. - The HD online player will start playing Broen season 2 720p on your device. You can see the progress, duration, resolution, etc. of your video in the HD online player. You can also pause, resume, rewind, fast-forward, skip, or stop your video at any time. - Enjoy watching Broen season 2 720p in HD online with your friends or colleagues!
Conclusion
-
Broen season 2 is a captivating Scandinavian crime drama series that deserves to be watched in HD online for its superb picture quality, sound quality, and streaming experience. To do this, you need three tools: a torrent client, a torrent site, and an HD online player.
-
In this article, we have provided you with a comprehensive guide on how to use these tools to find and download Broen season 2 720p torrent and watch it in HD online without any hassle. We have also given you some criteria and tips to help you choose reliable and safe torrent clients and sites, as well as compatible and convenient HD online players.
-
We hope this article has been helpful for you and that you have enjoyed watching Broen season 2 in HD online with your friends or colleagues. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about watching Broen season 2 in HD online:
-
Is torrenting legal?
-
you should always check the rules and regulations in your country before you download any content that might be protected under copyright. You should also use a VPN to hide your IP address and encrypt your traffic when torrenting to avoid any legal troubles or ISP throttling.
-
Is watching online videos safe?
-
Watching online videos can be safe if you use trustworthy and secure tools and websites. However, there are also some risks and dangers involved, such as malware, viruses, spyware, adware, phishing, identity theft, etc. Therefore, you should always be careful and vigilant when watching online videos. You should also use a VPN to protect your privacy and security when watching online videos.
-
What are some alternatives to torrenting and online video players?
-
If you don't want to use torrenting and online video players to watch Broen season 2 in HD online, you can also use some alternatives, such as streaming services, download sites, or DVD rentals. However, these alternatives may have some drawbacks, such as higher costs, lower quality, limited availability, or legal issues. Therefore, you should weigh the pros and cons of each alternative before you choose one.
-
How can I improve my torrenting and online video watching experience?
-
There are some tips and tricks that can help you improve your torrenting and online video watching experience, such as:
- - Use a fast and stable internet connection to avoid buffering, lagging, or interruptions. - Use a large and high-resolution screen to enjoy the HD quality of the video. - Use a good pair of headphones or speakers to appreciate the HD sound of the video. - Use a comfortable and ergonomic chair and desk to avoid strain or fatigue. - Use a dark and quiet room to avoid glare or noise.
How can I learn more about Broen season 2?
-
If you want to learn more about Broen season 2, you can visit some of these websites:
- The official website of Broen: https://www.bronbroen.com/ - The IMDb page of Broen: https://www.imdb.com/title/tt1733785/ - The Wikipedia page of Broen: https://en.wikipedia.org/wiki/The_Bridge_(2011_TV_series) - The fan wiki of Broen: https://thebridge.fandom.com/wiki/The_Bridge_Wiki
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Dream Zoo with Zoo Life Animal Park Game - The Best Zoo Management Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Dream Zoo with Zoo Life Animal Park Game - The Best Zoo Management Game Ever.md
deleted file mode 100644
index 2e784a1627cb32c8fcc3a8b3f45542dcc1f010bb..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Dream Zoo with Zoo Life Animal Park Game - The Best Zoo Management Game Ever.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Zoo Life: Animal Park Game - A Fun and Engaging Zoo Simulation Game for Mobile Devices
-
Introduction
-
Do you love animals and zoos? Have you ever dreamed of creating your own zoo and taking care of a variety of adorable animals? If so, you might want to check out Zoo Life: Animal Park Game, a brand new zoo simulation game from Sparkling Society, famous for their city building games.
Zoo Life: Animal Park Game is the perfect blend of city-building, strategy, and zoo management. You can immerse yourself in a colorful and vibrant world, where you become the ultimate zookeeper, bringing happiness to both visitors and animals. With realistic animations, engaging gameplay, and a plethora of high-quality content, Zoo Life offers an unparalleled zoo simulation experience on mobile devices.
-
In this game, you can build, customize, and expand your dream zoo with a diverse range of habitats. You can care for a vast array of animals, from cute and cuddly to rare and endangered. You can participate in exciting events, quests, and challenges for endless entertainment. You can also master strategic planning, resource management, and zookeeping skills. And you can join a global community of passionate zoo lovers and share your creations.
-
Are you ready to embark on this ultimate animal kingdom adventure? If so, you can download Zoo Life: Animal Park Game for free from Google Play, App Store, or Microsoft Store. You can also play the game offline without internet connection. Here are some tips and tricks to help you get started.
-
-
How to Build and Manage Your Dream Zoo
-
Create Your Dream Zoo & Foster Diverse Habitats
-
The first step to building your dream zoo is to plan and customize your park's layout. You can place unique habitats, decorative items, and engaging attractions to ensure the happiness and well-being of your animals and guests. You can also expand your park by clearing new areas or buying more land.
-
The next step is to unlock exotic species from around the world. You can discover animals from different regions, such as Africa, Asia, Australia, Europe, North America, South America, Arctic, Antarctic, Oceanic, or Fantasy. Each animal has its distinct needs and preferences, such as food, water, shelter, toys, social interaction, and climate. You can also learn interesting facts and trivia about each animal in the game.
-
The final step is to create tailored environments for your animals. You can design habitats that match the natural habitats of your animals, such as savanna, jungle, desert, mountain, tundra, or ocean. You can also add plants, rocks, water, and other elements to enhance the beauty and diversity of your park.
-
Discover, Breed & Care for a Wide Range of Animals
-
One of the most rewarding aspects of Zoo Life: Animal Park Game is to take care of cute domestic and wild animals. You can feed them, pet them, play with them, and watch them interact with each other. You can also observe their unique behaviors, such as sleeping, hunting, grooming, or mating.
-
Another fun feature of the game is to foster the growth of your animal family through breeding programs. You can breed animals of the same species to produce adorable offspring. You can also breed animals of different species to create rare hybrids. For example, you can breed a lion and a tiger to get a liger, or a zebra and a horse to get a zorse.
-
A noble mission of the game is to ensure the survival of endangered species. You can help protect animals that are threatened by habitat loss, poaching, climate change, or human interference. You can also participate in conservation efforts and raise awareness about the importance of biodiversity.
-
Engage in Fun Challenges & Interactive Events
-
To keep your visitors entertained with exciting events, exhibits, and quests, you can offer them a variety of activities and attractions. You can build roller coasters, ferris wheels, carousels, and other amusement rides. You can also create animal shows, petting zoos, safari tours, and educational programs.
-
To spice up your gameplay with seasonal events and earn valuable rewards, you can join in various festivals and celebrations throughout the year. You can decorate your park with themed items and costumes. You can also collect special items and currencies to unlock exclusive rewards.
-
To join a global community of passionate zoo enthusiasts, you can connect with other players online. You can visit their zoos and rate their creations. You can also chat with them and exchange tips and tricks. You can also join clubs and compete with other clubs in weekly tournaments.
-
How to Master Zoo Management & Strategic Planning
-
Balance Your Resources, Time, and Staff Effectively
-
To run a successful zoo, you need to manage your budget, income, and expenses wisely. You need to balance the costs of building habitats, buying animals, hiring staff, and maintaining facilities. You also need to generate income from ticket sales, donations, sponsorships, and in-game purchases.
-
To improve your zoo's performance and quality, you need to research new technologies and upgrades. You need to invest in better equipment, materials, tools, and services. You also need to unlock new features, functions, and options for your park.
-
To optimize your zoo's operations and staff allocation,
In this article, we have introduced you to Zoo Life: Animal Park Game, a fun and engaging zoo simulation game for mobile devices. We have shown you how to build and manage your dream zoo, how to discover, breed, and care for a wide range of animals, and how to engage in fun challenges and interactive events. We have also given you some tips and tricks on how to master zoo management and strategic planning.
-
Zoo Life: Animal Park Game is a game that will appeal to anyone who loves animals and zoos. It is a game that will challenge your creativity, strategy, and management skills. It is a game that will immerse you in a colorful and vibrant world, where you can create your own animal kingdom.
-
If you are looking for a game that will keep you entertained for hours, Zoo Life: Animal Park Game is the game for you. You can download it for free from Google Play, App Store, or Microsoft Store. You can also play it offline without internet connection. What are you waiting for? Start your zoo adventure today and share your creations with the world!
-
FAQs
-
Q: How many animals are there in Zoo Life: Animal Park Game?
-
A: There are over 200 animals in the game, from common domestic animals to rare exotic animals. You can also create hybrid animals by breeding different species.
-
Q: How can I get more coins and gems in the game?
-
A: You can get more coins and gems by completing quests, participating in events, collecting daily rewards, watching ads, or making in-game purchases.
-
Q: How can I join a club or create my own club in the game?
-
A: You can join a club or create your own club by tapping on the club icon on the bottom right corner of the screen. You can then search for an existing club or create a new one. You can also invite other players to join your club or accept invitations from other clubs.
-
Q: How can I visit other players' zoos and rate them?
-
A: You can visit other players' zoos and rate them by tapping on the globe icon on the bottom left corner of the screen. You can then select a player from the list or enter their ID. You can also see their ratings and comments on their zoos.
-
Q: How can I contact the developers or report a bug in the game?
-
A: You can contact the developers or report a bug in the game by tapping on the settings icon on the top right corner of the screen. You can then select the support option and fill out the form with your details and feedback.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Beyond Blue Mod APK Discover the Secrets of the Ocean in this Amazing Game.md b/spaces/1phancelerku/anime-remove-background/Beyond Blue Mod APK Discover the Secrets of the Ocean in this Amazing Game.md
deleted file mode 100644
index ef55d6ce48ec767133f4d28fc0fdff19b60a9dcd..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Beyond Blue Mod APK Discover the Secrets of the Ocean in this Amazing Game.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
How to Download Beyond Blue Mod APK on Android
-
If you are a fan of underwater exploration and marine life, you might be interested in playing Beyond Blue, a diving simulator game inspired by the BBC's Blue Planet II documentary series. But what if you want to enjoy some extra features or benefits that are not available in the official version of the game? In that case, you might want to try downloading and installing a mod APK of Beyond Blue on your Android device. In this article, we will explain what Beyond Blue is, what a mod APK is, and how to download and install one on your Android device.
-
What is Beyond Blue?
-
Beyond Blue is a single-player narrative adventure game that takes you deep into the ocean's mysteries and wonders. You play as Mirai, a marine biologist who is part of a research team that uses cutting-edge technology to study and interact with the ocean's creatures. You can explore different underwater environments, scan and document various species, and learn more about the ocean's secrets through mini-documentaries and interviews with experts. The game is developed by E-Line Media in collaboration with BBC Studios, OceanX Media, and some of the world's leading ocean scientists.
A diving simulator game inspired by Blue Planet II
-
Beyond Blue is heavily influenced by the BBC's Blue Planet II documentary series, which showcases the beauty and diversity of life in the ocean. The game features stunning visuals, realistic animations, and immersive sound effects that make you feel like you are actually diving in the ocean. You can also watch clips from the documentary series and learn more about the ocean's ecology, conservation, and challenges.
-
Features of Beyond Blue
-
Some of the features of Beyond Blue are:
-
-
Eight different dives that take you to various locations in the Western Pacific ocean
-
A story mode that follows Mirai's research on a pod of whales and their songs
-
A free dive mode that lets you explore the ocean at your own pace
-
A photo mode that allows you to take pictures of the ocean's beauty and share them with filters and settings
-
A soundtrack that features original music and songs from artists like Miles Davis, The Flaming Lips, The Edisons, and more
-
-
What is a mod APK?
-
An APK (Android Package Kit) is a file format that contains all the components of an Android app. A mod APK is a modified version of an APK that offers extra features or benefits that are not available in the original version. For example, a mod APK might have unlimited money, unlocked items, premium features, or no ads.
-
A modified version of an app that offers extra features or benefits
-
A mod APK can enhance your gaming experience by giving you access to things that you normally have to pay for or work hard for. For example, a mod APK of Beyond Blue might let you explore more locations, scan more creatures, watch more documentaries, or customize your character. A mod APK might also fix some bugs or glitches that are present in the original version.
-
Risks and precautions of using mod APKs
-
However, using mod APKs also comes with some risks and precautions. Some of them are:
-
-
Mod APKs are not authorized or endorsed by the original developers or publishers of the app. They might violate their terms of service or intellectual property rights.
-
Mod APKs might contain malware or viruses that can harm your device or steal your personal information.
-
Mod APKs might not be compatible with your device or updated with the latest version of the app.
-
Mod APKs might not work properly or cause crashes or errors in the app.
-
Mod APKs might affect your online gameplay or account status. You might get banned or suspended from the game's servers or community.
-
-
Therefore, you should always be careful and cautious when using mod APKs. You should only download them from reputable and trusted sources, scan them for viruses or malware, and backup your data before installing them. You should also respect the original developers and publishers of the app and support them if you like their work.
-
How to download and install Beyond Blue mod APK on Android
-
If you still want to try Beyond Blue mod APK on your Android device, you will need to follow some steps to download and install it. Here is a simple guide on how to do it:
-
How to download beyond blue mod apk for free
-Beyond blue mod apk latest version download
-Download beyond blue mod apk with unlimited money
-Beyond blue mod apk ocean exploration game download
-Download beyond blue mod apk and obb data
-Beyond blue mod apk no root download
-Download beyond blue mod apk for android devices
-Beyond blue mod apk offline mode download
-Download beyond blue mod apk with all episodes unlocked
-Beyond blue mod apk hack download
-Download beyond blue mod apk from moddb.com
-Beyond blue mod apk full game download
-Download beyond blue mod apk with premium features
-Beyond blue mod apk cracked download
-Download beyond blue mod apk with high graphics
-Beyond blue mod apk unlimited coins download
-Download beyond blue mod apk with cheats enabled
-Beyond blue mod apk direct download link
-Download beyond blue mod apk without ads
-Beyond blue mod apk unlocked everything download
-Download beyond blue mod apk with realistic physics
-Beyond blue mod apk mega mod download
-Download beyond blue mod apk with custom skins
-Beyond blue mod apk fast download speed
-Download beyond blue mod apk with voice over
-Beyond blue mod apk easy installation download
-Download beyond blue mod apk with bonus content
-Beyond blue mod apk safe and secure download
-Download beyond blue mod apk with new missions
-Beyond blue mod apk updated download
-Download beyond blue mod apk with achievements
-Beyond blue mod apk no verification download
-Download beyond blue mod apk with multiplayer mode
-Beyond blue mod apk best ocean game download
-Download beyond blue mod apk with soundtracks
-Beyond blue mod apk unlimited gems download
-Download beyond blue mod apk with pro features
-Beyond blue mod apk no survey download
-Download beyond blue mod apk with 3D graphics
-Beyond blue mod apk low mb download
-Download beyond blue mod apk with educational value
-Beyond blue mod apk working download link
-Download beyond blue mod apk with immersive gameplay
-Beyond blue mod apk no virus download
-Download beyond blue mod apk with original story
-Beyond blue mod apk unlimited resources download
-Download beyond blue mod apk with different modes
-
Step 1: Allow unknown apps on your device
-
By default, Android devices do not allow installing apps from unknown sources. This is a security measure to prevent installing harmful or malicious apps. However, if you want to install a mod APK, you will need to enable this option on your device. To do this, go to your device's settings, then security, then find the option that says "allow unknown apps" or "install unknown apps". Tap on it and toggle it on. You might also need to select the browser or file manager app that you will use to download the mod APK and grant it permission to install unknown apps.
-
Step 2: Download a file manager app
-
A file manager app is an app that lets you access and manage the files and folders on your device. You will need one to locate and install the mod APK file that you will download. There are many file manager apps available on the Google Play Store, such as ES File Explorer, File Manager, or Solid Explorer. You can choose any one that you like and download it on your device.
-
Step 3: Download the Beyond Blue mod APK file from a reputable source
-
The next step is to download the Beyond Blue mod APK file from a reputable and trusted source. You can search for it online using your browser or use a link that someone has shared with you. However, be careful and cautious when downloading mod APKs, as some of them might contain viruses or malware that can harm your device or steal your personal information. You should only download mod APKs from sources that have positive reviews, ratings, and feedback from other users. You should also scan the mod APK file for viruses or malware before installing it.
-
Step 4: Locate and install the APK file using the file manager app
-
The final step is to locate and install the mod APK file using the file manager app that you downloaded in step 2. To do this, open the file manager app and navigate to the folder where you downloaded the mod APK file. Tap on the file and select "install". You might see a warning message that says "this type of file can harm your device". Ignore it and tap on "install anyway". Wait for the installation process to finish and then tap on "open" to launch the app.
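-
If you have a computer nearby, you can also sideload the file over a USB cable with adb instead of using a file manager. This is only a sketch: it assumes you have enabled USB debugging on your phone, installed the Android platform tools on your computer, and the file name below is a placeholder for whatever you actually downloaded.
-
# List connected devices to confirm the phone is detected
adb devices
# Install the APK from your computer (replace the file name with your own)
adb install beyond-blue-mod.apk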
-
Conclusion
-
Beyond Blue is a diving simulator game that lets you explore the ocean's mysteries and wonders. If you want to enjoy some extra features or benefits that are not available in the official version of the game, you can try downloading and installing a mod APK of Beyond Blue on your Android device. However, you should be aware of the risks and precautions of using mod APKs, as they might violate the original developers' rights, contain malware or viruses, or cause problems with your device or account. You should only download mod APKs from reputable and trusted sources, scan them for viruses or malware, and backup your data before installing them.
-
FAQs
-
-
Q: Is Beyond Blue free to play?
-
A: No, Beyond Blue is not free to play. It is a paid game that costs $14.99 on Google Play Store.
-
Q: Is Beyond Blue compatible with my device?
-
A: Beyond Blue requires Android 7.0 or higher and at least 4 GB of RAM to run smoothly. You can check if your device meets these requirements by going to your device's settings, then about phone, then hardware information.
-
Q: Is Beyond Blue an online game?
-
A: No, Beyond Blue is not an online game. It does not require an internet connection to play. However, you will need an internet connection to watch the documentary clips and interviews in the game.
-
Q: How can I update Beyond Blue to the latest version?
-
A: You can update Beyond Blue to the latest version by going to Google Play Store, then my apps and games, then find Beyond Blue and tap on update. However, if you are using a mod APK, you might not be able to update the game through Google Play Store. You will need to download and install the latest version of the mod APK from the same source that you downloaded it from.
-
Q: How can I contact the developers or publishers of Beyond Blue?
-
A: You can contact the developers or publishers of Beyond Blue by visiting their official website, Facebook page, Twitter account, or email address. You can find these links in the game's description on Google Play Store or in the game's settings menu.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Bus Simulator Ultimate Free Mod APK The Best Bus Simulation Game with Unlimited Money and More.md b/spaces/1phancelerku/anime-remove-background/Bus Simulator Ultimate Free Mod APK The Best Bus Simulation Game with Unlimited Money and More.md
deleted file mode 100644
index 3e0603a600afbc888b3c0fbdfa14ac9bef687f74..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bus Simulator Ultimate Free Mod APK The Best Bus Simulation Game with Unlimited Money and More.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Bus Simulator Ultimate Free Mod APK: A Realistic and Fun Driving Experience
-
Do you love driving buses and exploring different cities? Do you want to experience the thrill of being a bus driver and managing your own company? If yes, then you should try Bus Simulator Ultimate, one of the best bus simulation games for Android devices. And if you want to enjoy the game without any limitations, you should download Bus Simulator Ultimate Free Mod APK, which gives you access to unlimited money, gold, no ads, and many other features. In this article, we will tell you everything you need to know about this amazing game and how to get the modded version for free.
What is Bus Simulator Ultimate?
-
Bus Simulator Ultimate is a realistic and fun driving game developed by Zuuks Games, a popular game studio that also created Truck Simulator 2018: Europe and Euro Truck Driver 2018. In this game, you can drive various buses across different countries and cities, such as Germany, Turkey, Italy, France, Spain, USA, Brazil, Russia, China, and more. You can also create your own bus company and expand your business by hiring drivers, buying new buses, upgrading your fleet, and completing missions. You can also compete with other players online and rank up on the leaderboards.
-
Features of Bus Simulator Ultimate
-
Bus Simulator Ultimate has many features that make it one of the best bus simulation games on the market. Here are some of them:
-
Realistic graphics and sound effects
-
The game has stunning 3D graphics that make you feel like you are driving a real bus in a real city. You can see the details of the buildings, roads, traffic, pedestrians, weather, and more. You can also hear the realistic sound effects of the engine, horn, brakes, passengers, radio, and more.
-
Multiple bus models and routes
-
The game offers you a variety of buses to choose from, such as double-decker buses, school buses, city buses, coach buses, and more. Each bus has its own characteristics and features that affect its performance and handling. You can also customize your buses with different colors, skins, accessories, and logos. You can also drive on different routes that cover different countries and cities, each with its own landmarks, scenery, culture, and challenges.
-
Customizable settings and controls
-
The game allows you to adjust the settings according to your preferences and device specifications. You can change the graphics quality, sound volume, language, camera angle, traffic density, weather conditions, and more. You can also choose from different control options, such as tilt steering, buttons steering, steering wheel steering, or joystick steering. You can also enable or disable features such as ABS, ESP, TC, indicators, headlights, mirrors, speed limiters, etc.
-
bus simulator ultimate mod apk unlimited money
-bus simulator ultimate hack mod apk download
-bus simulator ultimate mod apk latest version
-bus simulator ultimate mod apk android 1
-bus simulator ultimate mod apk revdl
-bus simulator ultimate mod apk obb
-bus simulator ultimate mod apk offline
-bus simulator ultimate mod apk rexdl
-bus simulator ultimate mod apk happymod
-bus simulator ultimate mod apk all unlocked
-bus simulator ultimate mod apk no ads
-bus simulator ultimate mod apk 2023
-bus simulator ultimate mod apk unlimited everything
-bus simulator ultimate mod apk free shopping
-bus simulator ultimate mod apk vip
-bus simulator ultimate mod apk pro
-bus simulator ultimate mod apk premium
-bus simulator ultimate mod apk full version
-bus simulator ultimate mod apk mega
-bus simulator ultimate mod apk data
-bus simulator ultimate mod apk online
-bus simulator ultimate mod apk new update
-bus simulator ultimate mod apk unlimited coins
-bus simulator ultimate mod apk free download
-bus simulator ultimate mod apk unlocked skins
-bus simulator ultimate mod apk unlimited fuel
-bus simulator ultimate mod apk unlimited xp
-bus simulator ultimate mod apk 2.0.9
-bus simulator ultimate mod apk 2.0.8
-bus simulator ultimate mod apk 2.0.7
-bus simulator ultimate mod apk 2.0.6
-bus simulator ultimate mod apk 2.0.5
-bus simulator ultimate mod apk 2.0.4
-bus simulator ultimate mod apk 2.0.3
-bus simulator ultimate mod apk 2.0.2
-bus simulator ultimate mod apk 2.0.1
-bus simulator ultimate mod apk 2.0.0
-bus simulator ultimate mod apk 1.5.9
-bus simulator ultimate mod apk 1.5.8
-bus simulator ultimate mod apk 1.5.7
-bus simulator ultimate mod apk 1.5.6
-bus simulator ultimate mod apk 1.5.5
-bus simulator ultimate mod apk 1.5.4
-bus simulator ultimate mod apk 1.5.3
-bus simulator ultimate mod apk 1.5.2
-bus simulator ultimate mod apk 1.5.1
-bus simulator ultimate mod apk 1.5.0
-download game bus simulator ultimate free hack version for android
-
Online multiplayer and leaderboards
-
The game has an online multiplayer mode where you can join or create a room with other players from around the world. You can chat with them using voice or text messages. You can also see their stats and rankings on the leaderboards. You can also join or create a club with other players and cooperate with them to earn more money and rewards.
-
Why download Bus Simulator Ultimate Free Mod APK?
-
While Bus Simulator Ultimate is free to download and play from the Google Play Store or App Store, it has some limitations that may affect your gaming experience. For example:
-
Unlimited money and gold
-
Money and gold are the main currencies in the game that you can use to buy new buses, upgrade your fleet, hire drivers, and more. However, earning money and gold can be slow and tedious, especially if you want to unlock all the features and items in the game. That's why downloading Bus Simulator Ultimate Free Mod APK is a good idea, as it gives you unlimited money and gold that you can spend as much as you want without worrying about running out.
-
No ads and in-app purchases
-
Another limitation of the original game is that it contains ads and in-app purchases that can interrupt your gameplay and make you spend real money to get more features and items. These can be annoying and frustrating, especially if you just want to enjoy the game without any distractions or limitations. That's why downloading Bus Simulator Ultimate Free Mod APK is a better option, as it removes all the ads and in-app purchases from the game, making it more smooth and enjoyable.
-
Easy installation and compatibility
-
Downloading Bus Simulator Ultimate Free Mod APK is also very easy and simple. You don't need to root or jailbreak your device, or install any other apps or tools. You just need to follow a few steps that we will explain later in this article. The modded version is also compatible with most Android devices, regardless of their specifications or versions.
-
How to download and install Bus Simulator Ultimate Free Mod APK?
-
If you are interested in downloading and installing Bus Simulator Ultimate Free Mod APK, you just need to follow these steps:
-
Step 1: Download the APK file from a trusted source
-
The first thing you need to do is to download the APK file of Bus Simulator Ultimate Free Mod from a reliable and secure source. You can use the link below to get the latest version of the modded file. Make sure you have enough storage space on your device before downloading the file.
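-
Before installing anything, it is also a good idea to compare the file's checksum with the value published by the site you downloaded it from, if one is provided. This is just a sketch; the file name is a placeholder, and it assumes you copied the APK to a computer with standard command-line tools (on Windows you can use certutil instead).
-
# Print the SHA-256 hash of the downloaded file and compare it with the published value
sha256sum bus-simulator-ultimate-mod.apk
-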
Step 2: Enable unknown sources on your device
-
The next thing you need to do is to enable unknown sources on your device. This allows you to install apps from sources other than the official app stores. To do this, go to your device settings, then Security or Privacy, and toggle on the option that says Unknown sources or Allow installation from unknown sources. You may see a warning message, but it is safe to proceed if you trust the file.
-
Step 3: Install the APK file and launch the game
-
The final thing you need to do is to install the APK file that you downloaded in step 1. To do this, locate the file in your device's file manager or downloads folder, then tap on it to start the installation process. Follow the instructions on the screen until the installation is complete. Then, launch the game from your app drawer or home screen and enjoy!
-
Conclusion
-
Bus Simulator Ultimate is a realistic and fun driving game that lets you experience the thrill of being a bus driver and managing your own company. You can drive various buses across different countries and cities, customize your buses with different colors, skins, accessories, and logos, compete with other players online, and more. However, if you want to enjoy the game without any limitations, you should download Bus Simulator Ultimate Free Mod APK, which gives you unlimited money, gold, no ads, and many other features. Just follow the steps we explained in this article and get ready for a great driving adventure!
-
FAQs
-
Here are some frequently asked questions about Bus Simulator Ultimate Free Mod APK:
-
-
Is Bus Simulator Ultimate Free Mod APK safe to use?
-
Yes, Bus Simulator Ultimate Free Mod APK is safe to use as long as you download it from a trusted source like ours. We have tested the modded file for viruses and malware and found no issues. However, we recommend that you always scan any file before installing it on your device.
-
Is Bus Simulator Ultimate Free Mod APK legal?
-
Yes, Bus Simulator Ultimate Free Mod APK is legal as long as you use it for personal and educational purposes only. We do not encourage or support any illegal or unethical activities related to this game or any other game. We also respect the rights of the original developers of this game and advise you to support them by buying their official products.
-
Does Bus Simulator Ultimate Free Mod APK require an internet connection?
-
No, Bus Simulator Ultimate Free Mod APK does not require an internet connection to play. You can play it offline without any problems. However, if However, if you want to play online multiplayer mode or access the leaderboards, you will need an internet connection.
-
Can I update Bus Simulator Ultimate Free Mod APK?
-
No, you cannot update Bus Simulator Ultimate Free Mod APK from the official app stores, as it is a modified version of the original game. If you try to update it, you may lose all the modded features and face some errors. However, you can always check our website for the latest version of the modded file and download it from there.
-
Can I use Bus Simulator Ultimate Free Mod APK on PC or iOS devices?
-
No, Bus Simulator Ultimate Free Mod APK is only compatible with Android devices. You cannot use it on PC or iOS devices, as they have different operating systems and architectures. However, you can use an Android emulator on your PC to run the modded file, or you can use a similar game that is available for PC or iOS devices.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Ubuntu Server and Get Professional Support from Canonical.md b/spaces/1phancelerku/anime-remove-background/Download Ubuntu Server and Get Professional Support from Canonical.md
deleted file mode 100644
index cd7f0c8df5c5becd331e1b3c1c43a41ea7f3e145..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Ubuntu Server and Get Professional Support from Canonical.md
+++ /dev/null
@@ -1,275 +0,0 @@
-
-
Ubuntu Server Download: A Guide for Beginners
-
If you are looking for a reliable, secure, and scalable operating system for your server needs, you might want to consider Ubuntu Server. Ubuntu Server is a Linux-based system that offers several benefits over other server operating systems. In this article, we will explain what Ubuntu Server is, why you should use it, how to download, install, use, and secure it, how to install additional software on it, how to monitor, troubleshoot, and upgrade it, how it compares with other server operating systems, and where to find more resources for learning about it.
-
What is Ubuntu Server?
-
Ubuntu Server is a server operating system based on Ubuntu Linux, a popular and user-friendly distribution of Linux. It supports various architectures and platforms, such as x86, x86_64, ARM, and PowerPC, and offers scalability, security, and performance for workloads such as web hosting, cloud computing, database management, and file sharing. Ubuntu Server is also compatible with many open-source and proprietary software and hardware solutions.
-
Why Use Ubuntu Server?
-
There are many benefits of using Ubuntu Server over other server operating systems. Here are some of them:
-
-
Regular security updates and long-term support. Ubuntu Server receives frequent and timely security updates from the Ubuntu community and Canonical, the company behind Ubuntu. Ubuntu Server also has long-term support (LTS) releases that are supported for five years with security and maintenance updates. The current LTS release is Ubuntu 20.04 LTS, which was released in April 2020 and will be supported until April 2025.
-
Capacity to scale-out to meet your precise needs. Ubuntu Server can run on any hardware configuration, from a single-core CPU with 1 GB of RAM to a multi-core CPU with hundreds of GBs of RAM. You can also add or remove servers as needed to adjust to your workload demands. Ubuntu Server also supports various cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack.
-
Efficient use of resources with modest hardware requirements. Ubuntu Server is designed to run efficiently on low-end hardware, making it ideal for older or budget-friendly machines. Ubuntu Server also has a minimal installation option that only installs the essential packages and services, leaving more disk space and memory for your applications.
-
-
How to Download Ubuntu Server?
-
There are different ways to obtain the server ISO image, which is the file that contains the installation program and the operating system files. Here are some of them:
-
-
Official website. You can download the ISO image from the official website of Ubuntu at https://ubuntu.com/download/server. You can choose between the latest release (Ubuntu 21.04) or the latest LTS release (Ubuntu 20.04 LTS). You can also choose between different architectures (64-bit or ARM).
-
Torrents. You can download the ISO image using a BitTorrent client from https://ubuntu.com/download/alternative-downloads. Torrents are a peer-to-peer file-sharing method that can speed up the download process and reduce the load on the servers.
-
Mirrors. You can download the ISO image from one of the many mirror sites that host copies of the Ubuntu files. You can find a list of mirrors at https://launchpad.net/ubuntu/+cdmirrors. Mirrors can offer faster download speeds depending on your location and network connection.
-
Alternative releases. You can download the ISO image from http://cdimage.ubuntu.com, which hosts alternative releases of Ubuntu, such as daily builds, testing versions, or older versions. Alternative releases are not recommended for production use, but they can be useful for testing or experimenting purposes.
-
-
When downloading the ISO image, you should choose the right version and architecture for your needs. The version refers to the release number of Ubuntu Server, such as 20.04 LTS or 21.04. The architecture refers to the type of processor that your server has, such as 64-bit or ARM. You should check the compatibility of your hardware and software with the version and architecture of Ubuntu Server before downloading it.
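-
Once the ISO image has finished downloading, it is worth verifying that it was not corrupted or tampered with. A minimal sketch, assuming you also downloaded the SHA256SUMS file from the same release page and that both files sit in your current directory:
-
# Compare the ISO against the published checksums; look for an "OK" next to your file
sha256sum -c SHA256SUMS 2>/dev/null | grep OK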
-
ubuntu server download iso
-ubuntu server download 22.04
-ubuntu server download 23.04
-ubuntu server download 20.04
-ubuntu server download usb
-ubuntu server download virtualbox
-ubuntu server download raspberry pi
-ubuntu server download torrent
-ubuntu server download mirror
-ubuntu server download lts
-ubuntu server download for windows
-ubuntu server download vmware
-ubuntu server download 18.04
-ubuntu server download 16.04
-ubuntu server download size
-ubuntu server download minimal
-ubuntu server download arm
-ubuntu server download docker
-ubuntu server download old versions
-ubuntu server download netinstall
-ubuntu server download 32 bit
-ubuntu server download checksum
-ubuntu server download cloud image
-ubuntu server download gui
-ubuntu server download zfs
-ubuntu server download alternative
-ubuntu server download powerpc
-ubuntu server download s390x
-ubuntu server download risc-v
-ubuntu server download multipass
-ubuntu server download maas
-ubuntu server download openstack
-ubuntu server download pro
-ubuntu server download live cd
-ubuntu server download desktop environment
-ubuntu server download snapd
-ubuntu server download microk8s
-ubuntu server download juju
-ubuntu server download landscape
-ubuntu server download ansible
-ubuntu server download sshd
-ubuntu server download apache2
-ubuntu server download mysql-server
-ubuntu server download php7.4-fpm
-ubuntu server download python3-pip
-ubuntu server download nodejs
-ubuntu server download ruby
-ubuntu server download java
-ubuntu server download nginx
-
How to Create a Bootable USB or DVD?
-
To install Ubuntu Server on your machine, you need to create a bootable media from the ISO image that you downloaded. A bootable media is a USB flash drive or a DVD that contains the installation program and the operating system files. You can use various tools and steps to make a bootable media from the ISO image, depending on your operating system and preference. Here are some examples:
-
-
Rufus. Rufus is a free and open-source tool that can create bootable USB drives from ISO images. It is available for Windows only. You can download Rufus from https://rufus.ie. To use Rufus, you need to insert a USB flash drive into your computer, launch Rufus, select the ISO image and the USB drive, and click Start. Rufus will format the USB drive and copy the ISO image to it.
-
Etcher. Etcher is another free and open-source tool that can create bootable USB drives from ISO images. It is available for Windows, Mac, and Linux. You can download Etcher from https://www.balena.io/etcher. To use Etcher, you need to insert a USB flash drive into your computer, launch Etcher, select the ISO image and the USB drive, and click Flash. Etcher will validate the USB drive and copy the ISO image to it.
-
UNetbootin. UNetbootin is yet another free and open-source tool that can create bootable USB drives from ISO images. It is also available for Windows, Mac, and Linux. You can download UNetbootin from https://unetbootin.github.io. To use UNetbootin, you need to insert a USB flash drive into your computer, launch UNetbootin, select the ISO image and the USB drive, and click OK. UNetbootin will format the USB drive and copy the ISO image to it.
-
dd command. dd is a command-line tool that can create bootable USB drives from ISO images. It is available for Linux and Mac. To use dd, you need to insert a USB flash drive into your computer, open a terminal window, and run the following command (replace /dev/sdx with the device name of your USB drive):
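-
The usual form of the command is shown below. Treat it as a sketch: the ISO file name and /dev/sdx are placeholders for your actual download and USB device, and dd will overwrite whatever device you point it at, so double-check the device name with lsblk first.
-
# Identify your USB device first (for example /dev/sdb), then write the image to it
lsblk
sudo dd if=ubuntu-server.iso of=/dev/sdx bs=4M status=progress conv=fsync
-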
How to Install Ubuntu Server?
-
To install Ubuntu Server on your machine, you need to boot from the bootable media that you created and follow the instructions on the screen. Here are the main steps:
-
-
Booting from the install media and choosing the language. Insert the bootable media into your machine and power it on. You may need to change the boot order in your BIOS or UEFI settings to boot from the media. You will see a menu with several options. Choose Install Ubuntu Server to start the installation process. You will be asked to choose your preferred language for the installation.
-
Selecting the keyboard layout and network settings. You will be asked to choose your keyboard layout and configure your network settings. You can use DHCP or static IP addresses for your network connection. You can also set up a proxy server if needed.
-
Configuring the storage and partitions. You will be asked to choose how you want to use your disk space for Ubuntu Server. You can use guided partitioning or manual partitioning. Guided partitioning will automatically create and format partitions for you. Manual partitioning will let you create and format partitions yourself. You should have at least one partition for the root file system (/) and one partition for swap space. You can also create separate partitions for other directories, such as /home or /var.
-
Setting up a profile and installing software. You will be asked to enter your name, username, password, and server name. You will also be asked to choose which software you want to install on your server. You can select from various options, such as standard system utilities, OpenSSH server, web server, mail server, database server, or custom software selection.
-
Completing the installation and rebooting. After installing the software, you will be asked to confirm if you want to install the GRUB bootloader on your disk. GRUB is a program that lets you choose which operating system to boot from. You should install GRUB unless you have another bootloader already installed. The installation process will finish and you will be prompted to remove the install media and reboot your machine.
-
-
How to Use Ubuntu Server?
-
After installing Ubuntu Server on your machine, you can start using it for your server needs. You can use basic commands and tasks to manage your server, such as updating the system and installing packages, configuring network and firewall settings, creating users and groups, setting up SSH and remote access, and more. Here are some examples:
-
-
Updating the system and installing packages. You can use the apt command to update your system and install packages from the official Ubuntu repositories. For example, to update your system, you can run:
-
-
sudo apt update sudo apt upgrade
-
-
To install a package, such as nginx, you can run:
-
-
sudo apt install nginx
-
-
Configuring network and firewall settings. You can use the ip command to check your network interface settings and the ping command to test your network connectivity. For example, to check your IP address, you can run:
-
-
ip a
-
-
To ping a website, such as google.com, you can run:
-
-
ping google.com
-
-
You can use the ufw command to manage your firewall settings and allow or deny incoming or outgoing traffic. For example, to enable the firewall and allow SSH connections, you can run:
-
-
sudo ufw enable sudo ufw allow ssh
-
-
Creating users and groups. You can use the adduser and addgroup commands to create new users and groups on your server. For example, to create a new user named alice and add her to a group named admin, you can run:
-
-
sudo adduser alice sudo addgroup admin sudo adduser alice admin
-
-
Setting up SSH and remote access. You can use the ssh command to connect to your server from another machine over a secure connection. For example, to connect to your server with the username bob, you can run:
-
-
ssh bob@server-ip-address
-
-
You can also use the scp command to copy files between your server and another machine over SSH. For example, to copy a file named file.txt from your server to your local machine, you can run:
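-
Assuming the same username and server address as above, and that file.txt sits in bob's home directory on the server, the command could look like this (the paths are placeholders):
-
# Copy file.txt from the server's home directory to the current local directory
scp bob@server-ip-address:file.txt .
-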
How to Install Additional Software on Ubuntu Server?
-
Besides using the apt command to install software from the official Ubuntu repositories, you can also use other ways to install software on your server. Here are some of them:
-
-
Using snap. Snap is a package management system that allows you to install software from various sources, such as Canonical, third-party developers, or yourself. Snap packages are self-contained and isolated from the rest of the system, which makes them easy to install and update. You can use the snap command to install snap packages. For example, to install the snap package for Nextcloud, a cloud storage service, you can run:
-
-
sudo snap install nextcloud
-
-
Using dpkg. dpkg is a low-level tool that can install software from Debian packages (.deb files), which are the format used by Ubuntu and other Debian-based distributions. You can use the dpkg command to install Debian packages that are not available in the official Ubuntu repositories or that you have downloaded from other sources. For example, to install a Debian package named package.deb that you have downloaded in your home directory, you can run:
-
-
sudo dpkg -i ~/package.deb
-
-
Adding repositories and PPAs. Repositories are online sources of software packages that are maintained by Ubuntu or other developers. PPAs (Personal Package Archives) are repositories that are hosted on Launchpad, a platform for hosting and developing open-source projects. You can add repositories and PPAs to your system to access more software packages that are not available in the official Ubuntu repositories. You can use the add-apt-repository command to add repositories and PPAs. For example, to add the PPA for LibreOffice, a free and open-source office suite, you can run:
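-
The commands could look like the following. This is a sketch: the PPA name shown here (ppa:libreoffice/ppa) is the commonly used LibreOffice PPA, but you should confirm the exact name on Launchpad before adding it.
-
# Add the PPA, refresh the package lists, then install the software from it
sudo add-apt-repository ppa:libreoffice/ppa
sudo apt update
sudo apt install libreoffice
-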
Installing from source code or binary files. Some software packages are not available in any repository or PPA, but are distributed as source code or binary files. Source code is the original code that is written by the developers, and binary files are the executable files that are compiled from the source code. You can install software from source code or binary files by following the instructions provided by the developers. For example, to install a software package named package.tar.gz that you have downloaded in your home directory, you can run:
-
-
tar xzf ~/package.tar.gz cd package ./configure make sudo make install
-
Note that installing software from source code or binary files may require additional dependencies, libraries, or tools that are not installed by default on your system. You should also be careful about the source and quality of the software that you install from these methods, as they may not be verified or supported by Ubuntu or Canonical.
-
How to Secure Ubuntu Server?
-
Security is an important aspect of any server operating system, and Ubuntu Server is no exception. You should follow some best practices and tips to harden your server security and protect it from unauthorized access, attacks, or breaches. Here are some of them:
-
-
Setting up a strong password and SSH keys. You should use a strong and unique password for your user account and your sudo password. A strong password should be at least 8 characters long and contain a mix of uppercase and lowercase letters, numbers, and symbols. You should also use SSH keys instead of passwords to authenticate yourself when connecting to your server via SSH. SSH keys are more secure and convenient than passwords, as they are based on cryptography and do not require typing or remembering. You can generate SSH keys using the ssh-keygen command and copy them to your server using the ssh-copy-id command.
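-
For example, you could generate a key pair on your own computer and copy the public key to the server like this (the username and address are placeholders):
-
# Generate an Ed25519 key pair on your local machine
ssh-keygen -t ed25519
# Copy the public key to your account on the server
ssh-copy-id alice@server-ip-address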
-
Disabling root login and unused services. You should disable root login on your server, as it is a common target for attackers who want to gain full control of your system. You can disable root login by editing the /etc/ssh/sshd_config file and setting PermitRootLogin to no. You should also disable any services that you do not use or need on your server, as they may pose security risks or consume resources unnecessarily. You can disable services using the systemctl command.
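-
For example, after editing /etc/ssh/sshd_config you need to restart the SSH service for the change to take effect, and you can turn off a service you do not need with systemctl (apache2 below is only an example of a service name):
-
# Apply the PermitRootLogin change by restarting the SSH service
sudo systemctl restart ssh
# Stop a service now and prevent it from starting at boot
sudo systemctl disable --now apache2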
-
Installing fail2ban and ufw. Fail2ban is a tool that monitors your server logs and bans IP addresses that show malicious signs, such as repeated failed login attempts or suspicious requests. Ufw is a tool that manages your firewall rules and allows or denies incoming or outgoing traffic based on predefined or custom criteria. You can install fail2ban and ufw using the apt command and configure them according to your needs.
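-
A minimal sketch of getting both tools in place, assuming the default fail2ban configuration is an acceptable starting point:
-
# Install both tools
sudo apt install fail2ban ufw
# Start fail2ban now and at every boot
sudo systemctl enable --now fail2ban
# Allow SSH before enabling the firewall so you do not lock yourself out
sudo ufw allow ssh
sudo ufw enable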
-
Applying security updates and patches. You should keep your system and software up to date with the latest security updates and patches from Ubuntu and Canonical. Security updates and patches fix vulnerabilities and bugs that may compromise your server security or performance. You can apply security updates and patches using the apt command or enabling automatic updates.
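-
For example, you can apply pending updates manually and then turn on unattended upgrades so that security patches are installed automatically (this assumes you are happy with the default unattended-upgrades settings):
-
# Apply all pending updates once
sudo apt update && sudo apt upgrade -y
# Install and enable automatic security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades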
-
-
How to Monitor Ubuntu Server?
-
Monitoring is another important aspect of any server operating system, as it allows you to check your server performance and health, identify issues or bottlenecks, optimize resources, and troubleshoot problems. You can use various tools and methods to monitor your server, such as commands, software, logs, and alerts. Here are some examples:
-
-
Using commands. You can use various commands to monitor different aspects of your server, such as CPU usage, memory usage, disk usage, network activity, processes, etc. Some of these commands are top, htop, vmstat, iostat, sar, etc. For example, to monitor CPU usage in real time, you can run:
-
-
top
-
-
To monitor disk usage in human-readable format, you can run:
-
-
df -h
-
-
Installing monitoring software. You can also install monitoring software on your server that can provide more comprehensive and graphical information about your server performance and health. Some of these software are Nagios, Zabbix, Munin, etc. For example, to install Nagios on your server, you can run:
-
-
sudo apt install nagios4
-
-
You can then access the Nagios web interface at http://server-ip-address/nagios4 and view various metrics and graphs about your server.
-
Checking logs and alerts. Logs are files that record various events and activities that occur on your server, such as errors, warnings, messages, requests, etc. You can check the logs on your server to find out what is happening on your system and troubleshoot any issues. Logs are usually stored in the /var/log directory. You can use the tail, grep, or less commands to view or search the logs. For example, to view the last 10 lines of the syslog file, you can run:
-
-
sudo tail -n 10 /var/log/syslog
-
-
Alerts are notifications that inform you of any critical or abnormal events or conditions that occur on your server, such as high CPU load, low disk space, failed services, etc. You can set up alerts on your server using various tools or methods, such as email, SMS, webhooks, etc. For example, to set up email alerts for disk space usage on your server, you can use a tool such as https://github.com/alexanderepstein/Bash-Snippets/tree/master/diskusagereport.
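-
As an illustration of the idea, here is a tiny disk-space check you could run from cron. It is only a sketch: it assumes a working mail command (for example from the mailutils package), and the address is a placeholder.
-
# Email a warning when the root filesystem is more than 90% full
usage=$(df / --output=pcent | tail -1 | tr -dc '0-9')
if [ "$usage" -gt 90 ]; then
  echo "Disk usage on / is at ${usage}%" | mail -s "Disk space alert" admin@example.com
fi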
-
-
How to Troubleshoot Ubuntu Server?
-
Sometimes, you may encounter problems or issues on your server that affect its functionality or performance. You should be able to troubleshoot these problems and find solutions or workarounds to fix them. You can use various tools and methods to troubleshoot your server, such as recovery mode, live CD, network tools, disk tools, backups, snapshots, online help, or support. Here are some examples:
-
-
Booting into recovery mode or using a live CD. Recovery mode is a special boot option that allows you to access your server with minimal settings and perform basic tasks, such as repairing file systems, resetting passwords, or updating packages. You can boot into recovery mode by selecting Advanced options for Ubuntu from the GRUB menu and then choosing the recovery mode option. Live CD is a bootable media that contains a live version of Ubuntu that you can run without installing it on your disk. You can use a live CD to access your server and perform various tasks, such as copying files, editing configurations, or running diagnostics. You can create a live CD using the same methods that you used to create a bootable media for installing Ubuntu Server.
-
Fixing network connectivity or disk errors. Network connectivity or disk errors are common causes of server problems that prevent you from accessing or using your server properly. You can use various network tools or disk tools to fix these errors. For example, to fix network connectivity errors, you can use the ping, traceroute, or nslookup commands to test your network connection and resolve any issues. To fix disk errors, you can use the fsck command to check and repair your file systems or the badblocks command to scan and mark bad sectors on your disk.
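-
For example, to check a filesystem you would first make sure it is not mounted, then run fsck on it; the device names below are placeholders:
-
# Check and repair a filesystem (the partition must be unmounted first)
sudo umount /dev/sdb1
sudo fsck -f /dev/sdb1
# Read-only scan of a whole disk for bad sectors
sudo badblocks -sv /dev/sdb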
-
Restoring backups or snapshots. Backups or snapshots are copies of your server data or state that you can use to restore your server in case of any problems or disasters. You should always have backups or snapshots of your server and store them in a safe and separate location. You can use various tools or methods to create and restore backups or snapshots of your server. For example, to create a backup of your home directory using the rsync command, you can run:
-
-
rsync -avz /home/username /backup/location
-
-
To restore a snapshot of your server using the LVM command, you can run:
-
-
lvconvert --merge /dev/vgname/snapshot
-
-
Searching for help online or contacting support. If you cannot find a solution or workaround for your server problem by yourself, you can search for help online or contact support. You can find many online resources and communities that offer help and advice for Ubuntu Server users and administrators. Some of these resources are:
You can also contact support from Canonical, the company behind Ubuntu, if you have a paid subscription or contract with them. You can find more information about Canonical's support services at https://ubuntu.com/support.
-
Comparison of Ubuntu Server with Other Server Operating Systems
-
Ubuntu Server is not the only server operating system available in the market. There are other server OSes that you can choose from, depending on your preferences, requirements, and budget. Some of these server OSes are CentOS, Debian, or Windows Server. Each of these server OSes has its own advantages and disadvantages, and you should compare them carefully before making a decision. Here are some of the factors that you should consider when comparing Ubuntu Server with other server OSes:
-
-
Features. You should compare the features and capabilities of each server OS, such as supported architectures and platforms, available software and packages, security and stability, performance and scalability, etc. You should also check if the server OS has any unique or exclusive features that may suit your needs better than others.
-
Performance. You should compare the performance and efficiency of each server OS, such as CPU usage, memory usage, disk usage, network throughput, etc. You should also check how well the server OS can handle various workloads and scenarios, such as high traffic, high concurrency, high availability, etc.
-
Compatibility. You should compare the compatibility and interoperability of each server OS with your existing hardware and software solutions. You should check if the server OS can run on your hardware configuration and support your devices and drivers. You should also check if the server OS can work with your software applications and services, such as databases, web servers, mail servers, etc.
-
Support. You should compare the support and maintenance of each server OS from the developers and vendors. You should check how often the server OS receives updates and patches, how long the server OS is supported for, how easy it is to get help and assistance, how much it costs to get support, etc.
-
-
Here is a table that summarizes some of the main differences between Ubuntu Server and other server OSes:
-
-
| Server OS | Features | Performance | Compatibility | Support |
| --- | --- | --- | --- | --- |
| Ubuntu Server | Linux-based; supports various architectures and platforms; offers scalability, security, and performance for various workloads; compatible with many open-source and proprietary software and hardware solutions; regular security updates and long-term support releases; large and active community of users and developers | Runs efficiently on low-end hardware; minimal installation option that only installs the essential packages and services; can scale out to meet your precise needs; supports various cloud platforms | Supports x86, x86_64, ARM, and PowerPC architectures; supports many devices and drivers; works with many software applications and services; large repository of packages installable with apt or snap | Frequent and timely security updates from the Ubuntu community and Canonical; LTS releases supported for five years with security and maintenance updates; free online resources and communities for help and advice; paid support services from Canonical for enterprise users |
| CentOS | Linux-based; based on Red Hat Enterprise Linux (RHEL); offers stability, reliability, and security for enterprise workloads; compatible with many open-source and proprietary software and hardware solutions; regular security updates and long-term support releases; large and active community of users and developers | Runs efficiently on low-end hardware; minimal installation option that only installs the essential packages and services; can scale out to meet your precise needs; supports various cloud platforms | Supports the x86_64 architecture only; supports many devices and drivers; works with many software applications and services; large repository of packages installable with yum or dnf | Frequent and timely security updates from the CentOS community and Red Hat; long-term releases supported for 10 years with security and maintenance updates; free online resources and communities for help and advice; paid support services from Red Hat for enterprise users |
| Debian | Linux-based; one of the oldest and most popular distributions of Linux; offers stability, security, and versatility for various workloads; compatible with many open-source and proprietary software and hardware solutions; regular security updates and long-term support releases; large and active community of users and developers | Runs efficiently on low-end hardware; minimal installation option that only installs the essential packages and services; can scale out to meet your precise needs; supports various cloud platforms | Supports x86, x86_64, ARM, and PowerPC architectures; supports many devices and drivers; works with many software applications and services; large repository of packages installable with apt or dpkg | Frequent and timely security updates from the Debian community and the Debian Security Team; LTS releases supported for five years with security and maintenance updates; free online resources and communities for help and advice; paid support services from third-party vendors for enterprise users |
| Windows Server | Windows-based; the server version of the Windows operating system; offers compatibility, functionality, and integration for enterprise workloads; compatible with many proprietary software and hardware solutions; regular security updates and long-term support releases; large and active community of users and developers | Requires more hardware resources than Linux-based server OSes; has a graphical user interface (GUI) that can be installed or removed as needed; can scale up to meet your precise needs; supports various cloud platforms | Supports the x86_64 architecture only; supports many devices and drivers; works with many software applications and services; large repository of packages installable with Windows Update or PowerShell | Frequent and timely security updates from Microsoft; LTS releases supported for 10 years with security and maintenance updates; free online resources and communities for help and advice; paid support services from Microsoft for enterprise users |
-
-
Resources for Learning More About Ubuntu Server
-
If you want to learn more about Ubuntu Server, you can find many websites, books, courses, and podcasts that can help you. Here are some of them:
-
-
Websites. You can visit the following websites to find more information, tutorials, guides, tips, tricks, news, and reviews about Ubuntu Server:
Books. You can read the following books to learn more about Ubuntu Server:
-
-
-
The Official Ubuntu Server Book (3rd Edition) by Kyle Rankin and Benjamin Mako Hill. This book covers everything you need to know about installing, configuring, managing, securing, monitoring, troubleshooting, and upgrading Ubuntu Server. It also includes practical examples and exercises to help you master Ubuntu Server skills.
-
Ubuntu Unleashed 2019 Edition: Covering 18.04, 18.10, 19.04 (13th Edition) by Matthew Helmke. This book covers everything you need to know about using Ubuntu as a desktop or server operating system. It also includes tips and tricks for advanced users and developers.
-
Mastering Ubuntu Server by Jay LaCroix. This book shows you how to deploy, configure, manage, and troubleshoot Ubuntu Server in real-world environments.
-
How to Migrate Your Server from Another OS to Ubuntu Server?
-
If you want to move an existing server from another operating system to Ubuntu Server, you can follow these general steps:
-
Back up your data and settings on your old server. Before migrating, back up all the data, configurations, and applications that you want to keep. You should also verify the integrity and completeness of the backup and store it in a safe and separate location.
-
Install Ubuntu Server on your new server. You can follow the steps that we have described in this article to download, create a bootable media, and install Ubuntu Server on your new server. You should choose the same or compatible architecture and version as your old server.
-
Restore your data and settings on your new server. You can use the tools or methods that are available for Ubuntu Server to restore your data and settings from your backup. You may need to convert or adapt some of the data or settings to make them compatible with Ubuntu Server. You may also need to install additional software or packages on your new server to run your applications or services.
-
Test and verify your new server. You should test and verify that your new server is working properly and that all your data and settings are transferred correctly. You should also check for any errors or issues that may arise during or after the migration process. You should also update and secure your new server as needed.
-
-
Migrating your server from another OS to Ubuntu Server can be a challenging and time-consuming task, but it can also bring many benefits, such as improved performance, security, compatibility, and support. You should plan and prepare carefully before migrating your server and follow the best practices and tips that we have provided in this article.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Epic Heroes War The Ultimate Fighting Game with Premium Mod APK and AN1 Features.md b/spaces/1phancelerku/anime-remove-background/Epic Heroes War The Ultimate Fighting Game with Premium Mod APK and AN1 Features.md
deleted file mode 100644
index b38d0c6297e138deaf86ca0bd8714c5b127e1084..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Epic Heroes War The Ultimate Fighting Game with Premium Mod APK and AN1 Features.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-
Epic Heroes War Premium Mod Apk An1: A Guide for Gamers
-
Are you a fan of epic battles, legendary heroes, and thrilling adventures? If so, you might want to check out Epic Heroes War, a real-time strategy and role-playing game that lets you create your own army of heroes and fight against various enemies. Epic Heroes War is a fun and addictive game that offers hours of entertainment and challenge. But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money and gems, unlock all heroes and items, and remove annoying ads? Well, there is a way to do that, and it is called Epic Heroes War premium mod apk an1.
Epic Heroes War premium mod apk an1 is a modified version of the original game that gives you access to all the features and benefits that you would normally have to pay for or earn through hard work. With Epic Heroes War premium mod apk an1, you can enjoy the game to the fullest and dominate the battlefield with ease. In this article, we will tell you everything you need to know about Epic Heroes War premium mod apk an1, including its features, benefits, how to download and install it, and how it compares to the original game. So, if you are ready to take your gaming experience to the next level, read on!
-
Features of Epic Heroes War
-
Before we dive into the details of Epic Heroes War premium mod apk an1, let us first review some of the features of Epic Heroes War that make it such a great game. Epic Heroes War is a unique game that combines real-time strategy and role-playing elements in a single gameplay. Here are some of the features that you can enjoy in Epic Heroes War:
-
Real-time Strategy and Role-playing Game
-
Epic Heroes War is not your typical RPG or RTS game. It is a hybrid of both genres that offers a dynamic and exciting gameplay. In Epic Heroes War, you can create your own army of heroes from over 60 different characters, each with their own skills, abilities, and stats. You can also equip them with various items and weapons to enhance their performance. You can then lead your army into battle against other players or AI enemies in different modes and scenarios. You can also control your heroes individually or as a group, depending on your strategy and preference. You can also upgrade your heroes and items as you progress through the game.
-
Over 60 Heroes and Items
-
Epic Heroes War offers a wide variety of heroes and items for you to choose from. You can find heroes from different categories, such as warriors, mages, archers, assassins, support, etc. You can also find heroes from different mythologies and legends, such as Zeus, Thor, Hercules, Odin, Loki, etc. Each hero has their own unique skills and abilities that can be activated during battle. You can also equip your heroes with different items and weapons that can boost their stats and effects. Some of the items include swords, axes, hammers, bows, daggers, shields, armors, rings, etc.
-
Online PvP Battles and Co-op Modes
-
Epic Heroes War allows you to compete or cooperate with other players online in various modes and challenges. You can join or create a guild with other players and participate in guild wars, raids, tournaments, etc. You can also challenge other players in 1v1 or 5v5 PvP battles and climb the leaderboards. You can also team up with other players in co-op modes and fight against powerful bosses or waves of enemies.
-
Campaigns, Quests, and Bosses
-
Besides online battles, Epic Heroes War also offers single-player campaigns, daily quests, and epic boss fights that let you earn coins, gems, and items as you progress.
-
Benefits of Epic Heroes War Premium Mod Apk An1
-
Now that you know some of the features of Epic Heroes War, you might be wondering why you should use Epic Heroes War premium mod apk an1 instead of the original game. Well, there are many benefits that Epic Heroes War premium mod apk an1 can offer you, such as:
-
-
Unlimited Money and Gems
-
One of the main benefits of Epic Heroes War premium mod apk an1 is that it gives you unlimited money and gems, which are the main currencies in the game. With unlimited money and gems, you can buy any hero, item, upgrade, or anything else that you want in the game. You don't have to worry about running out of resources or spending real money to get them. You can enjoy the game without any limitations or restrictions.
-
Unlocked All Heroes and Items
-
Another benefit of Epic Heroes War premium mod apk an1 is that it unlocks all heroes and items for you to use. You don't have to wait for a certain level or complete a certain quest to unlock them. You can access all heroes and items from the start of the game and choose your favorite ones to create your army. You can also try different combinations and strategies with different heroes and items.
-
No Ads and No Root Required
-
A third benefit of Epic Heroes War premium mod apk an1 is that it removes annoying ads and does not require root access to install. Ads can be very distracting and irritating when you are playing a game, especially when they pop up in the middle of a battle or a cutscene. Epic Heroes War premium mod apk an1 eliminates all ads from the game, so you can enjoy it without any interruptions or delays. Moreover, Epic Heroes War premium mod apk an1 does not require root access to install, which means you don't have to risk damaging your device or voiding your warranty to use it. You can install it safely and easily on any Android device.
-
Easy to Download and Install
-
A fourth benefit of Epic Heroes War premium mod apk an1 is that it is easy to download and install. You don't need any special skills or knowledge to use it. All you need to do is follow a few simple steps that we will explain in the next section. You can download and install Epic Heroes War premium mod apk an1 in a matter of minutes and start playing right away.
How to Download and Install Epic Heroes War Premium Mod Apk An1
-
Step 1: Enable Unknown Sources on Your Device
-
The first step you need to do is to enable the installation of apps from unknown sources on your device. To do this, go to your device settings, tap on Security or Privacy, and turn on the option that allows installing apps from outside the Google Play Store.
-
Step 2: Download Epic Heroes War Premium Mod Apk An1 File
-
The second step you need to do is to download Epic Heroes War premium mod apk an1 file from a trusted source. You can use the link below to download the file from our website. The file is safe and virus-free, so you don't have to worry about any harm to your device. The file size is about 100 MB, so make sure you have enough space and a stable internet connection before downloading it.
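If you download the file on a computer first and then transfer it to your phone, the following purely illustrative Python sketch shows one way to fetch a large file in chunks; the URL and filename are placeholders, not the real download link:
```python
import requests

# Placeholder URL and filename; substitute the actual link from the download page.
url = "https://example.com/epic-heroes-war-premium-mod-an1.apk"
target = "epic-heroes-war-premium-mod-an1.apk"

with requests.get(url, stream=True, timeout=30) as response:
    response.raise_for_status()
    with open(target, "wb") as out_file:
        for chunk in response.iter_content(chunk_size=1 << 20):
            out_file.write(chunk)

print(f"Saved {target}")
```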
Step 3: Locate and Install Epic Heroes War Premium Mod Apk An1 File
-
The third step you need to do is to locate and install Epic Heroes War premium mod apk an1 file on your device. To do this, you need to use a file manager app that can access the downloaded files on your device. You can use any file manager app that you prefer, but we recommend using ES File Explorer, which is a popular and reliable app that you can download from the Google Play Store. Once you have the file manager app, follow these steps:
-
-
Open the file manager app and go to the folder where you downloaded Epic Heroes War premium mod apk an1 file. It is usually in the Downloads folder, but it might be in a different location depending on your settings.
-
Tap on the Epic Heroes War premium mod apk an1 file and select Install. You might see a pop-up message asking for your permission to install the app. Just tap on Install again and wait for the installation process to finish.
-
Once the installation is done, you will see a message saying that the app has been installed successfully. You can then tap on Open to launch the app or Done to exit the installer.
-
-
Step 4: Launch Epic Heroes War Premium Mod Apk An1 and Enjoy
-
The fourth and final step is to launch Epic Heroes War premium mod apk an1 from your app drawer or home screen. You can then start playing right away and enjoy the unlimited money and gems, unlocked heroes and items, and ad-free gameplay described above.
-
Comparison of Epic Heroes War Original Game and Premium Mod Apk An1
-
To give you a better idea of how Epic Heroes War premium mod apk an1 differs from the original game, we have created a table that compares the two versions in terms of some key aspects. You can see the table below and decide for yourself which version suits you better.
-
-
-
Aspect
-
Original Game
-
Premium Mod Apk An1
-
-
-
Money
-
Limited and earned through gameplay or in-app purchases
-
Unlimited and free
-
-
-
Gems
-
Limited and earned through gameplay or in-app purchases
-
Unlimited and free
-
-
-
Heroes
-
Limited and unlocked through gameplay or in-app purchases
-
Unlocked and free
-
-
-
Items
-
Limited and unlocked through gameplay or in-app purchases
-
Unlocked and free
-
-
-
Ads
-
Present and annoying
-
Removed and no interruptions
-
-
-
Root
-
Not required but recommended for some features
-
Not required and not recommended
-
-
-
Installation
-
Easy from Google Play Store but requires updates and permissions
-
Easy from trusted source but requires unknown sources enabled
-
-
-
Safety
-
Safe and virus-free but may contain bugs or glitches
-
Safe and virus-free but may not be compatible with some devices or versions
-
Conclusion
-
Epic Heroes War is a fantastic game that combines real-time strategy and role-playing elements in a unique and exciting gameplay. You can create your own army of heroes and fight against various enemies in different modes and scenarios. You can also enjoy the game with other players online in PvP battles and co-op modes. However, if you want to have more fun and freedom in the game, you should try Epic Heroes War premium mod apk an1, which gives you unlimited money and gems, unlocks all heroes and items, removes ads, and does not require root access. You can download and install Epic Heroes War premium mod apk an1 easily and safely from our website and enjoy its features right away. So, what are you waiting for? Download Epic Heroes War premium mod apk an1 now and unleash your epic power!
-
FAQs
-
Here are some of the frequently asked questions related to Epic Heroes War premium mod apk an1:
-
-
Q: Is Epic Heroes War premium mod apk an1 safe to use?
-
A: Yes, Epic Heroes War premium mod apk an1 is safe to use, as long as you download it from our website, which is a trusted source. We have tested the file and found no viruses or malware in it. However, you should always be careful when downloading and installing any app from unknown sources, as they may contain harmful or malicious content.
-
Q: Is Epic Heroes War premium mod apk an1 compatible with my device or version?
-
A: Epic Heroes War premium mod apk an1 is compatible with most Android devices and versions, as it is based on the latest version of the original game. However, there may be some cases where Epic Heroes War premium mod apk an1 may not work properly or cause some issues on your device or version. If that happens, you can try uninstalling and reinstalling the app, clearing the cache and data, or contacting us for support.
-
Q: How can I update Epic Heroes War premium mod apk an1?
-
A: Epic Heroes War premium mod apk an1 is updated regularly to keep up with the changes and improvements of the original game. You can check our website for the latest version of Epic Heroes War premium mod apk an1 and download it from there. You can also enable notifications on our website to get notified when a new version is available.
-
Q: How can I contact you for feedback or support?
-
A: We appreciate your feedback and support for Epic Heroes War premium mod apk an1. You can contact us through our website or email us at [email protected] We will try to respond to your queries as soon as possible.
-
Q: Can I share Epic Heroes War premium mod apk an1 with my friends?
-
A: Yes, you can share Epic Heroes War premium mod apk an1 with your friends, as long as you do not modify or distribute it for commercial purposes. You can also invite your friends to join you in playing Epic Heroes War premium mod apk an1 online and have more fun together.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Pooking - Billiards City a Stunning Pool Game.md b/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Pooking - Billiards City a Stunning Pool Game.md
deleted file mode 100644
index 72ce4631fb9ac4332352823c1cee290b091e51a2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of Pooking - Billiards City a Stunning Pool Game.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
Pooking Billiards City: A Modern Arcade Style Pool Game
-
Introduction
-
If you love a relaxed game of 8 ball, then you will surely enjoy Pooking Billiards City, a modern arcade style pool game with single player mode. In this game, you can challenge yourself with various levels of difficulty, from beginner to pro, and hone your skills in the most realistic and exciting billiards simulator. You can also customize your cue stick and ball with different designs and colors, and unlock new city bars and trophies as you progress. In this article, we will tell you more about Pooking Billiards City, its features, tips and tricks, and how to download it for free on your Android device.
-
Features of Pooking Billiards City
-
Amazing Single Player Mode
-
Pooking Billiards City has a single player mode that lets you play at your own pace and style. You can choose from four different game modes: Classic, Combo, Challenge, and Practice. In Classic mode, you have to clear all the balls on the table in any order. In Combo mode, you have to make consecutive shots without missing. In Challenge mode, you have to complete specific tasks within a time limit. In Practice mode, you can practice your shots without any pressure or rules.
Stunning Graphics and Realistic Ball Physics
-
Pooking Billiards City uses the latest technology to create the most realistic and exciting billiards simulator. The game has stunning HD graphics that make the balls and decals look lifelike. The game also has fantastic playability and ultra realistic ball physics that simulate the real movement and collision of the balls. You can experience pool like never before, thanks to the amazing visual and sound effects of Pooking Billiards City.
-
Smooth and Intuitive Controls
-
Pooking Billiards City has smooth and intuitive controls that make the game easy to play for anyone. You can use the slider on the right side of the screen to adjust the direction of the cue stick, and the power bar on the left side of the screen to adjust the strength of the shot. You can also use the spin buttons on the bottom of the screen to add some spin to the cue ball, which can help you make more accurate and creative shots. The game also has a touch control for moving the stick, which gives you more precision and flexibility.
-
Stylish and Diverse Levels
-
Pooking Billiards City has stylish and diverse levels that will challenge your skills and keep you entertained. The game has over 1000 levels that vary in difficulty, layout, and background. You can play in cool and funky looking billiard tables that are set in different city bars, such as New York, London, Paris, Tokyo, Dubai, and more. You can also win trophies and become the acclaimed Pooking Billiards City Champion as you complete each level.
-
Cool and Funky Cue Sticks and Balls
-
Pooking Billiards City has cool and funky cue sticks and balls that you can customize according to your preference. You can choose from a variety of designs and colors for your cue stick and ball, such as stars, stripes, hearts, skulls, flames, and more. You can also unlock new cue sticks and balls as you play and earn coins. You can use the coins to buy more items from the shop, or use them to play higher stakes games. You can also watch ads to get free coins and items. Pooking Billiards City lets you express your personality and style with your cue stick and ball.
-
Tips and Tricks for Pooking Billiards City
-
Take As Much Time As You Need
-
Pooking Billiards City is a relaxing game that does not have a timer or a limit on the number of shots you can take. You can take as much time as you need to plan and execute your shots, without any pressure or stress. You can also pause the game anytime you want, and resume it later. This way, you can enjoy the game at your own pace and convenience.
-
-
Lower Power Is Often Better
-
Pooking Billiards City has a power bar that lets you adjust the strength of your shot. However, you don't always need to use the maximum power to make a good shot. In fact, lower power is often better, especially when you are aiming for a precise or delicate shot. Lower power can help you avoid overshooting, missing, or scratching the cue ball. It can also help you control the direction and speed of the cue ball and the object balls better.
-
Always Keep The Next Shot In Mind
-
Pooking Billiards City is not just about making the current shot, but also about setting up the next shot. You should always keep the next shot in mind when you are playing, and try to position the cue ball in a favorable spot for it. You should also try to clear any obstacles or clusters that might block your next shot. By thinking ahead, you can make your game easier and smoother.
-
Learn To Utilize Spins
-
Pooking Billiards City has spin buttons that let you add some spin to the cue ball, which can change its direction and speed after hitting an object ball or a cushion. You can use spins to make more accurate and creative shots, such as curve shots, bank shots, cut shots, and more. You can also use spins to avoid scratches or fouls, or to get out of tricky situations. Spins can be very useful and fun to use, but they also require some practice and skill to master.
-
Break Balls Apart When You Can
-
Pooking Billiards City sometimes has levels where the balls are clustered together in a tight formation, which makes it harder to clear them. In such cases, you should try to break them apart when you can, by using a powerful shot or a spin shot. Breaking balls apart can create more space and opportunities for making shots, and also increase your chances of potting multiple balls in one shot.
-
How to Download Pooking Billiards City APK for Free
-
What is an APK file?
-
An APK file is an Android Package file that contains all the files and data needed to install an app on an Android device. APK files are usually used to distribute apps that are not available on the Google Play Store, or to update apps before they are officially released on the Play Store. APK files can also be used to install apps on devices that do not have access to the Play Store, such as some regions or countries where Google services are restricted.
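Because an APK is just a ZIP archive with a specific layout, you can peek inside one with Python's standard zipfile module; the filename below is a placeholder for whatever file you actually downloaded:
```python
import zipfile

# Placeholder filename; point this at the APK you downloaded.
apk_path = "pooking-billiards-city.apk"

with zipfile.ZipFile(apk_path) as apk:
    names = apk.namelist()
    print("Contains AndroidManifest.xml:", "AndroidManifest.xml" in names)
    for name in names[:10]:  # show the first few entries
        print(name)
```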
-
Where can you download Pooking Billiards City APK for free?
-
There are many websites that offer free APK files for various apps, including Pooking Billiards City. However, not all of them are safe and reliable, as some of them may contain viruses, malware, or fake apps that can harm your device or steal your data. Therefore, you should be careful and cautious when downloading APK files from unknown sources. You should always check the reviews, ratings, comments, and permissions of the APK file before downloading it.
-
One of the safest and most trusted websites that we recommend for downloading Pooking Billiards City APK for free is [APKPure]. APKPure is a popular and reputable platform that provides high-quality and verified APK files for thousands of apps and games. You can download Pooking Billiards City APK from APKPure by following these steps:
- Go to [APKPure] website on your browser.
- Search for Pooking Billiards City in the search bar.
- Click on the Pooking Billiards City icon from the search results.
- Click on the Download APK button on the app page.
- Wait for the download to finish.
How to install Pooking Billiards City APK on your Android device?
After downloading the Pooking Billiards City APK file from APKPure, you need to install it on your Android device. To do that, you need to enable the installation of apps from unknown sources on your device settings. This is a security measure that prevents unauthorized or harmful apps from being installed on your device. You can enable this option by following these steps:
- Go to your device settings and tap on Security or Privacy.
- Find and tap on the option that says Unknown Sources or Install Unknown Apps.
- Turn on the toggle or check the box that allows the installation of apps from unknown sources.
- Confirm your choice if prompted.
Once you have enabled this option, you can install the Pooking Billiards City APK file by following these steps:
- Locate the Pooking Billiards City APK file on your device storage, either in the Downloads folder or in the APKPure folder.
- Tap on the Pooking Billiards City APK file to open it.
- Tap on the Install button on the screen.
- Wait for the installation to finish.
- Tap on the Open button to launch the game.
Conclusion
-
Pooking Billiards City is a modern arcade style pool game that offers a realistic and exciting billiards experience. You can play in various game modes, levels, and locations, and customize your cue stick and ball. You can also download Pooking Billiards City APK for free from APKPure, and install it on your Android device easily. Pooking Billiards City is a fun and relaxing game that you can enjoy anytime and anywhere.
-
FAQs
-
Here are some frequently asked questions about Pooking Billiards City:
-
-
Is Pooking Billiards City free to play?
-
Yes, Pooking Billiards City is free to play, but it contains ads and in-app purchases. You can watch ads to get free coins and items, or buy them with real money. You can also remove ads by paying a small fee.
-
Can I play Pooking Billiards City offline?
-
Yes, you can play Pooking Billiards City offline, as long as you have downloaded and installed the game on your device. However, some features may not be available offline, such as watching ads or updating the game.
-
Can I play Pooking Billiards City with friends?
-
No, Pooking Billiards City does not have a multiplayer mode or a social feature. You can only play against the computer in single player mode.
-
How can I update Pooking Billiards City?
-
If you have downloaded Pooking Billiards City from the Google Play Store, you can update it automatically or manually through the Play Store app. If you have downloaded Pooking Billiards City APK from APKPure, you can update it by downloading and installing the latest version of the APK file from APKPure.
-
Is Pooking Billiards City safe to download and install?
-
Yes, Pooking Billiards City is safe to download and install, as long as you download it from a trusted source, such as the Google Play Store or APKPure. You should also scan the APK file with an antivirus app before installing it, and enable the installation of apps from unknown sources only for trusted apps.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience the thrill of Indian Bikes Driving 3D - Free APK Download.md b/spaces/1phancelerku/anime-remove-background/Experience the thrill of Indian Bikes Driving 3D - Free APK Download.md
deleted file mode 100644
index 1d4fbee02608766fe98cbc36d1c0ed2653c6892e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience the thrill of Indian Bikes Driving 3D - Free APK Download.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
Download Indian Bike Driving 3D APK: A Realistic Motorcycle Simulator for Android
-
If you are a fan of motorcycles and driving games, you should definitely check out Indian Bike Driving 3D APK. This is a free realistic motorcycle simulator for Android devices that lets you drive the most famous bikes in the world through challenging roads. You can also customise your bike's appearance, with many different challenges to complete. In this article, we will tell you everything you need to know about this amazing game, including its features, how to download and install it, tips and tricks for playing it, and its pros and cons.
Features of Indian Bike Driving 3D APK
-
Indian Bike Driving 3D APK has many features that make it one of the best motorcycle simulators for Android. Here are some of them:
-
-
A variety of bikes to choose from: You can drive more than 50 different bikes in this game, including famous brands like Yamaha, Ducati, Kawasaki, KTM, and more. You can also unlock more bikes by using cheat codes or collecting coins.
-
Customise your bike's appearance: You can change the color, wheels, stickers, and accessories of your bike to make it look unique. You can also upgrade your bike's performance by improving its speed, acceleration, handling, and braking.
-
Drive through realistic environments: You can explore various realistic environments in this game, such as city streets, highways, deserts, mountains, forests, and more. You can also experience different weather conditions and time of day.
-
Complete different challenges: You can test your skills as a driver by completing different challenges in this game, such as racing against other bikers, performing stunts, escaping from police, delivering cargo, and more. You can also play in different modes and levels to increase the difficulty and fun.
-
Use cheat codes to unlock more fun: You can use cheat codes in this game to unlock more features and items, such as cars, planes, helicopters, horses, dogs, cycles, monster trucks, and more. You can also use cheat codes to activate super jump, slow motion, moon gravity, skyfall, infinity health, and more.
-
-
How to Download and Install Indian Bike Driving 3D APK
-
If you want to download and install Indian Bike Driving 3D APK on your Android device, you need to follow these simple steps:
-
-
Step 1: Download the APK file from a trusted source: You can download the APK file of this game from a trusted source like [APKCombo] or [FileHippo]. Make sure you download the latest version of the game for better performance and compatibility (see the checksum sketch after these steps).
-
Step 2: Enable unknown sources on your device: Before you install the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings, then security, then unknown sources, and turn it on.
-
Step 3: Install the APK file and launch the game: After you download the APK file, locate it on your device's storage and tap on it to install it. Follow the instructions on the screen and wait for the installation to finish. Then, launch the game from your app drawer and enjoy.
-
-
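As a small, purely illustrative sketch, you can compute the SHA-256 checksum of the downloaded file with Python's hashlib and compare it against a checksum published on the download page, if one is provided; the filename below is a placeholder:
```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder filename; compare the printed hash with the one listed by the source.
print(sha256_of("indian-bike-driving-3d.apk"))
```
-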
Tips and Tricks for Playing Indian Bike Driving 3D APK
-
Indian Bike Driving 3D APK is a fun and addictive game, but it can also be challenging and frustrating at times. Here are some tips and tricks that can help you play better and have more fun:
-
-
Tip 1: Use the brake and accelerator buttons wisely: You can control your bike's speed by using the brake and accelerator buttons on the screen. You can also tilt your device to steer your bike. However, you need to be careful not to overuse or misuse these buttons, as they can affect your balance and stability. For example, if you brake too hard or too late, you might skid or crash. If you accelerate too much or too fast, you might lose control or flip over. Therefore, you need to use these buttons wisely and adjust them according to the road conditions and traffic.
-
Tip 2: Avoid crashing into other vehicles and obstacles: One of the main challenges of this game is to avoid crashing into other vehicles and obstacles on the road. If you crash, you will lose health and coins, and you might have to restart the level. Therefore, you need to be alert and aware of your surroundings, and avoid colliding with anything that might damage your bike or yourself. You can also use the horn button to warn other drivers of your presence.
-
Tip 3: Collect coins and fuel to upgrade your bike: As you drive through the roads, you will see coins and fuel cans scattered around. You can collect them by driving over them or by using a magnet power-up. Coins can be used to buy new bikes or upgrade your existing ones. Fuel can be used to refill your bike's tank when it runs low. Therefore, you should try to collect as many coins and fuel as possible, as they will help you improve your performance and enjoyment.
-
Tip 4: Try different modes and levels to test your skills: This game has different modes and levels that offer different challenges and rewards. You can choose from free ride mode, career mode, endless mode, time trial mode, parking mode, cargo mode, stunt mode, police chase mode, and more. You can also choose from easy, medium, hard, or extreme levels of difficulty. You should try different modes and levels to test your skills, have more fun, and earn more coins.
-
-
Pros and Cons of Indian Bike Driving 3D APK
-
Indian Bike Driving 3D APK is a great game for motorcycle enthusiasts and driving fans, but it also has some drawbacks that might affect your experience. Here are some of the pros and cons of this game:
-
-
-
Pros
-
Cons
-
-
-
- Free to download and play
-
- Contains ads that might be annoying or intrusive
-
-
-
- Realistic graphics and sound effects
-
- Has some bugs and glitches that might affect the gameplay
-
-
-
- Fun and addictive gameplay
-
- Requires internet connection to play online or access some features
-
-
-
- Easy to play but hard to master
-
- Might drain your battery or consume your data quickly
-
-
-
- Many bikes, environments, modes, levels, challenges, and features to enjoy
-
- Might not be compatible with some devices or Android versions
-
-
-
Conclusion
-
In conclusion, Indian Bike Driving 3D APK is a realistic motorcycle simulator for Android devices that lets you drive the most famous bikes in the world through challenging roads. You can also customise your bike's appearance, complete different challenges, use cheat codes to unlock more fun, and more. If you want to download and install this game on your device, you just need to follow the simple steps we mentioned above. You can also use our tips and tricks to play better and have more fun. However, you should also be aware of the pros and cons of this game before you decide to play it. We hope this article has helped you learn more about this game and how to download and play it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy driving!
-
-
FAQs
-
Here are some of the frequently asked questions about Indian Bike Driving 3D APK:
-
-
What is the size of Indian Bike Driving 3D APK?: The size of Indian Bike Driving 3D APK is about 50 MB, but it might vary depending on your device and Android version.
-
Is Indian Bike Driving 3D APK safe to download and install?: Yes, Indian Bike Driving 3D APK is safe to download and install, as long as you download it from a trusted source like [APKCombo] or [FileHippo]. However, you should always scan the APK file with an antivirus software before installing it, just to be sure.
-
How can I remove ads from Indian Bike Driving 3D APK?: Unfortunately, there is no official way to remove ads from Indian Bike Driving 3D APK, as they are part of the game's revenue source. However, you can try some unofficial methods, such as using an ad blocker app, turning off your internet connection, or using a modded version of the game. However, we do not recommend or endorse these methods, as they might violate the game's terms of service or cause other problems.
-
How can I get more coins in Indian Bike Driving 3D APK?: You can get more coins in Indian Bike Driving 3D APK by playing the game regularly, completing challenges, collecting coins on the road, using a magnet power-up, or using cheat codes. You can also buy more coins with real money, but we do not advise you to do so, as it might ruin the fun and challenge of the game.
-
What are some of the best cheat codes for Indian Bike Driving 3D APK?: Some of the best cheat codes for Indian Bike Driving 3D APK are:
-
CAR: Unlocks a car that you can drive instead of a bike
-
PLANE: Unlocks a plane that you can fly instead of a bike
-
HELICOPTER: Unlocks a helicopter that you can fly instead of a bike
-
HORSE: Unlocks a horse that you can ride instead of a bike
-
DOG: Unlocks a dog that follows you around
-
CYCLE: Unlocks a cycle that you can ride instead of a bike
-
MONSTER: Unlocks a monster truck that you can drive instead of a bike
-
SUPERJUMP: Activates super jump mode that lets you jump higher
-
SLOWMO: Activates slow motion mode that slows down time
-
MOONGRAVITY: Activates moon gravity mode that reduces gravity
-
SKYFALL: Activates skyfall mode that makes you fall from the sky
-
INFINITYHEALTH: Activates infinity health mode that makes you invincible
-
-To use these cheat codes, you need to type them in the chat box during the game. You can also turn them off by typing them again.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-html/anchor.min.js b/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-html/anchor.min.js
deleted file mode 100644
index 1c2b86faedde03cb3c3459e4900f7ef29125d94a..0000000000000000000000000000000000000000
--- a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-html/anchor.min.js
+++ /dev/null
@@ -1,9 +0,0 @@
-// @license magnet:?xt=urn:btih:d3d9a9a6595521f9666a5e94cc830dab83b65699&dn=expat.txt Expat
-//
-// AnchorJS - v4.3.1 - 2021-04-17
-// https://www.bryanbraun.com/anchorjs/
-// Copyright (c) 2021 Bryan Braun; Licensed MIT
-//
-// @license magnet:?xt=urn:btih:d3d9a9a6595521f9666a5e94cc830dab83b65699&dn=expat.txt Expat
-!function(A,e){"use strict";"function"==typeof define&&define.amd?define([],e):"object"==typeof module&&module.exports?module.exports=e():(A.AnchorJS=e(),A.anchors=new A.AnchorJS)}(this,function(){"use strict";return function(A){function d(A){A.icon=Object.prototype.hasOwnProperty.call(A,"icon")?A.icon:"",A.visible=Object.prototype.hasOwnProperty.call(A,"visible")?A.visible:"hover",A.placement=Object.prototype.hasOwnProperty.call(A,"placement")?A.placement:"right",A.ariaLabel=Object.prototype.hasOwnProperty.call(A,"ariaLabel")?A.ariaLabel:"Anchor",A.class=Object.prototype.hasOwnProperty.call(A,"class")?A.class:"",A.base=Object.prototype.hasOwnProperty.call(A,"base")?A.base:"",A.truncate=Object.prototype.hasOwnProperty.call(A,"truncate")?Math.floor(A.truncate):64,A.titleText=Object.prototype.hasOwnProperty.call(A,"titleText")?A.titleText:""}function w(A){var e;if("string"==typeof A||A instanceof String)e=[].slice.call(document.querySelectorAll(A));else{if(!(Array.isArray(A)||A instanceof NodeList))throw new TypeError("The selector provided to AnchorJS was invalid.");e=[].slice.call(A)}return e}this.options=A||{},this.elements=[],d(this.options),this.isTouchDevice=function(){return Boolean("ontouchstart"in window||window.TouchEvent||window.DocumentTouch&&document instanceof DocumentTouch)},this.add=function(A){var e,t,o,i,n,s,a,c,r,l,h,u,p=[];if(d(this.options),"touch"===(l=this.options.visible)&&(l=this.isTouchDevice()?"always":"hover"),0===(e=w(A=A||"h2, h3, h4, h5, h6")).length)return this;for(null===document.head.querySelector("style.anchorjs")&&((u=document.createElement("style")).className="anchorjs",u.appendChild(document.createTextNode("")),void 0===(A=document.head.querySelector('[rel="stylesheet"],style'))?document.head.appendChild(u):document.head.insertBefore(u,A),u.sheet.insertRule(".anchorjs-link{opacity:0;text-decoration:none;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}",u.sheet.cssRules.length),u.sheet.insertRule(":hover>.anchorjs-link,.anchorjs-link:focus{opacity:1}",u.sheet.cssRules.length),u.sheet.insertRule("[data-anchorjs-icon]::after{content:attr(data-anchorjs-icon)}",u.sheet.cssRules.length),u.sheet.insertRule('@font-face{font-family:anchorjs-icons;src:url(data:n/a;base64,AAEAAAALAIAAAwAwT1MvMg8yG2cAAAE4AAAAYGNtYXDp3gC3AAABpAAAAExnYXNwAAAAEAAAA9wAAAAIZ2x5ZlQCcfwAAAH4AAABCGhlYWQHFvHyAAAAvAAAADZoaGVhBnACFwAAAPQAAAAkaG10eASAADEAAAGYAAAADGxvY2EACACEAAAB8AAAAAhtYXhwAAYAVwAAARgAAAAgbmFtZQGOH9cAAAMAAAAAunBvc3QAAwAAAAADvAAAACAAAQAAAAEAAHzE2p9fDzz1AAkEAAAAAADRecUWAAAAANQA6R8AAAAAAoACwAAAAAgAAgAAAAAAAAABAAADwP/AAAACgAAA/9MCrQABAAAAAAAAAAAAAAAAAAAAAwABAAAAAwBVAAIAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAMCQAGQAAUAAAKZAswAAACPApkCzAAAAesAMwEJAAAAAAAAAAAAAAAAAAAAARAAAAAAAAAAAAAAAAAAAAAAQAAg//0DwP/AAEADwABAAAAAAQAAAAAAAAAAAAAAIAAAAAAAAAIAAAACgAAxAAAAAwAAAAMAAAAcAAEAAwAAABwAAwABAAAAHAAEADAAAAAIAAgAAgAAACDpy//9//8AAAAg6cv//f///+EWNwADAAEAAAAAAAAAAAAAAAAACACEAAEAAAAAAAAAAAAAAAAxAAACAAQARAKAAsAAKwBUAAABIiYnJjQ3NzY2MzIWFxYUBwcGIicmNDc3NjQnJiYjIgYHBwYUFxYUBwYGIwciJicmNDc3NjIXFhQHBwYUFxYWMzI2Nzc2NCcmNDc2MhcWFAcHBgYjARQGDAUtLXoWOR8fORYtLTgKGwoKCjgaGg0gEhIgDXoaGgkJBQwHdR85Fi0tOAobCgoKOBoaDSASEiANehoaCQkKGwotLXoWOR8BMwUFLYEuehYXFxYugC44CQkKGwo4GkoaDQ0NDXoaShoKGwoFBe8XFi6ALjgJCQobCjgaShoNDQ0NehpKGgobCgoKLYEuehYXAAAADACWAAEAAAAAAAEACAAAAAEAAAAAAAIAAwAIAAEAAAAAAAMACAAAAAEAAAAAAAQACAAAAAEAAAAAAAUAAQALAAEAAAAAAAYACAAAAAMAAQQJAAEAEAAMAAMAAQQJAAIABgAcAAMAAQQJAAMAEAAMAAMAAQQJAAQAEAAMAAMAAQQJAAUAAgAiAAMAAQQJAAYAEAAMYW5jaG9yanM0MDBAAGEAbgBjAGgAbwByAGoAcwA0ADAAMABAAAAAAwAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAABAAH//wAP) format("truetype")}',u.sheet.cssRules.length)),u=document.querySelectorAll("[id]"),t=[].map.call(u,function(A){return A.id}),i=0;i\]./()*\\\n\t\b\v\u00A0]/g,"-").replace(/-{2,}/g,"-").substring(0,this.options.truncate).replace(/^-+|-+$/gm,"").toLowerCase()},this.hasAnchorJSLink=function(A){var e=A.firstChild&&-1<(" "+A.firstChild.className+" ").indexOf(" anchorjs-link "),A=A.lastChild&&-1<(" "+A.lastChild.className+" ").indexOf(" anchorjs-link ");return e||A||!1}}});
-// @license-end
\ No newline at end of file
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/ngrok/README.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/ngrok/README.md
deleted file mode 100644
index 0324bf9852408d9d2b86cc0165c2d548996f9c94..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/ngrok/README.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Adding an ingress URL through the ngrok Agent SDK for Python
-
-[ngrok](https://ngrok.com) is a globally distributed reverse proxy commonly used for quickly getting a public URL to a
-service running inside a private network, such as on your local laptop. The ngrok agent is usually
-deployed inside a private network and is used to communicate with the ngrok cloud service.
-
-By default, the authtoken in the NGROK_AUTHTOKEN environment variable will be used. Alternatively, one may be specified in
-the `settings.json` file; see the Examples below. Retrieve your authtoken on the [Auth Token page of your ngrok dashboard](https://dashboard.ngrok.com/get-started/your-authtoken); signing up is free.
-
-# Documentation
-
-For a list of all available options, see [the configuration documentation](https://ngrok.com/docs/ngrok-agent/config/) or [the connect example](https://github.com/ngrok/ngrok-py/blob/main/examples/ngrok-connect-full.py).
-
-The ngrok Python SDK is [on github here](https://github.com/ngrok/ngrok-py). A quickstart guide and a full API reference are included in the [ngrok-py Python API documentation](https://ngrok.github.io/ngrok-py/).
-
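-As a rough, standalone sketch of what happens under the hood (this is not the extension's actual code; it assumes the `connect` call and `url()` accessor shown in the linked connect example, that NGROK_AUTHTOKEN is set, and that the web UI listens on its default port 7860):
-
-```python
-import ngrok
-
-# Open a tunnel to the local web UI port; the authtoken is read from the environment.
-listener = ngrok.connect(7860, authtoken_from_env=True)
-print(f"Ingress established at {listener.url()}")
-```
-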
-# Running
-
-To enable ngrok, install the requirements and then add `--extension ngrok` to the command line options, for instance:
-
-```bash
-pip install -r extensions/ngrok/requirements.txt
-python server.py --extension ngrok
-```
-
-In the output you should then see something like this:
-
-```bash
-INFO:Loading the extension "ngrok"...
-INFO:Session created
-INFO:Created tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" with url "https://d83706cf7be7.ngrok.app"
-INFO:Tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" TCP forwarding to "localhost:7860"
-INFO:Ingress established at https://d83706cf7be7.ngrok.app
-```
-
-You can now access the webui via the url shown, in this case `https://d83706cf7be7.ngrok.app`. It is recommended to add some authentication to the ingress, see below.
-
-# Example Settings
-
-In `settings.json` add a `ngrok` key with a dictionary of options, for instance:
-
-To enable basic authentication:
-```json
-{
- "ngrok": {
- "basic_auth": "user:password"
- }
-}
-```
-
-To enable OAUTH authentication:
-```json
-{
- "ngrok": {
- "oauth_provider": "google",
- "oauth_allow_domains": "asdf.com",
- "oauth_allow_emails": "asdf@asdf.com"
- }
-}
-```
-
-To add an authtoken instead of using the NGROK_AUTHTOKEN environment variable:
-```json
-{
- "ngrok": {
- "authtoken": "",
- "authtoken_from_env":false
- }
-}
-```
\ No newline at end of file
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/block_requests.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/block_requests.py
deleted file mode 100644
index 775a9b1434879e287ad44e06722df85504b3c978..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/block_requests.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import builtins
-import io
-
-import requests
-
-from modules.logging_colors import logger
-
-original_open = open
-original_get = requests.get
-
-
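-# Context manager that temporarily replaces requests.get so that any HTTP
-# request attempted while it is active gets redirected to localhost instead.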
-class RequestBlocker:
-
- def __enter__(self):
- requests.get = my_get
-
- def __exit__(self, exc_type, exc_value, traceback):
- requests.get = original_get
-
-
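-# Context manager that temporarily replaces the built-in open() with my_open,
-# allowing index.html to be rewritten on the fly while it is active.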
-class OpenMonkeyPatch:
-
- def __enter__(self):
- builtins.open = my_open
-
- def __exit__(self, exc_type, exc_value, traceback):
- builtins.open = original_open
-
-
-def my_get(url, **kwargs):
- logger.info('Unwanted HTTP request redirected to localhost :)')
- kwargs.setdefault('allow_redirects', True)
- return requests.api.request('get', 'http://127.0.0.1/', **kwargs)
-
-
-# Kindly provided by our friend WizardLM-30B
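-# Patched open(): when index.html is read, rewrite it so that references to
-# cdnjs.cloudflare.com resolve to 127.0.0.1, keeping the page from loading remote assets.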
-def my_open(*args, **kwargs):
- filename = str(args[0])
- if filename.endswith('index.html'):
- with original_open(*args, **kwargs) as f:
- file_contents = f.read()
-
- file_contents = file_contents.replace(b'', b'')
- file_contents = file_contents.replace(b'cdnjs.cloudflare.com', b'127.0.0.1')
- return io.BytesIO(file_contents)
- else:
- return original_open(*args, **kwargs)
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/merge_cells.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/merge_cells.py
deleted file mode 100644
index 48ca8cc0a8aca8432835bd760c0403a3c35b34cf..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/merge_cells.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from abc import abstractmethod
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..cnn import ConvModule
-
-
-class BaseMergeCell(nn.Module):
- """The basic class for cells used in NAS-FPN and NAS-FCOS.
-
- BaseMergeCell takes 2 inputs. After applying convolution
- on them, they are resized to the target size. Then,
- they go through binary_op, which depends on the type of cell.
- If with_out_conv is True, the result of output will go through
- another convolution layer.
-
- Args:
- in_channels (int): number of input channels in out_conv layer.
- out_channels (int): number of output channels in out_conv layer.
- with_out_conv (bool): Whether to use out_conv layer
- out_conv_cfg (dict): Config dict for convolution layer, which should
- contain "groups", "kernel_size", "padding", "bias" to build
- out_conv layer.
- out_norm_cfg (dict): Config dict for normalization layer in out_conv.
- out_conv_order (tuple): The order of conv/norm/activation layers in
- out_conv.
- with_input1_conv (bool): Whether to use convolution on input1.
- with_input2_conv (bool): Whether to use convolution on input2.
- input_conv_cfg (dict): Config dict for building input1_conv layer and
- input2_conv layer, which is expected to contain the type of
- convolution.
- Default: None, which means using conv2d.
- input_norm_cfg (dict): Config dict for normalization layer in
- input1_conv and input2_conv layer. Default: None.
- upsample_mode (str): Interpolation method used to resize the output
- of input1_conv and input2_conv to target size. Currently, we
- support ['nearest', 'bilinear']. Default: 'nearest'.
- """
-
- def __init__(self,
- fused_channels=256,
- out_channels=256,
- with_out_conv=True,
- out_conv_cfg=dict(
- groups=1, kernel_size=3, padding=1, bias=True),
- out_norm_cfg=None,
- out_conv_order=('act', 'conv', 'norm'),
- with_input1_conv=False,
- with_input2_conv=False,
- input_conv_cfg=None,
- input_norm_cfg=None,
- upsample_mode='nearest'):
- super(BaseMergeCell, self).__init__()
- assert upsample_mode in ['nearest', 'bilinear']
- self.with_out_conv = with_out_conv
- self.with_input1_conv = with_input1_conv
- self.with_input2_conv = with_input2_conv
- self.upsample_mode = upsample_mode
-
- if self.with_out_conv:
- self.out_conv = ConvModule(
- fused_channels,
- out_channels,
- **out_conv_cfg,
- norm_cfg=out_norm_cfg,
- order=out_conv_order)
-
- self.input1_conv = self._build_input_conv(
- out_channels, input_conv_cfg,
- input_norm_cfg) if with_input1_conv else nn.Sequential()
- self.input2_conv = self._build_input_conv(
- out_channels, input_conv_cfg,
- input_norm_cfg) if with_input2_conv else nn.Sequential()
-
- def _build_input_conv(self, channel, conv_cfg, norm_cfg):
- return ConvModule(
- channel,
- channel,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- bias=True)
-
- @abstractmethod
- def _binary_op(self, x1, x2):
- pass
-
- def _resize(self, x, size):
- if x.shape[-2:] == size:
- return x
- elif x.shape[-2:] < size:
- return F.interpolate(x, size=size, mode=self.upsample_mode)
- else:
- assert x.shape[-2] % size[-2] == 0 and x.shape[-1] % size[-1] == 0
- kernel_size = x.shape[-1] // size[-1]
- x = F.max_pool2d(x, kernel_size=kernel_size, stride=kernel_size)
- return x
-
- def forward(self, x1, x2, out_size=None):
- assert x1.shape[:2] == x2.shape[:2]
- assert out_size is None or len(out_size) == 2
- if out_size is None: # resize to larger one
- out_size = max(x1.size()[2:], x2.size()[2:])
-
- x1 = self.input1_conv(x1)
- x2 = self.input2_conv(x2)
-
- x1 = self._resize(x1, out_size)
- x2 = self._resize(x2, out_size)
-
- x = self._binary_op(x1, x2)
- if self.with_out_conv:
- x = self.out_conv(x)
- return x
-
-
-class SumCell(BaseMergeCell):
-
- def __init__(self, in_channels, out_channels, **kwargs):
- super(SumCell, self).__init__(in_channels, out_channels, **kwargs)
-
- def _binary_op(self, x1, x2):
- return x1 + x2
-
-
-class ConcatCell(BaseMergeCell):
-
- def __init__(self, in_channels, out_channels, **kwargs):
- super(ConcatCell, self).__init__(in_channels * 2, out_channels,
- **kwargs)
-
- def _binary_op(self, x1, x2):
- ret = torch.cat([x1, x2], dim=1)
- return ret
-
-
-class GlobalPoolingCell(BaseMergeCell):
-
- def __init__(self, in_channels=None, out_channels=None, **kwargs):
- super().__init__(in_channels, out_channels, **kwargs)
- self.global_pool = nn.AdaptiveAvgPool2d((1, 1))
-
- def _binary_op(self, x1, x2):
- x2_att = self.global_pool(x2).sigmoid()
- return x2 + x2_att * x1
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/evaluation.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/evaluation.py
deleted file mode 100644
index 4d00999ce5665c53bded8de9e084943eee2d230d..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/evaluation.py
+++ /dev/null
@@ -1,509 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import warnings
-from math import inf
-
-import torch.distributed as dist
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.utils.data import DataLoader
-
-from annotator.uniformer.mmcv.fileio import FileClient
-from annotator.uniformer.mmcv.utils import is_seq_of
-from .hook import Hook
-from .logger import LoggerHook
-
-
-class EvalHook(Hook):
- """Non-Distributed evaluation hook.
-
- This hook will regularly perform evaluation in a given interval when
- performing in non-distributed environment.
-
- Args:
- dataloader (DataLoader): A PyTorch dataloader, whose dataset has
- implemented ``evaluate`` function.
- start (int | None, optional): Evaluation starting epoch. It enables
- evaluation before the training starts if ``start`` <= the resuming
- epoch. If None, whether to evaluate is merely decided by
- ``interval``. Default: None.
- interval (int): Evaluation interval. Default: 1.
- by_epoch (bool): Determine perform evaluation by epoch or by iteration.
- If set to True, it will perform by epoch. Otherwise, by iteration.
- Default: True.
- save_best (str, optional): If a metric is specified, it would measure
- the best checkpoint during evaluation. The information about best
- checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep
- best score value and best checkpoint path, which will be also
- loaded when resume checkpoint. Options are the evaluation metrics
- on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox
- detection and instance segmentation. ``AR@100`` for proposal
- recall. If ``save_best`` is ``auto``, the first key of the returned
- ``OrderedDict`` result will be used. Default: None.
- rule (str | None, optional): Comparison rule for best score. If set to
- None, it will infer a reasonable rule. Keys such as 'acc', 'top'
- .etc will be inferred by 'greater' rule. Keys contain 'loss' will
- be inferred by 'less' rule. Options are 'greater', 'less', None.
- Default: None.
- test_fn (callable, optional): test a model with samples from a
- dataloader, and return the test results. If ``None``, the default
- test function ``mmcv.engine.single_gpu_test`` will be used.
- (default: ``None``)
- greater_keys (List[str] | None, optional): Metric keys that will be
- inferred by 'greater' comparison rule. If ``None``,
- _default_greater_keys will be used. (default: ``None``)
- less_keys (List[str] | None, optional): Metric keys that will be
- inferred by 'less' comparison rule. If ``None``, _default_less_keys
- will be used. (default: ``None``)
- out_dir (str, optional): The root directory to save checkpoints. If not
- specified, `runner.work_dir` will be used by default. If specified,
- the `out_dir` will be the concatenation of `out_dir` and the last
- level directory of `runner.work_dir`.
- `New in version 1.3.16.`
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details. Default: None.
- `New in version 1.3.16.`
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
-
- Notes:
- If new arguments are added for EvalHook, tools/test.py,
- tools/eval_metric.py may be affected.
- """
-
- # Since the key for determine greater or less is related to the downstream
- # tasks, downstream repos may need to overwrite the following inner
- # variable accordingly.
-
- rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
- init_value_map = {'greater': -inf, 'less': inf}
- _default_greater_keys = [
- 'acc', 'top', 'AR@', 'auc', 'precision', 'mAP', 'mDice', 'mIoU',
- 'mAcc', 'aAcc'
- ]
- _default_less_keys = ['loss']
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- save_best=None,
- rule=None,
- test_fn=None,
- greater_keys=None,
- less_keys=None,
- out_dir=None,
- file_client_args=None,
- **eval_kwargs):
- if not isinstance(dataloader, DataLoader):
- raise TypeError(f'dataloader must be a pytorch DataLoader, '
- f'but got {type(dataloader)}')
-
- if interval <= 0:
- raise ValueError(f'interval must be a positive number, '
- f'but got {interval}')
-
- assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean'
-
- if start is not None and start < 0:
- raise ValueError(f'The evaluation start epoch {start} is smaller '
- f'than 0')
-
- self.dataloader = dataloader
- self.interval = interval
- self.start = start
- self.by_epoch = by_epoch
-
- assert isinstance(save_best, str) or save_best is None, \
- '""save_best"" should be a str or None ' \
- f'rather than {type(save_best)}'
- self.save_best = save_best
- self.eval_kwargs = eval_kwargs
- self.initial_flag = True
-
- if test_fn is None:
- from annotator.uniformer.mmcv.engine import single_gpu_test
- self.test_fn = single_gpu_test
- else:
- self.test_fn = test_fn
-
- if greater_keys is None:
- self.greater_keys = self._default_greater_keys
- else:
- if not isinstance(greater_keys, (list, tuple)):
- greater_keys = (greater_keys, )
- assert is_seq_of(greater_keys, str)
- self.greater_keys = greater_keys
-
- if less_keys is None:
- self.less_keys = self._default_less_keys
- else:
- if not isinstance(less_keys, (list, tuple)):
- less_keys = (less_keys, )
- assert is_seq_of(less_keys, str)
- self.less_keys = less_keys
-
- if self.save_best is not None:
- self.best_ckpt_path = None
- self._init_rule(rule, self.save_best)
-
- self.out_dir = out_dir
- self.file_client_args = file_client_args
-
- def _init_rule(self, rule, key_indicator):
- """Initialize rule, key_indicator, comparison_func, and best score.
-
-        Here is how the comparison rule is determined for a key indicator when
-        the rule is not explicitly given (note that the key indicator matching
-        is case-insensitive):
-        1. If the key indicator is in ``self.greater_keys``, the rule will be
-           specified as 'greater'.
-        2. Or if the key indicator is in ``self.less_keys``, the rule will be
-           specified as 'less'.
-        3. Or if the key indicator contains any item in ``self.greater_keys``
-           as a substring, the rule will be specified as 'greater'.
-        4. Or if the key indicator contains any item in ``self.less_keys``
-           as a substring, the rule will be specified as 'less'.
-
- Args:
- rule (str | None): Comparison rule for best score.
- key_indicator (str | None): Key indicator to determine the
- comparison rule.
- """
- if rule not in self.rule_map and rule is not None:
- raise KeyError(f'rule must be greater, less or None, '
- f'but got {rule}.')
-
- if rule is None:
- if key_indicator != 'auto':
- # `_lc` here means we use the lower case of keys for
- # case-insensitive matching
- key_indicator_lc = key_indicator.lower()
- greater_keys = [key.lower() for key in self.greater_keys]
- less_keys = [key.lower() for key in self.less_keys]
-
- if key_indicator_lc in greater_keys:
- rule = 'greater'
- elif key_indicator_lc in less_keys:
- rule = 'less'
- elif any(key in key_indicator_lc for key in greater_keys):
- rule = 'greater'
- elif any(key in key_indicator_lc for key in less_keys):
- rule = 'less'
- else:
- raise ValueError(f'Cannot infer the rule for key '
- f'{key_indicator}, thus a specific rule '
- f'must be specified.')
- self.rule = rule
- self.key_indicator = key_indicator
- if self.rule is not None:
- self.compare_func = self.rule_map[self.rule]
-
- def before_run(self, runner):
- if not self.out_dir:
- self.out_dir = runner.work_dir
-
- self.file_client = FileClient.infer_client(self.file_client_args,
- self.out_dir)
-
- # if `self.out_dir` is not equal to `runner.work_dir`, it means that
- # `self.out_dir` is set so the final `self.out_dir` is the
- # concatenation of `self.out_dir` and the last level directory of
- # `runner.work_dir`
- if self.out_dir != runner.work_dir:
- basename = osp.basename(runner.work_dir.rstrip(osp.sep))
- self.out_dir = self.file_client.join_path(self.out_dir, basename)
- runner.logger.info(
- (f'The best checkpoint will be saved to {self.out_dir} by '
- f'{self.file_client.name}'))
-
- if self.save_best is not None:
- if runner.meta is None:
- warnings.warn('runner.meta is None. Creating an empty one.')
- runner.meta = dict()
- runner.meta.setdefault('hook_msgs', dict())
- self.best_ckpt_path = runner.meta['hook_msgs'].get(
- 'best_ckpt', None)
-
- def before_train_iter(self, runner):
- """Evaluate the model only at the start of training by iteration."""
- if self.by_epoch or not self.initial_flag:
- return
- if self.start is not None and runner.iter >= self.start:
- self.after_train_iter(runner)
- self.initial_flag = False
-
- def before_train_epoch(self, runner):
- """Evaluate the model only at the start of training by epoch."""
- if not (self.by_epoch and self.initial_flag):
- return
- if self.start is not None and runner.epoch >= self.start:
- self.after_train_epoch(runner)
- self.initial_flag = False
-
- def after_train_iter(self, runner):
- """Called after every training iter to evaluate the results."""
- if not self.by_epoch and self._should_evaluate(runner):
- # Because the priority of EvalHook is higher than LoggerHook, the
- # training log and the evaluating log are mixed. Therefore,
- # we need to dump the training log and clear it before evaluating
- # log is generated. In addition, this problem will only appear in
- # `IterBasedRunner` whose `self.by_epoch` is False, because
- # `EpochBasedRunner` whose `self.by_epoch` is True calls
- # `_do_evaluate` in `after_train_epoch` stage, and at this stage
- # the training log has been printed, so it will not cause any
-                # problem. More details at
- # https://github.com/open-mmlab/mmsegmentation/issues/694
- for hook in runner._hooks:
- if isinstance(hook, LoggerHook):
- hook.after_train_iter(runner)
- runner.log_buffer.clear()
-
- self._do_evaluate(runner)
-
- def after_train_epoch(self, runner):
- """Called after every training epoch to evaluate the results."""
- if self.by_epoch and self._should_evaluate(runner):
- self._do_evaluate(runner)
-
- def _do_evaluate(self, runner):
-        """Perform evaluation and save the checkpoint."""
- results = self.test_fn(runner.model, self.dataloader)
- runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
- key_score = self.evaluate(runner, results)
- # the key_score may be `None` so it needs to skip the action to save
- # the best checkpoint
- if self.save_best and key_score:
- self._save_ckpt(runner, key_score)
-
- def _should_evaluate(self, runner):
- """Judge whether to perform evaluation.
-
-        Evaluation is skipped in the following cases:
-        1. The current epoch/iteration does not fall on the interval
-           determined by ``self.interval`` (when ``self.start`` is None).
-        2. ``self.start`` is set but has not been reached yet.
-        3. ``self.start`` has been reached, but the current epoch/iteration
-           does not fall on the interval counted from ``self.start``.
-
- Returns:
- bool: The flag indicating whether to perform evaluation.
- """
- if self.by_epoch:
- current = runner.epoch
- check_time = self.every_n_epochs
- else:
- current = runner.iter
- check_time = self.every_n_iters
-
- if self.start is None:
- if not check_time(runner, self.interval):
- # No evaluation during the interval.
- return False
- elif (current + 1) < self.start:
- # No evaluation if start is larger than the current time.
- return False
- else:
- # Evaluation only at epochs/iters 3, 5, 7...
- # if start==3 and interval==2
- if (current + 1 - self.start) % self.interval:
- return False
- return True
-
- def _save_ckpt(self, runner, key_score):
- """Save the best checkpoint.
-
- It will compare the score according to the compare function, write
- related information (best score, best checkpoint path) and save the
- best checkpoint into ``work_dir``.
- """
- if self.by_epoch:
- current = f'epoch_{runner.epoch + 1}'
- cur_type, cur_time = 'epoch', runner.epoch + 1
- else:
- current = f'iter_{runner.iter + 1}'
- cur_type, cur_time = 'iter', runner.iter + 1
-
- best_score = runner.meta['hook_msgs'].get(
- 'best_score', self.init_value_map[self.rule])
- if self.compare_func(key_score, best_score):
- best_score = key_score
- runner.meta['hook_msgs']['best_score'] = best_score
-
- if self.best_ckpt_path and self.file_client.isfile(
- self.best_ckpt_path):
- self.file_client.remove(self.best_ckpt_path)
- runner.logger.info(
- (f'The previous best checkpoint {self.best_ckpt_path} was '
- 'removed'))
-
- best_ckpt_name = f'best_{self.key_indicator}_{current}.pth'
- self.best_ckpt_path = self.file_client.join_path(
- self.out_dir, best_ckpt_name)
- runner.meta['hook_msgs']['best_ckpt'] = self.best_ckpt_path
-
- runner.save_checkpoint(
- self.out_dir, best_ckpt_name, create_symlink=False)
- runner.logger.info(
- f'Now best checkpoint is saved as {best_ckpt_name}.')
- runner.logger.info(
- f'Best {self.key_indicator} is {best_score:0.4f} '
- f'at {cur_time} {cur_type}.')
-
- def evaluate(self, runner, results):
- """Evaluate the results.
-
- Args:
-            runner (:obj:`mmcv.Runner`): The underlying training runner.
- results (list): Output results.
- """
- eval_res = self.dataloader.dataset.evaluate(
- results, logger=runner.logger, **self.eval_kwargs)
-
- for name, val in eval_res.items():
- runner.log_buffer.output[name] = val
- runner.log_buffer.ready = True
-
- if self.save_best is not None:
-            # If the performance of the model is poor, `eval_res` may be an
-            # empty dict, which will raise an exception when `self.save_best`
-            # is not None. More details at
- # https://github.com/open-mmlab/mmdetection/issues/6265.
- if not eval_res:
- warnings.warn(
- 'Since `eval_res` is an empty dict, the behavior to save '
- 'the best checkpoint will be skipped in this evaluation.')
- return None
-
- if self.key_indicator == 'auto':
- # infer from eval_results
- self._init_rule(self.rule, list(eval_res.keys())[0])
- return eval_res[self.key_indicator]
-
- return None
-
-
-class DistEvalHook(EvalHook):
- """Distributed evaluation hook.
-
- This hook will regularly perform evaluation in a given interval when
- performing in distributed environment.
-
- Args:
- dataloader (DataLoader): A PyTorch dataloader, whose dataset has
- implemented ``evaluate`` function.
- start (int | None, optional): Evaluation starting epoch. It enables
- evaluation before the training starts if ``start`` <= the resuming
- epoch. If None, whether to evaluate is merely decided by
- ``interval``. Default: None.
- interval (int): Evaluation interval. Default: 1.
-        by_epoch (bool): Whether to perform evaluation by epoch or by
-            iteration. If set to True, it will evaluate by epoch; otherwise,
-            by iteration. Default: True.
-        save_best (str, optional): If a metric is specified, the best
-            checkpoint will be tracked during evaluation. The information
-            about the best checkpoint is saved in ``runner.meta['hook_msgs']``
-            to keep the best score and the best checkpoint path, which will
-            also be loaded when resuming from a checkpoint. Options are the
-            evaluation metrics on the test dataset, e.g., ``bbox_mAP``,
-            ``segm_mAP`` for bbox detection and instance segmentation,
-            ``AR@100`` for proposal recall. If ``save_best`` is ``auto``, the
-            first key of the returned ``OrderedDict`` result will be used.
-            Default: None.
-        rule (str | None, optional): Comparison rule for best score. If set to
-            None, it will infer a reasonable rule. Keys such as 'acc', 'top',
-            etc. will be inferred by the 'greater' rule. Keys containing
-            'loss' will be inferred by the 'less' rule. Options are 'greater',
-            'less', None. Default: None.
-        test_fn (callable, optional): Test a model with samples from a
-            dataloader in a multi-gpu manner and return the test results. If
-            ``None``, the default test function ``mmcv.engine.multi_gpu_test``
-            will be used. (default: ``None``)
- tmpdir (str | None): Temporary directory to save the results of all
- processes. Default: None.
- gpu_collect (bool): Whether to use gpu or cpu to collect results.
- Default: False.
- broadcast_bn_buffer (bool): Whether to broadcast the
- buffer(running_mean and running_var) of rank 0 to other rank
- before evaluation. Default: True.
- out_dir (str, optional): The root directory to save checkpoints. If not
- specified, `runner.work_dir` will be used by default. If specified,
- the `out_dir` will be the concatenation of `out_dir` and the last
- level directory of `runner.work_dir`.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details. Default: None.
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
- """
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- save_best=None,
- rule=None,
- test_fn=None,
- greater_keys=None,
- less_keys=None,
- broadcast_bn_buffer=True,
- tmpdir=None,
- gpu_collect=False,
- out_dir=None,
- file_client_args=None,
- **eval_kwargs):
-
- if test_fn is None:
- from annotator.uniformer.mmcv.engine import multi_gpu_test
- test_fn = multi_gpu_test
-
- super().__init__(
- dataloader,
- start=start,
- interval=interval,
- by_epoch=by_epoch,
- save_best=save_best,
- rule=rule,
- test_fn=test_fn,
- greater_keys=greater_keys,
- less_keys=less_keys,
- out_dir=out_dir,
- file_client_args=file_client_args,
- **eval_kwargs)
-
- self.broadcast_bn_buffer = broadcast_bn_buffer
- self.tmpdir = tmpdir
- self.gpu_collect = gpu_collect
-
- def _do_evaluate(self, runner):
-        """Perform evaluation and save the checkpoint."""
- # Synchronization of BatchNorm's buffer (running_mean
- # and running_var) is not supported in the DDP of pytorch,
- # which may cause the inconsistent performance of models in
- # different ranks, so we broadcast BatchNorm's buffers
- # of rank 0 to other ranks to avoid this.
- if self.broadcast_bn_buffer:
- model = runner.model
- for name, module in model.named_modules():
- if isinstance(module,
- _BatchNorm) and module.track_running_stats:
- dist.broadcast(module.running_var, 0)
- dist.broadcast(module.running_mean, 0)
-
- tmpdir = self.tmpdir
- if tmpdir is None:
- tmpdir = osp.join(runner.work_dir, '.eval_hook')
-
- results = self.test_fn(
- runner.model,
- self.dataloader,
- tmpdir=tmpdir,
- gpu_collect=self.gpu_collect)
- if runner.rank == 0:
- print('\n')
- runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
- key_score = self.evaluate(runner, results)
- # the key_score may be `None` so it needs to skip the action to
- # save the best checkpoint
- if self.save_best and key_score:
- self._save_ckpt(runner, key_score)
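
For reference, a minimal usage sketch of how hooks like the ones above are typically attached to a runner. The names `runner` (an mmcv `EpochBasedRunner`), `val_dataset` and `val_dataloader` are assumed placeholders for illustration, not names defined in this file:

    from torch.utils.data import DataLoader

    # `val_dataset` is assumed to implement `evaluate(results, logger=...)`.
    val_dataloader = DataLoader(val_dataset, batch_size=1, shuffle=False)

    # Track the best checkpoint by mAP; evaluation then runs after every epoch.
    eval_hook = EvalHook(val_dataloader, interval=1, by_epoch=True, save_best='mAP')
    runner.register_hook(eval_hook)
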
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/cldm/logger.py b/spaces/Anonymous-sub/Rerender/ControlNet/cldm/logger.py
deleted file mode 100644
index 6a8803846f2a8979f87f3cf9ea5b12869439e62f..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/cldm/logger.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import os
-
-import numpy as np
-import torch
-import torchvision
-from PIL import Image
-from pytorch_lightning.callbacks import Callback
-from pytorch_lightning.utilities.distributed import rank_zero_only
-
-
-class ImageLogger(Callback):
- def __init__(self, batch_frequency=2000, max_images=4, clamp=True, increase_log_steps=True,
- rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False,
- log_images_kwargs=None):
- super().__init__()
- self.rescale = rescale
- self.batch_freq = batch_frequency
- self.max_images = max_images
- if not increase_log_steps:
- self.log_steps = [self.batch_freq]
- self.clamp = clamp
- self.disabled = disabled
- self.log_on_batch_idx = log_on_batch_idx
- self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}
- self.log_first_step = log_first_step
-
- @rank_zero_only
- def log_local(self, save_dir, split, images, global_step, current_epoch, batch_idx):
- root = os.path.join(save_dir, "image_log", split)
- for k in images:
- grid = torchvision.utils.make_grid(images[k], nrow=4)
- if self.rescale:
- grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w
- grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)
- grid = grid.numpy()
- grid = (grid * 255).astype(np.uint8)
- filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(k, global_step, current_epoch, batch_idx)
- path = os.path.join(root, filename)
- os.makedirs(os.path.split(path)[0], exist_ok=True)
- Image.fromarray(grid).save(path)
-
- def log_img(self, pl_module, batch, batch_idx, split="train"):
- check_idx = batch_idx # if self.log_on_batch_idx else pl_module.global_step
- if (self.check_frequency(check_idx) and # batch_idx % self.batch_freq == 0
- hasattr(pl_module, "log_images") and
- callable(pl_module.log_images) and
- self.max_images > 0):
- logger = type(pl_module.logger)
-
- is_train = pl_module.training
- if is_train:
- pl_module.eval()
-
- with torch.no_grad():
- images = pl_module.log_images(batch, split=split, **self.log_images_kwargs)
-
- for k in images:
- N = min(images[k].shape[0], self.max_images)
- images[k] = images[k][:N]
- if isinstance(images[k], torch.Tensor):
- images[k] = images[k].detach().cpu()
- if self.clamp:
- images[k] = torch.clamp(images[k], -1., 1.)
-
- self.log_local(pl_module.logger.save_dir, split, images,
- pl_module.global_step, pl_module.current_epoch, batch_idx)
-
- if is_train:
- pl_module.train()
-
- def check_frequency(self, check_idx):
- return check_idx % self.batch_freq == 0
-
- def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
- if not self.disabled:
- self.log_img(pl_module, batch, batch_idx, split="train")
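
A hedged sketch of how this callback is usually registered with a PyTorch Lightning trainer; `model` (a LightningModule that implements `log_images(batch, split=..., **kwargs)`) and `train_loader` are assumed placeholders:

    import pytorch_lightning as pl

    image_logger = ImageLogger(batch_frequency=500, max_images=2, clamp=True)
    trainer = pl.Trainer(max_epochs=1, callbacks=[image_logger])
    trainer.fit(model, train_loader)
    # Image grids are written under <logger save_dir>/image_log/train as PNG files.
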
diff --git a/spaces/AsakuraMizu/moe-tts/text/japanese.py b/spaces/AsakuraMizu/moe-tts/text/japanese.py
deleted file mode 100644
index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/text/japanese.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import re
-from unidecode import unidecode
-import pyopenjtalk
-
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-# List of (romaji, ipa) pairs for marks:
-_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ts', 'ʦ'),
- ('u', 'ɯ'),
- ('j', 'ʥ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (romaji, ipa2) pairs for marks:
-_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('u', 'ɯ'),
- ('ʧ', 'tʃ'),
- ('j', 'dʑ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text != '':
- text += ' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil', 'pau']:
- text += phoneme.replace('ch', 'ʧ').replace('sh',
- 'ʃ').replace('cl', 'Q')
- else:
- continue
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
- a2_next = -1
- else:
- a2_next = int(
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i < len(marks):
- text += unidecode(marks[i]).replace(' ', '')
- return text
-
-
-def get_real_sokuon(text):
- for regex, replacement in _real_sokuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def get_real_hatsuon(text):
- for regex, replacement in _real_hatsuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = re.sub(
- r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa2(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa3(text):
- text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
- 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
- text = re.sub(
- r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
- return text
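
A brief usage sketch of the converters defined above (it assumes pyopenjtalk and unidecode are installed); the sample sentence is illustrative only:

    sample = 'こんにちは、世界。'
    print(japanese_to_romaji_with_accent(sample))  # romaji with ↑/↓ pitch-accent marks
    print(japanese_to_ipa(sample))                 # IPA with sokuon/hatsuon and vowel-length handling
    print(japanese_to_ipa2(sample))                # alternative IPA mapping
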
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/mbcharsetprober.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/mbcharsetprober.py
deleted file mode 100644
index 666307e8fe0608c69f2b6578a49794e1e20a139a..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/mbcharsetprober.py
+++ /dev/null
@@ -1,95 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 2001
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-# Shy Shalom - original C code
-# Proofpoint, Inc.
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from typing import Optional, Union
-
-from .chardistribution import CharDistributionAnalysis
-from .charsetprober import CharSetProber
-from .codingstatemachine import CodingStateMachine
-from .enums import LanguageFilter, MachineState, ProbingState
-
-
-class MultiByteCharSetProber(CharSetProber):
- """
- MultiByteCharSetProber
- """
-
- def __init__(self, lang_filter: LanguageFilter = LanguageFilter.NONE) -> None:
- super().__init__(lang_filter=lang_filter)
- self.distribution_analyzer: Optional[CharDistributionAnalysis] = None
- self.coding_sm: Optional[CodingStateMachine] = None
- self._last_char = bytearray(b"\0\0")
-
- def reset(self) -> None:
- super().reset()
- if self.coding_sm:
- self.coding_sm.reset()
- if self.distribution_analyzer:
- self.distribution_analyzer.reset()
- self._last_char = bytearray(b"\0\0")
-
- def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
- assert self.coding_sm is not None
- assert self.distribution_analyzer is not None
-
- for i, byte in enumerate(byte_str):
- coding_state = self.coding_sm.next_state(byte)
- if coding_state == MachineState.ERROR:
- self.logger.debug(
- "%s %s prober hit error at byte %s",
- self.charset_name,
- self.language,
- i,
- )
- self._state = ProbingState.NOT_ME
- break
- if coding_state == MachineState.ITS_ME:
- self._state = ProbingState.FOUND_IT
- break
- if coding_state == MachineState.START:
- char_len = self.coding_sm.get_current_charlen()
- if i == 0:
- self._last_char[1] = byte
- self.distribution_analyzer.feed(self._last_char, char_len)
- else:
- self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len)
-
- self._last_char[0] = byte_str[-1]
-
- if self.state == ProbingState.DETECTING:
- if self.distribution_analyzer.got_enough_data() and (
- self.get_confidence() > self.SHORTCUT_THRESHOLD
- ):
- self._state = ProbingState.FOUND_IT
-
- return self.state
-
- def get_confidence(self) -> float:
- assert self.distribution_analyzer is not None
- return self.distribution_analyzer.get_confidence()
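
For orientation, MultiByteCharSetProber is an internal base class; encoding detection is normally driven through chardet's public API, roughly as in this sketch (the sample bytes are illustrative):

    # Inside pip's vendored tree the package lives at pip._vendor.chardet;
    # with a standalone install it would simply be `import chardet`.
    from pip._vendor import chardet

    raw = 'こんにちは'.encode('euc-jp')
    print(chardet.detect(raw))  # e.g. {'encoding': 'EUC-JP', 'confidence': ..., 'language': 'Japanese'}
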
diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/pretrained.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/pretrained.py
deleted file mode 100644
index e211d8b5b59320a599e62605f1dee6199f317253..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/pretrained.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import hashlib
-import os
-import urllib
-import warnings
-
-from tqdm import tqdm
-
-_RN50 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
- yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt",
- cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt",
-)
-
-_RN50_quickgelu = dict(
- openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
- yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt",
- cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt",
-)
-
-_RN101 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
- yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt",
-)
-
-_RN101_quickgelu = dict(
- openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
- yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt",
-)
-
-_RN50x4 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
-)
-
-_RN50x16 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt",
-)
-
-_RN50x64 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt",
-)
-
-_VITB32 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
- laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt",
- laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt",
- laion400m_avg="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_avg-8a00ab3c.pt",
-)
-
-_VITB32_quickgelu = dict(
- openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
- laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt",
- laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt",
- laion400m_avg="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_avg-8a00ab3c.pt",
-)
-
-_VITB16 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
-)
-
-_VITL14 = dict(
- openai="https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt",
-)
-
-_PRETRAINED = {
- "RN50": _RN50,
- "RN50-quickgelu": _RN50_quickgelu,
- "RN101": _RN101,
- "RN101-quickgelu": _RN101_quickgelu,
- "RN50x4": _RN50x4,
- "RN50x16": _RN50x16,
- "ViT-B-32": _VITB32,
- "ViT-B-32-quickgelu": _VITB32_quickgelu,
- "ViT-B-16": _VITB16,
- "ViT-L-14": _VITL14,
-}
-
-
-def list_pretrained(as_str: bool = False):
-    """Return the list of pretrained models.
-
-    Each entry is a tuple (model_name, pretrain_tag) by default, or a
-    'name:tag' string if as_str == True.
-    """
- return [
- ":".join([k, t]) if as_str else (k, t)
- for k in _PRETRAINED.keys()
- for t in _PRETRAINED[k].keys()
- ]
-
-
-def list_pretrained_tag_models(tag: str):
- """return all models having the specified pretrain tag"""
- models = []
- for k in _PRETRAINED.keys():
- if tag in _PRETRAINED[k]:
- models.append(k)
- return models
-
-
-def list_pretrained_model_tags(model: str):
- """return all pretrain tags for the specified model architecture"""
- tags = []
- if model in _PRETRAINED:
- tags.extend(_PRETRAINED[model].keys())
- return tags
-
-
-def get_pretrained_url(model: str, tag: str):
- if model not in _PRETRAINED:
- return ""
- model_pretrained = _PRETRAINED[model]
- if tag not in model_pretrained:
- return ""
- return model_pretrained[tag]
-
-
-def download_pretrained(url: str, root: str = os.path.expanduser("~/.cache/clip")):
- os.makedirs(root, exist_ok=True)
- filename = os.path.basename(url)
-
- if "openaipublic" in url:
- expected_sha256 = url.split("/")[-2]
- else:
- expected_sha256 = ""
-
- download_target = os.path.join(root, filename)
-
- if os.path.exists(download_target) and not os.path.isfile(download_target):
- raise RuntimeError(f"{download_target} exists and is not a regular file")
-
- if os.path.isfile(download_target):
- if expected_sha256:
- if (
- hashlib.sha256(open(download_target, "rb").read()).hexdigest()
- == expected_sha256
- ):
- return download_target
- else:
- warnings.warn(
- f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file"
- )
- else:
- return download_target
-
- with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
- with tqdm(
- total=int(source.info().get("Content-Length")),
- ncols=80,
- unit="iB",
- unit_scale=True,
- ) as loop:
- while True:
- buffer = source.read(8192)
- if not buffer:
- break
-
- output.write(buffer)
- loop.update(len(buffer))
-
- if (
- expected_sha256
- and hashlib.sha256(open(download_target, "rb").read()).hexdigest()
- != expected_sha256
- ):
- raise RuntimeError(
-            "Model has been downloaded but the SHA256 checksum does not match"
- )
-
- return download_target
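
A short usage sketch built only from the helpers defined above; the chosen model/tag pair is just an example:

    # List every (architecture, tag) pair and fetch one checkpoint.
    for name, tag in list_pretrained():
        print(f'{name}:{tag}')

    url = get_pretrained_url('ViT-B-32', 'openai')
    if url:
        ckpt_path = download_pretrained(url)  # cached under ~/.cache/clip, SHA256-verified
        print(ckpt_path)
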
diff --git a/spaces/Banbri/zcvzcv/src/lib/replaceNonWhiteWithTransparent.ts b/spaces/Banbri/zcvzcv/src/lib/replaceNonWhiteWithTransparent.ts
deleted file mode 100644
index 6ffe6df050134290d39ee114e427741b26cfb419..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/lib/replaceNonWhiteWithTransparent.ts
+++ /dev/null
@@ -1,46 +0,0 @@
-export function replaceNonWhiteWithTransparent(imageBase64: string): Promise<string> {
- return new Promise((resolve, reject) => {
- const img = new Image();
- img.onload = () => {
- const canvas = document.createElement('canvas');
- const ctx = canvas.getContext('2d');
- if (!ctx) {
- reject('Unable to get canvas context');
- return;
- }
-
- const ratio = window.devicePixelRatio || 1;
- canvas.width = img.width * ratio;
- canvas.height = img.height * ratio;
- ctx.scale(ratio, ratio);
-
- ctx.drawImage(img, 0, 0);
-
-      // Read back the full (device-pixel) canvas rather than img.width x img.height,
-      // so the loop below covers every pixel when devicePixelRatio > 1.
-      const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
-      const data = imageData.data;
-
- for (let i = 0; i < data.length; i += 4) {
- if (data[i] === 255 && data[i + 1] === 255 && data[i + 2] === 255) {
-          // Pure white pixels are kept opaque but recolored to black
- data[i] = 0;
- data[i + 1] = 0;
- data[i + 2] = 0;
- } else {
- // Change all other pixels to transparent
- data[i + 3] = 0;
- }
- }
-
- ctx.putImageData(imageData, 0, 0);
-
- resolve(canvas.toDataURL());
- };
-
- img.onerror = (err) => {
- reject(err);
- };
-
- img.src = imageBase64;
- });
-}
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Ataque Areo 3d Versin Completa Para Windows 10.md b/spaces/Benson/text-generation/Examples/Descargar Ataque Areo 3d Versin Completa Para Windows 10.md
deleted file mode 100644
index 0de9b69a8a26479d58b5d8facf00c43890768dc6..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Ataque Areo 3d Versin Completa Para Windows 10.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
How to Download the Full Version of Air Strike 3D for Windows 10
-
If you are looking for an exciting, action-packed game that will put your skills as a helicopter pilot to the test, you should try Air Strike 3D. This game is a 3D arcade-style shooter that lets you fly around and destroy various enemies in different scenarios. You can choose from 10 different helicopters, each with its own weapons and abilities, and take on hundreds of enemies across 20 challenging levels. In this article, we will show you how to download the full version of Air Strike 3D for Windows 10, along with some tips and tricks for playing the game.
-
What Is Air Strike 3D?
-
Air Strike 3D is a game developed by Divogames and released in 2004. It is also known as AirStrike II and III, and it is a modification of the classic arcade game AirStrike. The game takes place in a futuristic world where you have to fight a terrorist organization that has taken over the world. You have to use your helicopter to fly over various terrains, such as deserts, jungles, cities and oceans, and destroy enemy bases, tanks, planes, ships and more. The game features realistic graphics, sound effects and physics, as well as a dynamic difficulty system that adapts to your performance.
-
download air strike 3d full version for windows 10
Some of the features of Air Strike 3D are:
-
-
10 different helicopters with unique weapons and characteristics
-
20 levels with different environments and enemies
-
5 difficulty modes, from easy to insane
-
Various power-ups and bonuses to collect
-
High-quality graphics and sound effects
-
Simple and intuitive controls
-
-
Requirements for Air Strike 3D
-
To play Air Strike 3D on your Windows 10 PC, you need to meet the following requirements:
-
-
Windows XP/Vista/7/8/10
-Processor
Pentium III or higher
-
128 MB of RAM
-
32 MB of video memory
-
-
100 MB of free disk space
-
-
Where to Download the Full Version of Air Strike 3D for Windows 10
-
There are several sources where you can download the full version of Air Strike 3D for Windows 10. Here are some of them:
-
Uptodown
-
Uptodown is a website that offers free downloads of various games and apps for different platforms. You can download the full version of Air Strike 3D for Windows 10 from Uptodown by following these steps:
Wait for the download to finish and save the file to your PC
-
-
Filehippo
-
Filehippo is another website that provides free downloads of software and games for various devices. You can download the full version of Air Strike 3D for Windows 10 from Filehippo by doing the following:
You can also download the full version of Air Strike 3D for Windows 10 from other sources, such as torrent sites, file-sharing platforms or unofficial websites. However, you should be careful when downloading from these sources, as they may contain viruses, malware or other harmful files that can damage your PC or compromise your security. You should always scan downloaded files with a reliable antivirus program before installing them. You should also check the reviews and ratings of the sources to make sure they are trustworthy and safe.
-
How to Install the Full Version of Air Strike 3D for Windows 10
-
Once you have downloaded the full version of Air Strike 3D for Windows 10 from one of the sources mentioned above, you can install it on your PC by following these steps:
-
-
The game file is usually compressed, for example as a ZIP or RAR archive. You need to extract it with a program such as WinRAR or 7-Zip. To do this, right-click on the game file and select "Extract here" or "Extract to Air Strike 3D" (or any other folder name you prefer). This will create a new folder with the extracted game files.
-
-
Step 2: Extract the Game File
-
Open the extracted folder and look for the setup file. It is usually called "setup.exe" or "install.exe". Double-click it to launch the installation wizard. Follow the on-screen instructions to complete the installation process. You may need to accept the terms and conditions, choose a destination folder and create a shortcut on your desktop.
-
Step 3: Run the Setup File
-
Once the installation is finished, you can run the game by clicking the shortcut on your desktop or by going to the destination folder and double-clicking the game's executable file. It is usually called "AirStrike3D.exe" or "AirStrikeII.exe". The game will start and you can enjoy playing.
-
Step 4: Enjoy the Game
-
Air Strike 3D is a fun and addictive game that will keep you entertained for hours. You can choose between different game modes, such as Campaign, Survival, Quick Game or Custom Mission. You can also adjust the game settings, such as sound, graphics, controls and difficulty. You can also view your statistics, achievements and high scores.
-
Tips and Tricks for Playing Air Strike 3D
-
To get the most out of your gaming experience, here are some tips and tricks for playing Air Strike 3D:
-
Choose the Right Helicopter
-
-
Use Your Weapons Wisely
-
You have two types of weapons in Air Strike 3D: primary and secondary. The primary weapon is usually a machine gun that fires continuously while you hold the left mouse button. The secondary weapon is usually a rocket or a bomb that fires when you press the right mouse button. You have a limited supply of secondary weapons, so you should use them sparingly and strategically. You can also pick up power-ups and bonuses that give you extra weapons or ammunition, such as missiles, napalm bombs, cluster bombs, lasers, plasma guns and more.
-
Avoid Enemy Fire
-
You will face many enemies in Air Strike 3D, such as tanks, planes, ships, turrets, rockets and mines. They will try to shoot you down with their own weapons. You should avoid their fire by keeping on the move and dodging their attacks. You can also use your weapons to destroy their projectiles or deflect them with your helicopter blades. Watch out for environmental hazards as well, such as buildings, bridges, trees and mountains, which can block your path or damage your helicopter. Also keep an eye on your health and armor bars, which indicate how much damage you can take before losing the game.
-
Conclusion
-
Air Strike 3D is a great game for anyone who loves flying and shooting games. It has impressive graphics, sound effects and gameplay that will keep you hooked for hours. You can download the full version of Air Strike 3D for Windows 10 from various sources, such as Uptodown, Filehippo or other websites, and install it easily on your PC by following the steps we have provided. You can also improve your skills and have more fun by following the tips and tricks we have shared. We hope you enjoy playing Air Strike 3D and have fun!
-
Frequently Asked Questions
-
Here are some frequently asked questions about Air Strike 3D:
-
-
Q: How do I change the game settings?
-
-
Q: How do I save and load my game progress?
-
A: You can save and load your game progress by clicking the "Save" or "Load" buttons in the main menu. You can have up to three save slots.
-
Q: How do I unlock new helicopters?
-
A: You can unlock new helicopters by completing certain levels or reaching certain scores. You can see the requirements for each helicopter on the "Select Helicopter" screen.
-
Q: How do I get more weapons and ammunition?
-
A: You can get more weapons and ammunition by collecting the power-ups and bonuses that appear randomly on the screen. They are usually marked with icons or letters, such as M for missiles, N for napalm bombs, L for lasers, and so on.
-
Q: How do I pause or quit the game?
-
A: You can pause or quit the game by pressing the Esc key on your keyboard. This will bring up a menu where you can resume, restart or quit the game.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_palettes.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_palettes.py
deleted file mode 100644
index 3c748d33e45bfcdc690ceee490cbb50b516cd2b3..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_palettes.py
+++ /dev/null
@@ -1,309 +0,0 @@
-from .palette import Palette
-
-
-# Taken from https://en.wikipedia.org/wiki/ANSI_escape_code (Windows 10 column)
-WINDOWS_PALETTE = Palette(
- [
- (12, 12, 12),
- (197, 15, 31),
- (19, 161, 14),
- (193, 156, 0),
- (0, 55, 218),
- (136, 23, 152),
- (58, 150, 221),
- (204, 204, 204),
- (118, 118, 118),
- (231, 72, 86),
- (22, 198, 12),
- (249, 241, 165),
- (59, 120, 255),
- (180, 0, 158),
- (97, 214, 214),
- (242, 242, 242),
- ]
-)
-
-# The standard ANSI colors (including bright variants)
-STANDARD_PALETTE = Palette(
- [
- (0, 0, 0),
- (170, 0, 0),
- (0, 170, 0),
- (170, 85, 0),
- (0, 0, 170),
- (170, 0, 170),
- (0, 170, 170),
- (170, 170, 170),
- (85, 85, 85),
- (255, 85, 85),
- (85, 255, 85),
- (255, 255, 85),
- (85, 85, 255),
- (255, 85, 255),
- (85, 255, 255),
- (255, 255, 255),
- ]
-)
-
-
-# The 256 color palette
-EIGHT_BIT_PALETTE = Palette(
- [
- (0, 0, 0),
- (128, 0, 0),
- (0, 128, 0),
- (128, 128, 0),
- (0, 0, 128),
- (128, 0, 128),
- (0, 128, 128),
- (192, 192, 192),
- (128, 128, 128),
- (255, 0, 0),
- (0, 255, 0),
- (255, 255, 0),
- (0, 0, 255),
- (255, 0, 255),
- (0, 255, 255),
- (255, 255, 255),
- (0, 0, 0),
- (0, 0, 95),
- (0, 0, 135),
- (0, 0, 175),
- (0, 0, 215),
- (0, 0, 255),
- (0, 95, 0),
- (0, 95, 95),
- (0, 95, 135),
- (0, 95, 175),
- (0, 95, 215),
- (0, 95, 255),
- (0, 135, 0),
- (0, 135, 95),
- (0, 135, 135),
- (0, 135, 175),
- (0, 135, 215),
- (0, 135, 255),
- (0, 175, 0),
- (0, 175, 95),
- (0, 175, 135),
- (0, 175, 175),
- (0, 175, 215),
- (0, 175, 255),
- (0, 215, 0),
- (0, 215, 95),
- (0, 215, 135),
- (0, 215, 175),
- (0, 215, 215),
- (0, 215, 255),
- (0, 255, 0),
- (0, 255, 95),
- (0, 255, 135),
- (0, 255, 175),
- (0, 255, 215),
- (0, 255, 255),
- (95, 0, 0),
- (95, 0, 95),
- (95, 0, 135),
- (95, 0, 175),
- (95, 0, 215),
- (95, 0, 255),
- (95, 95, 0),
- (95, 95, 95),
- (95, 95, 135),
- (95, 95, 175),
- (95, 95, 215),
- (95, 95, 255),
- (95, 135, 0),
- (95, 135, 95),
- (95, 135, 135),
- (95, 135, 175),
- (95, 135, 215),
- (95, 135, 255),
- (95, 175, 0),
- (95, 175, 95),
- (95, 175, 135),
- (95, 175, 175),
- (95, 175, 215),
- (95, 175, 255),
- (95, 215, 0),
- (95, 215, 95),
- (95, 215, 135),
- (95, 215, 175),
- (95, 215, 215),
- (95, 215, 255),
- (95, 255, 0),
- (95, 255, 95),
- (95, 255, 135),
- (95, 255, 175),
- (95, 255, 215),
- (95, 255, 255),
- (135, 0, 0),
- (135, 0, 95),
- (135, 0, 135),
- (135, 0, 175),
- (135, 0, 215),
- (135, 0, 255),
- (135, 95, 0),
- (135, 95, 95),
- (135, 95, 135),
- (135, 95, 175),
- (135, 95, 215),
- (135, 95, 255),
- (135, 135, 0),
- (135, 135, 95),
- (135, 135, 135),
- (135, 135, 175),
- (135, 135, 215),
- (135, 135, 255),
- (135, 175, 0),
- (135, 175, 95),
- (135, 175, 135),
- (135, 175, 175),
- (135, 175, 215),
- (135, 175, 255),
- (135, 215, 0),
- (135, 215, 95),
- (135, 215, 135),
- (135, 215, 175),
- (135, 215, 215),
- (135, 215, 255),
- (135, 255, 0),
- (135, 255, 95),
- (135, 255, 135),
- (135, 255, 175),
- (135, 255, 215),
- (135, 255, 255),
- (175, 0, 0),
- (175, 0, 95),
- (175, 0, 135),
- (175, 0, 175),
- (175, 0, 215),
- (175, 0, 255),
- (175, 95, 0),
- (175, 95, 95),
- (175, 95, 135),
- (175, 95, 175),
- (175, 95, 215),
- (175, 95, 255),
- (175, 135, 0),
- (175, 135, 95),
- (175, 135, 135),
- (175, 135, 175),
- (175, 135, 215),
- (175, 135, 255),
- (175, 175, 0),
- (175, 175, 95),
- (175, 175, 135),
- (175, 175, 175),
- (175, 175, 215),
- (175, 175, 255),
- (175, 215, 0),
- (175, 215, 95),
- (175, 215, 135),
- (175, 215, 175),
- (175, 215, 215),
- (175, 215, 255),
- (175, 255, 0),
- (175, 255, 95),
- (175, 255, 135),
- (175, 255, 175),
- (175, 255, 215),
- (175, 255, 255),
- (215, 0, 0),
- (215, 0, 95),
- (215, 0, 135),
- (215, 0, 175),
- (215, 0, 215),
- (215, 0, 255),
- (215, 95, 0),
- (215, 95, 95),
- (215, 95, 135),
- (215, 95, 175),
- (215, 95, 215),
- (215, 95, 255),
- (215, 135, 0),
- (215, 135, 95),
- (215, 135, 135),
- (215, 135, 175),
- (215, 135, 215),
- (215, 135, 255),
- (215, 175, 0),
- (215, 175, 95),
- (215, 175, 135),
- (215, 175, 175),
- (215, 175, 215),
- (215, 175, 255),
- (215, 215, 0),
- (215, 215, 95),
- (215, 215, 135),
- (215, 215, 175),
- (215, 215, 215),
- (215, 215, 255),
- (215, 255, 0),
- (215, 255, 95),
- (215, 255, 135),
- (215, 255, 175),
- (215, 255, 215),
- (215, 255, 255),
- (255, 0, 0),
- (255, 0, 95),
- (255, 0, 135),
- (255, 0, 175),
- (255, 0, 215),
- (255, 0, 255),
- (255, 95, 0),
- (255, 95, 95),
- (255, 95, 135),
- (255, 95, 175),
- (255, 95, 215),
- (255, 95, 255),
- (255, 135, 0),
- (255, 135, 95),
- (255, 135, 135),
- (255, 135, 175),
- (255, 135, 215),
- (255, 135, 255),
- (255, 175, 0),
- (255, 175, 95),
- (255, 175, 135),
- (255, 175, 175),
- (255, 175, 215),
- (255, 175, 255),
- (255, 215, 0),
- (255, 215, 95),
- (255, 215, 135),
- (255, 215, 175),
- (255, 215, 215),
- (255, 215, 255),
- (255, 255, 0),
- (255, 255, 95),
- (255, 255, 135),
- (255, 255, 175),
- (255, 255, 215),
- (255, 255, 255),
- (8, 8, 8),
- (18, 18, 18),
- (28, 28, 28),
- (38, 38, 38),
- (48, 48, 48),
- (58, 58, 58),
- (68, 68, 68),
- (78, 78, 78),
- (88, 88, 88),
- (98, 98, 98),
- (108, 108, 108),
- (118, 118, 118),
- (128, 128, 128),
- (138, 138, 138),
- (148, 148, 148),
- (158, 158, 158),
- (168, 168, 168),
- (178, 178, 178),
- (188, 188, 188),
- (198, 198, 198),
- (208, 208, 208),
- (218, 218, 218),
- (228, 228, 228),
- (238, 238, 238),
- ]
-)
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_model_e2e.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_model_e2e.py
deleted file mode 100644
index 40a6009431199af8230d0a039c44e0472930a163..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_model_e2e.py
+++ /dev/null
@@ -1,150 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-
-import unittest
-import torch
-
-import detectron2.model_zoo as model_zoo
-from detectron2.config import get_cfg
-from detectron2.modeling import build_model
-from detectron2.structures import BitMasks, Boxes, ImageList, Instances
-from detectron2.utils.events import EventStorage
-
-
-def get_model_zoo(config_path):
- """
- Like model_zoo.get, but do not load any weights (even pretrained)
- """
- cfg_file = model_zoo.get_config_file(config_path)
- cfg = get_cfg()
- cfg.merge_from_file(cfg_file)
- if not torch.cuda.is_available():
- cfg.MODEL.DEVICE = "cpu"
- return build_model(cfg)
-
-
-def create_model_input(img, inst=None):
- if inst is not None:
- return {"image": img, "instances": inst}
- else:
- return {"image": img}
-
-
-def get_empty_instance(h, w):
- inst = Instances((h, w))
- inst.gt_boxes = Boxes(torch.rand(0, 4))
- inst.gt_classes = torch.tensor([]).to(dtype=torch.int64)
- inst.gt_masks = BitMasks(torch.rand(0, h, w))
- return inst
-
-
-def get_regular_bitmask_instances(h, w):
- inst = Instances((h, w))
- inst.gt_boxes = Boxes(torch.rand(3, 4))
- inst.gt_boxes.tensor[:, 2:] += inst.gt_boxes.tensor[:, :2]
- inst.gt_classes = torch.tensor([3, 4, 5]).to(dtype=torch.int64)
- inst.gt_masks = BitMasks((torch.rand(3, h, w) > 0.5))
- return inst
-
-
-class ModelE2ETest(unittest.TestCase):
- def setUp(self):
- torch.manual_seed(43)
- self.model = get_model_zoo(self.CONFIG_PATH)
-
- def _test_eval(self, input_sizes):
- inputs = [create_model_input(torch.rand(3, s[0], s[1])) for s in input_sizes]
- self.model.eval()
- self.model(inputs)
-
- def _test_train(self, input_sizes, instances):
- assert len(input_sizes) == len(instances)
- inputs = [
- create_model_input(torch.rand(3, s[0], s[1]), inst)
- for s, inst in zip(input_sizes, instances)
- ]
- self.model.train()
- with EventStorage():
- losses = self.model(inputs)
- sum(losses.values()).backward()
- del losses
-
- def _inf_tensor(self, *shape):
- return 1.0 / torch.zeros(*shape, device=self.model.device)
-
- def _nan_tensor(self, *shape):
- return torch.zeros(*shape, device=self.model.device).fill_(float("nan"))
-
-
-class MaskRCNNE2ETest(ModelE2ETest):
- CONFIG_PATH = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml"
-
- def test_empty_data(self):
- instances = [get_empty_instance(200, 250), get_empty_instance(200, 249)]
- self._test_eval([(200, 250), (200, 249)])
- self._test_train([(200, 250), (200, 249)], instances)
-
- def test_half_empty_data(self):
- instances = [get_empty_instance(200, 250), get_regular_bitmask_instances(200, 249)]
- self._test_train([(200, 250), (200, 249)], instances)
-
- def test_rpn_inf_nan_data(self):
- self.model.eval()
- for tensor in [self._inf_tensor, self._nan_tensor]:
- images = ImageList(tensor(1, 3, 512, 512), [(510, 510)])
- features = {
- "p2": tensor(1, 256, 256, 256),
- "p3": tensor(1, 256, 128, 128),
- "p4": tensor(1, 256, 64, 64),
- "p5": tensor(1, 256, 32, 32),
- "p6": tensor(1, 256, 16, 16),
- }
- props, _ = self.model.proposal_generator(images, features)
- self.assertEqual(len(props[0]), 0)
-
- def test_roiheads_inf_nan_data(self):
- self.model.eval()
- for tensor in [self._inf_tensor, self._nan_tensor]:
- images = ImageList(tensor(1, 3, 512, 512), [(510, 510)])
- features = {
- "p2": tensor(1, 256, 256, 256),
- "p3": tensor(1, 256, 128, 128),
- "p4": tensor(1, 256, 64, 64),
- "p5": tensor(1, 256, 32, 32),
- "p6": tensor(1, 256, 16, 16),
- }
- props = [Instances((510, 510))]
- props[0].proposal_boxes = Boxes([[10, 10, 20, 20]]).to(device=self.model.device)
- props[0].objectness_logits = torch.tensor([1.0]).reshape(1, 1)
- det, _ = self.model.roi_heads(images, features, props)
- self.assertEqual(len(det[0]), 0)
-
-
-class RetinaNetE2ETest(ModelE2ETest):
- CONFIG_PATH = "COCO-Detection/retinanet_R_50_FPN_1x.yaml"
-
- def test_empty_data(self):
- instances = [get_empty_instance(200, 250), get_empty_instance(200, 249)]
- self._test_eval([(200, 250), (200, 249)])
- self._test_train([(200, 250), (200, 249)], instances)
-
- def test_inf_nan_data(self):
- self.model.eval()
- self.model.score_threshold = -999999999
- for tensor in [self._inf_tensor, self._nan_tensor]:
- images = ImageList(tensor(1, 3, 512, 512), [(510, 510)])
- features = [
- tensor(1, 256, 128, 128),
- tensor(1, 256, 64, 64),
- tensor(1, 256, 32, 32),
- tensor(1, 256, 16, 16),
- tensor(1, 256, 8, 8),
- ]
- anchors = self.model.anchor_generator(features)
- box_cls, box_delta = self.model.head(features)
- box_cls = [tensor(*k.shape) for k in box_cls]
- box_delta = [tensor(*k.shape) for k in box_delta]
- det = self.model.inference(box_cls, box_delta, anchors, images.image_sizes)
- # all predictions (if any) are infinite or nan
- if len(det[0]):
- self.assertTrue(torch.isfinite(det[0].pred_boxes.tensor).sum() == 0)
diff --git a/spaces/CVPR/LIVE/thrust/cmake/ThrustRunTest.cmake b/spaces/CVPR/LIVE/thrust/cmake/ThrustRunTest.cmake
deleted file mode 100644
index 0d03129f0160c7918126d3cda7fccf66d2cc43d2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/cmake/ThrustRunTest.cmake
+++ /dev/null
@@ -1,8 +0,0 @@
-execute_process(
- COMMAND "${THRUST_BINARY}"
- RESULT_VARIABLE EXIT_CODE
-)
-
-if (NOT "0" STREQUAL "${EXIT_CODE}")
- message(FATAL_ERROR "${THRUST_BINARY} failed (${EXIT_CODE})")
-endif ()
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/trivial_copy.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/trivial_copy.h
deleted file mode 100644
index 8fbd0a987a294a7a33375b74a4c127922f0d2c0b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/trivial_copy.h
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file trivial_copy.h
- * \brief Sequential copy algorithms for plain-old-data.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <cstring>
-#include <thrust/system/detail/sequential/general_copy.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace sequential
-{
-
-
-template <typename T>
-__host__ __device__
- T *trivial_copy_n(const T *first,
- std::ptrdiff_t n,
- T *result)
-{
- T* return_value = NULL;
- if (THRUST_IS_HOST_CODE) {
- #if THRUST_INCLUDE_HOST_CODE
- std::memmove(result, first, n * sizeof(T));
- return_value = result + n;
- #endif
- } else {
- #if THRUST_INCLUDE_DEVICE_CODE
- return_value = thrust::system::detail::sequential::general_copy_n(first, n, result);
- #endif
- }
- return return_value;
-} // end trivial_copy_n()
-
-
-} // end namespace sequential
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/reppoints_detector.py b/spaces/CVPR/WALT/mmdet/models/detectors/reppoints_detector.py
deleted file mode 100644
index a5f6be31e14488e4b8a006b7142a82c872388d82..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/detectors/reppoints_detector.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class RepPointsDetector(SingleStageDetector):
- """RepPoints: Point Set Representation for Object Detection.
-
- This detector is the implementation of:
- - RepPoints detector (https://arxiv.org/pdf/1904.11490)
- """
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(RepPointsDetector,
- self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg,
- pretrained)
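
A hedged sketch of how a registered detector like this is usually instantiated through the mmdet config system; the config path is an assumed example and a standard mmdet/mmcv installation is presumed:

    from mmcv import Config
    from mmdet.models import build_detector

    cfg = Config.fromfile('configs/reppoints/reppoints_moment_r50_fpn_1x_coco.py')  # assumed path
    model = build_detector(cfg.model,
                           train_cfg=cfg.get('train_cfg'),
                           test_cfg=cfg.get('test_cfg'))
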
diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/bbox_heads/dii_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/bbox_heads/dii_head.py
deleted file mode 100644
index 8c970a78184672aaaa95edcdaecec03a26604390..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/roi_heads/bbox_heads/dii_head.py
+++ /dev/null
@@ -1,415 +0,0 @@
-import torch
-import torch.nn as nn
-from mmcv.cnn import (bias_init_with_prob, build_activation_layer,
- build_norm_layer)
-from mmcv.runner import auto_fp16, force_fp32
-
-from mmdet.core import multi_apply
-from mmdet.models.builder import HEADS, build_loss
-from mmdet.models.dense_heads.atss_head import reduce_mean
-from mmdet.models.losses import accuracy
-from mmdet.models.utils import FFN, MultiheadAttention, build_transformer
-from .bbox_head import BBoxHead
-
-
-@HEADS.register_module()
-class DIIHead(BBoxHead):
- r"""Dynamic Instance Interactive Head for `Sparse R-CNN: End-to-End Object
-    Detection with Learnable Proposals <https://arxiv.org/abs/2011.12450>`_
-
- Args:
- num_classes (int): Number of class in dataset.
- Defaults to 80.
- num_ffn_fcs (int): The number of fully-connected
- layers in FFNs. Defaults to 2.
-        num_heads (int): The number of attention heads used in the
-            MultiheadAttention block. Defaults to 8.
- num_cls_fcs (int): The number of fully-connected
- layers in classification subnet. Defaults to 1.
- num_reg_fcs (int): The number of fully-connected
- layers in regression subnet. Defaults to 3.
- feedforward_channels (int): The hidden dimension
- of FFNs. Defaults to 2048.
- in_channels (int): Hidden channels of MultiheadAttention.
- Defaults to 256.
- dropout (float): Probability of dropping a channel.
- Defaults to 0.0.
- ffn_act_cfg (dict): The activation config for FFNs.
- dynamic_conv_cfg (dict): The convolution config
- for DynamicConv.
- loss_iou (dict): The config for iou or giou loss.
-
- """
-
- def __init__(self,
- num_classes=80,
- num_ffn_fcs=2,
- num_heads=8,
- num_cls_fcs=1,
- num_reg_fcs=3,
- feedforward_channels=2048,
- in_channels=256,
- dropout=0.0,
- ffn_act_cfg=dict(type='ReLU', inplace=True),
- dynamic_conv_cfg=dict(
- type='DynamicConv',
- in_channels=256,
- feat_channels=64,
- out_channels=256,
- input_feat_shape=7,
- act_cfg=dict(type='ReLU', inplace=True),
- norm_cfg=dict(type='LN')),
- loss_iou=dict(type='GIoULoss', loss_weight=2.0),
- **kwargs):
- super(DIIHead, self).__init__(
- num_classes=num_classes,
- reg_decoded_bbox=True,
- reg_class_agnostic=True,
- **kwargs)
- self.loss_iou = build_loss(loss_iou)
- self.in_channels = in_channels
- self.fp16_enabled = False
- self.attention = MultiheadAttention(in_channels, num_heads, dropout)
- self.attention_norm = build_norm_layer(dict(type='LN'), in_channels)[1]
-
- self.instance_interactive_conv = build_transformer(dynamic_conv_cfg)
- self.instance_interactive_conv_dropout = nn.Dropout(dropout)
- self.instance_interactive_conv_norm = build_norm_layer(
- dict(type='LN'), in_channels)[1]
-
- self.ffn = FFN(
- in_channels,
- feedforward_channels,
- num_ffn_fcs,
- act_cfg=ffn_act_cfg,
- dropout=dropout)
- self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1]
-
- self.cls_fcs = nn.ModuleList()
- for _ in range(num_cls_fcs):
- self.cls_fcs.append(
- nn.Linear(in_channels, in_channels, bias=False))
- self.cls_fcs.append(
- build_norm_layer(dict(type='LN'), in_channels)[1])
- self.cls_fcs.append(
- build_activation_layer(dict(type='ReLU', inplace=True)))
-
- # override the self.fc_cls in BBoxHead
- if self.loss_cls.use_sigmoid:
- self.fc_cls = nn.Linear(in_channels, self.num_classes)
- else:
- self.fc_cls = nn.Linear(in_channels, self.num_classes + 1)
-
- self.reg_fcs = nn.ModuleList()
- for _ in range(num_reg_fcs):
- self.reg_fcs.append(
- nn.Linear(in_channels, in_channels, bias=False))
- self.reg_fcs.append(
- build_norm_layer(dict(type='LN'), in_channels)[1])
- self.reg_fcs.append(
- build_activation_layer(dict(type='ReLU', inplace=True)))
- # override the self.fc_reg in BBoxHead
- self.fc_reg = nn.Linear(in_channels, 4)
-
- assert self.reg_class_agnostic, 'DIIHead only ' \
- 'support `reg_class_agnostic=True` '
- assert self.reg_decoded_bbox, 'DIIHead only ' \
- 'support `reg_decoded_bbox=True`'
-
- def init_weights(self):
- """Use Xavier initialization for all weight parameters and set the
- classification head bias to a specific value when using focal loss."""
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- else:
- # adopt the default initialization for
- # the weight and bias of the layer norm
- pass
- if self.loss_cls.use_sigmoid:
- bias_init = bias_init_with_prob(0.01)
- nn.init.constant_(self.fc_cls.bias, bias_init)
-
- @auto_fp16()
- def forward(self, roi_feat, proposal_feat):
- """Forward function of Dynamic Instance Interactive Head.
-
- Args:
- roi_feat (Tensor): Roi-pooling features with shape
- (batch_size*num_proposals, feature_dimensions,
- pooling_h , pooling_w).
- proposal_feat (Tensor): Intermediate feature get from
- diihead in last stage, has shape
- (batch_size, num_proposals, feature_dimensions)
-
- Returns:
- tuple[Tensor]: Usually a tuple of classification scores,
- bbox predictions and an intermediate feature.
-
- - cls_scores (Tensor): Classification scores for
- all proposals, has shape
- (batch_size, num_proposals, num_classes).
- - bbox_preds (Tensor): Box energies / deltas for
- all proposals, has shape
- (batch_size, num_proposals, 4).
- - obj_feat (Tensor): Object feature before classification
- and regression subnet, has shape
- (batch_size, num_proposal, feature_dimensions).
- """
- N, num_proposals = proposal_feat.shape[:2]
-
- # Self attention
- proposal_feat = proposal_feat.permute(1, 0, 2)
- proposal_feat = self.attention_norm(self.attention(proposal_feat))
-
- # instance interactive
- proposal_feat = proposal_feat.permute(1, 0,
- 2).reshape(-1, self.in_channels)
- proposal_feat_iic = self.instance_interactive_conv(
- proposal_feat, roi_feat)
- proposal_feat = proposal_feat + self.instance_interactive_conv_dropout(
- proposal_feat_iic)
- obj_feat = self.instance_interactive_conv_norm(proposal_feat)
-
- # FFN
- obj_feat = self.ffn_norm(self.ffn(obj_feat))
-
- cls_feat = obj_feat
- reg_feat = obj_feat
-
- for cls_layer in self.cls_fcs:
- cls_feat = cls_layer(cls_feat)
- for reg_layer in self.reg_fcs:
- reg_feat = reg_layer(reg_feat)
-
- cls_score = self.fc_cls(cls_feat).view(N, num_proposals, -1)
- bbox_delta = self.fc_reg(reg_feat).view(N, num_proposals, -1)
-
- return cls_score, bbox_delta, obj_feat.view(N, num_proposals, -1)
-
- @force_fp32(apply_to=('cls_score', 'bbox_pred'))
- def loss(self,
- cls_score,
- bbox_pred,
- labels,
- label_weights,
- bbox_targets,
- bbox_weights,
- imgs_whwh=None,
- reduction_override=None,
- **kwargs):
- """Loss function of DIIHead, computing the loss over all images.
-
- Args:
- cls_score (Tensor): Classification prediction
- results of all class, has shape
- (batch_size * num_proposals_single_image, num_classes)
- bbox_pred (Tensor): Regression prediction results,
- has shape
- (batch_size * num_proposals_single_image, 4), the last
- dimension 4 represents [tl_x, tl_y, br_x, br_y].
- labels (Tensor): Label of each proposal, has shape
- (batch_size * num_proposals_single_image,).
- label_weights (Tensor): Classification loss
- weight of each proposal, has shape
- (batch_size * num_proposals_single_image,).
- bbox_targets (Tensor): Regression targets of each
- proposals, has shape
- (batch_size * num_proposals_single_image, 4),
- the last dimension 4 represents
- [tl_x, tl_y, br_x, br_y].
- bbox_weights (Tensor): Regression loss weight of each
- proposal's coordinates, has shape
- (batch_size * num_proposals_single_image, 4).
- imgs_whwh (Tensor): Tensor with shape
- (batch_size, num_proposals, 4), the last
- dimension means
- [img_width, img_height, img_width, img_height].
- reduction_override (str, optional): The reduction
- method used to override the original reduction
- method of the loss. Options are "none",
- "mean" and "sum". Defaults to None.
-
- Returns:
- dict[str, Tensor]: Dictionary of loss components
- """
- losses = dict()
- bg_class_ind = self.num_classes
- # note that in Sparse R-CNN num_gt == num_pos
- pos_inds = (labels >= 0) & (labels < bg_class_ind)
- num_pos = pos_inds.sum().float()
- avg_factor = reduce_mean(num_pos)
- if cls_score is not None:
- if cls_score.numel() > 0:
- losses['loss_cls'] = self.loss_cls(
- cls_score,
- labels,
- label_weights,
- avg_factor=avg_factor,
- reduction_override=reduction_override)
- losses['pos_acc'] = accuracy(cls_score[pos_inds],
- labels[pos_inds])
- if bbox_pred is not None:
- # 0~self.num_classes-1 are FG, self.num_classes is BG
- # do not perform bounding box regression for BG anymore.
- if pos_inds.any():
- pos_bbox_pred = bbox_pred.reshape(bbox_pred.size(0),
- 4)[pos_inds.type(torch.bool)]
- imgs_whwh = imgs_whwh.reshape(bbox_pred.size(0),
- 4)[pos_inds.type(torch.bool)]
- losses['loss_bbox'] = self.loss_bbox(
- pos_bbox_pred / imgs_whwh,
- bbox_targets[pos_inds.type(torch.bool)] / imgs_whwh,
- bbox_weights[pos_inds.type(torch.bool)],
- avg_factor=avg_factor)
- losses['loss_iou'] = self.loss_iou(
- pos_bbox_pred,
- bbox_targets[pos_inds.type(torch.bool)],
- bbox_weights[pos_inds.type(torch.bool)],
- avg_factor=avg_factor)
- else:
- losses['loss_bbox'] = bbox_pred.sum() * 0
- losses['loss_iou'] = bbox_pred.sum() * 0
- return losses
-
- def _get_target_single(self, pos_inds, neg_inds, pos_bboxes, neg_bboxes,
- pos_gt_bboxes, pos_gt_labels, cfg):
- """Calculate the ground truth for proposals in the single image
- according to the sampling results.
-
- Almost the same as the implementation in `bbox_head`,
- we add pos_inds and neg_inds to select positive and
- negative samples instead of selecting the first num_pos
- as positive samples.
-
- Args:
- pos_inds (Tensor): Indices of the positive samples in the
- original proposal set; its length equals the number
- of positive samples.
- neg_inds (Tensor): Indices of the negative samples in the
- original proposal set; its length equals the number
- of negative samples.
- pos_bboxes (Tensor): Contains all the positive boxes,
- has shape (num_pos, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- neg_bboxes (Tensor): Contains all the negative boxes,
- has shape (num_neg, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- pos_gt_bboxes (Tensor): Contains all the gt_boxes,
- has shape (num_gt, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- pos_gt_labels (Tensor): Contains all the gt_labels,
- has shape (num_gt).
- cfg (obj:`ConfigDict`): `train_cfg` of R-CNN.
-
- Returns:
- Tuple[Tensor]: Ground truth for proposals in a single image.
- Containing the following Tensors:
-
- - labels(Tensor): Gt_labels for all proposals, has
- shape (num_proposals,).
- - label_weights(Tensor): Labels_weights for all proposals, has
- shape (num_proposals,).
- - bbox_targets(Tensor):Regression target for all proposals, has
- shape (num_proposals, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- - bbox_weights(Tensor):Regression weights for all proposals,
- has shape (num_proposals, 4).
- """
- num_pos = pos_bboxes.size(0)
- num_neg = neg_bboxes.size(0)
- num_samples = num_pos + num_neg
-
- # original implementation uses new_zeros since BG are set to be 0
- # now use empty & fill because BG cat_id = num_classes,
- # FG cat_id = [0, num_classes-1]
- labels = pos_bboxes.new_full((num_samples, ),
- self.num_classes,
- dtype=torch.long)
- label_weights = pos_bboxes.new_zeros(num_samples)
- bbox_targets = pos_bboxes.new_zeros(num_samples, 4)
- bbox_weights = pos_bboxes.new_zeros(num_samples, 4)
- if num_pos > 0:
- labels[pos_inds] = pos_gt_labels
- pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight
- label_weights[pos_inds] = pos_weight
- if not self.reg_decoded_bbox:
- pos_bbox_targets = self.bbox_coder.encode(
- pos_bboxes, pos_gt_bboxes)
- else:
- pos_bbox_targets = pos_gt_bboxes
- bbox_targets[pos_inds, :] = pos_bbox_targets
- bbox_weights[pos_inds, :] = 1
- if num_neg > 0:
- label_weights[neg_inds] = 1.0
-
- return labels, label_weights, bbox_targets, bbox_weights
-
- def get_targets(self,
- sampling_results,
- gt_bboxes,
- gt_labels,
- rcnn_train_cfg,
- concat=True):
- """Calculate the ground truth for all samples in a batch according to
- the sampling_results.
-
- Almost the same as the implementation in bbox_head; we pass the
- additional parameters pos_inds_list and neg_inds_list to the
- `_get_target_single` function.
-
- Args:
- sampling_results (List[obj:SamplingResults]): Assign results of
- all images in a batch after sampling.
- gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch,
- each tensor has shape (num_gt, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- gt_labels (list[Tensor]): Gt_labels of all images in a batch,
- each tensor has shape (num_gt,).
- rcnn_train_cfg (obj:`ConfigDict`): `train_cfg` of RCNN.
- concat (bool): Whether to concatenate the results of all
- the images in a single batch.
-
- Returns:
- Tuple[Tensor]: Ground truth for all proposals in a batch.
- Containing the following list of Tensors:
-
- - labels (list[Tensor],Tensor): Gt_labels for all
- proposals in a batch, each tensor in list has
- shape (num_proposals,) when `concat=False`, otherwise just
- a single tensor has shape (num_all_proposals,).
- - label_weights (list[Tensor]): Labels_weights for
- all proposals in a batch, each tensor in list has shape
- (num_proposals,) when `concat=False`, otherwise just a
- single tensor has shape (num_all_proposals,).
- - bbox_targets (list[Tensor],Tensor): Regression target
- for all proposals in a batch, each tensor in list has
- shape (num_proposals, 4) when `concat=False`, otherwise
- just a single tensor has shape (num_all_proposals, 4),
- the last dimension 4 represents [tl_x, tl_y, br_x, br_y].
- - bbox_weights (list[tensor],Tensor): Regression weights for
- all proposals in a batch, each tensor in list has shape
- (num_proposals, 4) when `concat=False`, otherwise just a
- single tensor has shape (num_all_proposals, 4).
- """
- pos_inds_list = [res.pos_inds for res in sampling_results]
- neg_inds_list = [res.neg_inds for res in sampling_results]
- pos_bboxes_list = [res.pos_bboxes for res in sampling_results]
- neg_bboxes_list = [res.neg_bboxes for res in sampling_results]
- pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results]
- pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results]
- labels, label_weights, bbox_targets, bbox_weights = multi_apply(
- self._get_target_single,
- pos_inds_list,
- neg_inds_list,
- pos_bboxes_list,
- neg_bboxes_list,
- pos_gt_bboxes_list,
- pos_gt_labels_list,
- cfg=rcnn_train_cfg)
- if concat:
- labels = torch.cat(labels, 0)
- label_weights = torch.cat(label_weights, 0)
- bbox_targets = torch.cat(bbox_targets, 0)
- bbox_weights = torch.cat(bbox_weights, 0)
- return labels, label_weights, bbox_targets, bbox_weights
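
Before leaving this head, a small standalone sketch of the tensor shapes its `forward()` juggles may help. It mirrors the permute/reshape bookkeeping documented in the docstrings above using plain torch ops (no mmdet modules); the sizes used (2 images, 100 proposals, 256 channels, 7x7 RoI grid, 80 classes) are assumptions for illustration only.

```python
import torch

# Illustrative sizes only: 2 images, 100 proposals, 256 channels, 7x7 RoI grid.
N, num_proposals, C, S, num_classes = 2, 100, 256, 7, 80
proposal_feat = torch.randn(N, num_proposals, C)
roi_feat = torch.randn(N * num_proposals, C, S, S)

# Self-attention expects (num_proposals, N, C).
attn_in = proposal_feat.permute(1, 0, 2)                    # (100, 2, 256)

# The instance-interactive conv consumes flattened proposals,
# one row per RoI, matching roi_feat's leading dimension.
flat = attn_in.permute(1, 0, 2).reshape(-1, C)              # (200, 256)
assert flat.shape[0] == roi_feat.shape[0]

# The heads then reshape per-RoI outputs back to per-image predictions.
cls_score = torch.randn(N * num_proposals, num_classes).view(N, num_proposals, -1)
bbox_delta = torch.randn(N * num_proposals, 4).view(N, num_proposals, -1)
print(cls_score.shape, bbox_delta.shape)                    # (2, 100, 80) (2, 100, 4)
```
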
diff --git a/spaces/CVPR/lama-example/saicinpainting/evaluation/losses/lpips.py b/spaces/CVPR/lama-example/saicinpainting/evaluation/losses/lpips.py
deleted file mode 100644
index b5f19b747f2457902695213f7efcde4fdc306c1f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/saicinpainting/evaluation/losses/lpips.py
+++ /dev/null
@@ -1,891 +0,0 @@
-############################################################
-# The contents below have been combined using files in the #
-# following repository: #
-# https://github.com/richzhang/PerceptualSimilarity #
-############################################################
-
-############################################################
-# __init__.py #
-############################################################
-
-import numpy as np
-from skimage.metrics import structural_similarity
-import torch
-
-from saicinpainting.utils import get_shape
-
-
-class PerceptualLoss(torch.nn.Module):
- def __init__(self, model='net-lin', net='alex', colorspace='rgb', model_path=None, spatial=False, use_gpu=True):
- # VGG using our perceptually-learned weights (LPIPS metric)
- # def __init__(self, model='net', net='vgg', use_gpu=True): # "default" way of using VGG as a perceptual loss
- super(PerceptualLoss, self).__init__()
- self.use_gpu = use_gpu
- self.spatial = spatial
- self.model = DistModel()
- self.model.initialize(model=model, net=net, use_gpu=use_gpu, colorspace=colorspace,
- model_path=model_path, spatial=self.spatial)
-
- def forward(self, pred, target, normalize=True):
- """
- Pred and target are Variables.
- If normalize is True, assumes the images are between [0,1] and then scales them between [-1,+1]
- If normalize is False, assumes the images are already between [-1,+1]
- Inputs pred and target are Nx3xHxW
- Output pytorch Variable N long
- """
-
- if normalize:
- target = 2 * target - 1
- pred = 2 * pred - 1
-
- return self.model(target, pred)
-
-
-def normalize_tensor(in_feat, eps=1e-10):
- norm_factor = torch.sqrt(torch.sum(in_feat ** 2, dim=1, keepdim=True))
- return in_feat / (norm_factor + eps)
-
-
-def l2(p0, p1, range=255.):
- return .5 * np.mean((p0 / range - p1 / range) ** 2)
-
-
-def psnr(p0, p1, peak=255.):
- return 10 * np.log10(peak ** 2 / np.mean((1. * p0 - 1. * p1) ** 2))
-
-
-def dssim(p0, p1, range=255.):
- return (1 - structural_similarity(p0, p1, data_range=range, multichannel=True)) / 2.
-
-
-def rgb2lab(in_img, mean_cent=False):
- from skimage import color
- img_lab = color.rgb2lab(in_img)
- if (mean_cent):
- img_lab[:, :, 0] = img_lab[:, :, 0] - 50
- return img_lab
-
-
-def tensor2np(tensor_obj):
- # change dimension of a tensor object into a numpy array
- return tensor_obj[0].cpu().float().numpy().transpose((1, 2, 0))
-
-
-def np2tensor(np_obj):
- # change dimension of an np array into a tensor
- return torch.Tensor(np_obj[:, :, :, np.newaxis].transpose((3, 2, 0, 1)))
-
-
-def tensor2tensorlab(image_tensor, to_norm=True, mc_only=False):
- # image tensor to lab tensor
- from skimage import color
-
- img = tensor2im(image_tensor)
- img_lab = color.rgb2lab(img)
- if (mc_only):
- img_lab[:, :, 0] = img_lab[:, :, 0] - 50
- if (to_norm and not mc_only):
- img_lab[:, :, 0] = img_lab[:, :, 0] - 50
- img_lab = img_lab / 100.
-
- return np2tensor(img_lab)
-
-
-def tensorlab2tensor(lab_tensor, return_inbnd=False):
- from skimage import color
- import warnings
- warnings.filterwarnings("ignore")
-
- lab = tensor2np(lab_tensor) * 100.
- lab[:, :, 0] = lab[:, :, 0] + 50
-
- rgb_back = 255. * np.clip(color.lab2rgb(lab.astype('float')), 0, 1)
- if (return_inbnd):
- # convert back to lab, see if we match
- lab_back = color.rgb2lab(rgb_back.astype('uint8'))
- mask = 1. * np.isclose(lab_back, lab, atol=2.)
- mask = np2tensor(np.prod(mask, axis=2)[:, :, np.newaxis])
- return (im2tensor(rgb_back), mask)
- else:
- return im2tensor(rgb_back)
-
-
-def rgb2lab(input):
- from skimage import color
- return color.rgb2lab(input / 255.)
-
-
-def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=255. / 2.):
- image_numpy = image_tensor[0].cpu().float().numpy()
- image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + cent) * factor
- return image_numpy.astype(imtype)
-
-
-def im2tensor(image, imtype=np.uint8, cent=1., factor=255. / 2.):
- return torch.Tensor((image / factor - cent)
- [:, :, :, np.newaxis].transpose((3, 2, 0, 1)))
-
-
-def tensor2vec(vector_tensor):
- return vector_tensor.data.cpu().numpy()[:, :, 0, 0]
-
-
-def voc_ap(rec, prec, use_07_metric=False):
- """ ap = voc_ap(rec, prec, [use_07_metric])
- Compute VOC AP given precision and recall.
- If use_07_metric is true, uses the
- VOC 07 11 point method (default:False).
- """
- if use_07_metric:
- # 11 point metric
- ap = 0.
- for t in np.arange(0., 1.1, 0.1):
- if np.sum(rec >= t) == 0:
- p = 0
- else:
- p = np.max(prec[rec >= t])
- ap = ap + p / 11.
- else:
- # correct AP calculation
- # first append sentinel values at the end
- mrec = np.concatenate(([0.], rec, [1.]))
- mpre = np.concatenate(([0.], prec, [0.]))
-
- # compute the precision envelope
- for i in range(mpre.size - 1, 0, -1):
- mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
-
- # to calculate area under PR curve, look for points
- # where X axis (recall) changes value
- i = np.where(mrec[1:] != mrec[:-1])[0]
-
- # and sum (\Delta recall) * prec
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
- return ap
-
-
-def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=255. / 2.):
- # def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=1.):
- image_numpy = image_tensor[0].cpu().float().numpy()
- image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + cent) * factor
- return image_numpy.astype(imtype)
-
-
-def im2tensor(image, imtype=np.uint8, cent=1., factor=255. / 2.):
- # def im2tensor(image, imtype=np.uint8, cent=1., factor=1.):
- return torch.Tensor((image / factor - cent)
- [:, :, :, np.newaxis].transpose((3, 2, 0, 1)))
-
-
-############################################################
-# base_model.py #
-############################################################
-
-
-class BaseModel(torch.nn.Module):
- def __init__(self):
- super().__init__()
-
- def name(self):
- return 'BaseModel'
-
- def initialize(self, use_gpu=True):
- self.use_gpu = use_gpu
-
- def forward(self):
- pass
-
- def get_image_paths(self):
- pass
-
- def optimize_parameters(self):
- pass
-
- def get_current_visuals(self):
- return self.input
-
- def get_current_errors(self):
- return {}
-
- def save(self, label):
- pass
-
- # helper saving function that can be used by subclasses
- def save_network(self, network, path, network_label, epoch_label):
- save_filename = '%s_net_%s.pth' % (epoch_label, network_label)
- save_path = os.path.join(path, save_filename)
- torch.save(network.state_dict(), save_path)
-
- # helper loading function that can be used by subclasses
- def load_network(self, network, network_label, epoch_label):
- save_filename = '%s_net_%s.pth' % (epoch_label, network_label)
- save_path = os.path.join(self.save_dir, save_filename)
- print('Loading network from %s' % save_path)
- network.load_state_dict(torch.load(save_path, map_location='cpu'))
-
- def update_learning_rate(self):
- pass
-
- def get_image_paths(self):
- return self.image_paths
-
- def save_done(self, flag=False):
- np.save(os.path.join(self.save_dir, 'done_flag'), flag)
- np.savetxt(os.path.join(self.save_dir, 'done_flag'), [flag, ], fmt='%i')
-
-
-############################################################
-# dist_model.py #
-############################################################
-
-import os
-from collections import OrderedDict
-from scipy.ndimage import zoom
-from tqdm import tqdm
-
-
-class DistModel(BaseModel):
- def name(self):
- return self.model_name
-
- def initialize(self, model='net-lin', net='alex', colorspace='Lab', pnet_rand=False, pnet_tune=False,
- model_path=None,
- use_gpu=True, printNet=False, spatial=False,
- is_train=False, lr=.0001, beta1=0.5, version='0.1'):
- '''
- INPUTS
- model - ['net-lin'] for linearly calibrated network
- ['net'] for off-the-shelf network
- ['L2'] for L2 distance in Lab colorspace
- ['SSIM'] for ssim in RGB colorspace
- net - ['squeeze','alex','vgg']
- model_path - if None, will look in weights/[NET_NAME].pth
- colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM
- use_gpu - bool - whether or not to use a GPU
- printNet - bool - whether or not to print network architecture out
- spatial - bool - whether to output an array containing varying distances across spatial dimensions
- spatial_shape - if given, output spatial shape. if None then spatial shape is determined automatically via spatial_factor (see below).
- spatial_factor - if given, specifies upsampling factor relative to the largest spatial extent of a convolutional layer. if None then resized to size of input images.
- spatial_order - spline order of filter for upsampling in spatial mode, by default 1 (bilinear).
- is_train - bool - [True] for training mode
- lr - float - initial learning rate
- beta1 - float - initial momentum term for adam
- version - 0.1 for latest, 0.0 was original (with a bug)
- '''
- BaseModel.initialize(self, use_gpu=use_gpu)
-
- self.model = model
- self.net = net
- self.is_train = is_train
- self.spatial = spatial
- self.model_name = '%s [%s]' % (model, net)
-
- if (self.model == 'net-lin'): # pretrained net + linear layer
- self.net = PNetLin(pnet_rand=pnet_rand, pnet_tune=pnet_tune, pnet_type=net,
- use_dropout=True, spatial=spatial, version=version, lpips=True)
- kw = dict(map_location='cpu')
- if (model_path is None):
- import inspect
- model_path = os.path.abspath(
- os.path.join(os.path.dirname(__file__), '..', '..', '..', 'models', 'lpips_models', f'{net}.pth'))
-
- if (not is_train):
- self.net.load_state_dict(torch.load(model_path, **kw), strict=False)
-
- elif (self.model == 'net'): # pretrained network
- self.net = PNetLin(pnet_rand=pnet_rand, pnet_type=net, lpips=False)
- elif (self.model in ['L2', 'l2']):
- self.net = L2(use_gpu=use_gpu, colorspace=colorspace) # not really a network, only for testing
- self.model_name = 'L2'
- elif (self.model in ['DSSIM', 'dssim', 'SSIM', 'ssim']):
- self.net = DSSIM(use_gpu=use_gpu, colorspace=colorspace)
- self.model_name = 'SSIM'
- else:
- raise ValueError("Model [%s] not recognized." % self.model)
-
- self.trainable_parameters = list(self.net.parameters())
-
- if self.is_train: # training mode
- # extra network on top to go from distances (d0,d1) => predicted human judgment (h*)
- self.rankLoss = BCERankingLoss()
- self.trainable_parameters += list(self.rankLoss.net.parameters())
- self.lr = lr
- self.old_lr = lr
- self.optimizer_net = torch.optim.Adam(self.trainable_parameters, lr=lr, betas=(beta1, 0.999))
- else: # test mode
- self.net.eval()
-
- # if (use_gpu):
- # self.net.to(gpu_ids[0])
- # self.net = torch.nn.DataParallel(self.net, device_ids=gpu_ids)
- # if (self.is_train):
- # self.rankLoss = self.rankLoss.to(device=gpu_ids[0]) # just put this on GPU0
-
- if (printNet):
- print('---------- Networks initialized -------------')
- print_network(self.net)
- print('-----------------------------------------------')
-
- def forward(self, in0, in1, retPerLayer=False):
- ''' Function computes the distance between image patches in0 and in1
- INPUTS
- in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1]
- OUTPUT
- computed distances between in0 and in1
- '''
-
- return self.net(in0, in1, retPerLayer=retPerLayer)
-
- # ***** TRAINING FUNCTIONS *****
- def optimize_parameters(self):
- self.forward_train()
- self.optimizer_net.zero_grad()
- self.backward_train()
- self.optimizer_net.step()
- self.clamp_weights()
-
- def clamp_weights(self):
- for module in self.net.modules():
- if (hasattr(module, 'weight') and module.kernel_size == (1, 1)):
- module.weight.data = torch.clamp(module.weight.data, min=0)
-
- def set_input(self, data):
- self.input_ref = data['ref']
- self.input_p0 = data['p0']
- self.input_p1 = data['p1']
- self.input_judge = data['judge']
-
- # if (self.use_gpu):
- # self.input_ref = self.input_ref.to(device=self.gpu_ids[0])
- # self.input_p0 = self.input_p0.to(device=self.gpu_ids[0])
- # self.input_p1 = self.input_p1.to(device=self.gpu_ids[0])
- # self.input_judge = self.input_judge.to(device=self.gpu_ids[0])
-
- # self.var_ref = Variable(self.input_ref, requires_grad=True)
- # self.var_p0 = Variable(self.input_p0, requires_grad=True)
- # self.var_p1 = Variable(self.input_p1, requires_grad=True)
-
- def forward_train(self): # run forward pass
- # print(self.net.module.scaling_layer.shift)
- # print(torch.norm(self.net.module.net.slice1[0].weight).item(), torch.norm(self.net.module.lin0.model[1].weight).item())
-
- assert False, "We shouldn't get here when using LPIPS as a metric"
-
- self.d0 = self(self.var_ref, self.var_p0)
- self.d1 = self(self.var_ref, self.var_p1)
- self.acc_r = self.compute_accuracy(self.d0, self.d1, self.input_judge)
-
- self.var_judge = Variable(1. * self.input_judge).view(self.d0.size())
-
- self.loss_total = self.rankLoss(self.d0, self.d1, self.var_judge * 2. - 1.)
-
- return self.loss_total
-
- def backward_train(self):
- torch.mean(self.loss_total).backward()
-
- def compute_accuracy(self, d0, d1, judge):
- ''' d0, d1 are Variables, judge is a Tensor '''
- d1_lt_d0 = (d1 < d0).cpu().data.numpy().flatten()
- judge_per = judge.cpu().numpy().flatten()
- return d1_lt_d0 * judge_per + (1 - d1_lt_d0) * (1 - judge_per)
-
- def get_current_errors(self):
- retDict = OrderedDict([('loss_total', self.loss_total.data.cpu().numpy()),
- ('acc_r', self.acc_r)])
-
- for key in retDict.keys():
- retDict[key] = np.mean(retDict[key])
-
- return retDict
-
- def get_current_visuals(self):
- zoom_factor = 256 / self.var_ref.data.size()[2]
-
- ref_img = tensor2im(self.var_ref.data)
- p0_img = tensor2im(self.var_p0.data)
- p1_img = tensor2im(self.var_p1.data)
-
- ref_img_vis = zoom(ref_img, [zoom_factor, zoom_factor, 1], order=0)
- p0_img_vis = zoom(p0_img, [zoom_factor, zoom_factor, 1], order=0)
- p1_img_vis = zoom(p1_img, [zoom_factor, zoom_factor, 1], order=0)
-
- return OrderedDict([('ref', ref_img_vis),
- ('p0', p0_img_vis),
- ('p1', p1_img_vis)])
-
- def save(self, path, label):
- if (self.use_gpu):
- self.save_network(self.net.module, path, '', label)
- else:
- self.save_network(self.net, path, '', label)
- self.save_network(self.rankLoss.net, path, 'rank', label)
-
- def update_learning_rate(self, nepoch_decay):
- lrd = self.lr / nepoch_decay
- lr = self.old_lr - lrd
-
- for param_group in self.optimizer_net.param_groups:
- param_group['lr'] = lr
-
- print('update lr [%s] decay: %f -> %f' % (self.model_name, self.old_lr, lr))
- self.old_lr = lr
-
-
-def score_2afc_dataset(data_loader, func, name=''):
- ''' Function computes Two Alternative Forced Choice (2AFC) score using
- distance function 'func' in dataset 'data_loader'
- INPUTS
- data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside
- func - callable distance function - calling d=func(in0,in1) should take 2
- pytorch tensors with shape Nx3xXxY, and return numpy array of length N
- OUTPUTS
- [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators
- [1] - dictionary with following elements
- d0s,d1s - N arrays containing distances between reference patch to perturbed patches
- gts - N array in [0,1], preferred patch selected by human evaluators
- (closer to "0" for left patch p0, "1" for right patch p1,
- "0.6" means 60pct people preferred right patch, 40pct preferred left)
- scores - N array in [0,1], corresponding to what percentage function agreed with humans
- CONSTS
- N - number of test triplets in data_loader
- '''
-
- d0s = []
- d1s = []
- gts = []
-
- for data in tqdm(data_loader.load_data(), desc=name):
- d0s += func(data['ref'], data['p0']).data.cpu().numpy().flatten().tolist()
- d1s += func(data['ref'], data['p1']).data.cpu().numpy().flatten().tolist()
- gts += data['judge'].cpu().numpy().flatten().tolist()
-
- d0s = np.array(d0s)
- d1s = np.array(d1s)
- gts = np.array(gts)
- scores = (d0s < d1s) * (1. - gts) + (d1s < d0s) * gts + (d1s == d0s) * .5
-
- return (np.mean(scores), dict(d0s=d0s, d1s=d1s, gts=gts, scores=scores))
-
-
-def score_jnd_dataset(data_loader, func, name=''):
- ''' Function computes JND score using distance function 'func' in dataset 'data_loader'
- INPUTS
- data_loader - CustomDatasetDataLoader object - contains a JNDDataset inside
- func - callable distance function - calling d=func(in0,in1) should take 2
- pytorch tensors with shape Nx3xXxY, and return pytorch array of length N
- OUTPUTS
- [0] - JND score in [0,1], mAP score (area under precision-recall curve)
- [1] - dictionary with following elements
- ds - N array containing distances between two patches shown to human evaluator
- sames - N array containing fraction of people who thought the two patches were identical
- CONSTS
- N - number of test triplets in data_loader
- '''
-
- ds = []
- gts = []
-
- for data in tqdm(data_loader.load_data(), desc=name):
- ds += func(data['p0'], data['p1']).data.cpu().numpy().tolist()
- gts += data['same'].cpu().numpy().flatten().tolist()
-
- sames = np.array(gts)
- ds = np.array(ds)
-
- sorted_inds = np.argsort(ds)
- ds_sorted = ds[sorted_inds]
- sames_sorted = sames[sorted_inds]
-
- TPs = np.cumsum(sames_sorted)
- FPs = np.cumsum(1 - sames_sorted)
- FNs = np.sum(sames_sorted) - TPs
-
- precs = TPs / (TPs + FPs)
- recs = TPs / (TPs + FNs)
- score = voc_ap(recs, precs)
-
- return (score, dict(ds=ds, sames=sames))
-
-
-############################################################
-# networks_basic.py #
-############################################################
-
-import torch.nn as nn
-from torch.autograd import Variable
-import numpy as np
-
-
-def spatial_average(in_tens, keepdim=True):
- return in_tens.mean([2, 3], keepdim=keepdim)
-
-
-def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W
- in_H = in_tens.shape[2]
- scale_factor = 1. * out_H / in_H
-
- return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens)
-
-
-# Learned perceptual metric
-class PNetLin(nn.Module):
- def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False,
- version='0.1', lpips=True):
- super(PNetLin, self).__init__()
-
- self.pnet_type = pnet_type
- self.pnet_tune = pnet_tune
- self.pnet_rand = pnet_rand
- self.spatial = spatial
- self.lpips = lpips
- self.version = version
- self.scaling_layer = ScalingLayer()
-
- if (self.pnet_type in ['vgg', 'vgg16']):
- net_type = vgg16
- self.chns = [64, 128, 256, 512, 512]
- elif (self.pnet_type == 'alex'):
- net_type = alexnet
- self.chns = [64, 192, 384, 256, 256]
- elif (self.pnet_type == 'squeeze'):
- net_type = squeezenet
- self.chns = [64, 128, 256, 384, 384, 512, 512]
- self.L = len(self.chns)
-
- self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)
-
- if (lpips):
- self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
- self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
- self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
- self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
- self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
- self.lins = [self.lin0, self.lin1, self.lin2, self.lin3, self.lin4]
- if (self.pnet_type == 'squeeze'): # 7 layers for squeezenet
- self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout)
- self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout)
- self.lins += [self.lin5, self.lin6]
-
- def forward(self, in0, in1, retPerLayer=False):
- # v0.0 - original release had a bug, where input was not scaled
- in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version == '0.1' else (
- in0, in1)
- outs0, outs1 = self.net(in0_input), self.net(in1_input)
- feats0, feats1, diffs = {}, {}, {}
-
- for kk in range(self.L):
- feats0[kk], feats1[kk] = normalize_tensor(outs0[kk]), normalize_tensor(outs1[kk])
- diffs[kk] = (feats0[kk] - feats1[kk]) ** 2
-
- if (self.lpips):
- if (self.spatial):
- res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)]
- else:
- if (self.spatial):
- res = [upsample(diffs[kk].sum(dim=1, keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(diffs[kk].sum(dim=1, keepdim=True), keepdim=True) for kk in range(self.L)]
-
- val = res[0]
- for l in range(1, self.L):
- val += res[l]
-
- if (retPerLayer):
- return (val, res)
- else:
- return val
-
-
-class ScalingLayer(nn.Module):
- def __init__(self):
- super(ScalingLayer, self).__init__()
- self.register_buffer('shift', torch.Tensor([-.030, -.088, -.188])[None, :, None, None])
- self.register_buffer('scale', torch.Tensor([.458, .448, .450])[None, :, None, None])
-
- def forward(self, inp):
- return (inp - self.shift) / self.scale
-
-
-class NetLinLayer(nn.Module):
- ''' A single linear layer which does a 1x1 conv '''
-
- def __init__(self, chn_in, chn_out=1, use_dropout=False):
- super(NetLinLayer, self).__init__()
-
- layers = [nn.Dropout(), ] if (use_dropout) else []
- layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False), ]
- self.model = nn.Sequential(*layers)
-
-
-class Dist2LogitLayer(nn.Module):
- ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) '''
-
- def __init__(self, chn_mid=32, use_sigmoid=True):
- super(Dist2LogitLayer, self).__init__()
-
- layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True), ]
- layers += [nn.LeakyReLU(0.2, True), ]
- layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True), ]
- layers += [nn.LeakyReLU(0.2, True), ]
- layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True), ]
- if (use_sigmoid):
- layers += [nn.Sigmoid(), ]
- self.model = nn.Sequential(*layers)
-
- def forward(self, d0, d1, eps=0.1):
- return self.model(torch.cat((d0, d1, d0 - d1, d0 / (d1 + eps), d1 / (d0 + eps)), dim=1))
-
-
-class BCERankingLoss(nn.Module):
- def __init__(self, chn_mid=32):
- super(BCERankingLoss, self).__init__()
- self.net = Dist2LogitLayer(chn_mid=chn_mid)
- # self.parameters = list(self.net.parameters())
- self.loss = torch.nn.BCELoss()
-
- def forward(self, d0, d1, judge):
- per = (judge + 1.) / 2.
- self.logit = self.net(d0, d1)
- return self.loss(self.logit, per)
-
-
-# L2, DSSIM metrics
-class FakeNet(nn.Module):
- def __init__(self, use_gpu=True, colorspace='Lab'):
- super(FakeNet, self).__init__()
- self.use_gpu = use_gpu
- self.colorspace = colorspace
-
-
-class L2(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert (in0.size()[0] == 1) # currently only supports batchSize 1
-
- if (self.colorspace == 'RGB'):
- (N, C, X, Y) = in0.size()
- value = torch.mean(torch.mean(torch.mean((in0 - in1) ** 2, dim=1).view(N, 1, X, Y), dim=2).view(N, 1, 1, Y),
- dim=3).view(N)
- return value
- elif (self.colorspace == 'Lab'):
- value = l2(tensor2np(tensor2tensorlab(in0.data, to_norm=False)),
- tensor2np(tensor2tensorlab(in1.data, to_norm=False)), range=100.).astype('float')
- ret_var = Variable(torch.Tensor((value,)))
- # if (self.use_gpu):
- # ret_var = ret_var.cuda()
- return ret_var
-
-
-class DSSIM(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert (in0.size()[0] == 1) # currently only supports batchSize 1
-
- if (self.colorspace == 'RGB'):
- value = dssim(1. * tensor2im(in0.data), 1. * tensor2im(in1.data), range=255.).astype('float')
- elif (self.colorspace == 'Lab'):
- value = dssim(tensor2np(tensor2tensorlab(in0.data, to_norm=False)),
- tensor2np(tensor2tensorlab(in1.data, to_norm=False)), range=100.).astype('float')
- ret_var = Variable(torch.Tensor((value,)))
- # if (self.use_gpu):
- # ret_var = ret_var.cuda()
- return ret_var
-
-
-def print_network(net):
- num_params = 0
- for param in net.parameters():
- num_params += param.numel()
- print('Network', net)
- print('Total number of parameters: %d' % num_params)
-
-
-############################################################
-# pretrained_networks.py #
-############################################################
-
-from collections import namedtuple
-import torch
-from torchvision import models as tv
-
-
-class squeezenet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(squeezenet, self).__init__()
- pretrained_features = tv.squeezenet1_1(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.slice6 = torch.nn.Sequential()
- self.slice7 = torch.nn.Sequential()
- self.N_slices = 7
- for x in range(2):
- self.slice1.add_module(str(x), pretrained_features[x])
- for x in range(2, 5):
- self.slice2.add_module(str(x), pretrained_features[x])
- for x in range(5, 8):
- self.slice3.add_module(str(x), pretrained_features[x])
- for x in range(8, 10):
- self.slice4.add_module(str(x), pretrained_features[x])
- for x in range(10, 11):
- self.slice5.add_module(str(x), pretrained_features[x])
- for x in range(11, 12):
- self.slice6.add_module(str(x), pretrained_features[x])
- for x in range(12, 13):
- self.slice7.add_module(str(x), pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1 = h
- h = self.slice2(h)
- h_relu2 = h
- h = self.slice3(h)
- h_relu3 = h
- h = self.slice4(h)
- h_relu4 = h
- h = self.slice5(h)
- h_relu5 = h
- h = self.slice6(h)
- h_relu6 = h
- h = self.slice7(h)
- h_relu7 = h
- vgg_outputs = namedtuple("SqueezeOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5', 'relu6', 'relu7'])
- out = vgg_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5, h_relu6, h_relu7)
-
- return out
-
-
-class alexnet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(alexnet, self).__init__()
- alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.N_slices = 5
- for x in range(2):
- self.slice1.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(2, 5):
- self.slice2.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(5, 8):
- self.slice3.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(8, 10):
- self.slice4.add_module(str(x), alexnet_pretrained_features[x])
- for x in range(10, 12):
- self.slice5.add_module(str(x), alexnet_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1 = h
- h = self.slice2(h)
- h_relu2 = h
- h = self.slice3(h)
- h_relu3 = h
- h = self.slice4(h)
- h_relu4 = h
- h = self.slice5(h)
- h_relu5 = h
- alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5'])
- out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5)
-
- return out
-
-
-class vgg16(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True):
- super(vgg16, self).__init__()
- vgg_pretrained_features = tv.vgg16(pretrained=pretrained).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.N_slices = 5
- for x in range(4):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(4, 9):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(9, 16):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(16, 23):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(23, 30):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h = self.slice1(X)
- h_relu1_2 = h
- h = self.slice2(h)
- h_relu2_2 = h
- h = self.slice3(h)
- h_relu3_3 = h
- h = self.slice4(h)
- h_relu4_3 = h
- h = self.slice5(h)
- h_relu5_3 = h
- vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3'])
- out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3)
-
- return out
-
-
-class resnet(torch.nn.Module):
- def __init__(self, requires_grad=False, pretrained=True, num=18):
- super(resnet, self).__init__()
- if (num == 18):
- self.net = tv.resnet18(pretrained=pretrained)
- elif (num == 34):
- self.net = tv.resnet34(pretrained=pretrained)
- elif (num == 50):
- self.net = tv.resnet50(pretrained=pretrained)
- elif (num == 101):
- self.net = tv.resnet101(pretrained=pretrained)
- elif (num == 152):
- self.net = tv.resnet152(pretrained=pretrained)
- self.N_slices = 5
-
- self.conv1 = self.net.conv1
- self.bn1 = self.net.bn1
- self.relu = self.net.relu
- self.maxpool = self.net.maxpool
- self.layer1 = self.net.layer1
- self.layer2 = self.net.layer2
- self.layer3 = self.net.layer3
- self.layer4 = self.net.layer4
-
- def forward(self, X):
- h = self.conv1(X)
- h = self.bn1(h)
- h = self.relu(h)
- h_relu1 = h
- h = self.maxpool(h)
- h = self.layer1(h)
- h_conv2 = h
- h = self.layer2(h)
- h_conv3 = h
- h = self.layer3(h)
- h_conv4 = h
- h = self.layer4(h)
- h_conv5 = h
-
- outputs = namedtuple("Outputs", ['relu1', 'conv2', 'conv3', 'conv4', 'conv5'])
- out = outputs(h_relu1, h_conv2, h_conv3, h_conv4, h_conv5)
-
- return out
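
To make the 2AFC scoring rule in `score_2afc_dataset` concrete, here is a toy numeric sketch of just that formula. The distances and human judgements are made-up values for illustration, not outputs of any model.

```python
import numpy as np

# Made-up distances and human preferences for three triplets.
d0s = np.array([0.20, 0.50, 0.30])   # distance(ref, p0)
d1s = np.array([0.40, 0.10, 0.30])   # distance(ref, p1)
gts = np.array([0.10, 0.90, 0.50])   # fraction of people who preferred p1

# Same rule as in score_2afc_dataset: the metric gets credit in proportion to
# how often its preference matches the crowd's; ties score 0.5.
scores = (d0s < d1s) * (1. - gts) + (d1s < d0s) * gts + (d1s == d0s) * .5
print(scores, scores.mean())          # [0.9 0.9 0.5] ~0.767
```
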
diff --git a/spaces/Chaitanya01/InvestingPlatform/tempCodeRunnerFile.py b/spaces/Chaitanya01/InvestingPlatform/tempCodeRunnerFile.py
deleted file mode 100644
index 78a11b17cf86773d18b2f5e35b9f616f841ff43e..0000000000000000000000000000000000000000
--- a/spaces/Chaitanya01/InvestingPlatform/tempCodeRunnerFile.py
+++ /dev/null
@@ -1,2 +0,0 @@
- # client.chat_postMessage(channel = f"#{df.loc[symbol]['alert_type'].lower()}_signal",
-
\ No newline at end of file
diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/helpers/phind.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/helpers/phind.py
deleted file mode 100644
index 70525d51d849c43bd1cf29c7f9b18f22bff1e982..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/helpers/phind.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import sys
-import json
-import datetime
-import urllib.parse
-
-from curl_cffi import requests
-
-config = json.loads(sys.argv[1])
-prompt = config['messages'][-1]['content']
-
-skill = 'expert' if config['model'] == 'gpt-4' else 'intermediate'
-
-json_data = json.dumps({
- 'question': prompt,
- 'options': {
- 'skill': skill,
- 'date': datetime.datetime.now().strftime('%d/%m/%Y'),
- 'language': 'en',
- 'detailed': True,
- 'creative': True,
- 'customLinks': []}}, separators=(',', ':'))
-
-headers = {
- 'Content-Type': 'application/json',
- 'Pragma': 'no-cache',
- 'Accept': '*/*',
- 'Sec-Fetch-Site': 'same-origin',
- 'Accept-Language': 'en-GB,en;q=0.9',
- 'Cache-Control': 'no-cache',
- 'Sec-Fetch-Mode': 'cors',
- 'Content-Length': str(len(json_data)),
- 'Origin': 'https://www.phind.com',
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15',
- 'Referer': f'https://www.phind.com/search?q={urllib.parse.quote(prompt)}&source=searchbox',
- 'Connection': 'keep-alive',
- 'Host': 'www.phind.com',
- 'Sec-Fetch-Dest': 'empty'
-}
-
-
-def output(chunk):
- try:
- if b'PHIND_METADATA' in chunk:
- return
-
- if chunk == b'data: \r\ndata: \r\ndata: \r\n\r\n':
- chunk = b'data: \n\r\n\r\n'
-
- chunk = chunk.decode()
-
- chunk = chunk.replace('data: \r\n\r\ndata: ', 'data: \n')
- chunk = chunk.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\r\n\r\n')
- chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '')
-
- print(chunk, flush=True, end = '')
-
- except json.decoder.JSONDecodeError:
- pass
-
-while True:
- try:
- response = requests.post('https://www.phind.com/api/infer/answer',
- headers=headers, data=json_data, content_callback=output, timeout=999999, impersonate='safari15_5')
-
- exit(0)
-
- except Exception as e:
- print('an error occurred, retrying... |', e, flush=True)
- continue
\ No newline at end of file
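
For clarity, the chunk clean-up that `output()` performs on the streamed response can be exercised on its own. The sketch below repeats the same string replacements on a made-up sample chunk (not a captured Phind response).

```python
# Stand-alone version of the string clean-up done by output(); the sample
# chunk below is invented for illustration.
def clean(chunk: bytes) -> str:
    if b'PHIND_METADATA' in chunk:
        return ''
    text = chunk.decode()
    text = text.replace('data: \r\n\r\ndata: ', 'data: \n')
    text = text.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\r\n\r\n')
    return text.replace('data: ', '').replace('\r\n\r\n', '')

print(clean(b'data: Hello\r\n\r\ndata:  world\r\n\r\n'))  # -> "Hello world"
```
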
diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py" "b/spaces/Cong723/gpt-academic-public/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py"
deleted file mode 100644
index 4adb9a464bc71ec4a177b76536d5e5fab619ef2d..0000000000000000000000000000000000000000
--- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py"
+++ /dev/null
@@ -1,131 +0,0 @@
-from toolbox import CatchException, report_execption, write_results_to_file
-from toolbox import update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-from .crazy_utils import read_and_clean_pdf_text
-from colorful import *
-
-@CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
- import glob
- import os
-
- # Basic information: feature and contributor
- chatbot.append([
- "函数插件功能?",
- "批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import fitz
- import tiktoken
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Clear the history to avoid overflowing the input
- history = []
-
- # Check the input argument; if none is given, exit immediately
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "":
- txt = '空空如也的输入栏'
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Collect the list of files to process
- file_manifest = [f for f in glob.glob(
- f'{project_folder}/**/*.pdf', recursive=True)]
-
- # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Start the actual task
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt)
-
-
-def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt):
- import os
- import tiktoken
- TOKEN_LIMIT_PER_FRAGMENT = 1280
- generated_conclusion_files = []
- for index, fp in enumerate(file_manifest):
-
- # Read the PDF file
- file_content, page_one = read_and_clean_pdf_text(fp)
-
- # Recursively split the PDF text
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
- paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
- page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
-
- # For better results, strip everything after the Introduction section (if present)
- paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
-
- # Single thread: extract the paper's meta information
- paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取:{paper_meta}",
- inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
- llm_kwargs=llm_kwargs,
- chatbot=chatbot, history=[],
- sys_prompt="Your job is to collect information from materials。",
- )
-
- # Multi-threaded translation
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array=[
- f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments],
- inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments],
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history_array=[[paper_meta] for _ in paper_fragments],
- sys_prompt_array=[
- "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in paper_fragments],
- # max_workers=5 # maximum parallelism allowed by OpenAI
- )
-
- # Format the report
- for i,k in enumerate(gpt_response_collection):
- if i%2==0:
- gpt_response_collection[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection)//2}]: \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection)//2}]:\n "
- else:
- gpt_response_collection[i] = gpt_response_collection[i]
- final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
- final.extend(gpt_response_collection)
- create_report_file_name = f"{os.path.basename(fp)}.trans.md"
- res = write_results_to_file(final, file_name=create_report_file_name)
-
- # Update the UI
- generated_conclusion_files.append(f'./gpt_log/{create_report_file_name}')
- chatbot.append((f"{fp}完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Prepare the files for download
- import shutil
- for pdf_path in generated_conclusion_files:
- # Rename the file
- rename_file = f'./gpt_log/总结论文-{os.path.basename(pdf_path)}'
- if os.path.exists(rename_file):
- os.remove(rename_file)
- shutil.copyfile(pdf_path, rename_file)
- if os.path.exists(pdf_path):
- os.remove(pdf_path)
- chatbot.append(("给出输出文件清单", str(generated_conclusion_files)))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
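
The plugin above relies on splitting long PDF text into fragments that fit under TOKEN_LIMIT_PER_FRAGMENT before translating each piece. A hedged, self-contained sketch of that idea follows; it uses a whitespace word count as a stand-in tokenizer and a hypothetical `split_by_token_limit` helper, whereas the real code calls `breakdown_txt_to_satisfy_token_limit_for_pdf` with the model's tiktoken tokenizer.

```python
# Toy sketch only: a whitespace "tokenizer" stands in for the real
# tiktoken-based get_token_num; split_by_token_limit is a hypothetical
# helper, not the project's breakdown_txt_to_satisfy_token_limit_for_pdf.
def split_by_token_limit(text, get_token_num, limit):
    fragments, current = [], []
    for line in text.splitlines(keepends=True):
        # start a new fragment when adding this line would exceed the budget
        if current and get_token_num(''.join(current + [line])) > limit:
            fragments.append(''.join(current))
            current = []
        current.append(line)
    if current:
        fragments.append(''.join(current))
    return fragments

toy_token_count = lambda s: len(s.split())
doc = "line one two three\nline four five\nline six seven eight nine\n"
print(split_by_token_limit(doc, toy_token_count, limit=6))
# -> three fragments, each under the 6-"token" budget
```
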
diff --git a/spaces/CourserLi/classify/README.md b/spaces/CourserLi/classify/README.md
deleted file mode 100644
index 86e7585a247287904efd72b31775b32577864bd1..0000000000000000000000000000000000000000
--- a/spaces/CourserLi/classify/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Classify
-emoji: 🏢
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/ddpm.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/ddpm.py
deleted file mode 100644
index f71a44af48c8cba8e97849b7e6813b3e6f9fe83c..0000000000000000000000000000000000000000
--- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/ddpm.py
+++ /dev/null
@@ -1,1797 +0,0 @@
-"""
-wild mixture of
-https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
-https://github.com/CompVis/taming-transformers
--- merci
-"""
-
-import torch
-import torch.nn as nn
-import numpy as np
-import pytorch_lightning as pl
-from torch.optim.lr_scheduler import LambdaLR
-from einops import rearrange, repeat
-from contextlib import contextmanager, nullcontext
-from functools import partial
-import itertools
-from tqdm import tqdm
-from torchvision.utils import make_grid
-from pytorch_lightning.utilities.distributed import rank_zero_only
-from omegaconf import ListConfig
-
-from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
-from ldm.modules.ema import LitEma
-from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
-from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL
-from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from ldm.models.diffusion.ddim import DDIMSampler
-
-
-__conditioning_keys__ = {'concat': 'c_concat',
- 'crossattn': 'c_crossattn',
- 'adm': 'y'}
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-def uniform_on_device(r1, r2, shape, device):
- return (r1 - r2) * torch.rand(*shape, device=device) + r2
-
-
-class DDPM(pl.LightningModule):
- # classic DDPM with Gaussian diffusion, in image space
- def __init__(self,
- unet_config,
- timesteps=1000,
- beta_schedule="linear",
- loss_type="l2",
- ckpt_path=None,
- ignore_keys=[],
- load_only_unet=False,
- monitor="val/loss",
- use_ema=True,
- first_stage_key="image",
- image_size=256,
- channels=3,
- log_every_t=100,
- clip_denoised=True,
- linear_start=1e-4,
- linear_end=2e-2,
- cosine_s=8e-3,
- given_betas=None,
- original_elbo_weight=0.,
- v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
- l_simple_weight=1.,
- conditioning_key=None,
- parameterization="eps", # all assuming fixed variance schedules
- scheduler_config=None,
- use_positional_encodings=False,
- learn_logvar=False,
- logvar_init=0.,
- make_it_fit=False,
- ucg_training=None,
- reset_ema=False,
- reset_num_ema_updates=False,
- ):
- super().__init__()
- assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"'
- self.parameterization = parameterization
- print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
- self.cond_stage_model = None
- self.clip_denoised = clip_denoised
- self.log_every_t = log_every_t
- self.first_stage_key = first_stage_key
- self.image_size = image_size # try conv?
- self.channels = channels
- self.use_positional_encodings = use_positional_encodings
- self.model = DiffusionWrapper(unet_config, conditioning_key)
- count_params(self.model, verbose=True)
- self.use_ema = use_ema
- if self.use_ema:
- self.model_ema = LitEma(self.model)
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
-
- self.use_scheduler = scheduler_config is not None
- if self.use_scheduler:
- self.scheduler_config = scheduler_config
-
- self.v_posterior = v_posterior
- self.original_elbo_weight = original_elbo_weight
- self.l_simple_weight = l_simple_weight
-
- if monitor is not None:
- self.monitor = monitor
- self.make_it_fit = make_it_fit
- if reset_ema: assert exists(ckpt_path)
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)
- if reset_ema:
- assert self.use_ema
- print(f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,
- linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
-
- self.loss_type = loss_type
-
- self.learn_logvar = learn_logvar
- logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))
- if self.learn_logvar:
-            self.logvar = nn.Parameter(logvar, requires_grad=True)
- else:
- self.register_buffer('logvar', logvar)
-
- self.ucg_training = ucg_training or dict()
- if self.ucg_training:
- self.ucg_prng = np.random.RandomState()
-
- def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if exists(given_betas):
- betas = given_betas
- else:
- betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,
- cosine_s=cosine_s)
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.linear_start = linear_start
- self.linear_end = linear_end
- assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (
- 1. - alphas_cumprod) + self.v_posterior * betas
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
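-        # per-timestep weights of the VLB term L_{t-1}; for eps-prediction this is
-        # beta_t^2 / (2 * sigma_t^2 * alpha_t * (1 - alpha_bar_t)), following the DDPM VLB derivation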
- if self.parameterization == "eps":
- lvlb_weights = self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))
- elif self.parameterization == "x0":
- lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))
- elif self.parameterization == "v":
- lvlb_weights = torch.ones_like(self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)))
- else:
- raise NotImplementedError("mu not supported")
- lvlb_weights[0] = lvlb_weights[1]
- self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)
- assert not torch.isnan(self.lvlb_weights).all()
-
- @contextmanager
- def ema_scope(self, context=None):
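-        # temporarily swap in the EMA weights (e.g. for sampling/validation); training weights are restored on exit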
- if self.use_ema:
- self.model_ema.store(self.model.parameters())
- self.model_ema.copy_to(self.model)
- if context is not None:
- print(f"{context}: Switched to EMA weights")
- try:
- yield None
- finally:
- if self.use_ema:
- self.model_ema.restore(self.model.parameters())
- if context is not None:
- print(f"{context}: Restored training weights")
-
- @torch.no_grad()
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- if self.make_it_fit:
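-            # tile old checkpoint weights along the first two axes so they match the new parameter shapes,
-            # then rescale by (roughly) how often each old entry was reused to keep magnitudes comparable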
- n_params = len([name for name, _ in
- itertools.chain(self.named_parameters(),
- self.named_buffers())])
- for name, param in tqdm(
- itertools.chain(self.named_parameters(),
- self.named_buffers()),
- desc="Fitting old weights to new weights",
- total=n_params
- ):
-                if name not in sd:
- continue
- old_shape = sd[name].shape
- new_shape = param.shape
- assert len(old_shape) == len(new_shape)
- if len(new_shape) > 2:
- # we only modify first two axes
- assert new_shape[2:] == old_shape[2:]
- # assumes first axis corresponds to output dim
- if not new_shape == old_shape:
- new_param = param.clone()
- old_param = sd[name]
- if len(new_shape) == 1:
- for i in range(new_param.shape[0]):
- new_param[i] = old_param[i % old_shape[0]]
- elif len(new_shape) >= 2:
- for i in range(new_param.shape[0]):
- for j in range(new_param.shape[1]):
- new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]]
-
- n_used_old = torch.ones(old_shape[1])
- for j in range(new_param.shape[1]):
- n_used_old[j % old_shape[1]] += 1
- n_used_new = torch.zeros(new_shape[1])
- for j in range(new_param.shape[1]):
- n_used_new[j] = n_used_old[j % old_shape[1]]
-
- n_used_new = n_used_new[None, :]
- while len(n_used_new.shape) < len(new_shape):
- n_used_new = n_used_new.unsqueeze(-1)
- new_param /= n_used_new
-
- sd[name] = new_param
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys:\n {missing}")
- if len(unexpected) > 0:
- print(f"\nUnexpected Keys:\n {unexpected}")
-
- def q_mean_variance(self, x_start, t):
- """
- Get the distribution q(x_t | x_0).
- :param x_start: the [N x C x ...] tensor of noiseless inputs.
- :param t: the number of diffusion steps (minus 1). Here, 0 means one step.
- :return: A tuple (mean, variance, log_variance), all of x_start's shape.
- """
- mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)
- variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
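-        # invert q(x_t | x_0): x_0 = sqrt(1 / alpha_bar_t) * x_t - sqrt(1 / alpha_bar_t - 1) * eps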
- return (
- extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def predict_start_from_z_and_v(self, x_t, t, v):
- # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v
- )
-
- def predict_eps_from_z_and_v(self, x_t, t, v):
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t
- )
-
- def q_posterior(self, x_start, x_t, t):
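-        # mean and (log-)variance of the true posterior q(x_{t-1} | x_t, x_0) from the precomputed coefficients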
- posterior_mean = (
- extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, clip_denoised: bool):
- model_out = self.model(x, t)
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):
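-        # single ancestral sampling step: draw x_{t-1} from the model's Gaussian p(x_{t-1} | x_t)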
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_loop(self, shape, return_intermediates=False):
- device = self.betas.device
- b = shape[0]
- img = torch.randn(shape, device=device)
- intermediates = [img]
- for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
- img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
- clip_denoised=self.clip_denoised)
- if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
- intermediates.append(img)
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, batch_size=16, return_intermediates=False):
- image_size = self.image_size
- channels = self.channels
- return self.p_sample_loop((batch_size, channels, image_size, image_size),
- return_intermediates=return_intermediates)
-
- def q_sample(self, x_start, t, noise=None):
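-        # forward-process sample: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps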
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
-
- def get_v(self, x, noise, t):
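-        # v-prediction target (cf. progressive distillation): v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x_0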
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x
- )
-
- def get_loss(self, pred, target, mean=True):
- if self.loss_type == 'l1':
- loss = (target - pred).abs()
- if mean:
- loss = loss.mean()
- elif self.loss_type == 'l2':
- if mean:
- loss = torch.nn.functional.mse_loss(target, pred)
- else:
- loss = torch.nn.functional.mse_loss(target, pred, reduction='none')
- else:
-            raise NotImplementedError(f"unknown loss type '{self.loss_type}'")
-
- return loss
-
- def p_losses(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_out = self.model(x_noisy, t)
-
- loss_dict = {}
- if self.parameterization == "eps":
- target = noise
- elif self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
-
- loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
-
- log_prefix = 'train' if self.training else 'val'
-
- loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})
- loss_simple = loss.mean() * self.l_simple_weight
-
- loss_vlb = (self.lvlb_weights[t] * loss).mean()
- loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})
-
- loss = loss_simple + self.original_elbo_weight * loss_vlb
-
- loss_dict.update({f'{log_prefix}/loss': loss})
-
- return loss, loss_dict
-
- def forward(self, x, *args, **kwargs):
- # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size
- # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- return self.p_losses(x, t, *args, **kwargs)
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = rearrange(x, 'b h w c -> b c h w')
- x = x.to(memory_format=torch.contiguous_format).float()
- return x
-
- def shared_step(self, batch):
- x = self.get_input(batch, self.first_stage_key)
- loss, loss_dict = self(x)
- return loss, loss_dict
-
- def training_step(self, batch, batch_idx):
- for k in self.ucg_training:
- p = self.ucg_training[k]["p"]
- val = self.ucg_training[k]["val"]
- if val is None:
- val = ""
- for i in range(len(batch[k])):
- if self.ucg_prng.choice(2, p=[1 - p, p]):
- batch[k][i] = val
-
- loss, loss_dict = self.shared_step(batch)
-
- self.log_dict(loss_dict, prog_bar=True,
- logger=True, on_step=True, on_epoch=True)
-
- self.log("global_step", self.global_step,
- prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- if self.use_scheduler:
- lr = self.optimizers().param_groups[0]['lr']
- self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- return loss
-
- @torch.no_grad()
- def validation_step(self, batch, batch_idx):
- _, loss_dict_no_ema = self.shared_step(batch)
- with self.ema_scope():
- _, loss_dict_ema = self.shared_step(batch)
- loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
- self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
- self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
-
- def on_train_batch_end(self, *args, **kwargs):
- if self.use_ema:
- self.model_ema(self.model)
-
- def _get_rows_from_list(self, samples):
- n_imgs_per_row = len(samples)
- denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):
- log = dict()
- x = self.get_input(batch, self.first_stage_key)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- x = x.to(self.device)[:N]
- log["inputs"] = x
-
- # get diffusion row
- diffusion_row = list()
- x_start = x[:n_row]
-
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(x_start)
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- diffusion_row.append(x_noisy)
-
- log["diffusion_row"] = self._get_rows_from_list(diffusion_row)
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
- samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)
-
- log["samples"] = samples
- log["denoise_row"] = self._get_rows_from_list(denoise_row)
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.learn_logvar:
- params = params + [self.logvar]
- opt = torch.optim.AdamW(params, lr=lr)
- return opt
-
-
-class LatentDiffusion(DDPM):
- """main class"""
-
- def __init__(self,
- first_stage_config,
- cond_stage_config,
- num_timesteps_cond=None,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- force_null_conditioning=False,
- *args, **kwargs):
- self.force_null_conditioning = force_null_conditioning
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs['timesteps']
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = 'concat' if concat_mode else 'crossattn'
- if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning:
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- reset_ema = kwargs.pop("reset_ema", False)
- reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
-        except Exception:
- self.num_downs = 0
- if not scale_by_std:
- self.scale_factor = scale_factor
- else:
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
- self.instantiate_first_stage(first_stage_config)
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
- self.bbox_tokenizer = None
-
- self.restarted_from_ckpt = False
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys)
- self.restarted_from_ckpt = True
- if reset_ema:
- assert self.use_ema
- print(
- f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- def make_cond_schedule(self, ):
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
- self.cond_ids[:self.num_timesteps_cond] = ids
-
- @rank_zero_only
- @torch.no_grad()
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
- # only for very first batch
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
- # set rescale weight to 1./std of encodings
- print("### USING STD-RESCALING ###")
- x = super().get_input(batch, self.first_stage_key)
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- del self.scale_factor
- self.register_buffer('scale_factor', 1. / z.flatten().std())
- print(f"setting self.scale_factor to {self.scale_factor}")
- print("### USING STD-RESCALING ###")
-
- def register_schedule(self,
- given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config):
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != '__is_first_stage__'
- assert config != '__is_unconditional__'
- model = instantiate_from_config(config)
- self.cond_stage_model = model
-
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
- denoise_row = []
- for zd in tqdm(samples, desc=desc):
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
- force_not_quantize=force_no_decoder_quantization))
- n_imgs_per_row = len(denoise_row)
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- def get_first_stage_encoding(self, encoder_posterior):
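-        # sample (or pass through) the first-stage latent and rescale it by scale_factor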
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
- return self.scale_factor * z
-
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
- def meshgrid(self, h, w):
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
-
- arr = torch.cat([y, x], dim=-1)
- return arr
-
- def delta_border(self, h, w):
- """
- :param h: height
- :param w: width
- :return: normalized distance to image border,
-        with min distance = 0 at border and max dist = 0.5 at image center
- """
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
- arr = self.meshgrid(h, w) / lower_right_corner
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
- return edge_dist
-
- def get_weighting(self, h, w, Ly, Lx, device):
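-        # per-pixel blending weights for overlapping crops: larger towards a crop's centre, smaller at its borders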
- weighting = self.delta_border(h, w)
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
- self.split_input_params["clip_max_weight"], )
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
-
- if self.split_input_params["tie_braker"]:
- L_weighting = self.delta_border(Ly, Lx)
- L_weighting = torch.clip(L_weighting,
- self.split_input_params["clip_min_tie_weight"],
- self.split_input_params["clip_max_tie_weight"])
-
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
- weighting = weighting * L_weighting
- return weighting
-
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
- """
- :param x: img of size (bs, c, h, w)
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
- """
- bs, nc, h, w = x.shape
-
- # number of crops in image
- Ly = (h - kernel_size[0]) // stride[0] + 1
- Lx = (w - kernel_size[1]) // stride[1] + 1
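-        # uf / df are up-/downscaling factors: the fold is built for an output whose resolution differs
-        # from x by that factor, so crops can be re-assembled at either latent or image scale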
-
- if uf == 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
-
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
-
- elif uf > 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
- dilation=1, padding=0,
- stride=(stride[0] * uf, stride[1] * uf))
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
-
- elif df > 1 and uf == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
- dilation=1, padding=0,
- stride=(stride[0] // df, stride[1] // df))
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
-
- else:
- raise NotImplementedError
-
- return fold, unfold, normalization, weighting
-
- @torch.no_grad()
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
- cond_key=None, return_original_cond=False, bs=None, return_x=False):
- x = super().get_input(batch, k)
- if bs is not None:
- x = x[:bs]
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
-
- if self.model.conditioning_key is not None and not self.force_null_conditioning:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ['caption', 'coordinates_bbox', "txt"]:
- xc = batch[cond_key]
- elif cond_key in ['class_label', 'cls']:
- xc = batch
- else:
- xc = super().get_input(batch, cond_key).to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
- if bs is not None:
- c = c[:bs]
-
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- ckey = __conditioning_keys__[self.model.conditioning_key]
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {'pos_x': pos_x, 'pos_y': pos_y}
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_x:
- out.extend([x])
- if return_original_cond:
- out.append(xc)
- return out
-
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
- return self.first_stage_model.decode(z)
-
- @torch.no_grad()
- def encode_first_stage(self, x):
- return self.first_stage_model.encode(x)
-
- def shared_step(self, batch, **kwargs):
- x, c = self.get_input(batch, self.first_stage_key)
- loss = self(x, c)
- return loss
-
- def forward(self, x, c, *args, **kwargs):
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- if self.model.conditioning_key is not None:
- assert c is not None
- if self.cond_stage_trainable:
- c = self.get_learned_conditioning(c)
- if self.shorten_cond_schedule: # TODO: drop this option
- tc = self.cond_ids[t].to(self.device)
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
- return self.p_losses(x, c, t, *args, **kwargs)
-
- def apply_model(self, x_noisy, t, cond, return_ids=False):
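-        # normalize cond into the {'c_concat': [...], 'c_crossattn': [...]} dict expected by DiffusionWrapper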
- if isinstance(cond, dict):
- # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond]
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
- cond = {key: cond}
-
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
-
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
-
- def _prior_bpd(self, x_start):
- """
- Get the prior KL term for the variational lower-bound, measured in
- bits-per-dim.
- This term can't be optimized, as it only depends on the encoder.
- :param x_start: the [N x C x ...] tensor of inputs.
- :return: a batch of [N] KL values (in bits), one per batch element.
- """
- batch_size = x_start.shape[0]
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
- return mean_flat(kl_prior) / np.log(2.0)
-
- def p_losses(self, x_start, cond, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_output = self.apply_model(x_noisy, t, cond)
-
- loss_dict = {}
- prefix = 'train' if self.training else 'val'
-
- if self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "eps":
- target = noise
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError()
-
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
-
- logvar_t = self.logvar[t].to(self.device)
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
- if self.learn_logvar:
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
- loss_dict.update({'logvar': self.logvar.data.mean()})
-
- loss = self.l_simple_weight * loss.mean()
-
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
- loss += (self.original_elbo_weight * loss_vlb)
- loss_dict.update({f'{prefix}/loss': loss})
-
- return loss, loss_dict
-
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
- return_x0=False, score_corrector=None, corrector_kwargs=None):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
- log_every_t=None):
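-        # like p_sample_loop, but also collects the intermediate x0 predictions (return_x0=True) for logging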
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size if batch_size is not None else shape[0]
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
- total=timesteps) if verbose else reversed(
- range(0, timesteps))
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised, return_x0=True,
- temperature=temperature[i], noise_dropout=noise_dropout,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(self, cond, shape, return_intermediates=False,
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, start_T=None,
- log_every_t=None):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
- range(0, timesteps))
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised)
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
- verbose=True, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, shape=None, **kwargs):
- if shape is None:
- shape = (batch_size, self.channels, self.image_size, self.image_size)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
- return self.p_sample_loop(cond,
- shape,
- return_intermediates=return_intermediates, x_T=x_T,
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
- mask=mask, x0=x0)
-
- @torch.no_grad()
- def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):
- if ddim:
- ddim_sampler = DDIMSampler(self)
- shape = (self.channels, self.image_size, self.image_size)
- samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size,
- shape, cond, verbose=False, **kwargs)
-
- else:
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
- return_intermediates=True, **kwargs)
-
- return samples, intermediates
-
- @torch.no_grad()
- def get_unconditional_conditioning(self, batch_size, null_label=None):
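-        # build the null conditioning used for classifier-free guidance and repeat it to batch_size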
- if null_label is not None:
- xc = null_label
- if isinstance(xc, ListConfig):
- xc = list(xc)
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- if hasattr(xc, "to"):
- xc = xc.to(self.device)
- c = self.get_learned_conditioning(xc)
- else:
- if self.cond_stage_key in ["class_label", "cls"]:
- xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device)
- return self.get_learned_conditioning(xc)
- else:
- raise NotImplementedError("todo")
- if isinstance(c, list): # in case the encoder gives us a list
- for i in range(len(c)):
- c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device)
- else:
- c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device)
- return c
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
- return_first_stage_outputs=True,
- force_c_encode=True,
- return_original_cond=True,
- bs=N)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', "cls"]:
- try:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- except KeyError:
- # probably no "human_label" in batch
- pass
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
- self.first_stage_model, IdentityFirstStage):
- # also display when quantizing x0 while sampling
- with ema_scope("Plotting Quantized Denoised"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- quantize_denoised=True)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
- # quantize_denoised=True)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_x0_quantized"] = x_samples
-
- if unconditional_guidance_scale > 1.0:
- uc = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- if self.model.conditioning_key == "crossattn-adm":
- uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if inpaint:
- # make a simple center square
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
- mask = torch.ones(N, h, w).to(self.device)
- # zeros will be filled in
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
- mask = mask[:, None, ...]
- with ema_scope("Plotting Inpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_inpainting"] = x_samples
- log["mask"] = mask
-
- # outpaint
- mask = 1. - mask
- with ema_scope("Plotting Outpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_outpainting"] = x_samples
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.cond_stage_trainable:
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
- params = params + list(self.cond_stage_model.parameters())
- if self.learn_logvar:
- print('Diffusion model optimizing logvar')
- params.append(self.logvar)
- opt = torch.optim.AdamW(params, lr=lr)
- if self.use_scheduler:
- assert 'target' in self.scheduler_config
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [opt], scheduler
- return opt
-
- @torch.no_grad()
- def to_rgb(self, x):
- x = x.float()
- if not hasattr(self, "colorize"):
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
- x = nn.functional.conv2d(x, weight=self.colorize)
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
- return x
-
-
-class DiffusionWrapper(pl.LightningModule):
- def __init__(self, diff_model_config, conditioning_key):
- super().__init__()
- self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False)
- self.diffusion_model = instantiate_from_config(diff_model_config)
- self.conditioning_key = conditioning_key
- assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm']
-
- def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None):
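-        # dispatch on conditioning_key: 'concat' conditions via channel concatenation, 'crossattn' passes
-        # context to the UNet's cross-attention, 'adm' feeds a vector embedding y, hybrid modes combine these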
- if self.conditioning_key is None:
- out = self.diffusion_model(x, t)
- elif self.conditioning_key == 'concat':
- xc = torch.cat([x] + c_concat, dim=1)
- out = self.diffusion_model(xc, t)
- elif self.conditioning_key == 'crossattn':
- if not self.sequential_cross_attn:
- cc = torch.cat(c_crossattn, 1)
- else:
- cc = c_crossattn
- out = self.diffusion_model(x, t, context=cc)
- elif self.conditioning_key == 'hybrid':
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc)
- elif self.conditioning_key == 'hybrid-adm':
- assert c_adm is not None
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'crossattn-adm':
- assert c_adm is not None
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(x, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'adm':
- cc = c_crossattn[0]
- out = self.diffusion_model(x, t, y=cc)
- else:
- raise NotImplementedError()
-
- return out
-
-
-class LatentUpscaleDiffusion(LatentDiffusion):
- def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs):
- super().__init__(*args, **kwargs)
-        # assumes that neither the cond_stage nor the low_scale_model contains trainable params
- assert not self.cond_stage_trainable
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
- self.noise_level_key = noise_level_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False):
- if not log_mode:
- z, c = super().get_input(batch, k, force_c_encode=True, bs=bs)
- else:
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
- x_low = batch[self.low_scale_key][:bs]
- x_low = rearrange(x_low, 'b h w c -> b c h w')
- x_low = x_low.to(memory_format=torch.contiguous_format).float()
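-        # the low-scale model returns the (possibly noise-augmented) low-res conditioning zx plus the noise level it applied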
- zx, noise_level = self.low_scale_model(x_low)
- if self.noise_level_key is not None:
- # get noise level from batch instead, e.g. when extracting a custom noise level for bsr
- raise NotImplementedError('TODO')
-
- all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level}
- if log_mode:
- # TODO: maybe disable if too expensive
- x_low_rec = self.low_scale_model.decode(zx)
- return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True,
- unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N,
- log_mode=True)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- log["x_lr"] = x_low
- log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- # TODO explore better "unconditional" choices for the other keys
- # maybe guide away from empty text label and highest noise level and maximally degraded zx?
- uc = dict()
- for k in c:
- if k == "c_crossattn":
- assert isinstance(c[k], list) and len(c[k]) == 1
- uc[k] = [uc_tmp]
- elif k == "c_adm": # todo: only run with text-based guidance?
- assert isinstance(c[k], torch.Tensor)
- #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level
- uc[k] = c[k]
- elif isinstance(c[k], list):
- uc[k] = [c[k][i] for i in range(len(c[k]))]
- else:
- uc[k] = c[k]
-
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- return log
-
-
-class LatentFinetuneDiffusion(LatentDiffusion):
- """
-    Basis for different finetuning tasks, such as inpainting or depth2image
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys: tuple,
- finetune_keys=("model.diffusion_model.input_blocks.0.0.weight",
- "model_ema.diffusion_modelinput_blocks00weight"
- ),
- keep_finetune_dims=4,
- # if model was trained without concat mode before and we would like to keep these channels
- c_concat_log_start=None, # to log reconstruction of c_concat codes
- c_concat_log_end=None,
- *args, **kwargs
- ):
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", list())
- super().__init__(*args, **kwargs)
- self.finetune_keys = finetune_keys
- self.concat_keys = concat_keys
- self.keep_dims = keep_finetune_dims
- self.c_concat_log_start = c_concat_log_start
- self.c_concat_log_end = c_concat_log_end
- if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint'
- if exists(ckpt_path):
- self.init_from_ckpt(ckpt_path, ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
-
- # make it explicit, finetune by including extra input channels
- if exists(self.finetune_keys) and k in self.finetune_keys:
- new_entry = None
- for name, param in self.named_parameters():
- if name in self.finetune_keys:
- print(
- f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only")
- new_entry = torch.zeros_like(param) # zero init
- assert exists(new_entry), 'did not find matching parameter to modify'
- new_entry[:, :self.keep_dims, ...] = sd[k]
- sd[k] = new_entry
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True)
- c_cat, c = c["c_concat"][0], c["c_crossattn"][0]
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if not (self.c_concat_log_start is None and self.c_concat_log_end is None):
- log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end])
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- uc_cat = c_cat
- uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc_full,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- return log
-
-
-class LatentInpaintDiffusion(LatentFinetuneDiffusion):
- """
- Can either run as a pure inpainting model (only concat mode) or with mixed conditionings,
- e.g. mask as concat and text via cross-attn.
- To disable finetuning mode, set finetune_keys to None.
- """
-
- def __init__(self,
- concat_keys=("mask", "masked_image"),
- masked_image_key="masked_image",
- *args, **kwargs
- ):
- super().__init__(concat_keys, *args, **kwargs)
- self.masked_image_key = masked_image_key
- assert self.masked_image_key in concat_keys
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- c_cat = list()
- for ck in self.concat_keys:
- cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- bchw = z.shape
- if ck != self.masked_image_key:
- cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])
- else:
- cc = self.get_first_stage_encoding(self.encode_first_stage(cc))
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs)
- log["masked_image"] = rearrange(args[0]["masked_image"],
- 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- return log
-
-
-class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion):
- """
- condition on monocular depth estimation
- """
-
- def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.depth_model = instantiate_from_config(depth_stage_config)
- self.depth_stage_key = concat_keys[0]
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- c_cat = list()
- for ck in self.concat_keys:
- cc = batch[ck]
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- cc = self.depth_model(cc)
- cc = torch.nn.functional.interpolate(
- cc,
- size=z.shape[2:],
- mode="bicubic",
- align_corners=False,
- )
-
- depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3],
- keepdim=True)
- cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1.
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- depth = self.depth_model(args[0][self.depth_stage_key])
- depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \
- torch.amax(depth, dim=[1, 2, 3], keepdim=True)
- log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1.
- return log
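For reference, the depth conditioning assembled in get_input above is min-max normalized per sample into [-1, 1] before being concatenated with the latent. A small, self-contained sketch of just that normalization step (the helper name normalize_depth is illustrative, not part of this module):

import torch

def normalize_depth(depth: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    # Per-sample min/max over the (C, H, W) dimensions, kept for broadcasting,
    # mirroring the torch.amin / torch.amax calls in get_input above.
    d_min = torch.amin(depth, dim=[1, 2, 3], keepdim=True)
    d_max = torch.amax(depth, dim=[1, 2, 3], keepdim=True)
    # Affine map to [-1, 1]; eps guards against a constant depth map.
    return 2.0 * (depth - d_min) / (d_max - d_min + eps) - 1.0

d = torch.rand(2, 1, 8, 8) * 10.0  # a toy batch of "depth maps"
out = normalize_depth(d)
print(out.min().item(), out.max().item())  # approximately -1.0 and 1.0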
-
-
-class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion):
- """
- condition on low-res image (and optionally on some spatial noise augmentation)
- """
- def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None,
- low_scale_config=None, low_scale_key=None, *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.reshuffle_patch_size = reshuffle_patch_size
- self.low_scale_model = None
- if low_scale_config is not None:
- print("Initializing a low-scale model")
- assert exists(low_scale_key)
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- # optionally make spatial noise_level here
- c_cat = list()
- noise_level = None
- for ck in self.concat_keys:
- cc = batch[ck]
- cc = rearrange(cc, 'b h w c -> b c h w')
- if exists(self.reshuffle_patch_size):
- assert isinstance(self.reshuffle_patch_size, int)
- cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w',
- p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size)
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- if exists(self.low_scale_model) and ck == self.low_scale_key:
- cc, noise_level = self.low_scale_model(cc)
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- if exists(noise_level):
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level}
- else:
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w')
- return log
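All of the finetune variants in this file prepare their concat conditionings the same way: the batch tensor arrives channels-last, is permuted to channels-first, optionally truncated to the first bs samples, moved to the model device, and resized to the latent resolution. A standalone sketch of that pattern (prepare_concat_cond and the shapes are illustrative only):

import torch
import torch.nn.functional as F
from einops import rearrange

def prepare_concat_cond(batch_img: torch.Tensor, latent_hw, device, bs=None) -> torch.Tensor:
    # (B, H, W, C) -> (B, C, H, W), as done via rearrange(batch[ck], 'b h w c -> b c h w')
    cc = rearrange(batch_img, 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
    if bs is not None:
        cc = cc[:bs]
    cc = cc.to(device)
    # Resize to the latent spatial size, as done for the non-masked-image keys
    return F.interpolate(cc, size=latent_hw)

cond = prepare_concat_cond(torch.rand(4, 512, 512, 1), latent_hw=(64, 64), device="cpu", bs=2)
print(cond.shape)  # torch.Size([2, 1, 64, 64])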
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_make.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_make.py
deleted file mode 100644
index d72f738eeca66ea96ec836f57720a7f5d6ec5169..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_make.py
+++ /dev/null
@@ -1,2987 +0,0 @@
-# SPDX-License-Identifier: MIT
-
-import copy
-import enum
-import linecache
-import sys
-import types
-import typing
-
-from operator import itemgetter
-
-# We need to import _compat itself in addition to the _compat members to avoid
-# having the thread-local in the globals here.
-from . import _compat, _config, setters
-from ._compat import (
- PY310,
- _AnnotationExtractor,
- get_generic_base,
- set_closure_cell,
-)
-from .exceptions import (
- DefaultAlreadySetError,
- FrozenInstanceError,
- NotAnAttrsClassError,
- UnannotatedAttributeError,
-)
-
-
-# This is used at least twice, so cache it here.
-_obj_setattr = object.__setattr__
-_init_converter_pat = "__attr_converter_%s"
-_init_factory_pat = "__attr_factory_%s"
-_classvar_prefixes = (
- "typing.ClassVar",
- "t.ClassVar",
- "ClassVar",
- "typing_extensions.ClassVar",
-)
-# we don't use a double-underscore prefix because that triggers
-# name mangling when trying to create a slot for the field
-# (when slots=True)
-_hash_cache_field = "_attrs_cached_hash"
-
-_empty_metadata_singleton = types.MappingProxyType({})
-
-# Unique object for unequivocal getattr() defaults.
-_sentinel = object()
-
-_ng_default_on_setattr = setters.pipe(setters.convert, setters.validate)
-
-
-class _Nothing(enum.Enum):
- """
- Sentinel to indicate the lack of a value when ``None`` is ambiguous.
-
- If extending attrs, you can use ``typing.Literal[NOTHING]`` to show
- that a value may be ``NOTHING``.
-
- .. versionchanged:: 21.1.0 ``bool(NOTHING)`` is now False.
- .. versionchanged:: 22.2.0 ``NOTHING`` is now an ``enum.Enum`` variant.
- """
-
- NOTHING = enum.auto()
-
- def __repr__(self):
- return "NOTHING"
-
- def __bool__(self):
- return False
-
-
-NOTHING = _Nothing.NOTHING
-"""
-Sentinel to indicate the lack of a value when ``None`` is ambiguous.
-"""
-
-
-class _CacheHashWrapper(int):
- """
- An integer subclass that pickles / copies as None
-
- This is used for non-slots classes with ``cache_hash=True``, to avoid
- serializing a potentially (even likely) invalid hash value. Since ``None``
- is the default value for uncalculated hashes, whenever this is copied,
- the copy's value for the hash should automatically reset.
-
- See GH #613 for more details.
- """
-
- def __reduce__(self, _none_constructor=type(None), _args=()):
- return _none_constructor, _args
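Because __reduce__ hands back the NoneType constructor with no arguments, any copy or pickle round-trip of a _CacheHashWrapper comes back as None, i.e. the "hash not yet computed" marker described in the docstring. A minimal check (importing the private class purely for illustration):

import copy
import pickle
from attr._make import _CacheHashWrapper

w = _CacheHashWrapper(12345)
print(int(w))                          # 12345 -- still behaves like an int
print(copy.copy(w))                    # None
print(pickle.loads(pickle.dumps(w)))   # None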
-
-
-def attrib(
- default=NOTHING,
- validator=None,
- repr=True,
- cmp=None,
- hash=None,
- init=True,
- metadata=None,
- type=None,
- converter=None,
- factory=None,
- kw_only=False,
- eq=None,
- order=None,
- on_setattr=None,
- alias=None,
-):
- """
- Create a new attribute on a class.
-
- .. warning::
-
- Does *not* do anything unless the class is also decorated with
- `attr.s` / `attrs.define` / et cetera!
-
- Please consider using `attrs.field` in new code (``attr.ib`` will *never*
- go away, though).
-
- :param default: A value that is used if an *attrs*-generated ``__init__``
- is used and no value is passed while instantiating or the attribute is
- excluded using ``init=False``.
-
- If the value is an instance of `attrs.Factory`, its callable will be
- used to construct a new value (useful for mutable data types like lists
- or dicts).
-
- If a default is not set (or set manually to `attrs.NOTHING`), a value
- *must* be supplied when instantiating; otherwise a `TypeError`
- will be raised.
-
- The default can also be set using decorator notation as shown below.
-
- :type default: Any value
-
- :param callable factory: Syntactic sugar for
- ``default=attr.Factory(factory)``.
-
- :param validator: `callable` that is called by *attrs*-generated
- ``__init__`` methods after the instance has been initialized. They
- receive the initialized instance, the :func:`~attrs.Attribute`, and the
- passed value.
-
- The return value is *not* inspected so the validator has to throw an
- exception itself.
-
- If a `list` is passed, its items are treated as validators and must
- all pass.
-
- Validators can be globally disabled and re-enabled using
- `attrs.validators.get_disabled` / `attrs.validators.set_disabled`.
-
- The validator can also be set using decorator notation as shown below.
-
- :type validator: `callable` or a `list` of `callable`\\ s.
-
- :param repr: Include this attribute in the generated ``__repr__``
- method. If ``True``, include the attribute; if ``False``, omit it. By
- default, the built-in ``repr()`` function is used. To override how the
- attribute value is formatted, pass a ``callable`` that takes a single
- value and returns a string. Note that the resulting string is used
- as-is, i.e. it will be used directly *instead* of calling ``repr()``
- (the default).
- :type repr: a `bool` or a `callable` to use a custom function.
-
- :param eq: If ``True`` (default), include this attribute in the
- generated ``__eq__`` and ``__ne__`` methods that check two instances
- for equality. To override how the attribute value is compared,
- pass a ``callable`` that takes a single value and returns the value
- to be compared.
- :type eq: a `bool` or a `callable`.
-
- :param order: If ``True`` (default), include this attribute in the
- generated ``__lt__``, ``__le__``, ``__gt__`` and ``__ge__`` methods.
- To override how the attribute value is ordered,
- pass a ``callable`` that takes a single value and returns the value
- to be ordered.
- :type order: a `bool` or a `callable`.
-
- :param cmp: Setting *cmp* is equivalent to setting *eq* and *order* to the
- same value. Must not be mixed with *eq* or *order*.
- :type cmp: a `bool` or a `callable`.
-
- :param Optional[bool] hash: Include this attribute in the generated
- ``__hash__`` method. If ``None`` (default), mirror *eq*'s value. This
- is the correct behavior according to the Python spec. Setting this value
- to anything other than ``None`` is *discouraged*.
- :param bool init: Include this attribute in the generated ``__init__``
- method. It is possible to set this to ``False`` and set a default
- value. In that case this attribute is unconditionally initialized
- with the specified default value or factory.
- :param callable converter: `callable` that is called by
- *attrs*-generated ``__init__`` methods to convert attribute's value
- to the desired format. It is given the passed-in value, and the
- returned value will be used as the new value of the attribute. The
- value is converted before being passed to the validator, if any.
- :param metadata: An arbitrary mapping, to be used by third-party
- components. See `extending-metadata`.
-
- :param type: The type of the attribute. Nowadays, the preferred method to
- specify the type is using a variable annotation (see :pep:`526`).
- This argument is provided for backward compatibility.
- Regardless of the approach used, the type will be stored on
- ``Attribute.type``.
-
- Please note that *attrs* doesn't do anything with this metadata by
- itself. You can use it as part of your own code or for
- `static type checking `.
- :param kw_only: Make this attribute keyword-only in the generated
- ``__init__`` (if ``init`` is ``False``, this parameter is ignored).
- :param on_setattr: Allows to overwrite the *on_setattr* setting from
- `attr.s`. If left `None`, the *on_setattr* value from `attr.s` is used.
- Set to `attrs.setters.NO_OP` to run **no** `setattr` hooks for this
- attribute -- regardless of the setting in `attr.s`.
- :type on_setattr: `callable`, or a list of callables, or `None`, or
- `attrs.setters.NO_OP`
- :param Optional[str] alias: Override this attribute's parameter name in the
- generated ``__init__`` method. If left `None`, default to ``name``
- stripped of leading underscores. See `private-attributes`.
-
- .. versionadded:: 15.2.0 *convert*
- .. versionadded:: 16.3.0 *metadata*
- .. versionchanged:: 17.1.0 *validator* can be a ``list`` now.
- .. versionchanged:: 17.1.0
- *hash* is ``None`` and therefore mirrors *eq* by default.
- .. versionadded:: 17.3.0 *type*
- .. deprecated:: 17.4.0 *convert*
- .. versionadded:: 17.4.0 *converter* as a replacement for the deprecated
- *convert* to achieve consistency with other noun-based arguments.
- .. versionadded:: 18.1.0
- ``factory=f`` is syntactic sugar for ``default=attr.Factory(f)``.
- .. versionadded:: 18.2.0 *kw_only*
- .. versionchanged:: 19.2.0 *convert* keyword argument removed.
- .. versionchanged:: 19.2.0 *repr* also accepts a custom callable.
- .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01.
- .. versionadded:: 19.2.0 *eq* and *order*
- .. versionadded:: 20.1.0 *on_setattr*
- .. versionchanged:: 20.3.0 *kw_only* backported to Python 2
- .. versionchanged:: 21.1.0
- *eq*, *order*, and *cmp* also accept a custom callable
- .. versionchanged:: 21.1.0 *cmp* undeprecated
- .. versionadded:: 22.2.0 *alias*
- """
- eq, eq_key, order, order_key = _determine_attrib_eq_order(
- cmp, eq, order, True
- )
-
- if hash is not None and hash is not True and hash is not False:
- raise TypeError(
- "Invalid value for hash. Must be True, False, or None."
- )
-
- if factory is not None:
- if default is not NOTHING:
- raise ValueError(
- "The `default` and `factory` arguments are mutually "
- "exclusive."
- )
- if not callable(factory):
- raise ValueError("The `factory` argument must be a callable.")
- default = Factory(factory)
-
- if metadata is None:
- metadata = {}
-
- # Apply syntactic sugar by auto-wrapping.
- if isinstance(on_setattr, (list, tuple)):
- on_setattr = setters.pipe(*on_setattr)
-
- if validator and isinstance(validator, (list, tuple)):
- validator = and_(*validator)
-
- if converter and isinstance(converter, (list, tuple)):
- converter = pipe(*converter)
-
- return _CountingAttr(
- default=default,
- validator=validator,
- repr=repr,
- cmp=None,
- hash=hash,
- init=init,
- converter=converter,
- metadata=metadata,
- type=type,
- kw_only=kw_only,
- eq=eq,
- eq_key=eq_key,
- order=order,
- order_key=order_key,
- on_setattr=on_setattr,
- alias=alias,
- )
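The docstring above is long, so a compact usage sketch of the most common attr.ib arguments may help (default, factory, converter, validator); this follows the documented behavior rather than anything specific to this repository:

import attr

@attr.s
class Job:
    name = attr.ib(converter=str.strip)
    retries = attr.ib(default=3, validator=attr.validators.instance_of(int))
    tags = attr.ib(factory=list)  # sugar for default=attr.Factory(list)

print(Job(name="  build  "))  # Job(name='build', retries=3, tags=[])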
-
-
-def _compile_and_eval(script, globs, locs=None, filename=""):
- """
- "Exec" the script with the given global (globs) and local (locs) variables.
- """
- bytecode = compile(script, filename, "exec")
- eval(bytecode, globs, locs)
-
-
-def _make_method(name, script, filename, globs):
- """
- Create the method with the script given and return the method object.
- """
- locs = {}
-
- # In order for debuggers like PDB to be able to step through the code,
- # we add a fake linecache entry.
- count = 1
- base_filename = filename
- while True:
- linecache_tuple = (
- len(script),
- None,
- script.splitlines(True),
- filename,
- )
- old_val = linecache.cache.setdefault(filename, linecache_tuple)
- if old_val == linecache_tuple:
- break
- else:
- filename = f"{base_filename[:-1]}-{count}>"
- count += 1
-
- _compile_and_eval(script, globs, locs, filename)
-
- return locs[name]
-
-
-def _make_attr_tuple_class(cls_name, attr_names):
- """
- Create a tuple subclass to hold `Attribute`s for an `attrs` class.
-
- The subclass is a bare tuple with properties for names.
-
- class MyClassAttributes(tuple):
- __slots__ = ()
- x = property(itemgetter(0))
- """
- attr_class_name = f"{cls_name}Attributes"
- attr_class_template = [
- f"class {attr_class_name}(tuple):",
- " __slots__ = ()",
- ]
- if attr_names:
- for i, attr_name in enumerate(attr_names):
- attr_class_template.append(
- f" {attr_name} = _attrs_property(_attrs_itemgetter({i}))"
- )
- else:
- attr_class_template.append(" pass")
- globs = {"_attrs_itemgetter": itemgetter, "_attrs_property": property}
- _compile_and_eval("\n".join(attr_class_template), globs)
- return globs[attr_class_name]
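The generated tuple subclass is exactly the hand-written shape shown in the docstring; for orientation, this is the behavior one gets from the class that _make_attr_tuple_class("MyClass", ["x", "y"]) would produce (written out manually here):

from operator import itemgetter

class MyClassAttributes(tuple):
    __slots__ = ()
    x = property(itemgetter(0))
    y = property(itemgetter(1))

fields = MyClassAttributes(("first", "second"))
print(fields.x, fields[1])  # first second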
-
-
-# Tuple class for extracted attributes from a class definition.
-# `base_attrs` is a subset of `attrs`.
-_Attributes = _make_attr_tuple_class(
- "_Attributes",
- [
- # all attributes to build dunder methods for
- "attrs",
- # attributes that have been inherited
- "base_attrs",
- # map inherited attributes to their originating classes
- "base_attrs_map",
- ],
-)
-
-
-def _is_class_var(annot):
- """
- Check whether *annot* is a typing.ClassVar.
-
- The string comparison hack is used to avoid evaluating all string
- annotations which would put attrs-based classes at a performance
- disadvantage compared to plain old classes.
- """
- annot = str(annot)
-
- # Annotation can be quoted.
- if annot.startswith(("'", '"')) and annot.endswith(("'", '"')):
- annot = annot[1:-1]
-
- return annot.startswith(_classvar_prefixes)
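Since the check operates on str(annot), quoted and unquoted ClassVar annotations are treated alike; for example (calling the private helper directly, purely for illustration):

from attr._make import _is_class_var

print(_is_class_var("typing.ClassVar[int]"))  # True
print(_is_class_var("'ClassVar[int]'"))       # True -- surrounding quotes are stripped
print(_is_class_var("int"))                   # False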
-
-
-def _has_own_attribute(cls, attrib_name):
- """
- Check whether *cls* defines *attrib_name* (and doesn't just inherit it).
- """
- attr = getattr(cls, attrib_name, _sentinel)
- if attr is _sentinel:
- return False
-
- for base_cls in cls.__mro__[1:]:
- a = getattr(base_cls, attrib_name, None)
- if attr is a:
- return False
-
- return True
-
-
-def _get_annotations(cls):
- """
- Get annotations for *cls*.
- """
- if _has_own_attribute(cls, "__annotations__"):
- return cls.__annotations__
-
- return {}
-
-
-def _collect_base_attrs(cls, taken_attr_names):
- """
- Collect attr.ibs from base classes of *cls*, except *taken_attr_names*.
- """
- base_attrs = []
- base_attr_map = {} # A dictionary of base attrs to their classes.
-
- # Traverse the MRO and collect attributes.
- for base_cls in reversed(cls.__mro__[1:-1]):
- for a in getattr(base_cls, "__attrs_attrs__", []):
- if a.inherited or a.name in taken_attr_names:
- continue
-
- a = a.evolve(inherited=True)
- base_attrs.append(a)
- base_attr_map[a.name] = base_cls
-
- # For each name, only keep the freshest definition i.e. the furthest at the
- # back. base_attr_map is fine because it gets overwritten with every new
- # instance.
- filtered = []
- seen = set()
- for a in reversed(base_attrs):
- if a.name in seen:
- continue
- filtered.insert(0, a)
- seen.add(a.name)
-
- return filtered, base_attr_map
-
-
-def _collect_base_attrs_broken(cls, taken_attr_names):
- """
- Collect attr.ibs from base classes of *cls*, except *taken_attr_names*.
-
- N.B. *taken_attr_names* will be mutated.
-
- Adhere to the old incorrect behavior.
-
- Notably it collects from the front and considers inherited attributes which
- leads to the buggy behavior reported in #428.
- """
- base_attrs = []
- base_attr_map = {} # A dictionary of base attrs to their classes.
-
- # Traverse the MRO and collect attributes.
- for base_cls in cls.__mro__[1:-1]:
- for a in getattr(base_cls, "__attrs_attrs__", []):
- if a.name in taken_attr_names:
- continue
-
- a = a.evolve(inherited=True)
- taken_attr_names.add(a.name)
- base_attrs.append(a)
- base_attr_map[a.name] = base_cls
-
- return base_attrs, base_attr_map
-
-
-def _transform_attrs(
- cls, these, auto_attribs, kw_only, collect_by_mro, field_transformer
-):
- """
- Transform all `_CountingAttr`s on a class into `Attribute`s.
-
- If *these* is passed, use that and don't look for them on the class.
-
- *collect_by_mro* is True, collect them in the correct MRO order, otherwise
- use the old -- incorrect -- order. See #428.
-
- Return an `_Attributes`.
- """
- cd = cls.__dict__
- anns = _get_annotations(cls)
-
- if these is not None:
- ca_list = [(name, ca) for name, ca in these.items()]
- elif auto_attribs is True:
- ca_names = {
- name
- for name, attr in cd.items()
- if isinstance(attr, _CountingAttr)
- }
- ca_list = []
- annot_names = set()
- for attr_name, type in anns.items():
- if _is_class_var(type):
- continue
- annot_names.add(attr_name)
- a = cd.get(attr_name, NOTHING)
-
- if not isinstance(a, _CountingAttr):
- if a is NOTHING:
- a = attrib()
- else:
- a = attrib(default=a)
- ca_list.append((attr_name, a))
-
- unannotated = ca_names - annot_names
- if len(unannotated) > 0:
- raise UnannotatedAttributeError(
- "The following `attr.ib`s lack a type annotation: "
- + ", ".join(
- sorted(unannotated, key=lambda n: cd.get(n).counter)
- )
- + "."
- )
- else:
- ca_list = sorted(
- (
- (name, attr)
- for name, attr in cd.items()
- if isinstance(attr, _CountingAttr)
- ),
- key=lambda e: e[1].counter,
- )
-
- own_attrs = [
- Attribute.from_counting_attr(
- name=attr_name, ca=ca, type=anns.get(attr_name)
- )
- for attr_name, ca in ca_list
- ]
-
- if collect_by_mro:
- base_attrs, base_attr_map = _collect_base_attrs(
- cls, {a.name for a in own_attrs}
- )
- else:
- base_attrs, base_attr_map = _collect_base_attrs_broken(
- cls, {a.name for a in own_attrs}
- )
-
- if kw_only:
- own_attrs = [a.evolve(kw_only=True) for a in own_attrs]
- base_attrs = [a.evolve(kw_only=True) for a in base_attrs]
-
- attrs = base_attrs + own_attrs
-
- # Mandatory vs non-mandatory attr order only matters when they are part of
- # the __init__ signature and when they aren't kw_only (which are moved to
- # the end and can be mandatory or non-mandatory in any order, as they will
- # be specified as keyword args anyway). Check the order of those attrs:
- had_default = False
- for a in (a for a in attrs if a.init is not False and a.kw_only is False):
- if had_default is True and a.default is NOTHING:
- raise ValueError(
- "No mandatory attributes allowed after an attribute with a "
- f"default value or factory. Attribute in question: {a!r}"
- )
-
- if had_default is False and a.default is not NOTHING:
- had_default = True
-
- if field_transformer is not None:
- attrs = field_transformer(cls, attrs)
-
- # Resolve default field alias after executing field_transformer.
- # This allows field_transformer to differentiate between explicit vs
- # default aliases and supply their own defaults.
- attrs = [
- a.evolve(alias=_default_init_alias_for(a.name)) if not a.alias else a
- for a in attrs
- ]
-
- # Create AttrsClass *after* applying the field_transformer since it may
- # add or remove attributes!
- attr_names = [a.name for a in attrs]
- AttrsClass = _make_attr_tuple_class(cls.__name__, attr_names)
-
- return _Attributes((AttrsClass(attrs), base_attrs, base_attr_map))
-
-
-def _frozen_setattrs(self, name, value):
- """
- Attached to frozen classes as __setattr__.
- """
- if isinstance(self, BaseException) and name in (
- "__cause__",
- "__context__",
- "__traceback__",
- ):
- BaseException.__setattr__(self, name, value)
- return
-
- raise FrozenInstanceError()
-
-
-def _frozen_delattrs(self, name):
- """
- Attached to frozen classes as __delattr__.
- """
- raise FrozenInstanceError()
-
-
-class _ClassBuilder:
- """
- Iteratively build *one* class.
- """
-
- __slots__ = (
- "_attr_names",
- "_attrs",
- "_base_attr_map",
- "_base_names",
- "_cache_hash",
- "_cls",
- "_cls_dict",
- "_delete_attribs",
- "_frozen",
- "_has_pre_init",
- "_has_post_init",
- "_is_exc",
- "_on_setattr",
- "_slots",
- "_weakref_slot",
- "_wrote_own_setattr",
- "_has_custom_setattr",
- )
-
- def __init__(
- self,
- cls,
- these,
- slots,
- frozen,
- weakref_slot,
- getstate_setstate,
- auto_attribs,
- kw_only,
- cache_hash,
- is_exc,
- collect_by_mro,
- on_setattr,
- has_custom_setattr,
- field_transformer,
- ):
- attrs, base_attrs, base_map = _transform_attrs(
- cls,
- these,
- auto_attribs,
- kw_only,
- collect_by_mro,
- field_transformer,
- )
-
- self._cls = cls
- self._cls_dict = dict(cls.__dict__) if slots else {}
- self._attrs = attrs
- self._base_names = {a.name for a in base_attrs}
- self._base_attr_map = base_map
- self._attr_names = tuple(a.name for a in attrs)
- self._slots = slots
- self._frozen = frozen
- self._weakref_slot = weakref_slot
- self._cache_hash = cache_hash
- self._has_pre_init = bool(getattr(cls, "__attrs_pre_init__", False))
- self._has_post_init = bool(getattr(cls, "__attrs_post_init__", False))
- self._delete_attribs = not bool(these)
- self._is_exc = is_exc
- self._on_setattr = on_setattr
-
- self._has_custom_setattr = has_custom_setattr
- self._wrote_own_setattr = False
-
- self._cls_dict["__attrs_attrs__"] = self._attrs
-
- if frozen:
- self._cls_dict["__setattr__"] = _frozen_setattrs
- self._cls_dict["__delattr__"] = _frozen_delattrs
-
- self._wrote_own_setattr = True
- elif on_setattr in (
- _ng_default_on_setattr,
- setters.validate,
- setters.convert,
- ):
- has_validator = has_converter = False
- for a in attrs:
- if a.validator is not None:
- has_validator = True
- if a.converter is not None:
- has_converter = True
-
- if has_validator and has_converter:
- break
- if (
- (
- on_setattr == _ng_default_on_setattr
- and not (has_validator or has_converter)
- )
- or (on_setattr == setters.validate and not has_validator)
- or (on_setattr == setters.convert and not has_converter)
- ):
- # If class-level on_setattr is set to convert + validate, but
- # there's no field to convert or validate, pretend like there's
- # no on_setattr.
- self._on_setattr = None
-
- if getstate_setstate:
- (
- self._cls_dict["__getstate__"],
- self._cls_dict["__setstate__"],
- ) = self._make_getstate_setstate()
-
- def __repr__(self):
- return f"<_ClassBuilder(cls={self._cls.__name__})>"
-
- if PY310:
- import abc
-
- def build_class(self):
- """
- Finalize class based on the accumulated configuration.
-
- Builder cannot be used after calling this method.
- """
- if self._slots is True:
- return self._create_slots_class()
-
- return self.abc.update_abstractmethods(
- self._patch_original_class()
- )
-
- else:
-
- def build_class(self):
- """
- Finalize class based on the accumulated configuration.
-
- Builder cannot be used after calling this method.
- """
- if self._slots is True:
- return self._create_slots_class()
-
- return self._patch_original_class()
-
- def _patch_original_class(self):
- """
- Apply accumulated methods and return the class.
- """
- cls = self._cls
- base_names = self._base_names
-
- # Clean class of attribute definitions (`attr.ib()`s).
- if self._delete_attribs:
- for name in self._attr_names:
- if (
- name not in base_names
- and getattr(cls, name, _sentinel) is not _sentinel
- ):
- try:
- delattr(cls, name)
- except AttributeError:
- # This can happen if a base class defines a class
- # variable and we want to set an attribute with the
- # same name by using only a type annotation.
- pass
-
- # Attach our dunder methods.
- for name, value in self._cls_dict.items():
- setattr(cls, name, value)
-
- # If we've inherited an attrs __setattr__ and don't write our own,
- # reset it to object's.
- if not self._wrote_own_setattr and getattr(
- cls, "__attrs_own_setattr__", False
- ):
- cls.__attrs_own_setattr__ = False
-
- if not self._has_custom_setattr:
- cls.__setattr__ = _obj_setattr
-
- return cls
-
- def _create_slots_class(self):
- """
- Build and return a new class with a `__slots__` attribute.
- """
- cd = {
- k: v
- for k, v in self._cls_dict.items()
- if k not in tuple(self._attr_names) + ("__dict__", "__weakref__")
- }
-
- # If our class doesn't have its own implementation of __setattr__
- # (either from the user or by us), check the bases, if one of them has
- # an attrs-made __setattr__, that needs to be reset. We don't walk the
- # MRO because we only care about our immediate base classes.
- # XXX: This can be confused by subclassing a slotted attrs class with
- # XXX: a non-attrs class and then subclassing the resulting class with an attrs
- # XXX: class. See `test_slotted_confused` for details. For now that's
- # XXX: OK with us.
- if not self._wrote_own_setattr:
- cd["__attrs_own_setattr__"] = False
-
- if not self._has_custom_setattr:
- for base_cls in self._cls.__bases__:
- if base_cls.__dict__.get("__attrs_own_setattr__", False):
- cd["__setattr__"] = _obj_setattr
- break
-
- # Traverse the MRO to collect existing slots
- # and check for an existing __weakref__.
- existing_slots = dict()
- weakref_inherited = False
- for base_cls in self._cls.__mro__[1:-1]:
- if base_cls.__dict__.get("__weakref__", None) is not None:
- weakref_inherited = True
- existing_slots.update(
- {
- name: getattr(base_cls, name)
- for name in getattr(base_cls, "__slots__", [])
- }
- )
-
- base_names = set(self._base_names)
-
- names = self._attr_names
- if (
- self._weakref_slot
- and "__weakref__" not in getattr(self._cls, "__slots__", ())
- and "__weakref__" not in names
- and not weakref_inherited
- ):
- names += ("__weakref__",)
-
- # We only add the names of attributes that aren't inherited.
- # Setting __slots__ to inherited attributes wastes memory.
- slot_names = [name for name in names if name not in base_names]
- # There are slots for attributes from current class
- # that are defined in parent classes.
- # As their descriptors may be overridden by a child class,
- # we collect them here and update the class dict
- reused_slots = {
- slot: slot_descriptor
- for slot, slot_descriptor in existing_slots.items()
- if slot in slot_names
- }
- slot_names = [name for name in slot_names if name not in reused_slots]
- cd.update(reused_slots)
- if self._cache_hash:
- slot_names.append(_hash_cache_field)
- cd["__slots__"] = tuple(slot_names)
-
- cd["__qualname__"] = self._cls.__qualname__
-
- # Create new class based on old class and our methods.
- cls = type(self._cls)(self._cls.__name__, self._cls.__bases__, cd)
-
- # The following is a fix for
- # .
- # If a method mentions `__class__` or uses the no-arg super(), the
- # compiler will bake a reference to the class in the method itself
- # as `method.__closure__`. Since we replace the class with a
- # clone, we rewrite these references so it keeps working.
- for item in cls.__dict__.values():
- if isinstance(item, (classmethod, staticmethod)):
- # Class- and staticmethods hide their functions inside.
- # These might need to be rewritten as well.
- closure_cells = getattr(item.__func__, "__closure__", None)
- elif isinstance(item, property):
- # Workaround for property `super()` shortcut (PY3-only).
- # There is no universal way for other descriptors.
- closure_cells = getattr(item.fget, "__closure__", None)
- else:
- closure_cells = getattr(item, "__closure__", None)
-
- if not closure_cells: # Catch None or the empty list.
- continue
- for cell in closure_cells:
- try:
- match = cell.cell_contents is self._cls
- except ValueError: # ValueError: Cell is empty
- pass
- else:
- if match:
- set_closure_cell(cell, cls)
-
- return cls
-
- def add_repr(self, ns):
- self._cls_dict["__repr__"] = self._add_method_dunders(
- _make_repr(self._attrs, ns, self._cls)
- )
- return self
-
- def add_str(self):
- repr = self._cls_dict.get("__repr__")
- if repr is None:
- raise ValueError(
- "__str__ can only be generated if a __repr__ exists."
- )
-
- def __str__(self):
- return self.__repr__()
-
- self._cls_dict["__str__"] = self._add_method_dunders(__str__)
- return self
-
- def _make_getstate_setstate(self):
- """
- Create custom __setstate__ and __getstate__ methods.
- """
- # __weakref__ is not writable.
- state_attr_names = tuple(
- an for an in self._attr_names if an != "__weakref__"
- )
-
- def slots_getstate(self):
- """
- Automatically created by attrs.
- """
- return {name: getattr(self, name) for name in state_attr_names}
-
- hash_caching_enabled = self._cache_hash
-
- def slots_setstate(self, state):
- """
- Automatically created by attrs.
- """
- __bound_setattr = _obj_setattr.__get__(self)
- if isinstance(state, tuple):
- # Backward compatibility with attrs instances pickled with
- # attrs versions before v22.2.0 which stored tuples.
- for name, value in zip(state_attr_names, state):
- __bound_setattr(name, value)
- else:
- for name in state_attr_names:
- if name in state:
- __bound_setattr(name, state[name])
-
- # The hash code cache is not included when the object is
- # serialized, but it still needs to be initialized to None to
- # indicate that the first call to __hash__ should be a cache
- # miss.
- if hash_caching_enabled:
- __bound_setattr(_hash_cache_field, None)
-
- return slots_getstate, slots_setstate
-
- def make_unhashable(self):
- self._cls_dict["__hash__"] = None
- return self
-
- def add_hash(self):
- self._cls_dict["__hash__"] = self._add_method_dunders(
- _make_hash(
- self._cls,
- self._attrs,
- frozen=self._frozen,
- cache_hash=self._cache_hash,
- )
- )
-
- return self
-
- def add_init(self):
- self._cls_dict["__init__"] = self._add_method_dunders(
- _make_init(
- self._cls,
- self._attrs,
- self._has_pre_init,
- self._has_post_init,
- self._frozen,
- self._slots,
- self._cache_hash,
- self._base_attr_map,
- self._is_exc,
- self._on_setattr,
- attrs_init=False,
- )
- )
-
- return self
-
- def add_match_args(self):
- self._cls_dict["__match_args__"] = tuple(
- field.name
- for field in self._attrs
- if field.init and not field.kw_only
- )
-
- def add_attrs_init(self):
- self._cls_dict["__attrs_init__"] = self._add_method_dunders(
- _make_init(
- self._cls,
- self._attrs,
- self._has_pre_init,
- self._has_post_init,
- self._frozen,
- self._slots,
- self._cache_hash,
- self._base_attr_map,
- self._is_exc,
- self._on_setattr,
- attrs_init=True,
- )
- )
-
- return self
-
- def add_eq(self):
- cd = self._cls_dict
-
- cd["__eq__"] = self._add_method_dunders(
- _make_eq(self._cls, self._attrs)
- )
- cd["__ne__"] = self._add_method_dunders(_make_ne())
-
- return self
-
- def add_order(self):
- cd = self._cls_dict
-
- cd["__lt__"], cd["__le__"], cd["__gt__"], cd["__ge__"] = (
- self._add_method_dunders(meth)
- for meth in _make_order(self._cls, self._attrs)
- )
-
- return self
-
- def add_setattr(self):
- if self._frozen:
- return self
-
- sa_attrs = {}
- for a in self._attrs:
- on_setattr = a.on_setattr or self._on_setattr
- if on_setattr and on_setattr is not setters.NO_OP:
- sa_attrs[a.name] = a, on_setattr
-
- if not sa_attrs:
- return self
-
- if self._has_custom_setattr:
- # We need to write a __setattr__ but there already is one!
- raise ValueError(
- "Can't combine custom __setattr__ with on_setattr hooks."
- )
-
- # docstring comes from _add_method_dunders
- def __setattr__(self, name, val):
- try:
- a, hook = sa_attrs[name]
- except KeyError:
- nval = val
- else:
- nval = hook(self, a, val)
-
- _obj_setattr(self, name, nval)
-
- self._cls_dict["__attrs_own_setattr__"] = True
- self._cls_dict["__setattr__"] = self._add_method_dunders(__setattr__)
- self._wrote_own_setattr = True
-
- return self
-
- def _add_method_dunders(self, method):
- """
- Add __module__ and __qualname__ to a *method* if possible.
- """
- try:
- method.__module__ = self._cls.__module__
- except AttributeError:
- pass
-
- try:
- method.__qualname__ = ".".join(
- (self._cls.__qualname__, method.__name__)
- )
- except AttributeError:
- pass
-
- try:
- method.__doc__ = (
- "Method generated by attrs for class "
- f"{self._cls.__qualname__}."
- )
- except AttributeError:
- pass
-
- return method
-
-
-def _determine_attrs_eq_order(cmp, eq, order, default_eq):
- """
- Validate the combination of *cmp*, *eq*, and *order*. Derive the effective
- values of eq and order. If *eq* is None, set it to *default_eq*.
- """
- if cmp is not None and any((eq is not None, order is not None)):
- raise ValueError("Don't mix `cmp` with `eq' and `order`.")
-
- # cmp takes precedence due to bw-compatibility.
- if cmp is not None:
- return cmp, cmp
-
- # If left None, equality is set to the specified default and ordering
- # mirrors equality.
- if eq is None:
- eq = default_eq
-
- if order is None:
- order = eq
-
- if eq is False and order is True:
- raise ValueError("`order` can only be True if `eq` is True too.")
-
- return eq, order
-
-
-def _determine_attrib_eq_order(cmp, eq, order, default_eq):
- """
- Validate the combination of *cmp*, *eq*, and *order*. Derive the effective
- values of eq and order. If *eq* is None, set it to *default_eq*.
- """
- if cmp is not None and any((eq is not None, order is not None)):
- raise ValueError("Don't mix `cmp` with `eq' and `order`.")
-
- def decide_callable_or_boolean(value):
- """
- Decide whether a key function is used.
- """
- if callable(value):
- value, key = True, value
- else:
- key = None
- return value, key
-
- # cmp takes precedence due to bw-compatibility.
- if cmp is not None:
- cmp, cmp_key = decide_callable_or_boolean(cmp)
- return cmp, cmp_key, cmp, cmp_key
-
- # If left None, equality is set to the specified default and ordering
- # mirrors equality.
- if eq is None:
- eq, eq_key = default_eq, None
- else:
- eq, eq_key = decide_callable_or_boolean(eq)
-
- if order is None:
- order, order_key = eq, eq_key
- else:
- order, order_key = decide_callable_or_boolean(order)
-
- if eq is False and order is True:
- raise ValueError("`order` can only be True if `eq` is True too.")
-
- return eq, eq_key, order, order_key
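In other words, passing a callable as eq (or order) is shorthand for "enabled, with this key function"; a quick check against the return-value ordering above (importing the private helper directly, for illustration only):

from attr._make import _determine_attrib_eq_order

eq, eq_key, order, order_key = _determine_attrib_eq_order(
    cmp=None, eq=str.lower, order=None, default_eq=True
)
print(eq, eq_key is str.lower)        # True True
print(order, order_key is str.lower)  # True True -- order mirrors eq when left None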
-
-
-def _determine_whether_to_implement(
- cls, flag, auto_detect, dunders, default=True
-):
- """
- Check whether we should implement a set of methods for *cls*.
-
- *flag* is the argument passed into @attr.s like 'init', *auto_detect* the
- same as passed into @attr.s and *dunders* is a tuple of attribute names
- whose presence signal that the user has implemented it themselves.
-
- Return *default* if no reason for either for or against is found.
- """
- if flag is True or flag is False:
- return flag
-
- if flag is None and auto_detect is False:
- return default
-
- # Logically, flag is None and auto_detect is True here.
- for dunder in dunders:
- if _has_own_attribute(cls, dunder):
- return False
-
- return default
-
-
-def attrs(
- maybe_cls=None,
- these=None,
- repr_ns=None,
- repr=None,
- cmp=None,
- hash=None,
- init=None,
- slots=False,
- frozen=False,
- weakref_slot=True,
- str=False,
- auto_attribs=False,
- kw_only=False,
- cache_hash=False,
- auto_exc=False,
- eq=None,
- order=None,
- auto_detect=False,
- collect_by_mro=False,
- getstate_setstate=None,
- on_setattr=None,
- field_transformer=None,
- match_args=True,
- unsafe_hash=None,
-):
- r"""
- A class decorator that adds :term:`dunder methods` according to the
- specified attributes using `attr.ib` or the *these* argument.
-
- Please consider using `attrs.define` / `attrs.frozen` in new code
- (``attr.s`` will *never* go away, though).
-
- :param these: A dictionary of name to `attr.ib` mappings. This is
- useful to avoid the definition of your attributes within the class body
- because you can't (e.g. if you want to add ``__repr__`` methods to
- Django models) or don't want to.
-
- If *these* is not ``None``, *attrs* will *not* search the class body
- for attributes and will *not* remove any attributes from it.
-
- The order is deduced from the order of the attributes inside *these*.
-
- :type these: `dict` of `str` to `attr.ib`
-
- :param str repr_ns: When using nested classes, there's no way in Python 2
- to automatically detect that. Therefore it's possible to set the
- namespace explicitly for a more meaningful ``repr`` output.
- :param bool auto_detect: Instead of setting the *init*, *repr*, *eq*,
- *order*, and *hash* arguments explicitly, assume they are set to
- ``True`` **unless any** of the involved methods for one of the
- arguments is implemented in the *current* class (i.e. it is *not*
- inherited from some base class).
-
- So for example by implementing ``__eq__`` on a class yourself,
- *attrs* will deduce ``eq=False`` and will create *neither*
- ``__eq__`` *nor* ``__ne__`` (but Python classes come with a sensible
- ``__ne__`` by default, so it *should* be enough to only implement
- ``__eq__`` in most cases).
-
- .. warning::
-
- If you prevent *attrs* from creating the ordering methods for you
- (``order=False``, e.g. by implementing ``__le__``), it becomes
- *your* responsibility to make sure its ordering is sound. The best
- way is to use the `functools.total_ordering` decorator.
-
-
- Passing ``True`` or ``False`` to *init*, *repr*, *eq*, *order*,
- *cmp*, or *hash* overrides whatever *auto_detect* would determine.
-
- :param bool repr: Create a ``__repr__`` method with a human readable
- representation of *attrs* attributes.
- :param bool str: Create a ``__str__`` method that is identical to
- ``__repr__``. This is usually not necessary except for
- `Exception`\ s.
- :param Optional[bool] eq: If ``True`` or ``None`` (default), add ``__eq__``
- and ``__ne__`` methods that check two instances for equality.
-
- They compare the instances as if they were tuples of their *attrs*
- attributes if and only if the types of both classes are *identical*!
- :param Optional[bool] order: If ``True``, add ``__lt__``, ``__le__``,
- ``__gt__``, and ``__ge__`` methods that behave like *eq* above and
- allow instances to be ordered. If ``None`` (default) mirror value of
- *eq*.
- :param Optional[bool] cmp: Setting *cmp* is equivalent to setting *eq*
- and *order* to the same value. Must not be mixed with *eq* or *order*.
- :param Optional[bool] unsafe_hash: If ``None`` (default), the ``__hash__``
- method is generated according to how *eq* and *frozen* are set.
-
- 1. If *both* are True, *attrs* will generate a ``__hash__`` for you.
- 2. If *eq* is True and *frozen* is False, ``__hash__`` will be set to
- None, marking it unhashable (which it is).
- 3. If *eq* is False, ``__hash__`` will be left untouched meaning the
- ``__hash__`` method of the base class will be used (if base class is
- ``object``, this means it will fall back to id-based hashing.).
-
- Although not recommended, you can decide for yourself and force
- *attrs* to create one (e.g. if the class is immutable even though you
- didn't freeze it programmatically) by passing ``True`` or not. Both of
- these cases are rather special and should be used carefully.
-
- See our documentation on `hashing`, Python's documentation on
- `object.__hash__`, and the `GitHub issue that led to the default \
- behavior `_ for more
- details.
- :param Optional[bool] hash: Alias for *unsafe_hash*. *unsafe_hash* takes
- precedence.
- :param bool init: Create a ``__init__`` method that initializes the
- *attrs* attributes. Leading underscores are stripped for the argument
- name. If a ``__attrs_pre_init__`` method exists on the class, it will
- be called before the class is initialized. If a ``__attrs_post_init__``
- method exists on the class, it will be called after the class is fully
- initialized.
-
- If ``init`` is ``False``, an ``__attrs_init__`` method will be
- injected instead. This allows you to define a custom ``__init__``
- method that can do pre-init work such as ``super().__init__()``,
- and then call ``__attrs_init__()`` and ``__attrs_post_init__()``.
- :param bool slots: Create a :term:`slotted class ` that's
- more memory-efficient. Slotted classes are generally superior to the
- default dict classes, but have some gotchas you should know about, so
- we encourage you to read the :term:`glossary entry `.
- :param bool frozen: Make instances immutable after initialization. If
- someone attempts to modify a frozen instance,
- `attrs.exceptions.FrozenInstanceError` is raised.
-
- .. note::
-
- 1. This is achieved by installing a custom ``__setattr__`` method
- on your class, so you can't implement your own.
-
- 2. True immutability is impossible in Python.
-
- 3. This *does* have a minor runtime performance `impact
- ` when initializing new instances. In other words:
- ``__init__`` is slightly slower with ``frozen=True``.
-
- 4. If a class is frozen, you cannot modify ``self`` in
- ``__attrs_post_init__`` or a self-written ``__init__``. You can
- circumvent that limitation by using
- ``object.__setattr__(self, "attribute_name", value)``.
-
- 5. Subclasses of a frozen class are frozen too.
-
- :param bool weakref_slot: Make instances weak-referenceable. This has no
- effect unless ``slots`` is also enabled.
- :param bool auto_attribs: If ``True``, collect :pep:`526`-annotated
- attributes from the class body.
-
- In this case, you **must** annotate every field. If *attrs*
- encounters a field that is set to an `attr.ib` but lacks a type
- annotation, an `attr.exceptions.UnannotatedAttributeError` is
- raised. Use ``field_name: typing.Any = attr.ib(...)`` if you don't
- want to set a type.
-
- If you assign a value to those attributes (e.g. ``x: int = 42``), that
- value becomes the default value like if it were passed using
- ``attr.ib(default=42)``. Passing an instance of `attrs.Factory` also
- works as expected in most cases (see warning below).
-
- Attributes annotated as `typing.ClassVar`, and attributes that are
- neither annotated nor set to an `attr.ib` are **ignored**.
-
- .. warning::
- For features that use the attribute name to create decorators (e.g.
- :ref:`validators `), you still *must* assign `attr.ib`
- to them. Otherwise Python will either not find the name or try to
- use the default value to call e.g. ``validator`` on it.
-
- These errors can be quite confusing and are probably the most common bug
- report on our bug tracker.
-
- :param bool kw_only: Make all attributes keyword-only
- in the generated ``__init__`` (if ``init`` is ``False``, this
- parameter is ignored).
- :param bool cache_hash: Ensure that the object's hash code is computed
- only once and stored on the object. If this is set to ``True``,
- hashing must be either explicitly or implicitly enabled for this
- class. If the hash code is cached, avoid any reassignments of
- fields involved in hash code computation or mutations of the objects
- those fields point to after object creation. If such changes occur,
- the behavior of the object's hash code is undefined.
- :param bool auto_exc: If the class subclasses `BaseException`
- (which implicitly includes any subclass of any exception), the
- following happens to behave like a well-behaved Python exceptions
- class:
-
- - the values for *eq*, *order*, and *hash* are ignored and the
- instances compare and hash by the instance's ids (N.B. *attrs* will
- *not* remove existing implementations of ``__hash__`` or the equality
- methods. It just won't add own ones.),
- - all attributes that are either passed into ``__init__`` or have a
- default value are additionally available as a tuple in the ``args``
- attribute,
- - the value of *str* is ignored leaving ``__str__`` to base classes.
- :param bool collect_by_mro: Setting this to `True` fixes the way *attrs*
- collects attributes from base classes. The default behavior is
- incorrect in certain cases of multiple inheritance. It should be on by
- default but is kept off for backward-compatibility.
-
- See issue `#428 `_ for
- more details.
-
- :param Optional[bool] getstate_setstate:
- .. note::
- This is usually only interesting for slotted classes and you should
- probably just set *auto_detect* to `True`.
-
- If `True`, ``__getstate__`` and
- ``__setstate__`` are generated and attached to the class. This is
- necessary for slotted classes to be pickleable. If left `None`, it's
- `True` by default for slotted classes and ``False`` for dict classes.
-
- If *auto_detect* is `True`, and *getstate_setstate* is left `None`,
- and **either** ``__getstate__`` or ``__setstate__`` is detected directly
- on the class (i.e. not inherited), it is set to `False` (this is usually
- what you want).
-
- :param on_setattr: A callable that is run whenever the user attempts to set
- an attribute (either by assignment like ``i.x = 42`` or by using
- `setattr` like ``setattr(i, "x", 42)``). It receives the same arguments
- as validators: the instance, the attribute that is being modified, and
- the new value.
-
- If no exception is raised, the attribute is set to the return value of
- the callable.
-
- If a list of callables is passed, they're automatically wrapped in an
- `attrs.setters.pipe`.
- :type on_setattr: `callable`, or a list of callables, or `None`, or
- `attrs.setters.NO_OP`
-
- :param Optional[callable] field_transformer:
- A function that is called with the original class object and all
- fields right before *attrs* finalizes the class. You can use
- this, e.g., to automatically add converters or validators to
- fields based on their types. See `transform-fields` for more details.
-
- :param bool match_args:
- If `True` (default), set ``__match_args__`` on the class to support
- :pep:`634` (Structural Pattern Matching). It is a tuple of all
- non-keyword-only ``__init__`` parameter names on Python 3.10 and later.
- Ignored on older Python versions.
-
- .. versionadded:: 16.0.0 *slots*
- .. versionadded:: 16.1.0 *frozen*
- .. versionadded:: 16.3.0 *str*
- .. versionadded:: 16.3.0 Support for ``__attrs_post_init__``.
- .. versionchanged:: 17.1.0
- *hash* supports ``None`` as value which is also the default now.
- .. versionadded:: 17.3.0 *auto_attribs*
- .. versionchanged:: 18.1.0
- If *these* is passed, no attributes are deleted from the class body.
- .. versionchanged:: 18.1.0 If *these* is ordered, the order is retained.
- .. versionadded:: 18.2.0 *weakref_slot*
- .. deprecated:: 18.2.0
- ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now raise a
- `DeprecationWarning` if the classes compared are subclasses of
- each other. ``__eq__`` and ``__ne__`` never tried to compare subclasses
- to each other.
- .. versionchanged:: 19.2.0
- ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now do not consider
- subclasses comparable anymore.
- .. versionadded:: 18.2.0 *kw_only*
- .. versionadded:: 18.2.0 *cache_hash*
- .. versionadded:: 19.1.0 *auto_exc*
- .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01.
- .. versionadded:: 19.2.0 *eq* and *order*
- .. versionadded:: 20.1.0 *auto_detect*
- .. versionadded:: 20.1.0 *collect_by_mro*
- .. versionadded:: 20.1.0 *getstate_setstate*
- .. versionadded:: 20.1.0 *on_setattr*
- .. versionadded:: 20.3.0 *field_transformer*
- .. versionchanged:: 21.1.0
- ``init=False`` injects ``__attrs_init__``
- .. versionchanged:: 21.1.0 Support for ``__attrs_pre_init__``
- .. versionchanged:: 21.1.0 *cmp* undeprecated
- .. versionadded:: 21.3.0 *match_args*
- .. versionadded:: 22.2.0
- *unsafe_hash* as an alias for *hash* (for :pep:`681` compliance).
- """
- eq_, order_ = _determine_attrs_eq_order(cmp, eq, order, None)
-
- # unsafe_hash takes precedence due to PEP 681.
- if unsafe_hash is not None:
- hash = unsafe_hash
-
- if isinstance(on_setattr, (list, tuple)):
- on_setattr = setters.pipe(*on_setattr)
-
- def wrap(cls):
- is_frozen = frozen or _has_frozen_base_class(cls)
- is_exc = auto_exc is True and issubclass(cls, BaseException)
- has_own_setattr = auto_detect and _has_own_attribute(
- cls, "__setattr__"
- )
-
- if has_own_setattr and is_frozen:
- raise ValueError("Can't freeze a class with a custom __setattr__.")
-
- builder = _ClassBuilder(
- cls,
- these,
- slots,
- is_frozen,
- weakref_slot,
- _determine_whether_to_implement(
- cls,
- getstate_setstate,
- auto_detect,
- ("__getstate__", "__setstate__"),
- default=slots,
- ),
- auto_attribs,
- kw_only,
- cache_hash,
- is_exc,
- collect_by_mro,
- on_setattr,
- has_own_setattr,
- field_transformer,
- )
- if _determine_whether_to_implement(
- cls, repr, auto_detect, ("__repr__",)
- ):
- builder.add_repr(repr_ns)
- if str is True:
- builder.add_str()
-
- eq = _determine_whether_to_implement(
- cls, eq_, auto_detect, ("__eq__", "__ne__")
- )
- if not is_exc and eq is True:
- builder.add_eq()
- if not is_exc and _determine_whether_to_implement(
- cls, order_, auto_detect, ("__lt__", "__le__", "__gt__", "__ge__")
- ):
- builder.add_order()
-
- builder.add_setattr()
-
- nonlocal hash
- if (
- hash is None
- and auto_detect is True
- and _has_own_attribute(cls, "__hash__")
- ):
- hash = False
-
- if hash is not True and hash is not False and hash is not None:
- # Can't use `hash in` because 1 == True for example.
- raise TypeError(
- "Invalid value for hash. Must be True, False, or None."
- )
- elif hash is False or (hash is None and eq is False) or is_exc:
- # Don't do anything. Should fall back to object's __hash__
- # which is by id.
- if cache_hash:
- raise TypeError(
- "Invalid value for cache_hash. To use hash caching,"
- " hashing must be either explicitly or implicitly "
- "enabled."
- )
- elif hash is True or (
- hash is None and eq is True and is_frozen is True
- ):
- # Build a __hash__ if told so, or if it's safe.
- builder.add_hash()
- else:
- # Raise TypeError on attempts to hash.
- if cache_hash:
- raise TypeError(
- "Invalid value for cache_hash. To use hash caching,"
- " hashing must be either explicitly or implicitly "
- "enabled."
- )
- builder.make_unhashable()
-
- if _determine_whether_to_implement(
- cls, init, auto_detect, ("__init__",)
- ):
- builder.add_init()
- else:
- builder.add_attrs_init()
- if cache_hash:
- raise TypeError(
- "Invalid value for cache_hash. To use hash caching,"
- " init must be True."
- )
-
- if (
- PY310
- and match_args
- and not _has_own_attribute(cls, "__match_args__")
- ):
- builder.add_match_args()
-
- return builder.build_class()
-
- # maybe_cls's type depends on the usage of the decorator. It's a class
- # if it's used as `@attrs` but ``None`` if used as `@attrs()`.
- if maybe_cls is None:
- return wrap
- else:
- return wrap(maybe_cls)
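As a quick orientation for the many flags documented above, a typical invocation that exercises several of them at once (auto_attribs collects the annotated fields, frozen installs the _frozen_setattrs guard, slots builds a new slotted class):

import attr

@attr.s(auto_attribs=True, frozen=True, slots=True)
class Point:
    x: int
    y: int = 0

p = Point(1)
print(p)                  # Point(x=1, y=0)
print(p == Point(1, 0))   # True -- __eq__ compares the attribute tuples
try:
    p.x = 5
except attr.exceptions.FrozenInstanceError:
    print("frozen instances reject attribute assignment")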
-
-
-_attrs = attrs
-"""
-Internal alias so we can use it in functions that take an argument called
-*attrs*.
-"""
-
-
-def _has_frozen_base_class(cls):
- """
- Check whether *cls* has a frozen ancestor by looking at its
- __setattr__.
- """
- return cls.__setattr__ is _frozen_setattrs
-
-
-def _generate_unique_filename(cls, func_name):
- """
- Create a "filename" suitable for a function being generated.
- """
- return (
- f""
- )
-
-
-def _make_hash(cls, attrs, frozen, cache_hash):
- attrs = tuple(
- a for a in attrs if a.hash is True or (a.hash is None and a.eq is True)
- )
-
- tab = " "
-
- unique_filename = _generate_unique_filename(cls, "hash")
- type_hash = hash(unique_filename)
- # If eq is custom generated, we need to include the functions in globs
- globs = {}
-
- hash_def = "def __hash__(self"
- hash_func = "hash(("
- closing_braces = "))"
- if not cache_hash:
- hash_def += "):"
- else:
- hash_def += ", *"
-
- hash_def += (
- ", _cache_wrapper="
- + "__import__('attr._make')._make._CacheHashWrapper):"
- )
- hash_func = "_cache_wrapper(" + hash_func
- closing_braces += ")"
-
- method_lines = [hash_def]
-
- def append_hash_computation_lines(prefix, indent):
- """
- Generate the code for actually computing the hash code.
- Below, this will either be returned directly or used to compute
- a value which is then cached, depending on the value of cache_hash.
- """
-
- method_lines.extend(
- [
- indent + prefix + hash_func,
- indent + f" {type_hash},",
- ]
- )
-
- for a in attrs:
- if a.eq_key:
- cmp_name = f"_{a.name}_key"
- globs[cmp_name] = a.eq_key
- method_lines.append(
- indent + f" {cmp_name}(self.{a.name}),"
- )
- else:
- method_lines.append(indent + f" self.{a.name},")
-
- method_lines.append(indent + " " + closing_braces)
-
- if cache_hash:
- method_lines.append(tab + f"if self.{_hash_cache_field} is None:")
- if frozen:
- append_hash_computation_lines(
- f"object.__setattr__(self, '{_hash_cache_field}', ", tab * 2
- )
- method_lines.append(tab * 2 + ")") # close __setattr__
- else:
- append_hash_computation_lines(
- f"self.{_hash_cache_field} = ", tab * 2
- )
- method_lines.append(tab + f"return self.{_hash_cache_field}")
- else:
- append_hash_computation_lines("return ", tab)
-
- script = "\n".join(method_lines)
- return _make_method("__hash__", script, unique_filename, globs)
-
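# [Editor's note] Illustrative sketch of the hash caching that the generated
# __hash__ above supports, using the public attr.s API; assumes attrs is
# installed, not part of the deleted file.
import attr

@attr.s(frozen=True, cache_hash=True)   # frozen + eq implies hashing, so caching is allowed
class Key:
    parts = attr.ib(converter=tuple)

k = Key([1, 2, 3])
assert hash(k) == hash(k)   # the second call reuses the cached value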
-
-def _add_hash(cls, attrs):
- """
- Add a hash method to *cls*.
- """
- cls.__hash__ = _make_hash(cls, attrs, frozen=False, cache_hash=False)
- return cls
-
-
-def _make_ne():
- """
- Create __ne__ method.
- """
-
- def __ne__(self, other):
- """
- Check equality and either forward a NotImplemented or
- return the result negated.
- """
- result = self.__eq__(other)
- if result is NotImplemented:
- return NotImplemented
-
- return not result
-
- return __ne__
-
-
-def _make_eq(cls, attrs):
- """
- Create __eq__ method for *cls* with *attrs*.
- """
- attrs = [a for a in attrs if a.eq]
-
- unique_filename = _generate_unique_filename(cls, "eq")
- lines = [
- "def __eq__(self, other):",
- " if other.__class__ is not self.__class__:",
- " return NotImplemented",
- ]
-
- # We can't just do a big self.x = other.x and... clause due to
- # irregularities like nan == nan is false but (nan,) == (nan,) is true.
- globs = {}
- if attrs:
- lines.append(" return (")
- others = [" ) == ("]
- for a in attrs:
- if a.eq_key:
- cmp_name = f"_{a.name}_key"
- # Add the key function to the global namespace
- # of the evaluated function.
- globs[cmp_name] = a.eq_key
- lines.append(f" {cmp_name}(self.{a.name}),")
- others.append(f" {cmp_name}(other.{a.name}),")
- else:
- lines.append(f" self.{a.name},")
- others.append(f" other.{a.name},")
-
- lines += others + [" )"]
- else:
- lines.append(" return True")
-
- script = "\n".join(lines)
-
- return _make_method("__eq__", script, unique_filename, globs)
-
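# [Editor's note] A small sketch of the edge case the comment above refers to:
# chaining `self.x == other.x and ...` would treat NaN fields as unequal,
# while tuple comparison short-circuits on identity.
nan = float("nan")
assert (nan == nan) is False        # plain comparison: NaN is never equal to itself
assert ((nan,) == (nan,)) is True   # tuples compare elements by identity first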
-
-def _make_order(cls, attrs):
- """
- Create ordering methods for *cls* with *attrs*.
- """
- attrs = [a for a in attrs if a.order]
-
- def attrs_to_tuple(obj):
- """
- Save us some typing.
- """
- return tuple(
- key(value) if key else value
- for value, key in (
- (getattr(obj, a.name), a.order_key) for a in attrs
- )
- )
-
- def __lt__(self, other):
- """
- Automatically created by attrs.
- """
- if other.__class__ is self.__class__:
- return attrs_to_tuple(self) < attrs_to_tuple(other)
-
- return NotImplemented
-
- def __le__(self, other):
- """
- Automatically created by attrs.
- """
- if other.__class__ is self.__class__:
- return attrs_to_tuple(self) <= attrs_to_tuple(other)
-
- return NotImplemented
-
- def __gt__(self, other):
- """
- Automatically created by attrs.
- """
- if other.__class__ is self.__class__:
- return attrs_to_tuple(self) > attrs_to_tuple(other)
-
- return NotImplemented
-
- def __ge__(self, other):
- """
- Automatically created by attrs.
- """
- if other.__class__ is self.__class__:
- return attrs_to_tuple(self) >= attrs_to_tuple(other)
-
- return NotImplemented
-
- return __lt__, __le__, __gt__, __ge__
-
-
-def _add_eq(cls, attrs=None):
- """
- Add equality methods to *cls* with *attrs*.
- """
- if attrs is None:
- attrs = cls.__attrs_attrs__
-
- cls.__eq__ = _make_eq(cls, attrs)
- cls.__ne__ = _make_ne()
-
- return cls
-
-
-def _make_repr(attrs, ns, cls):
- unique_filename = _generate_unique_filename(cls, "repr")
- # Figure out which attributes to include, and which function to use to
- # format them. The a.repr value can be either bool or a custom
- # callable.
- attr_names_with_reprs = tuple(
- (a.name, (repr if a.repr is True else a.repr), a.init)
- for a in attrs
- if a.repr is not False
- )
- globs = {
- name + "_repr": r for name, r, _ in attr_names_with_reprs if r != repr
- }
- globs["_compat"] = _compat
- globs["AttributeError"] = AttributeError
- globs["NOTHING"] = NOTHING
- attribute_fragments = []
- for name, r, i in attr_names_with_reprs:
- accessor = (
- "self." + name if i else 'getattr(self, "' + name + '", NOTHING)'
- )
- fragment = (
- "%s={%s!r}" % (name, accessor)
- if r == repr
- else "%s={%s_repr(%s)}" % (name, name, accessor)
- )
- attribute_fragments.append(fragment)
- repr_fragment = ", ".join(attribute_fragments)
-
- if ns is None:
- cls_name_fragment = '{self.__class__.__qualname__.rsplit(">.", 1)[-1]}'
- else:
- cls_name_fragment = ns + ".{self.__class__.__name__}"
-
- lines = [
- "def __repr__(self):",
- " try:",
- " already_repring = _compat.repr_context.already_repring",
- " except AttributeError:",
- " already_repring = {id(self),}",
- " _compat.repr_context.already_repring = already_repring",
- " else:",
- " if id(self) in already_repring:",
- " return '...'",
- " else:",
- " already_repring.add(id(self))",
- " try:",
- f" return f'{cls_name_fragment}({repr_fragment})'",
- " finally:",
- " already_repring.remove(id(self))",
- ]
-
- return _make_method(
- "__repr__", "\n".join(lines), unique_filename, globs=globs
- )
-
-
-def _add_repr(cls, ns=None, attrs=None):
- """
- Add a repr method to *cls*.
- """
- if attrs is None:
- attrs = cls.__attrs_attrs__
-
- cls.__repr__ = _make_repr(attrs, ns, cls)
- return cls
-
-
-def fields(cls):
- """
- Return the tuple of *attrs* attributes for a class.
-
- The tuple also allows accessing the fields by their names (see below for
- examples).
-
- :param type cls: Class to introspect.
-
- :raise TypeError: If *cls* is not a class.
- :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs*
- class.
-
- :rtype: tuple (with name accessors) of `attrs.Attribute`
-
- .. versionchanged:: 16.2.0 Returned tuple allows accessing the fields
- by name.
- .. versionchanged:: 23.1.0 Add support for generic classes.
- """
- generic_base = get_generic_base(cls)
-
- if generic_base is None and not isinstance(cls, type):
- raise TypeError("Passed object must be a class.")
-
- attrs = getattr(cls, "__attrs_attrs__", None)
-
- if attrs is None:
- if generic_base is not None:
- attrs = getattr(generic_base, "__attrs_attrs__", None)
- if attrs is not None:
- # Even though this is global state, stick it on here to speed
- # it up. We rely on `cls` being cached for this to be
- # efficient.
- cls.__attrs_attrs__ = attrs
- return attrs
- raise NotAnAttrsClassError(f"{cls!r} is not an attrs-decorated class.")
-
- return attrs
-
-
-def fields_dict(cls):
- """
- Return an ordered dictionary of *attrs* attributes for a class, whose
- keys are the attribute names.
-
- :param type cls: Class to introspect.
-
- :raise TypeError: If *cls* is not a class.
- :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs*
- class.
-
- :rtype: dict
-
- .. versionadded:: 18.1.0
- """
- if not isinstance(cls, type):
- raise TypeError("Passed object must be a class.")
- attrs = getattr(cls, "__attrs_attrs__", None)
- if attrs is None:
- raise NotAnAttrsClassError(f"{cls!r} is not an attrs-decorated class.")
- return {a.name: a for a in attrs}
-
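# [Editor's note] A minimal usage sketch of the two introspection helpers
# above, via the public attr package; illustrative, not part of the deleted file.
import attr

@attr.s
class Point:
    x = attr.ib(default=0)
    y = attr.ib(default=0)

assert attr.fields(Point).x.name == "x"           # tuple with name accessors
assert attr.fields_dict(Point)["y"].default == 0  # plain name -> Attribute dict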
-
-def validate(inst):
- """
- Validate all attributes on *inst* that have a validator.
-
- Lets all exceptions propagate.
-
- :param inst: Instance of a class with *attrs* attributes.
- """
- if _config._run_validators is False:
- return
-
- for a in fields(inst.__class__):
- v = a.validator
- if v is not None:
- v(inst, a, getattr(inst, a.name))
-
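# [Editor's note] Sketch of validate() re-running validators on an existing
# instance, assuming the public attr package; illustrative only.
import attr

@attr.s
class C:
    x = attr.ib(validator=attr.validators.instance_of(int))

c = C(1)
c.x = "not an int"    # plain assignment is not validated here
try:
    attr.validate(c)  # but validate() re-runs all validators and raises
except TypeError:
    pass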
-
-def _is_slot_cls(cls):
- return "__slots__" in cls.__dict__
-
-
-def _is_slot_attr(a_name, base_attr_map):
- """
- Check if the attribute name comes from a slot class.
- """
- return a_name in base_attr_map and _is_slot_cls(base_attr_map[a_name])
-
-
-def _make_init(
- cls,
- attrs,
- pre_init,
- post_init,
- frozen,
- slots,
- cache_hash,
- base_attr_map,
- is_exc,
- cls_on_setattr,
- attrs_init,
-):
- has_cls_on_setattr = (
- cls_on_setattr is not None and cls_on_setattr is not setters.NO_OP
- )
-
- if frozen and has_cls_on_setattr:
- raise ValueError("Frozen classes can't use on_setattr.")
-
- needs_cached_setattr = cache_hash or frozen
- filtered_attrs = []
- attr_dict = {}
- for a in attrs:
- if not a.init and a.default is NOTHING:
- continue
-
- filtered_attrs.append(a)
- attr_dict[a.name] = a
-
- if a.on_setattr is not None:
- if frozen is True:
- raise ValueError("Frozen classes can't use on_setattr.")
-
- needs_cached_setattr = True
- elif has_cls_on_setattr and a.on_setattr is not setters.NO_OP:
- needs_cached_setattr = True
-
- unique_filename = _generate_unique_filename(cls, "init")
-
- script, globs, annotations = _attrs_to_init_script(
- filtered_attrs,
- frozen,
- slots,
- pre_init,
- post_init,
- cache_hash,
- base_attr_map,
- is_exc,
- needs_cached_setattr,
- has_cls_on_setattr,
- attrs_init,
- )
- if cls.__module__ in sys.modules:
- # This makes typing.get_type_hints(CLS.__init__) resolve string types.
- globs.update(sys.modules[cls.__module__].__dict__)
-
- globs.update({"NOTHING": NOTHING, "attr_dict": attr_dict})
-
- if needs_cached_setattr:
- # Save the lookup overhead in __init__ if we need to circumvent
- # setattr hooks.
- globs["_cached_setattr_get"] = _obj_setattr.__get__
-
- init = _make_method(
- "__attrs_init__" if attrs_init else "__init__",
- script,
- unique_filename,
- globs,
- )
- init.__annotations__ = annotations
-
- return init
-
-
-def _setattr(attr_name, value_var, has_on_setattr):
- """
- Use the cached object.setattr to set *attr_name* to *value_var*.
- """
- return f"_setattr('{attr_name}', {value_var})"
-
-
-def _setattr_with_converter(attr_name, value_var, has_on_setattr):
- """
- Use the cached object.setattr to set *attr_name* to *value_var*, but run
- its converter first.
- """
- return "_setattr('%s', %s(%s))" % (
- attr_name,
- _init_converter_pat % (attr_name,),
- value_var,
- )
-
-
-def _assign(attr_name, value, has_on_setattr):
- """
- Unless *attr_name* has an on_setattr hook, use normal assignment. Otherwise
- delegate to _setattr.
- """
- if has_on_setattr:
- return _setattr(attr_name, value, True)
-
- return f"self.{attr_name} = {value}"
-
-
-def _assign_with_converter(attr_name, value_var, has_on_setattr):
- """
- Unless *attr_name* has an on_setattr hook, use normal assignment after
- conversion. Otherwise delegate to _setattr_with_converter.
- """
- if has_on_setattr:
- return _setattr_with_converter(attr_name, value_var, True)
-
- return "self.%s = %s(%s)" % (
- attr_name,
- _init_converter_pat % (attr_name,),
- value_var,
- )
-
-
-def _attrs_to_init_script(
- attrs,
- frozen,
- slots,
- pre_init,
- post_init,
- cache_hash,
- base_attr_map,
- is_exc,
- needs_cached_setattr,
- has_cls_on_setattr,
- attrs_init,
-):
- """
- Return a script of an initializer for *attrs* and a dict of globals.
-
- The globals are expected by the generated script.
-
- If *frozen* is True, we cannot set the attributes directly so we use
- a cached ``object.__setattr__``.
- """
- lines = []
- if pre_init:
- lines.append("self.__attrs_pre_init__()")
-
- if needs_cached_setattr:
- lines.append(
- # Circumvent the __setattr__ descriptor to save one lookup per
- # assignment.
- # Note _setattr will be used again below if cache_hash is True
- "_setattr = _cached_setattr_get(self)"
- )
-
- if frozen is True:
- if slots is True:
- fmt_setter = _setattr
- fmt_setter_with_converter = _setattr_with_converter
- else:
- # Dict frozen classes assign directly to __dict__.
- # But only if the attribute doesn't come from an ancestor slot
- # class.
- # Note _inst_dict will be used again below if cache_hash is True
- lines.append("_inst_dict = self.__dict__")
-
- def fmt_setter(attr_name, value_var, has_on_setattr):
- if _is_slot_attr(attr_name, base_attr_map):
- return _setattr(attr_name, value_var, has_on_setattr)
-
- return f"_inst_dict['{attr_name}'] = {value_var}"
-
- def fmt_setter_with_converter(
- attr_name, value_var, has_on_setattr
- ):
- if has_on_setattr or _is_slot_attr(attr_name, base_attr_map):
- return _setattr_with_converter(
- attr_name, value_var, has_on_setattr
- )
-
- return "_inst_dict['%s'] = %s(%s)" % (
- attr_name,
- _init_converter_pat % (attr_name,),
- value_var,
- )
-
- else:
- # Not frozen.
- fmt_setter = _assign
- fmt_setter_with_converter = _assign_with_converter
-
- args = []
- kw_only_args = []
- attrs_to_validate = []
-
- # This is a dictionary of names to validator and converter callables.
- # Injecting this into __init__ globals lets us avoid lookups.
- names_for_globals = {}
- annotations = {"return": None}
-
- for a in attrs:
- if a.validator:
- attrs_to_validate.append(a)
-
- attr_name = a.name
- has_on_setattr = a.on_setattr is not None or (
- a.on_setattr is not setters.NO_OP and has_cls_on_setattr
- )
- # a.alias is set to maybe-mangled attr_name in _ClassBuilder if not
- # explicitly provided
- arg_name = a.alias
-
- has_factory = isinstance(a.default, Factory)
- if has_factory and a.default.takes_self:
- maybe_self = "self"
- else:
- maybe_self = ""
-
- if a.init is False:
- if has_factory:
- init_factory_name = _init_factory_pat % (a.name,)
- if a.converter is not None:
- lines.append(
- fmt_setter_with_converter(
- attr_name,
- init_factory_name + f"({maybe_self})",
- has_on_setattr,
- )
- )
- conv_name = _init_converter_pat % (a.name,)
- names_for_globals[conv_name] = a.converter
- else:
- lines.append(
- fmt_setter(
- attr_name,
- init_factory_name + f"({maybe_self})",
- has_on_setattr,
- )
- )
- names_for_globals[init_factory_name] = a.default.factory
- else:
- if a.converter is not None:
- lines.append(
- fmt_setter_with_converter(
- attr_name,
- f"attr_dict['{attr_name}'].default",
- has_on_setattr,
- )
- )
- conv_name = _init_converter_pat % (a.name,)
- names_for_globals[conv_name] = a.converter
- else:
- lines.append(
- fmt_setter(
- attr_name,
- f"attr_dict['{attr_name}'].default",
- has_on_setattr,
- )
- )
- elif a.default is not NOTHING and not has_factory:
- arg = f"{arg_name}=attr_dict['{attr_name}'].default"
- if a.kw_only:
- kw_only_args.append(arg)
- else:
- args.append(arg)
-
- if a.converter is not None:
- lines.append(
- fmt_setter_with_converter(
- attr_name, arg_name, has_on_setattr
- )
- )
- names_for_globals[
- _init_converter_pat % (a.name,)
- ] = a.converter
- else:
- lines.append(fmt_setter(attr_name, arg_name, has_on_setattr))
-
- elif has_factory:
- arg = f"{arg_name}=NOTHING"
- if a.kw_only:
- kw_only_args.append(arg)
- else:
- args.append(arg)
- lines.append(f"if {arg_name} is not NOTHING:")
-
- init_factory_name = _init_factory_pat % (a.name,)
- if a.converter is not None:
- lines.append(
- " "
- + fmt_setter_with_converter(
- attr_name, arg_name, has_on_setattr
- )
- )
- lines.append("else:")
- lines.append(
- " "
- + fmt_setter_with_converter(
- attr_name,
- init_factory_name + "(" + maybe_self + ")",
- has_on_setattr,
- )
- )
- names_for_globals[
- _init_converter_pat % (a.name,)
- ] = a.converter
- else:
- lines.append(
- " " + fmt_setter(attr_name, arg_name, has_on_setattr)
- )
- lines.append("else:")
- lines.append(
- " "
- + fmt_setter(
- attr_name,
- init_factory_name + "(" + maybe_self + ")",
- has_on_setattr,
- )
- )
- names_for_globals[init_factory_name] = a.default.factory
- else:
- if a.kw_only:
- kw_only_args.append(arg_name)
- else:
- args.append(arg_name)
-
- if a.converter is not None:
- lines.append(
- fmt_setter_with_converter(
- attr_name, arg_name, has_on_setattr
- )
- )
- names_for_globals[
- _init_converter_pat % (a.name,)
- ] = a.converter
- else:
- lines.append(fmt_setter(attr_name, arg_name, has_on_setattr))
-
- if a.init is True:
- if a.type is not None and a.converter is None:
- annotations[arg_name] = a.type
- elif a.converter is not None:
- # Try to get the type from the converter.
- t = _AnnotationExtractor(a.converter).get_first_param_type()
- if t:
- annotations[arg_name] = t
-
- if attrs_to_validate: # we can skip this if there are no validators.
- names_for_globals["_config"] = _config
- lines.append("if _config._run_validators is True:")
- for a in attrs_to_validate:
- val_name = "__attr_validator_" + a.name
- attr_name = "__attr_" + a.name
- lines.append(f" {val_name}(self, {attr_name}, self.{a.name})")
- names_for_globals[val_name] = a.validator
- names_for_globals[attr_name] = a
-
- if post_init:
- lines.append("self.__attrs_post_init__()")
-
- # because this is set only after __attrs_post_init__ is called, a crash
- # will result if post-init tries to access the hash code. This seemed
- # preferable to setting this beforehand, in which case alteration to
- # field values during post-init combined with post-init accessing the
- # hash code would result in silent bugs.
- if cache_hash:
- if frozen:
- if slots:
- # if frozen and slots, then _setattr defined above
- init_hash_cache = "_setattr('%s', %s)"
- else:
- # if frozen and not slots, then _inst_dict defined above
- init_hash_cache = "_inst_dict['%s'] = %s"
- else:
- init_hash_cache = "self.%s = %s"
- lines.append(init_hash_cache % (_hash_cache_field, "None"))
-
- # For exceptions we rely on BaseException.__init__ for proper
- # initialization.
- if is_exc:
- vals = ",".join(f"self.{a.name}" for a in attrs if a.init)
-
- lines.append(f"BaseException.__init__(self, {vals})")
-
- args = ", ".join(args)
- if kw_only_args:
- args += "%s*, %s" % (
- ", " if args else "", # leading comma
- ", ".join(kw_only_args), # kw_only args
- )
-
- return (
- "def %s(self, %s):\n %s\n"
- % (
- ("__attrs_init__" if attrs_init else "__init__"),
- args,
- "\n ".join(lines) if lines else "pass",
- ),
- names_for_globals,
- annotations,
- )
-
-
-def _default_init_alias_for(name: str) -> str:
- """
- The default __init__ parameter name for a field.
-
- This performs private-name adjustment via leading-underscore stripping,
- and is the default value of Attribute.alias if not provided.
- """
-
- return name.lstrip("_")
-
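# [Editor's note] Sketch of the default alias behaviour described above,
# assuming attrs >= 22.2 for the `alias` field; illustrative only.
import attr

@attr.s
class Client:
    _token = attr.ib()

assert attr.fields(Client)._token.alias == "token"  # leading underscore stripped
c = Client(token="abc")                             # __init__ uses the alias
assert c._token == "abc"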
-
-class Attribute:
- """
- *Read-only* representation of an attribute.
-
- .. warning::
-
- You should never instantiate this class yourself.
-
- The class has *all* arguments of `attr.ib` (except for ``factory``,
- which is only syntactic sugar for ``default=Factory(...)``) plus the
- following:
-
- - ``name`` (`str`): The name of the attribute.
- - ``alias`` (`str`): The __init__ parameter name of the attribute, after
- any explicit overrides and default private-attribute-name handling.
- - ``inherited`` (`bool`): Whether or not that attribute has been inherited
- from a base class.
- - ``eq_key`` and ``order_key`` (`typing.Callable` or `None`): The callables
- that are used for comparing and ordering objects by this attribute,
- respectively. These are set by passing a callable to `attr.ib`'s ``eq``,
- ``order``, or ``cmp`` arguments. See also :ref:`comparison customization
- <custom-comparison>`.
-
- Instances of this class are frequently used for introspection purposes
- like:
-
- - `fields` returns a tuple of them.
- - Validators get them passed as the first argument.
- - The :ref:`field transformer <transform-fields>` hook receives a list of
- them.
- - The ``alias`` property exposes the __init__ parameter name of the field,
- with any overrides and default private-attribute handling applied.
-
-
- .. versionadded:: 20.1.0 *inherited*
- .. versionadded:: 20.1.0 *on_setattr*
- .. versionchanged:: 20.2.0 *inherited* is not taken into account for
- equality checks and hashing anymore.
- .. versionadded:: 21.1.0 *eq_key* and *order_key*
- .. versionadded:: 22.2.0 *alias*
-
- For the full version history of the fields, see `attr.ib`.
- """
-
- __slots__ = (
- "name",
- "default",
- "validator",
- "repr",
- "eq",
- "eq_key",
- "order",
- "order_key",
- "hash",
- "init",
- "metadata",
- "type",
- "converter",
- "kw_only",
- "inherited",
- "on_setattr",
- "alias",
- )
-
- def __init__(
- self,
- name,
- default,
- validator,
- repr,
- cmp, # XXX: unused, remove along with other cmp code.
- hash,
- init,
- inherited,
- metadata=None,
- type=None,
- converter=None,
- kw_only=False,
- eq=None,
- eq_key=None,
- order=None,
- order_key=None,
- on_setattr=None,
- alias=None,
- ):
- eq, eq_key, order, order_key = _determine_attrib_eq_order(
- cmp, eq_key or eq, order_key or order, True
- )
-
- # Cache this descriptor here to speed things up later.
- bound_setattr = _obj_setattr.__get__(self)
-
- # Despite the big red warning, people *do* instantiate `Attribute`
- # themselves.
- bound_setattr("name", name)
- bound_setattr("default", default)
- bound_setattr("validator", validator)
- bound_setattr("repr", repr)
- bound_setattr("eq", eq)
- bound_setattr("eq_key", eq_key)
- bound_setattr("order", order)
- bound_setattr("order_key", order_key)
- bound_setattr("hash", hash)
- bound_setattr("init", init)
- bound_setattr("converter", converter)
- bound_setattr(
- "metadata",
- (
- types.MappingProxyType(dict(metadata)) # Shallow copy
- if metadata
- else _empty_metadata_singleton
- ),
- )
- bound_setattr("type", type)
- bound_setattr("kw_only", kw_only)
- bound_setattr("inherited", inherited)
- bound_setattr("on_setattr", on_setattr)
- bound_setattr("alias", alias)
-
- def __setattr__(self, name, value):
- raise FrozenInstanceError()
-
- @classmethod
- def from_counting_attr(cls, name, ca, type=None):
- # type holds the annotated value. deal with conflicts:
- if type is None:
- type = ca.type
- elif ca.type is not None:
- raise ValueError(
- "Type annotation and type argument cannot both be present"
- )
- inst_dict = {
- k: getattr(ca, k)
- for k in Attribute.__slots__
- if k
- not in (
- "name",
- "validator",
- "default",
- "type",
- "inherited",
- ) # exclude methods and deprecated alias
- }
- return cls(
- name=name,
- validator=ca._validator,
- default=ca._default,
- type=type,
- cmp=None,
- inherited=False,
- **inst_dict,
- )
-
- # Don't use attrs.evolve since fields(Attribute) doesn't work
- def evolve(self, **changes):
- """
- Copy *self* and apply *changes*.
-
- This works similarly to `attrs.evolve` but that function does not work
- with `Attribute`.
-
- It is mainly meant to be used for `transform-fields`.
-
- .. versionadded:: 20.3.0
- """
- new = copy.copy(self)
-
- new._setattrs(changes.items())
-
- return new
-
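# [Editor's note] Sketch of Attribute.evolve inside a field transformer, the
# use case the docstring above mentions; assumes the public attrs API
# (attrs >= 20.3), illustrative only.
import attr

def stringly(cls, fields):
    # Give untyped fields a str annotation by evolving the Attribute.
    return [f.evolve(type=str) if f.type is None else f for f in fields]

@attr.s(field_transformer=stringly)
class C:
    x = attr.ib()

assert attr.fields(C).x.type is str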
- # Don't use _add_pickle since fields(Attribute) doesn't work
- def __getstate__(self):
- """
- Play nice with pickle.
- """
- return tuple(
- getattr(self, name) if name != "metadata" else dict(self.metadata)
- for name in self.__slots__
- )
-
- def __setstate__(self, state):
- """
- Play nice with pickle.
- """
- self._setattrs(zip(self.__slots__, state))
-
- def _setattrs(self, name_values_pairs):
- bound_setattr = _obj_setattr.__get__(self)
- for name, value in name_values_pairs:
- if name != "metadata":
- bound_setattr(name, value)
- else:
- bound_setattr(
- name,
- types.MappingProxyType(dict(value))
- if value
- else _empty_metadata_singleton,
- )
-
-
-_a = [
- Attribute(
- name=name,
- default=NOTHING,
- validator=None,
- repr=True,
- cmp=None,
- eq=True,
- order=False,
- hash=(name != "metadata"),
- init=True,
- inherited=False,
- alias=_default_init_alias_for(name),
- )
- for name in Attribute.__slots__
-]
-
-Attribute = _add_hash(
- _add_eq(
- _add_repr(Attribute, attrs=_a),
- attrs=[a for a in _a if a.name != "inherited"],
- ),
- attrs=[a for a in _a if a.hash and a.name != "inherited"],
-)
-
-
-class _CountingAttr:
- """
- Intermediate representation of attributes that uses a counter to preserve
- the order in which the attributes have been defined.
-
- *Internal* data structure of the attrs library. Running into one is most
- likely the result of a bug like a forgotten `@attr.s` decorator.
- """
-
- __slots__ = (
- "counter",
- "_default",
- "repr",
- "eq",
- "eq_key",
- "order",
- "order_key",
- "hash",
- "init",
- "metadata",
- "_validator",
- "converter",
- "type",
- "kw_only",
- "on_setattr",
- "alias",
- )
- __attrs_attrs__ = tuple(
- Attribute(
- name=name,
- alias=_default_init_alias_for(name),
- default=NOTHING,
- validator=None,
- repr=True,
- cmp=None,
- hash=True,
- init=True,
- kw_only=False,
- eq=True,
- eq_key=None,
- order=False,
- order_key=None,
- inherited=False,
- on_setattr=None,
- )
- for name in (
- "counter",
- "_default",
- "repr",
- "eq",
- "order",
- "hash",
- "init",
- "on_setattr",
- "alias",
- )
- ) + (
- Attribute(
- name="metadata",
- alias="metadata",
- default=None,
- validator=None,
- repr=True,
- cmp=None,
- hash=False,
- init=True,
- kw_only=False,
- eq=True,
- eq_key=None,
- order=False,
- order_key=None,
- inherited=False,
- on_setattr=None,
- ),
- )
- cls_counter = 0
-
- def __init__(
- self,
- default,
- validator,
- repr,
- cmp,
- hash,
- init,
- converter,
- metadata,
- type,
- kw_only,
- eq,
- eq_key,
- order,
- order_key,
- on_setattr,
- alias,
- ):
- _CountingAttr.cls_counter += 1
- self.counter = _CountingAttr.cls_counter
- self._default = default
- self._validator = validator
- self.converter = converter
- self.repr = repr
- self.eq = eq
- self.eq_key = eq_key
- self.order = order
- self.order_key = order_key
- self.hash = hash
- self.init = init
- self.metadata = metadata
- self.type = type
- self.kw_only = kw_only
- self.on_setattr = on_setattr
- self.alias = alias
-
- def validator(self, meth):
- """
- Decorator that adds *meth* to the list of validators.
-
- Returns *meth* unchanged.
-
- .. versionadded:: 17.1.0
- """
- if self._validator is None:
- self._validator = meth
- else:
- self._validator = and_(self._validator, meth)
- return meth
-
- def default(self, meth):
- """
- Decorator that allows setting the default for an attribute.
-
- Returns *meth* unchanged.
-
- :raises DefaultAlreadySetError: If default has been set before.
-
- .. versionadded:: 17.1.0
- """
- if self._default is not NOTHING:
- raise DefaultAlreadySetError()
-
- self._default = Factory(meth, takes_self=True)
-
- return meth
-
-
-_CountingAttr = _add_eq(_add_repr(_CountingAttr))
-
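# [Editor's note] Sketch of the decorator-based validator()/default() API the
# class above provides, via the public attr.ib; illustrative only.
import attr

@attr.s
class C:
    x = attr.ib()
    y = attr.ib()

    @x.validator
    def _check_x(self, attribute, value):
        if value < 0:
            raise ValueError("x must be >= 0")

    @y.default
    def _y_default(self):
        return self.x + 1

assert C(x=1).y == 2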
-
-class Factory:
- """
- Stores a factory callable.
-
- If passed as the default value to `attrs.field`, the factory is used to
- generate a new value.
-
- :param callable factory: A callable that takes either no arguments or
- exactly one mandatory positional argument, depending on *takes_self*.
- :param bool takes_self: Pass the partially initialized instance that is
- being initialized as a positional argument.
-
- .. versionadded:: 17.1.0 *takes_self*
- """
-
- __slots__ = ("factory", "takes_self")
-
- def __init__(self, factory, takes_self=False):
- self.factory = factory
- self.takes_self = takes_self
-
- def __getstate__(self):
- """
- Play nice with pickle.
- """
- return tuple(getattr(self, name) for name in self.__slots__)
-
- def __setstate__(self, state):
- """
- Play nice with pickle.
- """
- for name, value in zip(self.__slots__, state):
- setattr(self, name, value)
-
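# [Editor's note] Sketch of Factory as a default value, including takes_self,
# via the public attr API; illustrative only.
import attr

@attr.s
class C:
    items = attr.ib(factory=list)  # sugar for default=attr.Factory(list)
    twice = attr.ib(
        default=attr.Factory(lambda self: self.items * 2, takes_self=True)
    )

c = C(items=[1, 2])
assert c.items == [1, 2] and c.twice == [1, 2, 1, 2]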
-
-_f = [
- Attribute(
- name=name,
- default=NOTHING,
- validator=None,
- repr=True,
- cmp=None,
- eq=True,
- order=False,
- hash=True,
- init=True,
- inherited=False,
- )
- for name in Factory.__slots__
-]
-
-Factory = _add_hash(_add_eq(_add_repr(Factory, attrs=_f), attrs=_f), attrs=_f)
-
-
-def make_class(name, attrs, bases=(object,), **attributes_arguments):
- r"""
- A quick way to create a new class called *name* with *attrs*.
-
- :param str name: The name for the new class.
-
- :param attrs: A list of names or a dictionary of mappings of names to
- `attr.ib`\ s / `attrs.field`\ s.
-
- The order is deduced from the order of the names or attributes inside
- *attrs*; otherwise the order in which the attributes were defined is
- used.
- :type attrs: `list` or `dict`
-
- :param tuple bases: Classes that the new class will subclass.
-
- :param attributes_arguments: Passed unmodified to `attr.s`.
-
- :return: A new class with *attrs*.
- :rtype: type
-
- .. versionadded:: 17.1.0 *bases*
- .. versionchanged:: 18.1.0 If *attrs* is ordered, the order is retained.
- """
- if isinstance(attrs, dict):
- cls_dict = attrs
- elif isinstance(attrs, (list, tuple)):
- cls_dict = {a: attrib() for a in attrs}
- else:
- raise TypeError("attrs argument must be a dict or a list.")
-
- pre_init = cls_dict.pop("__attrs_pre_init__", None)
- post_init = cls_dict.pop("__attrs_post_init__", None)
- user_init = cls_dict.pop("__init__", None)
-
- body = {}
- if pre_init is not None:
- body["__attrs_pre_init__"] = pre_init
- if post_init is not None:
- body["__attrs_post_init__"] = post_init
- if user_init is not None:
- body["__init__"] = user_init
-
- type_ = types.new_class(name, bases, {}, lambda ns: ns.update(body))
-
- # For pickling to work, the __module__ variable needs to be set to the
- # frame where the class is created. Bypass this step in environments where
- # sys._getframe is not defined (Jython for example) or sys._getframe is not
- # defined for arguments greater than 0 (IronPython).
- try:
- type_.__module__ = sys._getframe(1).f_globals.get(
- "__name__", "__main__"
- )
- except (AttributeError, ValueError):
- pass
-
- # We do it here for proper warnings with meaningful stacklevel.
- cmp = attributes_arguments.pop("cmp", None)
- (
- attributes_arguments["eq"],
- attributes_arguments["order"],
- ) = _determine_attrs_eq_order(
- cmp,
- attributes_arguments.get("eq"),
- attributes_arguments.get("order"),
- True,
- )
-
- return _attrs(these=cls_dict, **attributes_arguments)(type_)
-
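# [Editor's note] Minimal sketch of make_class with both accepted *attrs*
# shapes, via the public attr API; illustrative only.
import attr

C1 = attr.make_class("C1", ["x", "y"])                  # list of names
C2 = attr.make_class("C2", {"x": attr.ib(default=42)},  # dict of attr.ib()s
                     frozen=True)

assert C1(1, 2).y == 2
assert C2().x == 42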
-
- # These are required within this module, so we define them here and merely
- # import them into .validators / .converters.
-
-
-@attrs(slots=True, hash=True)
-class _AndValidator:
- """
- Compose many validators to a single one.
- """
-
- _validators = attrib()
-
- def __call__(self, inst, attr, value):
- for v in self._validators:
- v(inst, attr, value)
-
-
-def and_(*validators):
- """
- A validator that composes multiple validators into one.
-
- When called on a value, it runs all wrapped validators.
-
- :param callables validators: Arbitrary number of validators.
-
- .. versionadded:: 17.1.0
- """
- vals = []
- for validator in validators:
- vals.extend(
- validator._validators
- if isinstance(validator, _AndValidator)
- else [validator]
- )
-
- return _AndValidator(tuple(vals))
-
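# [Editor's note] Sketch of composing validators with and_ (exposed publicly
# as attr.validators.and_); illustrative only, not part of the deleted file.
import attr
from attr.validators import and_, instance_of

def positive(inst, attribute, value):
    if value <= 0:
        raise ValueError(f"{attribute.name} must be > 0")

@attr.s
class Order:
    qty = attr.ib(validator=and_(instance_of(int), positive))

Order(qty=3)      # passes both validators
# Order(qty=-1)   # would raise ValueError from `positive`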
-
-def pipe(*converters):
- """
- A converter that composes multiple converters into one.
-
- When called on a value, it runs all wrapped converters, returning the
- *last* value.
-
- Type annotations will be inferred from the wrapped converters, if
- they have any.
-
- :param callables converters: Arbitrary number of converters.
-
- .. versionadded:: 20.1.0
- """
-
- def pipe_converter(val):
- for converter in converters:
- val = converter(val)
-
- return val
-
- if not converters:
- # If the converter list is empty, pipe_converter is the identity.
- A = typing.TypeVar("A")
- pipe_converter.__annotations__ = {"val": A, "return": A}
- else:
- # Get parameter type from first converter.
- t = _AnnotationExtractor(converters[0]).get_first_param_type()
- if t:
- pipe_converter.__annotations__["val"] = t
-
- # Get return type from last converter.
- rt = _AnnotationExtractor(converters[-1]).get_return_type()
- if rt:
- pipe_converter.__annotations__["return"] = rt
-
- return pipe_converter
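# [Editor's note] Sketch of composing converters with pipe (exposed publicly
# as attr.converters.pipe); the last converter's result becomes the field
# value. Illustrative only, not part of the deleted file.
import attr
from attr.converters import pipe

@attr.s
class C:
    x = attr.ib(converter=pipe(str.strip, int))

assert C("  42  ").x == 42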
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/encodingTools.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/encodingTools.py
deleted file mode 100644
index 3b2651d3b1ce222060fa67abaeac4da8030618fa..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/encodingTools.py
+++ /dev/null
@@ -1,72 +0,0 @@
-"""fontTools.misc.encodingTools.py -- tools for working with OpenType encodings.
-"""
-
-import fontTools.encodings.codecs
-
-# Map keyed by platformID, then platEncID, then possibly langID
-_encodingMap = {
- 0: { # Unicode
- 0: "utf_16_be",
- 1: "utf_16_be",
- 2: "utf_16_be",
- 3: "utf_16_be",
- 4: "utf_16_be",
- 5: "utf_16_be",
- 6: "utf_16_be",
- },
- 1: { # Macintosh
- # See
- # https://github.com/fonttools/fonttools/issues/236
- 0: { # Macintosh, platEncID==0, keyed by langID
- 15: "mac_iceland",
- 17: "mac_turkish",
- 18: "mac_croatian",
- 24: "mac_latin2",
- 25: "mac_latin2",
- 26: "mac_latin2",
- 27: "mac_latin2",
- 28: "mac_latin2",
- 36: "mac_latin2",
- 37: "mac_romanian",
- 38: "mac_latin2",
- 39: "mac_latin2",
- 40: "mac_latin2",
- Ellipsis: "mac_roman", # Other
- },
- 1: "x_mac_japanese_ttx",
- 2: "x_mac_trad_chinese_ttx",
- 3: "x_mac_korean_ttx",
- 6: "mac_greek",
- 7: "mac_cyrillic",
- 25: "x_mac_simp_chinese_ttx",
- 29: "mac_latin2",
- 35: "mac_turkish",
- 37: "mac_iceland",
- },
- 2: { # ISO
- 0: "ascii",
- 1: "utf_16_be",
- 2: "latin1",
- },
- 3: { # Microsoft
- 0: "utf_16_be",
- 1: "utf_16_be",
- 2: "shift_jis",
- 3: "gb2312",
- 4: "big5",
- 5: "euc_kr",
- 6: "johab",
- 10: "utf_16_be",
- },
-}
-
-
-def getEncoding(platformID, platEncID, langID, default=None):
- """Returns the Python encoding name for OpenType platformID/encodingID/langID
- triplet. If the encoding for these values is not known, None is returned by
- default. That can be overridden by passing a value to the default argument.
- """
- encoding = _encodingMap.get(platformID, {}).get(platEncID, default)
- if isinstance(encoding, dict):
- encoding = encoding.get(langID, encoding[Ellipsis])
- return encoding
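# [Editor's note] Minimal usage sketch of getEncoding against the table above;
# assumes fontTools is installed, illustrative only, not part of the deleted file.
from fontTools.misc.encodingTools import getEncoding

assert getEncoding(3, 1, 0x409) == "utf_16_be"   # Microsoft, Unicode BMP
assert getEncoding(1, 0, 17) == "mac_turkish"    # Macintosh Roman, Turkish langID
assert getEncoding(1, 0, 0) == "mac_roman"       # unknown langID -> Ellipsis fallback
assert getEncoding(9, 9, 9, default="ascii") == "ascii"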
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-a39e64b0.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-a39e64b0.js
deleted file mode 100644
index d531a3b82b0e282a3f99db8e3aa879fa8879fb6f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-a39e64b0.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as M,e as z,s as E,G as A,N as b,k as T,O as y,K as u,p as v,o as B,M as h,z as R,v as N,A as k,x as I,V as j,P as C,R as q,J as D,U as O,T as K,am as _e,h as P,m as se,u as oe,y as ae,f as V,q as ge,r as he,E as me}from"./index-1d65707a.js";import"./Button-f155035a.js";import{B as G}from"./BlockTitle-dee077e8.js";import"./Info-7c6961ef.js";const w=i=>{var e=null;return i<0?e=[52,152,219]:e=[231,76,60],be(de(Math.abs(i),[255,255,255],e))},de=(i,e,t)=>{i>1&&(i=1),i=Math.sqrt(i);var n=[0,0,0],s;for(s=0;s<3;s++)n[s]=Math.round(e[s]*(1-i)+t[s]*i);return n},be=i=>"rgb("+i[0]+", "+i[1]+", "+i[2]+")",x=(i,e,t,n,s)=>{var o=n/s,c=e/t,l=0,r=0,f=i?o>c:o{"interpretation"in o&&t(0,n=o.interpretation),"label"in o&&t(1,s=o.label)},[n,s]}class we extends M{constructor(e){super(),z(this,e,pe,ke,E,{interpretation:0,label:1})}}function Q(i,e,t){const n=i.slice();return n[3]=e[t],n[5]=t,n}function ye(i){let e;return{c(){e=C(i[2])},m(t,n){v(t,e,n)},p(t,n){n&4&&q(e,t[2])},d(t){t&&k(e)}}}function W(i){let e,t=i[3]+"",n,s,o;return{c(){e=b("li"),n=C(t),s=y(),u(e,"class","dropdown-item svelte-1cqwepf"),u(e,"style",o="background-color: "+w(i[0][i[5]]))},m(c,l){v(c,e,l),h(e,n),h(e,s)},p(c,l){l&2&&t!==(t=c[3]+"")&&q(n,t),l&1&&o!==(o="background-color: "+w(c[0][c[5]]))&&u(e,"style",o)},d(c){c&&k(e)}}}function Se(i){let e,t,n,s,o;t=new G({props:{$$slots:{default:[ye]},$$scope:{ctx:i}}});let c=A(i[1]),l=[];for(let r=0;r{"interpretation"in c&&t(0,n=c.interpretation),"choices"in c&&t(1,s=c.choices),"label"in c&&t(2,o=c.label)},[n,s,o]}class qe extends M{constructor(e){super(),z(this,e,Ce,Se,E,{interpretation:0,choices:1,label:2})}}function Re(i){let e;return{c(){e=C(i[0])},m(t,n){v(t,e,n)},p(t,n){n&1&&q(e,t[0])},d(t){t&&k(e)}}}function Ae(i){let e,t,n,s,o,c,l,r,f,a,_,g,m;return t=new G({props:{$$slots:{default:[Re]},$$scope:{ctx:i}}}),{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("button"),o=b("div"),l=y(),r=b("div"),f=D("svg"),a=D("line"),_=D("line"),u(o,"class","checkbox svelte-1nw19ca"),u(o,"style",c="background-color: "+w(i[2][0])),u(a,"x1","-7.5"),u(a,"y1","0"),u(a,"x2","-2.5"),u(a,"y2","5"),u(a,"stroke","black"),u(a,"stroke-width","4"),u(a,"stroke-linecap","round"),u(_,"x1","-2.5"),u(_,"y1","5"),u(_,"x2","7.5"),u(_,"y2","-7.5"),u(_,"stroke","black"),u(_,"stroke-width","4"),u(_,"stroke-linecap","round"),u(f,"viewBox","-10 -10 20 20"),u(f,"class","svelte-1nw19ca"),u(r,"class","checkbox svelte-1nw19ca"),u(r,"style",g="background-color: "+w(i[2][1])),u(s,"class","checkbox-item svelte-1nw19ca"),O(s,"selected",i[1]),u(e,"class","input-checkbox svelte-1nw19ca")},m(d,p){v(d,e,p),B(t,e,null),h(e,n),h(e,s),h(s,o),h(s,l),h(s,r),h(r,f),h(f,a),h(f,_),m=!0},p(d,[p]){const S={};p&9&&(S.$$scope={dirty:p,ctx:d}),t.$set(S),(!m||p&4&&c!==(c="background-color: "+w(d[2][0])))&&u(o,"style",c),(!m||p&4&&g!==(g="background-color: "+w(d[2][1])))&&u(r,"style",g),(!m||p&2)&&O(s,"selected",d[1])},i(d){m||(R(t.$$.fragment,d),m=!0)},o(d){N(t.$$.fragment,d),m=!1},d(d){d&&k(e),I(t)}}}function Ne(i,e,t){let{label:n=""}=e,{original:s}=e,{interpretation:o}=e;return i.$$set=c=>{"label"in c&&t(0,n=c.label),"original"in c&&t(1,s=c.original),"interpretation"in c&&t(2,o=c.interpretation)},[n,s,o]}class Te extends M{constructor(e){super(),z(this,e,Ne,Ae,E,{label:0,original:1,interpretation:2})}}function X(i,e,t){const n=i.slice();return n[4]=e[t],n[6]=t,n}function Be(i){let e;return{c(){e=C(i[3])},m(t,n){v(t,e,n)},p(t,n){n&8&&q(e,t[3])},d(t){t&&k(e)}}}function Y(i){let 
e,t,n,s,o,c,l,r,f,a,_=i[4]+"",g,m;return{c(){e=b("button"),t=b("div"),s=y(),o=b("div"),c=D("svg"),l=D("line"),r=D("line"),a=y(),g=C(_),m=y(),u(t,"class","checkbox svelte-1cbhr6k"),u(t,"style",n="background-color: "+w(i[1][i[6]][0])),u(l,"x1","-7.5"),u(l,"y1","0"),u(l,"x2","-2.5"),u(l,"y2","5"),u(l,"stroke","black"),u(l,"stroke-width","4"),u(l,"stroke-linecap","round"),u(r,"x1","-2.5"),u(r,"y1","5"),u(r,"x2","7.5"),u(r,"y2","-7.5"),u(r,"stroke","black"),u(r,"stroke-width","4"),u(r,"stroke-linecap","round"),u(c,"viewBox","-10 -10 20 20"),u(c,"class","svelte-1cbhr6k"),u(o,"class","checkbox svelte-1cbhr6k"),u(o,"style",f="background-color: "+w(i[1][i[6]][1])),u(e,"class","checkbox-item svelte-1cbhr6k"),O(e,"selected",i[0].includes(i[4]))},m(d,p){v(d,e,p),h(e,t),h(e,s),h(e,o),h(o,c),h(c,l),h(c,r),h(e,a),h(e,g),h(e,m)},p(d,p){p&2&&n!==(n="background-color: "+w(d[1][d[6]][0]))&&u(t,"style",n),p&2&&f!==(f="background-color: "+w(d[1][d[6]][1]))&&u(o,"style",f),p&4&&_!==(_=d[4]+"")&&q(g,_),p&5&&O(e,"selected",d[0].includes(d[4]))},d(d){d&&k(e)}}}function Ie(i){let e,t,n,s;t=new G({props:{$$slots:{default:[Be]},$$scope:{ctx:i}}});let o=A(i[2]),c=[];for(let l=0;l{"original"in l&&t(0,n=l.original),"interpretation"in l&&t(1,s=l.interpretation),"choices"in l&&t(2,o=l.choices),"label"in l&&t(3,c=l.label)},[n,s,o,c]}class ze extends M{constructor(e){super(),z(this,e,Me,Ie,E,{original:0,interpretation:1,choices:2,label:3})}}function Z(i,e,t){const n=i.slice();return n[6]=e[t],n}function Ee(i){let e;return{c(){e=C(i[5])},m(t,n){v(t,e,n)},p(t,n){n&32&&q(e,t[5])},d(t){t&&k(e)}}}function $(i){let e,t;return{c(){e=b("div"),u(e,"style",t="background-color: "+w(i[6])),u(e,"class","svelte-1sxprr7")},m(n,s){v(n,e,s)},p(n,s){s&2&&t!==(t="background-color: "+w(n[6]))&&u(e,"style",t)},d(n){n&&k(e)}}}function Ge(i){let e,t,n,s,o,c,l,r,f,a;t=new G({props:{$$slots:{default:[Ee]},$$scope:{ctx:i}}});let _=A(i[1]),g=[];for(let m=0;m<_.length;m+=1)g[m]=$(Z(i,_,m));return{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("input"),o=y(),c=b("div");for(let m=0;m{"original"in f&&t(0,n=f.original),"interpretation"in f&&t(1,s=f.interpretation),"minimum"in f&&t(2,o=f.minimum),"maximum"in f&&t(3,c=f.maximum),"step"in f&&t(4,l=f.step),"label"in f&&t(5,r=f.label)},[n,s,o,c,l,r]}class De extends M{constructor(e){super(),z(this,e,je,Ge,E,{original:0,interpretation:1,minimum:2,maximum:3,step:4,label:5})}}function ee(i,e,t){const n=i.slice();return n[4]=e[t],n[6]=t,n}function Oe(i){let e;return{c(){e=C(i[3])},m(t,n){v(t,e,n)},p(t,n){n&8&&q(e,t[3])},d(t){t&&k(e)}}}function te(i){let e,t,n,s,o=i[4]+"",c,l;return{c(){e=b("button"),t=b("div"),s=y(),c=C(o),l=y(),u(t,"class","radio-circle svelte-1nekfre"),u(t,"style",n="background-color: "+w(i[1][i[6]])),u(e,"class","radio-item svelte-1nekfre"),O(e,"selected",i[0]===i[4])},m(r,f){v(r,e,f),h(e,t),h(e,s),h(e,c),h(e,l)},p(r,f){f&2&&n!==(n="background-color: "+w(r[1][r[6]]))&&u(t,"style",n),f&4&&o!==(o=r[4]+"")&&q(c,o),f&5&&O(e,"selected",r[0]===r[4])},d(r){r&&k(e)}}}function Ue(i){let e,t,n,s;t=new G({props:{$$slots:{default:[Oe]},$$scope:{ctx:i}}});let o=A(i[2]),c=[];for(let l=0;l{"original"in l&&t(0,n=l.original),"interpretation"in l&&t(1,s=l.interpretation),"choices"in l&&t(2,o=l.choices),"label"in l&&t(3,c=l.label)},[n,s,o,c]}class Je extends M{constructor(e){super(),z(this,e,Fe,Ue,E,{original:0,interpretation:1,choices:2,label:3})}}function Ke(i){let e;return{c(){e=C(i[1])},m(t,n){v(t,e,n)},p(t,n){n&2&&q(e,t[1])},d(t){t&&k(e)}}}function Pe(i){let e,t,n,s,o,c,l,r,f,a;return t=new 
G({props:{$$slots:{default:[Ke]},$$scope:{ctx:i}}}),{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("div"),o=b("div"),c=b("canvas"),l=y(),r=b("img"),u(o,"class","interpretation svelte-h0dntu"),K(r.src,f=i[0])||u(r,"src",f),u(r,"class","svelte-h0dntu"),u(s,"class","image-preview svelte-h0dntu"),u(e,"class","input-image")},m(_,g){v(_,e,g),B(t,e,null),h(e,n),h(e,s),h(s,o),h(o,c),i[6](c),h(s,l),h(s,r),i[7](r),a=!0},p(_,[g]){const m={};g&514&&(m.$$scope={dirty:g,ctx:_}),t.$set(m),(!a||g&1&&!K(r.src,f=_[0]))&&u(r,"src",f)},i(_){a||(R(t.$$.fragment,_),a=!0)},o(_){N(t.$$.fragment,_),a=!1},d(_){_&&k(e),I(t),i[6](null),i[7](null)}}}function Ve(i,e,t){let{original:n}=e,{interpretation:s}=e,{shape:o}=e,{label:c=""}=e,l,r;const f=(g,m,d,p)=>{var S=d/g[0].length,U=p/g.length,F=0;g.forEach(function(fe){var J=0;fe.forEach(function(ue){m.fillStyle=w(ue),m.fillRect(J*S,F*U,S,U),J++}),F++})};_e(()=>{let g=x(!0,r.width,r.height,r.naturalWidth,r.naturalHeight);o&&(g=x(!0,g.width,g.height,o[0],o[1]));let m=g.width,d=g.height;l.setAttribute("height",`${d}`),l.setAttribute("width",`${m}`),f(s,l.getContext("2d"),m,d)});function a(g){P[g?"unshift":"push"](()=>{l=g,t(2,l)})}function _(g){P[g?"unshift":"push"](()=>{r=g,t(3,r)})}return i.$$set=g=>{"original"in g&&t(0,n=g.original),"interpretation"in g&&t(4,s=g.interpretation),"shape"in g&&t(5,o=g.shape),"label"in g&&t(1,c=g.label)},[n,c,l,r,s,o,a,_]}class xe extends M{constructor(e){super(),z(this,e,Ve,Pe,E,{original:0,interpretation:4,shape:5,label:1})}}function le(i,e,t){const n=i.slice();return n[2]=e[t],n}function He(i){let e;return{c(){e=C(i[1])},m(t,n){v(t,e,n)},p(t,n){n&2&&q(e,t[1])},d(t){t&&k(e)}}}function ne(i){let e,t;return{c(){e=b("div"),u(e,"class","item svelte-13lmfcp"),u(e,"style",t="background-color: "+w(i[2]))},m(n,s){v(n,e,s)},p(n,s){s&1&&t!==(t="background-color: "+w(n[2]))&&u(e,"style",t)},d(n){n&&k(e)}}}function Le(i){let e,t,n,s,o;t=new G({props:{$$slots:{default:[He]},$$scope:{ctx:i}}});let c=A(i[0]),l=[];for(let r=0;r{"interpretation"in o&&t(0,n=o.interpretation),"label"in o&&t(1,s=o.label)},[n,s]}class We extends M{constructor(e){super(),z(this,e,Qe,Le,E,{interpretation:0,label:1})}}function ie(i,e,t){const n=i.slice();return n[2]=e[t][0],n[3]=e[t][1],n}function Xe(i){let e;return{c(){e=C(i[0])},m(t,n){v(t,e,n)},p(t,n){n&1&&q(e,t[0])},d(t){t&&k(e)}}}function re(i){let e,t=i[2]+"",n,s,o;return{c(){e=b("span"),n=C(t),s=y(),u(e,"class","text-span svelte-15c0u2m"),u(e,"style",o="background-color: "+w(i[3]))},m(c,l){v(c,e,l),h(e,n),h(e,s)},p(c,l){l&2&&t!==(t=c[2]+"")&&q(n,t),l&2&&o!==(o="background-color: "+w(c[3]))&&u(e,"style",o)},d(c){c&&k(e)}}}function Ye(i){let e,t,n,s;t=new G({props:{$$slots:{default:[Xe]},$$scope:{ctx:i}}});let o=A(i[1]),c=[];for(let l=0;l{"label"in o&&t(0,n=o.label),"interpretation"in o&&t(1,s=o.interpretation)},[n,s]}class $e extends M{constructor(e){super(),z(this,e,Ze,Ye,E,{label:0,interpretation:1})}}const et={audio:We,dropdown:qe,checkbox:Te,checkboxgroup:ze,number:we,slider:De,radio:Je,image:xe,textbox:$e};function ce(i){let e,t,n;const s=[i[0],{original:i[1].original},{interpretation:i[1].interpretation}];var o=i[2];function c(l){let r={};for(let f=0;f{I(a,1)}),ae()}o?(e=V(o,c()),T(e.$$.fragment),R(e.$$.fragment,1),B(e,t.parentNode,t)):e=null}else o&&e.$set(f)},i(l){n||(e&&R(e.$$.fragment,l),n=!0)},o(l){e&&N(e.$$.fragment,l),n=!1},d(l){l&&k(t),e&&I(e,l)}}}function tt(i){let 
e,t,n=i[1]&&ce(i);return{c(){n&&n.c(),e=se()},m(s,o){n&&n.m(s,o),v(s,e,o),t=!0},p(s,[o]){s[1]?n?(n.p(s,o),o&2&&R(n,1)):(n=ce(s),n.c(),R(n,1),n.m(e.parentNode,e)):n&&(oe(),N(n,1,1,()=>{n=null}),ae())},i(s){t||(R(n),t=!0)},o(s){N(n),t=!1},d(s){s&&k(e),n&&n.d(s)}}}function lt(i,e,t){let n,{component:s}=e,{component_props:o}=e,{value:c}=e;return i.$$set=l=>{"component"in l&&t(3,s=l.component),"component_props"in l&&t(0,o=l.component_props),"value"in l&&t(1,c=l.value)},i.$$.update=()=>{i.$$.dirty&8&&t(2,n=et[s])},[o,c,n,s]}class nt extends M{constructor(e){super(),z(this,e,lt,tt,E,{component:3,component_props:0,value:1})}}const ot=nt,at=["dynamic"];export{ot as Component,at as modes};
-//# sourceMappingURL=index-a39e64b0.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-99879c0f.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-99879c0f.js
deleted file mode 100644
index d26e4fde5f425f825b588b997c09dc80144abf5f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-99879c0f.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as $,e as x,s as K,J,K as f,p as q,M as N,n as P,A as j,N as A,O as X,P as ie,k as U,T as fe,Z as Ue,U as he,o as M,Q as F,aj as Me,af as be,Y as Se,X as Ce,u as Q,v as p,y as Y,z as g,R as _e,x as S,a1 as Ee,B as de,a6 as Pe,aB as Re,F as y,h as ye,m as le,j as ze,t as Fe,a9 as Ae,ab as De,ac as Ie,ad as Oe,am as Xe,a7 as Le,ak as T,E as He,ae as Je,q as Ke,r as We}from"./index-3370be2a.js";import{n as ge}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import{B as Ge}from"./Button-89624748.js";import{U as Qe}from"./Upload-f29b2460.js";import{M as Ye}from"./ModifyUpload-d8fc50ab.js";import{B as Be}from"./BlockLabel-56db415e.js";import{U as Ze,W as $e}from"./StaticImage.svelte_svelte_type_style_lang-e84b963e.js";import{I as xe}from"./IconButton-abe5ede9.js";import{E as et}from"./Empty-585389a4.js";import{u as tt,S as lt}from"./ShareButton-39feba51.js";import{D as nt}from"./Download-fdaaf5d4.js";import{U as at}from"./UploadText-28892309.js";import"./Blocks-f0129fcd.js";function it(n){let e,l;return{c(){e=J("svg"),l=J("path"),f(l,"d","M8 3H5a2 2 0 0 0-2 2v3m18 0V5a2 2 0 0 0-2-2h-3m0 18h3a2 2 0 0 0 2-2v-3M3 16v3a2 2 0 0 0 2 2h3"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(t,a){q(t,e,a),N(e,l)},p:P,i:P,o:P,d(t){t&&j(e)}}}class rt extends ${constructor(e){super(),x(this,e,null,it,K,{})}}function ot(n){let e,l,t;return{c(){e=J("svg"),l=J("rect"),t=J("rect"),f(l,"x","6"),f(l,"y","4"),f(l,"width","4"),f(l,"height","16"),f(t,"x","14"),f(t,"y","4"),f(t,"width","4"),f(t,"height","16"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(a,i){q(a,e,i),N(e,l),N(e,t)},p:P,i:P,o:P,d(a){a&&j(e)}}}class st extends ${constructor(e){super(),x(this,e,null,ot,K,{})}}function ut(n){let e,l;return{c(){e=J("svg"),l=J("polygon"),f(l,"points","5 3 19 12 5 21 5 3"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(t,a){q(t,e,a),N(e,l)},p:P,i:P,o:P,d(t){t&&j(e)}}}class ft extends ${constructor(e){super(),x(this,e,null,ut,K,{})}}function ct(n){let e,l,t;return{c(){e=J("svg"),l=J("polygon"),t=J("rect"),f(l,"points","23 7 16 12 23 17 23 7"),f(t,"x","1"),f(t,"y","5"),f(t,"width","15"),f(t,"height","14"),f(t,"rx","2"),f(t,"ry","2"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round"),f(e,"class","feather feather-video")},m(a,i){q(a,e,i),N(e,l),N(e,t)},p:P,i:P,o:P,d(a){a&&j(e)}}}class me extends ${constructor(e){super(),x(this,e,null,ct,K,{})}}const we=n=>{let e=["B","KB","MB","GB","PB"],l=0;for(;n>1024;)n/=1024,l++;let t=e[l];return n.toFixed(1)+" "+t},_t=()=>!0;function dt(n,{autoplay:e}){async function l(){e&&await n.play()}return n.addEventListener("loadeddata",l),{destroy(){n.removeEventListener("loadeddata",l)}}}const{isNaN:mt}=Pe;function ht(n){let e,l;return e=new 
st({}),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function bt(n){let e,l;return e=new ft({}),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function gt(n){let e,l;return e=new Ze({}),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function wt(n){let e,l,t,a,i,s,b=!1,_,m=!0,r,u,o,d,E,B,V,I,w,D=ce(n[5])+"",L,W,O=ce(n[6])+"",H,k,C,h,te,ee,Z,z,ne,re;function oe(){cancelAnimationFrame(_),l.paused||(_=Re(oe),b=!0),n[15].call(l)}const se=[gt,bt,ht],G=[];function ue(v,R){return v[5]===v[6]?0:v[7]?1:2}return B=ue(n),V=G[B]=se[B](n),Z=new rt({}),{c(){e=A("div"),l=A("video"),t=A("track"),u=X(),o=A("div"),d=A("div"),E=A("span"),V.c(),I=X(),w=A("span"),L=ie(D),W=ie(" / "),H=ie(O),k=X(),C=A("progress"),te=X(),ee=A("div"),U(Z.$$.fragment),f(t,"kind","captions"),fe(t.src,a=n[1])||f(t,"src",a),t.default=!0,fe(l.src,i=n[0])||f(l,"src",i),f(l,"preload","auto"),f(l,"data-testid",s=`${n[4]}-player`),f(l,"class","svelte-1voqrms"),n[6]===void 0&&Ue(()=>n[16].call(l)),he(l,"mirror",n[2]),f(E,"class","icon svelte-1voqrms"),f(w,"class","time svelte-1voqrms"),C.value=h=n[5]/n[6]||0,f(C,"class","svelte-1voqrms"),f(ee,"class","icon svelte-1voqrms"),f(d,"class","inner svelte-1voqrms"),f(o,"class","controls svelte-1voqrms"),f(e,"class","wrap svelte-1voqrms")},m(v,R){q(v,e,R),N(e,l),N(l,t),n[18](l),N(e,u),N(e,o),N(o,d),N(d,E),G[B].m(E,null),N(d,I),N(d,w),N(w,L),N(w,W),N(w,H),N(d,k),N(d,C),N(d,te),N(d,ee),M(Z,ee,null),z=!0,ne||(re=[F(l,"click",n[10]),F(l,"play",n[13]),F(l,"pause",n[14]),F(l,"ended",n[12]),F(l,"timeupdate",oe),F(l,"durationchange",n[16]),F(l,"play",n[17]),F(l,"pause",n[17]),Me(r=dt.call(null,l,{autoplay:n[3]})),F(E,"click",n[10]),F(C,"mousemove",n[9]),F(C,"touchmove",be(n[9])),F(C,"click",Se(be(n[11]))),F(ee,"click",n[19])],ne=!0)},p(v,[R]){(!z||R&2&&!fe(t.src,a=v[1]))&&f(t,"src",a),(!z||R&1&&!fe(l.src,i=v[0]))&&f(l,"src",i),(!z||R&16&&s!==(s=`${v[4]}-player`))&&f(l,"data-testid",s),!b&&R&32&&!mt(v[5])&&(l.currentTime=v[5]),b=!1,R&128&&m!==(m=v[7])&&l[m?"pause":"play"](),r&&Ce(r.update)&&R&8&&r.update.call(null,{autoplay:v[3]}),(!z||R&4)&&he(l,"mirror",v[2]);let ae=B;B=ue(v),B!==ae&&(Q(),p(G[ae],1,1,()=>{G[ae]=null}),Y(),V=G[B],V||(V=G[B]=se[B](v),V.c()),g(V,1),V.m(E,null)),(!z||R&32)&&D!==(D=ce(v[5])+"")&&_e(L,D),(!z||R&64)&&O!==(O=ce(v[6])+"")&&_e(H,O),(!z||R&96&&h!==(h=v[5]/v[6]||0))&&(C.value=h)},i(v){z||(g(V),g(Z.$$.fragment,v),z=!0)},o(v){p(V),p(Z.$$.fragment,v),z=!1},d(v){v&&j(e),n[18](null),G[B].d(),S(Z),ne=!1,Ee(re)}}}function ce(n){if(isNaN(n)||!isFinite(n))return"...";const e=Math.floor(n/60);let l=Math.floor(n%60);return n<10&&(l=`0${l}`),`${e}:${l}`}function kt(n,e,l){let{src:t}=e,{subtitle:a=null}=e,{mirror:i}=e,{autoplay:s}=e,{label:b="test"}=e;const _=de();let m=0,r,u=!0,o;function d(k){if(!r)return;if(k.type==="click"){B(k);return}if(k.type!=="touchmove"&&!(k.buttons&1))return;const C=k.type==="touchmove"?k.touches[0].clientX:k.clientX,{left:h,right:te}=k.currentTarget.getBoundingClientRect();l(5,m=r*(C-h)/(te-h))}async function E(){document.fullscreenElement!=o&&(o.currentTime>0&&!o.paused&&!o.ended&&o.readyState>o.HAVE_CURRENT_DATA?o.pause():await o.play())}function B(k){const{left:C,right:h}=k.currentTarget.getBoundingClientRect();l(5,m=r*(k.clientX-C)/(h-C))}function V(){_("stop"),_("end")}function I(k){y.call(this,n,k)}function 
w(k){y.call(this,n,k)}function D(){m=this.currentTime,l(5,m)}function L(){r=this.duration,l(6,r)}function W(){u=this.paused,l(7,u)}function O(k){ye[k?"unshift":"push"](()=>{o=k,l(8,o)})}const H=()=>o.requestFullscreen();return n.$$set=k=>{"src"in k&&l(0,t=k.src),"subtitle"in k&&l(1,a=k.subtitle),"mirror"in k&&l(2,i=k.mirror),"autoplay"in k&&l(3,s=k.autoplay),"label"in k&&l(4,b=k.label)},[t,a,i,s,b,m,r,u,o,d,E,B,V,I,w,D,L,W,O,H]}class Ne extends ${constructor(e){super(),x(this,e,kt,wt,K,{src:0,subtitle:1,mirror:2,autoplay:3,label:4})}}function pt(n){let e,l,t,a,i,s,b;e=new Ye({}),e.$on("clear",n[11]);const _=[Bt,yt],m=[];function r(u,o){return t==null&&(t=!!_t()),t?0:u[0].size?1:-1}return~(a=r(n))&&(i=m[a]=_[a](n)),{c(){U(e.$$.fragment),l=X(),i&&i.c(),s=le()},m(u,o){M(e,u,o),q(u,l,o),~a&&m[a].m(u,o),q(u,s,o),b=!0},p(u,o){let d=a;a=r(u),a===d?~a&&m[a].p(u,o):(i&&(Q(),p(m[d],1,1,()=>{m[d]=null}),Y()),~a?(i=m[a],i?i.p(u,o):(i=m[a]=_[a](u),i.c()),g(i,1),i.m(s.parentNode,s)):i=null)},i(u){b||(g(e.$$.fragment,u),g(i),b=!0)},o(u){p(e.$$.fragment,u),p(i),b=!1},d(u){u&&(j(l),j(s)),S(e,u),~a&&m[a].d(u)}}}function vt(n){let e,l,t,a;const i=[Vt,Nt],s=[];function b(_,m){return _[2]==="upload"?0:_[2]==="webcam"?1:-1}return~(e=b(n))&&(l=s[e]=i[e](n)),{c(){l&&l.c(),t=le()},m(_,m){~e&&s[e].m(_,m),q(_,t,m),a=!0},p(_,m){let r=e;e=b(_),e===r?~e&&s[e].p(_,m):(l&&(Q(),p(s[r],1,1,()=>{s[r]=null}),Y()),~e?(l=s[e],l?l.p(_,m):(l=s[e]=i[e](_),l.c()),g(l,1),l.m(t.parentNode,t)):l=null)},i(_){a||(g(l),a=!0)},o(_){p(l),a=!1},d(_){_&&j(t),~e&&s[e].d(_)}}}function yt(n){let e,l=n[0].name+"",t,a,i,s=we(n[0].size)+"",b;return{c(){e=A("div"),t=ie(l),a=X(),i=A("div"),b=ie(s),f(e,"class","file-name svelte-a6ruol"),f(i,"class","file-size svelte-a6ruol")},m(_,m){q(_,e,m),N(e,t),q(_,a,m),q(_,i,m),N(i,b)},p(_,m){m&1&&l!==(l=_[0].name+"")&&_e(t,l),m&1&&s!==(s=we(_[0].size)+"")&&_e(b,s)},i:P,o:P,d(_){_&&(j(e),j(a),j(i))}}}function Bt(n){let e=n[0]?.data,l,t,a=ke(n);return{c(){a.c(),l=le()},m(i,s){a.m(i,s),q(i,l,s),t=!0},p(i,s){s&1&&K(e,e=i[0]?.data)?(Q(),p(a,1,1,P),Y(),a=ke(i),a.c(),g(a,1),a.m(l.parentNode,l)):a.p(i,s)},i(i){t||(g(a),t=!0)},o(i){p(a),t=!1},d(i){i&&j(l),a.d(i)}}}function ke(n){let e,l;return e=new Ne({props:{autoplay:n[7],src:n[0].data,subtitle:n[1]?.data,mirror:n[5]&&n[2]==="webcam",label:n[3]}}),e.$on("play",n[18]),e.$on("pause",n[19]),e.$on("stop",n[20]),e.$on("end",n[21]),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const i={};a&128&&(i.autoplay=t[7]),a&1&&(i.src=t[0].data),a&2&&(i.subtitle=t[1]?.data),a&36&&(i.mirror=t[5]&&t[2]==="webcam"),a&8&&(i.label=t[3]),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Nt(n){let e,l;return e=new $e({props:{mirror_webcam:n[5],include_audio:n[6],mode:"video"}}),e.$on("error",n[14]),e.$on("capture",n[15]),e.$on("start_recording",n[16]),e.$on("stop_recording",n[17]),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const i={};a&32&&(i.mirror_webcam=t[5]),a&64&&(i.include_audio=t[6]),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Vt(n){let e,l,t;function a(s){n[13](s)}let i={filetype:"video/x-m4v,video/*",$$slots:{default:[Tt]},$$scope:{ctx:n}};return n[8]!==void 0&&(i.dragging=n[8]),e=new Qe({props:i}),ye.push(()=>ze(e,"dragging",a)),e.$on("load",n[10]),{c(){U(e.$$.fragment)},m(s,b){M(e,s,b),t=!0},p(s,b){const 
_={};b&4194304&&(_.$$scope={dirty:b,ctx:s}),!l&&b&256&&(l=!0,_.dragging=s[8],Fe(()=>l=!1)),e.$set(_)},i(s){t||(g(e.$$.fragment,s),t=!0)},o(s){p(e.$$.fragment,s),t=!1},d(s){S(e,s)}}}function Tt(n){let e;const l=n[12].default,t=Ae(l,n,n[22],null);return{c(){t&&t.c()},m(a,i){t&&t.m(a,i),e=!0},p(a,i){t&&t.p&&(!e||i&4194304)&&De(t,l,a,a[22],e?Oe(l,a[22],i,null):Ie(a[22]),null)},i(a){e||(g(t,a),e=!0)},o(a){p(t,a),e=!1},d(a){t&&t.d(a)}}}function qt(n){let e,l,t,a,i,s;e=new Be({props:{show_label:n[4],Icon:me,label:n[3]||"Video"}});const b=[vt,pt],_=[];function m(r,u){return r[0]===null?0:1}return t=m(n),a=_[t]=b[t](n),{c(){U(e.$$.fragment),l=X(),a.c(),i=le()},m(r,u){M(e,r,u),q(r,l,u),_[t].m(r,u),q(r,i,u),s=!0},p(r,[u]){const o={};u&16&&(o.show_label=r[4]),u&8&&(o.label=r[3]||"Video"),e.$set(o);let d=t;t=m(r),t===d?_[t].p(r,u):(Q(),p(_[d],1,1,()=>{_[d]=null}),Y(),a=_[t],a?a.p(r,u):(a=_[t]=b[t](r),a.c()),g(a,1),a.m(i.parentNode,i))},i(r){s||(g(e.$$.fragment,r),g(a),s=!0)},o(r){p(e.$$.fragment,r),p(a),s=!1},d(r){r&&(j(l),j(i)),S(e,r),_[t].d(r)}}}function jt(n,e,l){let{$$slots:t={},$$scope:a}=e,{value:i=null}=e,{subtitle:s=null}=e,{source:b}=e,{label:_=void 0}=e,{show_label:m=!0}=e,{mirror_webcam:r=!1}=e,{include_audio:u}=e,{autoplay:o}=e;const d=de();function E({detail:h}){d("change",h),d("upload",h),l(0,i=h)}function B({detail:h}){l(0,i=null),d("change",h),d("clear")}let V=!1;function I(h){V=h,l(8,V)}function w(h){y.call(this,n,h)}const D=({detail:h})=>d("change",h);function L(h){y.call(this,n,h)}function W(h){y.call(this,n,h)}function O(h){y.call(this,n,h)}function H(h){y.call(this,n,h)}function k(h){y.call(this,n,h)}function C(h){y.call(this,n,h)}return n.$$set=h=>{"value"in h&&l(0,i=h.value),"subtitle"in h&&l(1,s=h.subtitle),"source"in h&&l(2,b=h.source),"label"in h&&l(3,_=h.label),"show_label"in h&&l(4,m=h.show_label),"mirror_webcam"in h&&l(5,r=h.mirror_webcam),"include_audio"in h&&l(6,u=h.include_audio),"autoplay"in h&&l(7,o=h.autoplay),"$$scope"in h&&l(22,a=h.$$scope)},n.$$.update=()=>{n.$$.dirty&256&&d("drag",V)},[i,s,b,_,m,r,u,o,V,d,E,B,t,I,w,D,L,W,O,H,k,C,a]}let Ut=class extends ${constructor(e){super(),x(this,e,jt,qt,K,{value:0,subtitle:1,source:2,label:3,show_label:4,mirror_webcam:5,include_audio:6,autoplay:7})}};function Mt(n){let e=n[0].data,l,t,a,i,s,b,_,m,r=pe(n);i=new xe({props:{Icon:nt,label:"Download"}});let u=n[5]&&ve(n);return{c(){r.c(),l=X(),t=A("div"),a=A("a"),U(i.$$.fragment),_=X(),u&&u.c(),f(a,"href",s=n[0].data),f(a,"target",window.__is_colab__?"_blank":null),f(a,"download",b=n[0].orig_name||n[0].name),f(t,"class","icon-buttons svelte-rvdo70"),f(t,"data-testid","download-div")},m(o,d){r.m(o,d),q(o,l,d),q(o,t,d),N(t,a),M(i,a,null),N(t,_),u&&u.m(t,null),m=!0},p(o,d){d&1&&K(e,e=o[0].data)?(Q(),p(r,1,1,P),Y(),r=pe(o),r.c(),g(r,1),r.m(l.parentNode,l)):r.p(o,d),(!m||d&1&&s!==(s=o[0].data))&&f(a,"href",s),(!m||d&1&&b!==(b=o[0].orig_name||o[0].name))&&f(a,"download",b),o[5]?u?(u.p(o,d),d&32&&g(u,1)):(u=ve(o),u.c(),g(u,1),u.m(t,null)):u&&(Q(),p(u,1,1,()=>{u=null}),Y())},i(o){m||(g(r),g(i.$$.fragment,o),g(u),m=!0)},o(o){p(r),p(i.$$.fragment,o),p(u),m=!1},d(o){o&&(j(l),j(t)),r.d(o),S(i),u&&u.d()}}}function St(n){let e,l;return e=new et({props:{unpadded_box:!0,size:"large",$$slots:{default:[Ct]},$$scope:{ctx:n}}}),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const i={};a&32768&&(i.$$scope={dirty:a,ctx:t}),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function pe(n){let e,l;return e=new 
Ne({props:{src:n[0].data,subtitle:n[1]?.data,autoplay:n[4],mirror:!1,label:n[2]}}),e.$on("play",n[6]),e.$on("pause",n[7]),e.$on("ended",n[8]),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const i={};a&1&&(i.src=t[0].data),a&2&&(i.subtitle=t[1]?.data),a&16&&(i.autoplay=t[4]),a&4&&(i.label=t[2]),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function ve(n){let e,l;return e=new lt({props:{value:n[0],formatter:n[9]}}),e.$on("error",n[10]),e.$on("share",n[11]),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const i={};a&1&&(i.value=t[0]),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Ct(n){let e,l;return e=new me({}),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Et(n){let e,l,t,a,i,s;e=new Be({props:{show_label:n[3],Icon:me,label:n[2]||"Video"}});const b=[St,Mt],_=[];function m(r,u){return r[0]===null?0:1}return t=m(n),a=_[t]=b[t](n),{c(){U(e.$$.fragment),l=X(),a.c(),i=le()},m(r,u){M(e,r,u),q(r,l,u),_[t].m(r,u),q(r,i,u),s=!0},p(r,[u]){const o={};u&8&&(o.show_label=r[3]),u&4&&(o.label=r[2]||"Video"),e.$set(o);let d=t;t=m(r),t===d?_[t].p(r,u):(Q(),p(_[d],1,1,()=>{_[d]=null}),Y(),a=_[t],a?a.p(r,u):(a=_[t]=b[t](r),a.c()),g(a,1),a.m(i.parentNode,i))},i(r){s||(g(e.$$.fragment,r),g(a),s=!0)},o(r){p(e.$$.fragment,r),p(a),s=!1},d(r){r&&(j(l),j(i)),S(e,r),_[t].d(r)}}}function Pt(n,e,l){let{value:t=null}=e,{subtitle:a=null}=e,{label:i=void 0}=e,{show_label:s=!0}=e,{autoplay:b}=e,{show_share_button:_=!0}=e,m=null,r=null;const u=de();Xe(async()=>{t!==m&&a!==r&&r!==null&&(m=t,l(0,t=null),await Le(),l(0,t=m)),m=t,r=a});function o(w){y.call(this,n,w)}function d(w){y.call(this,n,w)}function E(w){y.call(this,n,w)}const B=async w=>w?await tt(w.data,"url"):"";function V(w){y.call(this,n,w)}function I(w){y.call(this,n,w)}return n.$$set=w=>{"value"in w&&l(0,t=w.value),"subtitle"in w&&l(1,a=w.subtitle),"label"in w&&l(2,i=w.label),"show_label"in w&&l(3,s=w.show_label),"autoplay"in w&&l(4,b=w.autoplay),"show_share_button"in w&&l(5,_=w.show_share_button)},n.$$.update=()=>{n.$$.dirty&1&&t&&u("change",t)},[t,a,i,s,b,_,o,d,E,B,V,I]}class Rt extends ${constructor(e){super(),x(this,e,Pt,Et,K,{value:0,subtitle:1,label:2,show_label:3,autoplay:4,show_share_button:5})}}function zt(n){let e,l;return e=new Ut({props:{value:n[18],subtitle:n[19],label:n[5],show_label:n[7],source:n[6],mirror_webcam:n[10],include_audio:n[11],autoplay:n[16],$$slots:{default:[At]},$$scope:{ctx:n}}}),e.$on("change",n[21]),e.$on("drag",n[30]),e.$on("error",n[31]),e.$on("clear",n[32]),e.$on("play",n[33]),e.$on("pause",n[34]),e.$on("upload",n[35]),e.$on("stop",n[36]),e.$on("end",n[37]),e.$on("start_recording",n[38]),e.$on("stop_recording",n[39]),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const i={};a[0]&262144&&(i.value=t[18]),a[0]&524288&&(i.subtitle=t[19]),a[0]&32&&(i.label=t[5]),a[0]&128&&(i.show_label=t[7]),a[0]&64&&(i.source=t[6]),a[0]&1024&&(i.mirror_webcam=t[10]),a[0]&2048&&(i.include_audio=t[11]),a[0]&65536&&(i.autoplay=t[16]),a[1]&1024&&(i.$$scope={dirty:a,ctx:t}),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Ft(n){let e,l;return e=new Rt({props:{value:n[18],subtitle:n[19],label:n[5],show_label:n[7],autoplay:n[16],show_share_button:n[17]}}),e.$on("play",n[25]),e.$on("pause",n[26]),e.$on("stop",n[27]),e.$on("share",n[28]),e.$on("error",n[29]),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const 
i={};a[0]&262144&&(i.value=t[18]),a[0]&524288&&(i.subtitle=t[19]),a[0]&32&&(i.label=t[5]),a[0]&128&&(i.show_label=t[7]),a[0]&65536&&(i.autoplay=t[16]),a[0]&131072&&(i.show_share_button=t[17]),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function At(n){let e,l;return e=new at({props:{type:"video"}}),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p:P,i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Dt(n){let e,l,t,a,i,s;const b=[n[1]];let _={};for(let o=0;o{r[B]=null}),Y(),a=r[t],a?a.p(o,d):(a=r[t]=m[t](o),a.c()),g(a,1),a.m(i.parentNode,i))},i(o){s||(g(e.$$.fragment,o),g(a),s=!0)},o(o){p(e.$$.fragment,o),p(a),s=!1},d(o){o&&(j(l),j(i)),S(e,o),r[t].d(o)}}}function It(n){let e,l;return e=new Ge({props:{visible:n[4],variant:n[15]==="dynamic"&&n[0]===null&&n[6]==="upload"?"dashed":"solid",border_mode:n[20]?"focus":"base",padding:!1,elem_id:n[2],elem_classes:n[3],height:n[8],width:n[9],container:n[12],scale:n[13],min_width:n[14],allow_overflow:!1,$$slots:{default:[Dt]},$$scope:{ctx:n}}}),{c(){U(e.$$.fragment)},m(t,a){M(e,t,a),l=!0},p(t,a){const i={};a[0]&16&&(i.visible=t[4]),a[0]&32833&&(i.variant=t[15]==="dynamic"&&t[0]===null&&t[6]==="upload"?"dashed":"solid"),a[0]&1048576&&(i.border_mode=t[20]?"focus":"base"),a[0]&4&&(i.elem_id=t[2]),a[0]&8&&(i.elem_classes=t[3]),a[0]&256&&(i.height=t[8]),a[0]&512&&(i.width=t[9]),a[0]&4096&&(i.container=t[12]),a[0]&8192&&(i.scale=t[13]),a[0]&16384&&(i.min_width=t[14]),a[0]&2067682|a[1]&1024&&(i.$$scope={dirty:a,ctx:t}),e.$set(i)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){p(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Ot(n,e,l){let{elem_id:t=""}=e,{elem_classes:a=[]}=e,{visible:i=!0}=e,{value:s=null}=e,b=null,{label:_}=e,{source:m}=e,{root:r}=e,{root_url:u}=e,{show_label:o}=e,{loading_status:d}=e,{height:E}=e,{width:B}=e,{mirror_webcam:V}=e,{include_audio:I}=e,{container:w=!1}=e,{scale:D=null}=e,{min_width:L=void 0}=e,{mode:W}=e,{autoplay:O=!1}=e,{show_share_button:H=!0}=e,k=null,C=null,h=!1;const te=de();function ee({detail:c}){c!=null?l(0,s=[c,null]):l(0,s=null)}function Z(c){y.call(this,n,c)}function z(c){y.call(this,n,c)}function ne(c){y.call(this,n,c)}function re(c){y.call(this,n,c)}function oe(c){y.call(this,n,c)}const se=({detail:c})=>l(20,h=c),G=({detail:c})=>{l(1,d=d||{}),l(1,d.status="error",d),l(1,d.message=c,d)};function ue(c){y.call(this,n,c)}function v(c){y.call(this,n,c)}function R(c){y.call(this,n,c)}function ae(c){y.call(this,n,c)}function Ve(c){y.call(this,n,c)}function Te(c){y.call(this,n,c)}function qe(c){y.call(this,n,c)}function je(c){y.call(this,n,c)}return n.$$set=c=>{"elem_id"in c&&l(2,t=c.elem_id),"elem_classes"in c&&l(3,a=c.elem_classes),"visible"in c&&l(4,i=c.visible),"value"in c&&l(0,s=c.value),"label"in c&&l(5,_=c.label),"source"in c&&l(6,m=c.source),"root"in c&&l(22,r=c.root),"root_url"in c&&l(23,u=c.root_url),"show_label"in c&&l(7,o=c.show_label),"loading_status"in c&&l(1,d=c.loading_status),"height"in c&&l(8,E=c.height),"width"in c&&l(9,B=c.width),"mirror_webcam"in c&&l(10,V=c.mirror_webcam),"include_audio"in c&&l(11,I=c.include_audio),"container"in c&&l(12,w=c.container),"scale"in c&&l(13,D=c.scale),"min_width"in c&&l(14,L=c.min_width),"mode"in c&&l(15,W=c.mode),"autoplay"in c&&l(16,O=c.autoplay),"show_share_button"in 
c&&l(17,H=c.show_share_button)},n.$$.update=()=>{n.$$.dirty[0]&12582913&&(s!=null?(l(18,k=ge(s[0],r,u)),l(19,C=ge(s[1],r,u))):(l(18,k=null),l(19,C=null))),n.$$.dirty[0]&16777217&&JSON.stringify(s)!==JSON.stringify(b)&&(l(24,b=s),te("change"))},[s,d,t,a,i,_,m,o,E,B,V,I,w,D,L,W,O,H,k,C,h,ee,r,u,b,Z,z,ne,re,oe,se,G,ue,v,R,ae,Ve,Te,qe,je]}class Xt extends ${constructor(e){super(),x(this,e,Ot,It,K,{elem_id:2,elem_classes:3,visible:4,value:0,label:5,source:6,root:22,root_url:23,show_label:7,loading_status:1,height:8,width:9,mirror_webcam:10,include_audio:11,container:12,scale:13,min_width:14,mode:15,autoplay:16,show_share_button:17},null,[-1,-1])}get elem_id(){return this.$$.ctx[2]}set elem_id(e){this.$$set({elem_id:e}),T()}get elem_classes(){return this.$$.ctx[3]}set elem_classes(e){this.$$set({elem_classes:e}),T()}get visible(){return this.$$.ctx[4]}set visible(e){this.$$set({visible:e}),T()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),T()}get label(){return this.$$.ctx[5]}set label(e){this.$$set({label:e}),T()}get source(){return this.$$.ctx[6]}set source(e){this.$$set({source:e}),T()}get root(){return this.$$.ctx[22]}set root(e){this.$$set({root:e}),T()}get root_url(){return this.$$.ctx[23]}set root_url(e){this.$$set({root_url:e}),T()}get show_label(){return this.$$.ctx[7]}set show_label(e){this.$$set({show_label:e}),T()}get loading_status(){return this.$$.ctx[1]}set loading_status(e){this.$$set({loading_status:e}),T()}get height(){return this.$$.ctx[8]}set height(e){this.$$set({height:e}),T()}get width(){return this.$$.ctx[9]}set width(e){this.$$set({width:e}),T()}get mirror_webcam(){return this.$$.ctx[10]}set mirror_webcam(e){this.$$set({mirror_webcam:e}),T()}get include_audio(){return this.$$.ctx[11]}set include_audio(e){this.$$set({include_audio:e}),T()}get container(){return this.$$.ctx[12]}set container(e){this.$$set({container:e}),T()}get scale(){return this.$$.ctx[13]}set scale(e){this.$$set({scale:e}),T()}get min_width(){return this.$$.ctx[14]}set min_width(e){this.$$set({min_width:e}),T()}get mode(){return this.$$.ctx[15]}set mode(e){this.$$set({mode:e}),T()}get autoplay(){return this.$$.ctx[16]}set autoplay(e){this.$$set({autoplay:e}),T()}get show_share_button(){return this.$$.ctx[17]}set show_share_button(e){this.$$set({show_share_button:e}),T()}}const nl=Xt,al=["static","dynamic"],il=n=>({type:{input_payload:"{ name: string; data: string }",response_object:"{ name: string; data: string, is_file: boolean }"},description:{input_payload:"object with file name and base64 data",response_object:"object that includes path to video file. The URL: {ROOT}file={name} contains the data"}});export{nl as Component,il as document,al as modes};
-//# sourceMappingURL=index-99879c0f.js.map
diff --git a/spaces/DaleChen/AutoGPT/autogpt/commands/web_selenium.py b/spaces/DaleChen/AutoGPT/autogpt/commands/web_selenium.py
deleted file mode 100644
index 11bdfeb1f1630fc6ff6f55d68e8d7233281c5098..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/commands/web_selenium.py
+++ /dev/null
@@ -1,154 +0,0 @@
-"""Selenium web scraping module."""
-from __future__ import annotations
-
-import logging
-from pathlib import Path
-from sys import platform
-
-from bs4 import BeautifulSoup
-from selenium import webdriver
-from selenium.webdriver.chrome.options import Options as ChromeOptions
-from selenium.webdriver.common.by import By
-from selenium.webdriver.firefox.options import Options as FirefoxOptions
-from selenium.webdriver.remote.webdriver import WebDriver
-from selenium.webdriver.safari.options import Options as SafariOptions
-from selenium.webdriver.support import expected_conditions as EC
-from selenium.webdriver.support.wait import WebDriverWait
-from webdriver_manager.chrome import ChromeDriverManager
-from webdriver_manager.firefox import GeckoDriverManager
-
-import autogpt.processing.text as summary
-from autogpt.config import Config
-from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-FILE_DIR = Path(__file__).parent.parent
-CFG = Config()
-
-
-def browse_website(url: str, question: str) -> tuple[str, WebDriver]:
- """Browse a website and return the answer and links to the user
-
- Args:
- url (str): The url of the website to browse
- question (str): The question asked by the user
-
- Returns:
- Tuple[str, WebDriver]: The answer and links to the user and the webdriver
- """
- driver, text = scrape_text_with_selenium(url)
- add_header(driver)
- summary_text = summary.summarize_text(url, text, question, driver)
- links = scrape_links_with_selenium(driver, url)
-
- # Limit links to 5
- if len(links) > 5:
- links = links[:5]
- close_browser(driver)
- return f"Answer gathered from website: {summary_text} \n \n Links: {links}", driver
-
-
-def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]:
- """Scrape text from a website using selenium
-
- Args:
- url (str): The url of the website to scrape
-
- Returns:
- Tuple[WebDriver, str]: The webdriver and the text scraped from the website
- """
- logging.getLogger("selenium").setLevel(logging.CRITICAL)
-
- options_available = {
- "chrome": ChromeOptions,
- "safari": SafariOptions,
- "firefox": FirefoxOptions,
- }
-
- options = options_available[CFG.selenium_web_browser]()
- options.add_argument(
- "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
- )
-
- if CFG.selenium_web_browser == "firefox":
- driver = webdriver.Firefox(
- executable_path=GeckoDriverManager().install(), options=options
- )
- elif CFG.selenium_web_browser == "safari":
-        # Requires a bit more setup on the user's end
- # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
- driver = webdriver.Safari(options=options)
- else:
- if platform == "linux" or platform == "linux2":
- options.add_argument("--disable-dev-shm-usage")
- options.add_argument("--remote-debugging-port=9222")
-
- options.add_argument("--no-sandbox")
- if CFG.selenium_headless:
- options.add_argument("--headless")
- options.add_argument("--disable-gpu")
-
- driver = webdriver.Chrome(
- executable_path=ChromeDriverManager().install(), options=options
- )
- driver.get(url)
-
- WebDriverWait(driver, 10).until(
- EC.presence_of_element_located((By.TAG_NAME, "body"))
- )
-
- # Get the HTML content directly from the browser's DOM
- page_source = driver.execute_script("return document.body.outerHTML;")
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
- return driver, text
-
-
-def scrape_links_with_selenium(driver: WebDriver, url: str) -> list[str]:
- """Scrape links from a website using selenium
-
- Args:
- driver (WebDriver): The webdriver to use to scrape the links
-
- Returns:
- List[str]: The links scraped from the website
- """
- page_source = driver.page_source
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- hyperlinks = extract_hyperlinks(soup, url)
-
- return format_hyperlinks(hyperlinks)
-
-
-def close_browser(driver: WebDriver) -> None:
- """Close the browser
-
- Args:
- driver (WebDriver): The webdriver to close
-
- Returns:
- None
- """
- driver.quit()
-
-
-def add_header(driver: WebDriver) -> None:
- """Add a header to the website
-
- Args:
- driver (WebDriver): The webdriver to use to add the header
-
- Returns:
- None
- """
- driver.execute_script(open(f"{FILE_DIR}/js/overlay.js", "r").read())
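-
-
-# Usage sketch (illustrative only, not part of the original module): the URL and
-# question below are placeholders, and a configured browser plus the summarization
-# backend used by autogpt.processing.text are assumed to be available.
-#
-#   answer, driver = browse_website("https://example.com", "What is this page about?")
-#   print(answer)  # summary text followed by up to 5 scraped links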
diff --git a/spaces/Detomo/aisatsu-app-api/README.md b/spaces/Detomo/aisatsu-app-api/README.md
deleted file mode 100644
index 5d3e089bab0403e14646e4ebf796c939b9adddc8..0000000000000000000000000000000000000000
--- a/spaces/Detomo/aisatsu-app-api/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Aisatsu App Api
-emoji: 🦀
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_mot17_half.py b/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_mot17_half.py
deleted file mode 100644
index 441119b72b8714e78f8f0311933c1c24360fa3d8..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_mot17_half.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# encoding: utf-8
-import os
-import random
-import torch
-import torch.nn as nn
-import torch.distributed as dist
-
-from yolox.exp import Exp as MyExp
-from yolox.data import get_yolox_datadir
-
-class Exp(MyExp):
- def __init__(self):
- super(Exp, self).__init__()
- self.num_classes = 1
- self.depth = 1.33
- self.width = 1.25
- self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
- self.train_ann = "train.json"
- self.val_ann = "val_half.json"
- self.input_size = (800, 1440)
- self.test_size = (800, 1440)
- self.random_size = (18, 32)
- self.max_epoch = 80
- self.print_interval = 20
- self.eval_interval = 5
- self.test_conf = 0.1
- self.nmsthre = 0.7
- self.no_aug_epochs = 10
- self.basic_lr_per_img = 0.001 / 64.0
- self.warmup_epochs = 1
-
- def get_data_loader(self, batch_size, is_distributed, no_aug=False):
- from yolox.data import (
- MOTDataset,
- TrainTransform,
- YoloBatchSampler,
- DataLoader,
- InfiniteSampler,
- MosaicDetection,
- )
-
- dataset = MOTDataset(
- data_dir=os.path.join(get_yolox_datadir(), "mot"),
- json_file=self.train_ann,
- name='train',
- img_size=self.input_size,
- preproc=TrainTransform(
- rgb_means=(0.485, 0.456, 0.406),
- std=(0.229, 0.224, 0.225),
- max_labels=500,
- ),
- )
-
- dataset = MosaicDetection(
- dataset,
- mosaic=not no_aug,
- img_size=self.input_size,
- preproc=TrainTransform(
- rgb_means=(0.485, 0.456, 0.406),
- std=(0.229, 0.224, 0.225),
- max_labels=1000,
- ),
- degrees=self.degrees,
- translate=self.translate,
- scale=self.scale,
- shear=self.shear,
- perspective=self.perspective,
- enable_mixup=self.enable_mixup,
- )
-
- self.dataset = dataset
-
- if is_distributed:
- batch_size = batch_size // dist.get_world_size()
-
- sampler = InfiniteSampler(
- len(self.dataset), seed=self.seed if self.seed else 0
- )
-
- batch_sampler = YoloBatchSampler(
- sampler=sampler,
- batch_size=batch_size,
- drop_last=False,
- input_dimension=self.input_size,
- mosaic=not no_aug,
- )
-
- dataloader_kwargs = {"num_workers": self.data_num_workers, "pin_memory": True}
- dataloader_kwargs["batch_sampler"] = batch_sampler
- train_loader = DataLoader(self.dataset, **dataloader_kwargs)
-
- return train_loader
-
- def get_eval_loader(self, batch_size, is_distributed, testdev=False):
- from yolox.data import MOTDataset, ValTransform
-
- valdataset = MOTDataset(
- data_dir=os.path.join(get_yolox_datadir(), "mot"),
- json_file=self.val_ann,
- img_size=self.test_size,
- name='train',
- preproc=ValTransform(
- rgb_means=(0.485, 0.456, 0.406),
- std=(0.229, 0.224, 0.225),
- ),
- )
-
- if is_distributed:
- batch_size = batch_size // dist.get_world_size()
- sampler = torch.utils.data.distributed.DistributedSampler(
- valdataset, shuffle=False
- )
- else:
- sampler = torch.utils.data.SequentialSampler(valdataset)
-
- dataloader_kwargs = {
- "num_workers": self.data_num_workers,
- "pin_memory": True,
- "sampler": sampler,
- }
- dataloader_kwargs["batch_size"] = batch_size
- val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs)
-
- return val_loader
-
- def get_evaluator(self, batch_size, is_distributed, testdev=False):
- from yolox.evaluators import COCOEvaluator
-
- val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev)
- evaluator = COCOEvaluator(
- dataloader=val_loader,
- img_size=self.test_size,
- confthre=self.test_conf,
- nmsthre=self.nmsthre,
- num_classes=self.num_classes,
- testdev=testdev,
- )
- return evaluator
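-
-
-# Illustrative check (not part of the original exp file): instantiate the experiment
-# and inspect its settings. Building the actual loaders additionally requires the MOT
-# dataset under get_yolox_datadir(), so that call is only sketched in a comment.
-if __name__ == "__main__":
-    exp = Exp()
-    print(exp.exp_name, exp.num_classes, exp.input_size, exp.test_size)
-    # train_loader = exp.get_data_loader(batch_size=8, is_distributed=False)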
diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/weights/README.md b/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/weights/README.md
deleted file mode 100644
index 4d7b7e642591ef88575d9e6c360a4d29e0cc1a4f..0000000000000000000000000000000000000000
--- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/weights/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Weights
-
-Put the downloaded weights into this folder.
diff --git a/spaces/Eddycrack864/Applio-Inference/demucs/augment.py b/spaces/Eddycrack864/Applio-Inference/demucs/augment.py
deleted file mode 100644
index bb36d3298d89470f306316322e7587187819c94b..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/demucs/augment.py
+++ /dev/null
@@ -1,106 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import random
-import torch as th
-from torch import nn
-
-
-class Shift(nn.Module):
- """
- Randomly shift audio in time by up to `shift` samples.
- """
- def __init__(self, shift=8192):
- super().__init__()
- self.shift = shift
-
- def forward(self, wav):
- batch, sources, channels, time = wav.size()
- length = time - self.shift
- if self.shift > 0:
- if not self.training:
- wav = wav[..., :length]
- else:
- offsets = th.randint(self.shift, [batch, sources, 1, 1], device=wav.device)
- offsets = offsets.expand(-1, -1, channels, -1)
- indexes = th.arange(length, device=wav.device)
- wav = wav.gather(3, indexes + offsets)
- return wav
-
-
-class FlipChannels(nn.Module):
- """
- Flip left-right channels.
- """
- def forward(self, wav):
- batch, sources, channels, time = wav.size()
- if self.training and wav.size(2) == 2:
- left = th.randint(2, (batch, sources, 1, 1), device=wav.device)
- left = left.expand(-1, -1, -1, time)
- right = 1 - left
- wav = th.cat([wav.gather(2, left), wav.gather(2, right)], dim=2)
- return wav
-
-
-class FlipSign(nn.Module):
- """
- Random sign flip.
- """
- def forward(self, wav):
- batch, sources, channels, time = wav.size()
- if self.training:
- signs = th.randint(2, (batch, sources, 1, 1), device=wav.device, dtype=th.float32)
- wav = wav * (2 * signs - 1)
- return wav
-
-
-class Remix(nn.Module):
- """
- Shuffle sources to make new mixes.
- """
- def __init__(self, group_size=4):
- """
- Shuffle sources within one batch.
-        Each batch is divided into groups of size `group_size` and shuffling is done within
-        each group separately. This keeps the probability distribution the same no matter
-        the number of GPUs. Without this grouping, using more GPUs would lead to a higher
-        probability of keeping two sources from the same track together, which can hurt
-        performance.
- """
- super().__init__()
- self.group_size = group_size
-
- def forward(self, wav):
- batch, streams, channels, time = wav.size()
- device = wav.device
-
- if self.training:
- group_size = self.group_size or batch
- if batch % group_size != 0:
- raise ValueError(f"Batch size {batch} must be divisible by group size {group_size}")
- groups = batch // group_size
- wav = wav.view(groups, group_size, streams, channels, time)
- permutations = th.argsort(th.rand(groups, group_size, streams, 1, 1, device=device),
- dim=1)
- wav = wav.gather(1, permutations.expand(-1, -1, -1, channels, time))
- wav = wav.view(batch, streams, channels, time)
- return wav
-
-
-class Scale(nn.Module):
- def __init__(self, proba=1., min=0.25, max=1.25):
- super().__init__()
- self.proba = proba
- self.min = min
- self.max = max
-
- def forward(self, wav):
- batch, streams, channels, time = wav.size()
- device = wav.device
- if self.training and random.random() < self.proba:
- scales = th.empty(batch, streams, 1, 1, device=device).uniform_(self.min, self.max)
- wav *= scales
- return wav
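-
-
-if __name__ == "__main__":
-    # Illustrative smoke test (not part of the original module): one plausible way to
-    # chain the augmentations; shapes follow the (batch, sources, channels, time)
-    # convention used above, and the values are arbitrary.
-    augment = nn.Sequential(Shift(shift=8192), FlipChannels(), FlipSign(),
-                            Remix(group_size=4), Scale(proba=1.0))
-    wav = th.randn(8, 4, 2, 44100)
-    out = augment(wav)  # modules default to train mode, so all augmentations apply
-    print(out.shape)    # time axis is shortened by `shift` samples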
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/losses/vqperceptual.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/losses/vqperceptual.py
deleted file mode 100644
index c2febd445728479d4cd9aacdb2572cb1f1af04db..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/losses/vqperceptual.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from taming.modules.losses.lpips import LPIPS
-from taming.modules.discriminator.model import NLayerDiscriminator, weights_init
-
-
-class DummyLoss(nn.Module):
- def __init__(self):
- super().__init__()
-
-
-def adopt_weight(weight, global_step, threshold=0, value=0.):
- if global_step < threshold:
- weight = value
- return weight
-
-
-def hinge_d_loss(logits_real, logits_fake):
- loss_real = torch.mean(F.relu(1. - logits_real))
- loss_fake = torch.mean(F.relu(1. + logits_fake))
- d_loss = 0.5 * (loss_real + loss_fake)
- return d_loss
-
-
-def vanilla_d_loss(logits_real, logits_fake):
- d_loss = 0.5 * (
- torch.mean(torch.nn.functional.softplus(-logits_real)) +
- torch.mean(torch.nn.functional.softplus(logits_fake)))
- return d_loss
-
-
-class VQLPIPSWithDiscriminator(nn.Module):
- def __init__(self, disc_start, codebook_weight=1.0, pixelloss_weight=1.0,
- disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
- perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
- disc_ndf=64, disc_loss="hinge"):
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- self.codebook_weight = codebook_weight
- self.pixel_weight = pixelloss_weight
- self.perceptual_loss = LPIPS().eval()
- self.perceptual_weight = perceptual_weight
-
- self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm,
- ndf=disc_ndf
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- if disc_loss == "hinge":
- self.disc_loss = hinge_d_loss
- elif disc_loss == "vanilla":
- self.disc_loss = vanilla_d_loss
- else:
- raise ValueError(f"Unknown GAN loss '{disc_loss}'.")
- print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.")
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
-
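-        # Balance the GAN term against the reconstruction term: scale g_loss so that its
-        # gradient at the decoder's last layer has roughly the same norm as the NLL gradient.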
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(self, codebook_loss, inputs, reconstructions, optimizer_idx,
- global_step, last_layer=None, cond=None, split="train"):
- rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
- rec_loss = rec_loss + self.perceptual_weight * p_loss
- else:
- p_loss = torch.tensor([0.0])
-
- nll_loss = rec_loss
- #nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- nll_loss = torch.mean(nll_loss)
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
- g_loss = -torch.mean(logits_fake)
-
- try:
- d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- loss = nll_loss + d_weight * disc_factor * g_loss + self.codebook_weight * codebook_loss.mean()
-
- log = {"{}/total_loss".format(split): loss.clone().detach().mean(),
- "{}/quant_loss".format(split): codebook_loss.detach().mean(),
- "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/p_loss".format(split): p_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean()
- }
- return d_loss, log
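-
-
-# Usage sketch (illustrative only; `qloss`, `x`, `xrec`, `step` and `decoder_last_layer`
-# are placeholders for the caller's codebook loss, inputs, reconstructions, global step
-# and the decoder's final weight tensor):
-#
-#   loss_fn = VQLPIPSWithDiscriminator(disc_start=10000)
-#   # generator update (optimizer_idx=0)
-#   g_total, log = loss_fn(qloss, x, xrec, 0, step, last_layer=decoder_last_layer, split="train")
-#   # discriminator update (optimizer_idx=1)
-#   d_total, log = loss_fn(qloss, x, xrec, 1, step, split="train")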
diff --git a/spaces/EronSamez/RVC_HFmeu/Dockerfile b/spaces/EronSamez/RVC_HFmeu/Dockerfile
deleted file mode 100644
index b81f131c79cc585012b28002f4916491e85f3a33..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/Dockerfile
+++ /dev/null
@@ -1,29 +0,0 @@
-# syntax=docker/dockerfile:1
-
-FROM python:3.10-bullseye
-
-EXPOSE 7865
-
-WORKDIR /app
-
-COPY . .
-
-RUN apt update && apt install -y -qq ffmpeg aria2 && apt clean
-
-RUN pip3 install --no-cache-dir -r requirements.txt
-
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d assets/pretrained_v2/ -o D40k.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d assets/pretrained_v2/ -o G40k.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d assets/pretrained_v2/ -o f0D40k.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d assets/pretrained_v2/ -o f0G40k.pth
-
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d assets/uvr5_weights/ -o HP2-人声vocals+非人声instrumentals.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d assets/uvr5_weights/ -o HP5-主旋律人声vocals+其他instrumentals.pth
-
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d assets/hubert -o hubert_base.pt
-
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt -d assets/hubert -o rmvpe.pt
-
-VOLUME [ "/app/weights", "/app/opt" ]
-
-CMD ["python3", "infer-web.py"]
\ No newline at end of file
diff --git a/spaces/EsoCode/text-generation-webui/modules/loaders.py b/spaces/EsoCode/text-generation-webui/modules/loaders.py
deleted file mode 100644
index 44e893fbffb1644f801ba5d90a99f7180ca8ff68..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/modules/loaders.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import functools
-
-import gradio as gr
-
-from modules import shared
-
-loaders_and_params = {
- 'AutoGPTQ': [
- 'triton',
- 'no_inject_fused_attention',
- 'no_inject_fused_mlp',
- 'no_use_cuda_fp16',
- 'wbits',
- 'groupsize',
- 'desc_act',
- 'gpu_memory',
- 'cpu_memory',
- 'cpu',
- 'disk',
- 'auto_devices',
- 'trust_remote_code',
- 'autogptq_info',
- ],
- 'GPTQ-for-LLaMa': [
- 'wbits',
- 'groupsize',
- 'model_type',
- 'pre_layer',
- 'gptq_for_llama_info',
- ],
- 'llama.cpp': [
- 'n_ctx',
- 'n_gpu_layers',
- 'n_batch',
- 'threads',
- 'no_mmap',
- 'mlock',
- 'llama_cpp_seed',
- ],
- 'Transformers': [
- 'cpu_memory',
- 'gpu_memory',
- 'trust_remote_code',
- 'load_in_8bit',
- 'bf16',
- 'cpu',
- 'disk',
- 'auto_devices',
- 'load_in_4bit',
- 'use_double_quant',
- 'quant_type',
- 'compute_dtype',
- 'trust_remote_code',
- 'transformers_info'
- ],
- 'ExLlama' : [
- 'gpu_split',
- 'max_seq_len',
- 'compress_pos_emb',
- 'exllama_info',
- ],
- 'ExLlama_HF' : [
- 'gpu_split',
- 'max_seq_len',
- 'compress_pos_emb',
- 'exllama_HF_info',
- ]
-}
-
-
-def get_gpu_memory_keys():
- return [k for k in shared.gradio if k.startswith('gpu_memory')]
-
-
-@functools.cache
-def get_all_params():
- all_params = set()
- for k in loaders_and_params:
- for el in loaders_and_params[k]:
- all_params.add(el)
-
- if 'gpu_memory' in all_params:
- all_params.remove('gpu_memory')
- for k in get_gpu_memory_keys():
- all_params.add(k)
-
- return sorted(all_params)
-
-
-def make_loader_params_visible(loader):
- params = []
- all_params = get_all_params()
- if loader in loaders_and_params:
- params = loaders_and_params[loader]
-
- if 'gpu_memory' in params:
- params.remove('gpu_memory')
- params += get_gpu_memory_keys()
-
- return [gr.update(visible=True) if k in params else gr.update(visible=False) for k in all_params]
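-
-
-# Usage sketch (illustrative): this is typically wired to a loader dropdown so that only
-# the parameters relevant to the selected loader remain visible, e.g.
-#
-#   updates = make_loader_params_visible('llama.cpp')
-#   # one gr.update(visible=...) per entry of get_all_params(), in the same order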
diff --git a/spaces/EuroPython2022/BayesCap/src/utils.py b/spaces/EuroPython2022/BayesCap/src/utils.py
deleted file mode 100644
index 83cfc804f60ec81babd0385bb88ab4cfa3a4b67d..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/BayesCap/src/utils.py
+++ /dev/null
@@ -1,1273 +0,0 @@
-import random
-from typing import Any, Optional
-import numpy as np
-import os
-import cv2
-from glob import glob
-from PIL import Image, ImageDraw
-from tqdm import tqdm
-import kornia
-import matplotlib.pyplot as plt
-import seaborn as sns
-import albumentations as albu
-import functools
-import math
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-import torchvision as tv
-import torchvision.models as models
-from torchvision import transforms
-from torchvision.transforms import functional as F
-from losses import TempCombLoss
-
-########### DeblurGAN function
-def get_norm_layer(norm_type='instance'):
- if norm_type == 'batch':
- norm_layer = functools.partial(nn.BatchNorm2d, affine=True)
- elif norm_type == 'instance':
- norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=True)
- else:
- raise NotImplementedError('normalization layer [%s] is not found' % norm_type)
- return norm_layer
-
-def _array_to_batch(x):
- x = np.transpose(x, (2, 0, 1))
- x = np.expand_dims(x, 0)
- return torch.from_numpy(x)
-
-def get_normalize():
- normalize = albu.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
- normalize = albu.Compose([normalize], additional_targets={'target': 'image'})
-
- def process(a, b):
- r = normalize(image=a, target=b)
- return r['image'], r['target']
-
- return process
-
-def preprocess(x: np.ndarray, mask: Optional[np.ndarray]):
- x, _ = get_normalize()(x, x)
- if mask is None:
- mask = np.ones_like(x, dtype=np.float32)
- else:
- mask = np.round(mask.astype('float32') / 255)
-
- h, w, _ = x.shape
- block_size = 32
- min_height = (h // block_size + 1) * block_size
- min_width = (w // block_size + 1) * block_size
-
- pad_params = {'mode': 'constant',
- 'constant_values': 0,
- 'pad_width': ((0, min_height - h), (0, min_width - w), (0, 0))
- }
- x = np.pad(x, **pad_params)
- mask = np.pad(mask, **pad_params)
-
- return map(_array_to_batch, (x, mask)), h, w
-
-def postprocess(x: torch.Tensor) -> np.ndarray:
- x, = x
- x = x.detach().cpu().float().numpy()
- x = (np.transpose(x, (1, 2, 0)) + 1) / 2.0 * 255.0
- return x.astype('uint8')
-
-def sorted_glob(pattern):
- return sorted(glob(pattern))
-###########
-
-def normalize(image: np.ndarray) -> np.ndarray:
- """Normalize the ``OpenCV.imread`` or ``skimage.io.imread`` data.
- Args:
- image (np.ndarray): The image data read by ``OpenCV.imread`` or ``skimage.io.imread``.
- Returns:
- Normalized image data. Data range [0, 1].
- """
- return image.astype(np.float64) / 255.0
-
-
-def unnormalize(image: np.ndarray) -> np.ndarray:
- """Un-normalize the ``OpenCV.imread`` or ``skimage.io.imread`` data.
- Args:
- image (np.ndarray): The image data read by ``OpenCV.imread`` or ``skimage.io.imread``.
- Returns:
- Denormalized image data. Data range [0, 255].
- """
- return image.astype(np.float64) * 255.0
-
-
-def image2tensor(image: np.ndarray, range_norm: bool, half: bool) -> torch.Tensor:
- """Convert ``PIL.Image`` to Tensor.
- Args:
- image (np.ndarray): The image data read by ``PIL.Image``
- range_norm (bool): Scale [0, 1] data to between [-1, 1]
- half (bool): Whether to convert torch.float32 similarly to torch.half type.
- Returns:
- Normalized image data
- Examples:
- >>> image = Image.open("image.bmp")
- >>> tensor_image = image2tensor(image, range_norm=False, half=False)
- """
- tensor = F.to_tensor(image)
-
- if range_norm:
- tensor = tensor.mul_(2.0).sub_(1.0)
- if half:
- tensor = tensor.half()
-
- return tensor
-
-
-def tensor2image(tensor: torch.Tensor, range_norm: bool, half: bool) -> Any:
- """Converts ``torch.Tensor`` to ``PIL.Image``.
- Args:
- tensor (torch.Tensor): The image that needs to be converted to ``PIL.Image``
- range_norm (bool): Scale [-1, 1] data to between [0, 1]
- half (bool): Whether to convert torch.float32 similarly to torch.half type.
- Returns:
- Convert image data to support PIL library
- Examples:
- >>> tensor = torch.randn([1, 3, 128, 128])
- >>> image = tensor2image(tensor, range_norm=False, half=False)
- """
- if range_norm:
- tensor = tensor.add_(1.0).div_(2.0)
- if half:
- tensor = tensor.half()
-
- image = tensor.squeeze_(0).permute(1, 2, 0).mul_(255).clamp_(0, 255).cpu().numpy().astype("uint8")
-
- return image
-
-
-def convert_rgb_to_y(image: Any) -> Any:
- """Convert RGB image or tensor image data to YCbCr(Y) format.
- Args:
-        image: RGB image data read by ``PIL.Image``.
- Returns:
- Y image array data.
- """
- if type(image) == np.ndarray:
- return 16. + (64.738 * image[:, :, 0] + 129.057 * image[:, :, 1] + 25.064 * image[:, :, 2]) / 256.
- elif type(image) == torch.Tensor:
- if len(image.shape) == 4:
- image = image.squeeze_(0)
- return 16. + (64.738 * image[0, :, :] + 129.057 * image[1, :, :] + 25.064 * image[2, :, :]) / 256.
- else:
- raise Exception("Unknown Type", type(image))
-
-
-def convert_rgb_to_ycbcr(image: Any) -> Any:
- """Convert RGB image or tensor image data to YCbCr format.
- Args:
-        image: RGB image data read by ``PIL.Image``.
- Returns:
- YCbCr image array data.
- """
- if type(image) == np.ndarray:
- y = 16. + (64.738 * image[:, :, 0] + 129.057 * image[:, :, 1] + 25.064 * image[:, :, 2]) / 256.
- cb = 128. + (-37.945 * image[:, :, 0] - 74.494 * image[:, :, 1] + 112.439 * image[:, :, 2]) / 256.
- cr = 128. + (112.439 * image[:, :, 0] - 94.154 * image[:, :, 1] - 18.285 * image[:, :, 2]) / 256.
- return np.array([y, cb, cr]).transpose([1, 2, 0])
- elif type(image) == torch.Tensor:
- if len(image.shape) == 4:
- image = image.squeeze(0)
- y = 16. + (64.738 * image[0, :, :] + 129.057 * image[1, :, :] + 25.064 * image[2, :, :]) / 256.
- cb = 128. + (-37.945 * image[0, :, :] - 74.494 * image[1, :, :] + 112.439 * image[2, :, :]) / 256.
- cr = 128. + (112.439 * image[0, :, :] - 94.154 * image[1, :, :] - 18.285 * image[2, :, :]) / 256.
- return torch.cat([y, cb, cr], 0).permute(1, 2, 0)
- else:
- raise Exception("Unknown Type", type(image))
-
-
-def convert_ycbcr_to_rgb(image: Any) -> Any:
- """Convert YCbCr format image to RGB format.
- Args:
-        image: YCbCr image data read by ``PIL.Image``.
- Returns:
- RGB image array data.
- """
- if type(image) == np.ndarray:
- r = 298.082 * image[:, :, 0] / 256. + 408.583 * image[:, :, 2] / 256. - 222.921
- g = 298.082 * image[:, :, 0] / 256. - 100.291 * image[:, :, 1] / 256. - 208.120 * image[:, :, 2] / 256. + 135.576
- b = 298.082 * image[:, :, 0] / 256. + 516.412 * image[:, :, 1] / 256. - 276.836
- return np.array([r, g, b]).transpose([1, 2, 0])
- elif type(image) == torch.Tensor:
- if len(image.shape) == 4:
- image = image.squeeze(0)
- r = 298.082 * image[0, :, :] / 256. + 408.583 * image[2, :, :] / 256. - 222.921
- g = 298.082 * image[0, :, :] / 256. - 100.291 * image[1, :, :] / 256. - 208.120 * image[2, :, :] / 256. + 135.576
- b = 298.082 * image[0, :, :] / 256. + 516.412 * image[1, :, :] / 256. - 276.836
- return torch.cat([r, g, b], 0).permute(1, 2, 0)
- else:
- raise Exception("Unknown Type", type(image))
-
-
-def center_crop(lr: Any, hr: Any, image_size: int, upscale_factor: int) -> [Any, Any]:
- """Cut ``PIL.Image`` in the center area of the image.
- Args:
- lr: Low-resolution image data read by ``PIL.Image``.
- hr: High-resolution image data read by ``PIL.Image``.
- image_size (int): The size of the captured image area. It should be the size of the high-resolution image.
- upscale_factor (int): magnification factor.
- Returns:
-        Center-cropped low-resolution and high-resolution images.
- """
- w, h = hr.size
-
- left = (w - image_size) // 2
- top = (h - image_size) // 2
- right = left + image_size
- bottom = top + image_size
-
- lr = lr.crop((left // upscale_factor,
- top // upscale_factor,
- right // upscale_factor,
- bottom // upscale_factor))
- hr = hr.crop((left, top, right, bottom))
-
- return lr, hr
-
-
-def random_crop(lr: Any, hr: Any, image_size: int, upscale_factor: int) -> [Any, Any]:
- """Will ``PIL.Image`` randomly capture the specified area of the image.
- Args:
- lr: Low-resolution image data read by ``PIL.Image``.
- hr: High-resolution image data read by ``PIL.Image``.
- image_size (int): The size of the captured image area. It should be the size of the high-resolution image.
- upscale_factor (int): magnification factor.
- Returns:
- Randomly cropped low-resolution images and high-resolution images.
- """
- w, h = hr.size
- left = torch.randint(0, w - image_size + 1, size=(1,)).item()
- top = torch.randint(0, h - image_size + 1, size=(1,)).item()
- right = left + image_size
- bottom = top + image_size
-
- lr = lr.crop((left // upscale_factor,
- top // upscale_factor,
- right // upscale_factor,
- bottom // upscale_factor))
- hr = hr.crop((left, top, right, bottom))
-
- return lr, hr
-
-
-def random_rotate(lr: Any, hr: Any, angle: int) -> [Any, Any]:
- """Will ``PIL.Image`` randomly rotate the image.
- Args:
- lr: Low-resolution image data read by ``PIL.Image``.
- hr: High-resolution image data read by ``PIL.Image``.
-        angle (int): rotation angle; the direction (clockwise or counterclockwise) is chosen at random.
- Returns:
- Randomly rotated low-resolution images and high-resolution images.
- """
- angle = random.choice((+angle, -angle))
- lr = F.rotate(lr, angle)
- hr = F.rotate(hr, angle)
-
- return lr, hr
-
-
-def random_horizontally_flip(lr: Any, hr: Any, p=0.5) -> [Any, Any]:
- """Flip the ``PIL.Image`` image horizontally randomly.
- Args:
- lr: Low-resolution image data read by ``PIL.Image``.
- hr: High-resolution image data read by ``PIL.Image``.
-        p (float, optional): flip probability. (Default: 0.5)
- Returns:
- Low-resolution image and high-resolution image after random horizontal flip.
- """
- if torch.rand(1).item() > p:
- lr = F.hflip(lr)
- hr = F.hflip(hr)
-
- return lr, hr
-
-
-def random_vertically_flip(lr: Any, hr: Any, p=0.5) -> [Any, Any]:
- """Turn the ``PIL.Image`` image upside down randomly.
- Args:
- lr: Low-resolution image data read by ``PIL.Image``.
- hr: High-resolution image data read by ``PIL.Image``.
-        p (float, optional): flip probability. (Default: 0.5)
-    Returns:
-        Randomly vertically flipped low-resolution and high-resolution images.
- """
- if torch.rand(1).item() > p:
- lr = F.vflip(lr)
- hr = F.vflip(hr)
-
- return lr, hr
-
-
-def random_adjust_brightness(lr: Any, hr: Any) -> [Any, Any]:
- """Set ``PIL.Image`` to randomly adjust the image brightness.
- Args:
- lr: Low-resolution image data read by ``PIL.Image``.
- hr: High-resolution image data read by ``PIL.Image``.
- Returns:
- Low-resolution image and high-resolution image with randomly adjusted brightness.
- """
- # Randomly adjust the brightness gain range.
- factor = random.uniform(0.5, 2)
- lr = F.adjust_brightness(lr, factor)
- hr = F.adjust_brightness(hr, factor)
-
- return lr, hr
-
-
-def random_adjust_contrast(lr: Any, hr: Any) -> [Any, Any]:
- """Set ``PIL.Image`` to randomly adjust the image contrast.
- Args:
- lr: Low-resolution image data read by ``PIL.Image``.
- hr: High-resolution image data read by ``PIL.Image``.
- Returns:
- Low-resolution image and high-resolution image with randomly adjusted contrast.
- """
- # Randomly adjust the contrast gain range.
- factor = random.uniform(0.5, 2)
- lr = F.adjust_contrast(lr, factor)
- hr = F.adjust_contrast(hr, factor)
-
- return lr, hr
-
-#### metrics to compute -- assumes single images, i.e., tensor of 3 dims
-def img_mae(x1, x2):
- m = torch.abs(x1-x2).mean()
- return m
-
-def img_mse(x1, x2):
- m = torch.pow(torch.abs(x1-x2),2).mean()
- return m
-
-def img_psnr(x1, x2):
- m = kornia.metrics.psnr(x1, x2, 1)
- return m
-
-def img_ssim(x1, x2):
- m = kornia.metrics.ssim(x1.unsqueeze(0), x2.unsqueeze(0), 5)
- m = m.mean()
- return m
-
-def show_SR_w_uncer(xLR, xHR, xSR, xSRvar, elim=(0,0.01), ulim=(0,0.15)):
- '''
- xLR/SR/HR: 3xHxW
- xSRvar: 1xHxW
- '''
- plt.figure(figsize=(30,10))
-
- plt.subplot(1,5,1)
- plt.imshow(xLR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1))
- plt.axis('off')
-
- plt.subplot(1,5,2)
- plt.imshow(xHR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1))
- plt.axis('off')
-
- plt.subplot(1,5,3)
- plt.imshow(xSR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1))
- plt.axis('off')
-
- plt.subplot(1,5,4)
- error_map = torch.mean(torch.pow(torch.abs(xSR-xHR),2), dim=0).to('cpu').data.unsqueeze(0)
- print('error', error_map.min(), error_map.max())
- plt.imshow(error_map.transpose(0,2).transpose(0,1), cmap='jet')
- plt.clim(elim[0], elim[1])
- plt.axis('off')
-
- plt.subplot(1,5,5)
- print('uncer', xSRvar.min(), xSRvar.max())
- plt.imshow(xSRvar.to('cpu').data.transpose(0,2).transpose(0,1), cmap='hot')
- plt.clim(ulim[0], ulim[1])
- plt.axis('off')
-
- plt.subplots_adjust(wspace=0, hspace=0)
- plt.show()
-
-def show_SR_w_err(xLR, xHR, xSR, elim=(0,0.01), task=None, xMask=None):
- '''
- xLR/SR/HR: 3xHxW
- '''
- plt.figure(figsize=(30,10))
-
- if task != 'm':
- plt.subplot(1,4,1)
- plt.imshow(xLR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1))
- plt.axis('off')
-
- plt.subplot(1,4,2)
- plt.imshow(xHR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1))
- plt.axis('off')
-
- plt.subplot(1,4,3)
- plt.imshow(xSR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1))
- plt.axis('off')
- else:
- plt.subplot(1,4,1)
- plt.imshow(xLR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1), cmap='gray')
- plt.clim(0,0.9)
- plt.axis('off')
-
- plt.subplot(1,4,2)
- plt.imshow(xHR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1), cmap='gray')
- plt.clim(0,0.9)
- plt.axis('off')
-
- plt.subplot(1,4,3)
- plt.imshow(xSR.to('cpu').data.clip(0,1).transpose(0,2).transpose(0,1), cmap='gray')
- plt.clim(0,0.9)
- plt.axis('off')
-
- plt.subplot(1,4,4)
- if task == 'inpainting':
- error_map = torch.mean(torch.pow(torch.abs(xSR-xHR),2), dim=0).to('cpu').data.unsqueeze(0)*xMask.to('cpu').data
- else:
- error_map = torch.mean(torch.pow(torch.abs(xSR-xHR),2), dim=0).to('cpu').data.unsqueeze(0)
- print('error', error_map.min(), error_map.max())
- plt.imshow(error_map.transpose(0,2).transpose(0,1), cmap='jet')
- plt.clim(elim[0], elim[1])
- plt.axis('off')
-
- plt.subplots_adjust(wspace=0, hspace=0)
- plt.show()
-
-def show_uncer4(xSRvar1, xSRvar2, xSRvar3, xSRvar4, ulim=(0,0.15)):
- '''
- xSRvar: 1xHxW
- '''
- plt.figure(figsize=(30,10))
-
- plt.subplot(1,4,1)
- print('uncer', xSRvar1.min(), xSRvar1.max())
- plt.imshow(xSRvar1.to('cpu').data.transpose(0,2).transpose(0,1), cmap='hot')
- plt.clim(ulim[0], ulim[1])
- plt.axis('off')
-
- plt.subplot(1,4,2)
- print('uncer', xSRvar2.min(), xSRvar2.max())
- plt.imshow(xSRvar2.to('cpu').data.transpose(0,2).transpose(0,1), cmap='hot')
- plt.clim(ulim[0], ulim[1])
- plt.axis('off')
-
- plt.subplot(1,4,3)
- print('uncer', xSRvar3.min(), xSRvar3.max())
- plt.imshow(xSRvar3.to('cpu').data.transpose(0,2).transpose(0,1), cmap='hot')
- plt.clim(ulim[0], ulim[1])
- plt.axis('off')
-
- plt.subplot(1,4,4)
- print('uncer', xSRvar4.min(), xSRvar4.max())
- plt.imshow(xSRvar4.to('cpu').data.transpose(0,2).transpose(0,1), cmap='hot')
- plt.clim(ulim[0], ulim[1])
- plt.axis('off')
-
- plt.subplots_adjust(wspace=0, hspace=0)
- plt.show()
-
-def get_UCE(list_err, list_yout_var, num_bins=100):
- err_min = np.min(list_err)
- err_max = np.max(list_err)
- err_len = (err_max-err_min)/num_bins
- num_points = len(list_err)
-
- bin_stats = {}
- for i in range(num_bins):
- bin_stats[i] = {
- 'start_idx': err_min + i*err_len,
- 'end_idx': err_min + (i+1)*err_len,
- 'num_points': 0,
- 'mean_err': 0,
- 'mean_var': 0,
- }
-
-    for e, v in zip(list_err, list_yout_var):
-        for i in range(num_bins):
-            if e >= bin_stats[i]['start_idx'] and e < bin_stats[i]['end_idx']:
-                bin_stats[i]['num_points'] += 1
-                bin_stats[i]['mean_err'] += e
-                bin_stats[i]['mean_var'] += v
-
-    uce = 0.0
-    list_x, list_y = [], []
-    for i in range(num_bins):
-        if bin_stats[i]['num_points'] > 0:
-            bin_stats[i]['mean_err'] /= bin_stats[i]['num_points']
-            bin_stats[i]['mean_var'] /= bin_stats[i]['num_points']
-            # per-bin contribution: fraction of points times |mean error - mean predicted variance|
-            uce += (bin_stats[i]['num_points'] / num_points) * abs(bin_stats[i]['mean_err'] - bin_stats[i]['mean_var'])
-            list_x.append(bin_stats[i]['mean_err'])
-            list_y.append(bin_stats[i]['mean_var'])
-
- # sns.set_style('darkgrid')
- # sns.scatterplot(x=list_x, y=list_y)
- # sns.regplot(x=list_x, y=list_y, order=1)
- # plt.xlabel('MSE', fontsize=34)
- # plt.ylabel('Uncertainty', fontsize=34)
- # plt.plot(list_x, list_x, color='r')
- # plt.xlim(np.min(list_x), np.max(list_x))
- # plt.ylim(np.min(list_err), np.max(list_x))
- # plt.show()
-
- return bin_stats, uce
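-
-# Usage sketch (mirrors the commented-out call in eval_BayesCap below): pass flattened
-# per-pixel squared errors and predicted variances, e.g.
-#   bin_stats, uce = get_UCE(list_error[::10], list_var[::10], num_bins=500)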
-
-##################### training BayesCap
-def train_BayesCap(
- NetC,
- NetG,
- train_loader,
- eval_loader,
- Cri = TempCombLoss(),
- device='cuda',
-    dtype=torch.cuda.FloatTensor,
- init_lr=1e-4,
- num_epochs=100,
- eval_every=1,
- ckpt_path='../ckpt/BayesCap',
- T1=1e0,
- T2=5e-2,
- task=None,
-):
- NetC.to(device)
- NetC.train()
- NetG.to(device)
- NetG.eval()
- optimizer = torch.optim.Adam(list(NetC.parameters()), lr=init_lr)
- optim_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, num_epochs)
-
- score = -1e8
- all_loss = []
- for eph in range(num_epochs):
- eph_loss = 0
- with tqdm(train_loader, unit='batch') as tepoch:
- for (idx, batch) in enumerate(tepoch):
- if idx>2000:
- break
- tepoch.set_description('Epoch {}'.format(eph))
- ##
- xLR, xHR = batch[0].to(device), batch[1].to(device)
- xLR, xHR = xLR.type(dtype), xHR.type(dtype)
- if task == 'inpainting':
- xMask = random_mask(xLR.shape[0], (xLR.shape[2], xLR.shape[3]))
- xMask = xMask.to(device).type(dtype)
- # pass them through the network
- with torch.no_grad():
- if task == 'inpainting':
- _, xSR1 = NetG(xLR, xMask)
- elif task == 'depth':
- xSR1 = NetG(xLR)[("disp", 0)]
- else:
- xSR1 = NetG(xLR)
- # with torch.autograd.set_detect_anomaly(True):
- xSR = xSR1.clone()
- xSRC_mu, xSRC_alpha, xSRC_beta = NetC(xSR)
- # print(xSRC_alpha)
- optimizer.zero_grad()
- if task == 'depth':
- loss = Cri(xSRC_mu, xSRC_alpha, xSRC_beta, xSR, T1=T1, T2=T2)
- else:
- loss = Cri(xSRC_mu, xSRC_alpha, xSRC_beta, xHR, T1=T1, T2=T2)
- # print(loss)
- loss.backward()
- optimizer.step()
- ##
- eph_loss += loss.item()
- tepoch.set_postfix(loss=loss.item())
- eph_loss /= len(train_loader)
- all_loss.append(eph_loss)
- print('Avg. loss: {}'.format(eph_loss))
- # evaluate and save the models
- torch.save(NetC.state_dict(), ckpt_path+'_last.pth')
- if eph%eval_every == 0:
- curr_score = eval_BayesCap(
- NetC,
- NetG,
- eval_loader,
- device=device,
- dtype=dtype,
- task=task,
- )
- print('current score: {} | Last best score: {}'.format(curr_score, score))
- if curr_score >= score:
- score = curr_score
- torch.save(NetC.state_dict(), ckpt_path+'_best.pth')
- optim_scheduler.step()
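-
-# Usage sketch (illustrative; NetC is the BayesCap head being trained, NetG the frozen
-# task network, and the checkpoint path is a placeholder):
-#   train_BayesCap(NetC, NetG, train_loader, eval_loader,
-#                  ckpt_path='../ckpt/BayesCap_demo', num_epochs=100, task=None)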
-
-#### get different uncertainty maps
-def get_uncer_BayesCap(
- NetC,
- NetG,
- xin,
- task=None,
- xMask=None,
-):
- with torch.no_grad():
- if task == 'inpainting':
- _, xSR = NetG(xin, xMask)
- else:
- xSR = NetG(xin)
- xSRC_mu, xSRC_alpha, xSRC_beta = NetC(xSR)
- a_map = (1/(xSRC_alpha + 1e-5)).to('cpu').data
- b_map = xSRC_beta.to('cpu').data
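-    # per-pixel variance of the predicted generalized Gaussian:
-    # alpha^2 * Gamma(3/beta) / Gamma(1/beta), with alpha recovered as 1/xSRC_alpha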
- xSRvar = (a_map**2)*(torch.exp(torch.lgamma(3/(b_map + 1e-2)))/torch.exp(torch.lgamma(1/(b_map + 1e-2))))
-
- return xSRvar
-
-def get_uncer_TTDAp(
- NetG,
- xin,
- p_mag=0.05,
- num_runs=50,
- task=None,
- xMask=None,
-):
- list_xSR = []
- with torch.no_grad():
- for z in range(num_runs):
- if task == 'inpainting':
- _, xSRz = NetG(xin+p_mag*xin.max()*torch.randn_like(xin), xMask)
- else:
- xSRz = NetG(xin+p_mag*xin.max()*torch.randn_like(xin))
- list_xSR.append(xSRz)
- xSRmean = torch.mean(torch.cat(list_xSR, dim=0), dim=0).unsqueeze(0)
- xSRvar = torch.mean(torch.var(torch.cat(list_xSR, dim=0), dim=0), dim=0).unsqueeze(0).unsqueeze(1)
- return xSRvar
-
-def get_uncer_DO(
- NetG,
- xin,
- dop=0.2,
- num_runs=50,
- task=None,
- xMask=None,
-):
- list_xSR = []
- with torch.no_grad():
- for z in range(num_runs):
- if task == 'inpainting':
- _, xSRz = NetG(xin, xMask, dop=dop)
- else:
- xSRz = NetG(xin, dop=dop)
- list_xSR.append(xSRz)
- xSRmean = torch.mean(torch.cat(list_xSR, dim=0), dim=0).unsqueeze(0)
- xSRvar = torch.mean(torch.var(torch.cat(list_xSR, dim=0), dim=0), dim=0).unsqueeze(0).unsqueeze(1)
- return xSRvar
-
-################### Different eval functions
-
-def eval_BayesCap(
- NetC,
- NetG,
- eval_loader,
- device='cuda',
- dtype=torch.cuda.FloatTensor,
- task=None,
- xMask=None,
-):
- NetC.to(device)
- NetC.eval()
- NetG.to(device)
- NetG.eval()
-
- mean_ssim = 0
- mean_psnr = 0
- mean_mse = 0
- mean_mae = 0
- num_imgs = 0
- list_error = []
- list_var = []
- with tqdm(eval_loader, unit='batch') as tepoch:
- for (idx, batch) in enumerate(tepoch):
- tepoch.set_description('Validating ...')
- ##
- xLR, xHR = batch[0].to(device), batch[1].to(device)
- xLR, xHR = xLR.type(dtype), xHR.type(dtype)
- if task == 'inpainting':
- if xMask==None:
- xMask = random_mask(xLR.shape[0], (xLR.shape[2], xLR.shape[3]))
- xMask = xMask.to(device).type(dtype)
- else:
- xMask = xMask.to(device).type(dtype)
- # pass them through the network
- with torch.no_grad():
- if task == 'inpainting':
- _, xSR = NetG(xLR, xMask)
- elif task == 'depth':
- xSR = NetG(xLR)[("disp", 0)]
- else:
- xSR = NetG(xLR)
- xSRC_mu, xSRC_alpha, xSRC_beta = NetC(xSR)
- a_map = (1/(xSRC_alpha + 1e-5)).to('cpu').data
- b_map = xSRC_beta.to('cpu').data
- xSRvar = (a_map**2)*(torch.exp(torch.lgamma(3/(b_map + 1e-2)))/torch.exp(torch.lgamma(1/(b_map + 1e-2))))
- n_batch = xSRC_mu.shape[0]
- if task == 'depth':
- xHR = xSR
- for j in range(n_batch):
- num_imgs += 1
- mean_ssim += img_ssim(xSRC_mu[j], xHR[j])
- mean_psnr += img_psnr(xSRC_mu[j], xHR[j])
- mean_mse += img_mse(xSRC_mu[j], xHR[j])
- mean_mae += img_mae(xSRC_mu[j], xHR[j])
-
- show_SR_w_uncer(xLR[j], xHR[j], xSR[j], xSRvar[j])
-
- error_map = torch.mean(torch.pow(torch.abs(xSR[j]-xHR[j]),2), dim=0).to('cpu').data.reshape(-1)
- var_map = xSRvar[j].to('cpu').data.reshape(-1)
- list_error.extend(list(error_map.numpy()))
- list_var.extend(list(var_map.numpy()))
- ##
- mean_ssim /= num_imgs
- mean_psnr /= num_imgs
- mean_mse /= num_imgs
- mean_mae /= num_imgs
- print(
- 'Avg. SSIM: {} | Avg. PSNR: {} | Avg. MSE: {} | Avg. MAE: {}'.format
- (
- mean_ssim, mean_psnr, mean_mse, mean_mae
- )
- )
- # print(len(list_error), len(list_var))
- # print('UCE: ', get_UCE(list_error[::10], list_var[::10], num_bins=500)[1])
- # print('C.Coeff: ', np.corrcoef(np.array(list_error[::10]), np.array(list_var[::10])))
- return mean_ssim
-
-def eval_TTDA_p(
- NetG,
- eval_loader,
- device='cuda',
- dtype=torch.cuda.FloatTensor,
- p_mag=0.05,
- num_runs=50,
- task = None,
- xMask = None,
-):
- NetG.to(device)
- NetG.eval()
-
- mean_ssim = 0
- mean_psnr = 0
- mean_mse = 0
- mean_mae = 0
- num_imgs = 0
- with tqdm(eval_loader, unit='batch') as tepoch:
- for (idx, batch) in enumerate(tepoch):
- tepoch.set_description('Validating ...')
- ##
- xLR, xHR = batch[0].to(device), batch[1].to(device)
- xLR, xHR = xLR.type(dtype), xHR.type(dtype)
- # pass them through the network
- list_xSR = []
- with torch.no_grad():
- if task=='inpainting':
- _, xSR = NetG(xLR, xMask)
- else:
- xSR = NetG(xLR)
- for z in range(num_runs):
- xSRz = NetG(xLR+p_mag*xLR.max()*torch.randn_like(xLR))
- list_xSR.append(xSRz)
- xSRmean = torch.mean(torch.cat(list_xSR, dim=0), dim=0).unsqueeze(0)
- xSRvar = torch.mean(torch.var(torch.cat(list_xSR, dim=0), dim=0), dim=0).unsqueeze(0).unsqueeze(1)
- n_batch = xSR.shape[0]
- for j in range(n_batch):
- num_imgs += 1
- mean_ssim += img_ssim(xSR[j], xHR[j])
- mean_psnr += img_psnr(xSR[j], xHR[j])
- mean_mse += img_mse(xSR[j], xHR[j])
- mean_mae += img_mae(xSR[j], xHR[j])
-
- show_SR_w_uncer(xLR[j], xHR[j], xSR[j], xSRvar[j])
-
- mean_ssim /= num_imgs
- mean_psnr /= num_imgs
- mean_mse /= num_imgs
- mean_mae /= num_imgs
- print(
- 'Avg. SSIM: {} | Avg. PSNR: {} | Avg. MSE: {} | Avg. MAE: {}'.format
- (
- mean_ssim, mean_psnr, mean_mse, mean_mae
- )
- )
-
- return mean_ssim
-
-def eval_DO(
- NetG,
- eval_loader,
- device='cuda',
- dtype=torch.cuda.FloatTensor,
- dop=0.2,
- num_runs=50,
- task=None,
- xMask=None,
-):
- NetG.to(device)
- NetG.eval()
-
- mean_ssim = 0
- mean_psnr = 0
- mean_mse = 0
- mean_mae = 0
- num_imgs = 0
- with tqdm(eval_loader, unit='batch') as tepoch:
- for (idx, batch) in enumerate(tepoch):
- tepoch.set_description('Validating ...')
- ##
- xLR, xHR = batch[0].to(device), batch[1].to(device)
- xLR, xHR = xLR.type(dtype), xHR.type(dtype)
- # pass them through the network
- list_xSR = []
- with torch.no_grad():
- if task == 'inpainting':
- _, xSR = NetG(xLR, xMask)
- else:
- xSR = NetG(xLR)
- for z in range(num_runs):
- xSRz = NetG(xLR, dop=dop)
- list_xSR.append(xSRz)
- xSRmean = torch.mean(torch.cat(list_xSR, dim=0), dim=0).unsqueeze(0)
- xSRvar = torch.mean(torch.var(torch.cat(list_xSR, dim=0), dim=0), dim=0).unsqueeze(0).unsqueeze(1)
- n_batch = xSR.shape[0]
- for j in range(n_batch):
- num_imgs += 1
- mean_ssim += img_ssim(xSR[j], xHR[j])
- mean_psnr += img_psnr(xSR[j], xHR[j])
- mean_mse += img_mse(xSR[j], xHR[j])
- mean_mae += img_mae(xSR[j], xHR[j])
-
- show_SR_w_uncer(xLR[j], xHR[j], xSR[j], xSRvar[j])
- ##
- mean_ssim /= num_imgs
- mean_psnr /= num_imgs
- mean_mse /= num_imgs
- mean_mae /= num_imgs
- print(
- 'Avg. SSIM: {} | Avg. PSNR: {} | Avg. MSE: {} | Avg. MAE: {}'.format
- (
- mean_ssim, mean_psnr, mean_mse, mean_mae
- )
- )
-
- return mean_ssim
-
-
-############### compare all function
-def compare_all(
- NetC,
- NetG,
- eval_loader,
- p_mag = 0.05,
- dop = 0.2,
- num_runs = 100,
- device='cuda',
- dtype=torch.cuda.FloatTensor,
- task=None,
-):
- NetC.to(device)
- NetC.eval()
- NetG.to(device)
- NetG.eval()
-
- with tqdm(eval_loader, unit='batch') as tepoch:
- for (idx, batch) in enumerate(tepoch):
- tepoch.set_description('Comparing ...')
- ##
- xLR, xHR = batch[0].to(device), batch[1].to(device)
- xLR, xHR = xLR.type(dtype), xHR.type(dtype)
- if task == 'inpainting':
- xMask = random_mask(xLR.shape[0], (xLR.shape[2], xLR.shape[3]))
- xMask = xMask.to(device).type(dtype)
- # pass them through the network
- with torch.no_grad():
- if task == 'inpainting':
- _, xSR = NetG(xLR, xMask)
- else:
- xSR = NetG(xLR)
- xSRC_mu, xSRC_alpha, xSRC_beta = NetC(xSR)
-
- if task == 'inpainting':
- xSRvar1 = get_uncer_TTDAp(NetG, xLR, p_mag=p_mag, num_runs=num_runs, task='inpainting', xMask=xMask)
- xSRvar2 = get_uncer_DO(NetG, xLR, dop=dop, num_runs=num_runs, task='inpainting', xMask=xMask)
- xSRvar3 = get_uncer_BayesCap(NetC, NetG, xLR, task='inpainting', xMask=xMask)
- else:
- xSRvar1 = get_uncer_TTDAp(NetG, xLR, p_mag=p_mag, num_runs=num_runs)
- xSRvar2 = get_uncer_DO(NetG, xLR, dop=dop, num_runs=num_runs)
- xSRvar3 = get_uncer_BayesCap(NetC, NetG, xLR)
-
- print('bdg', xSRvar1.shape, xSRvar2.shape, xSRvar3.shape)
-
- n_batch = xSR.shape[0]
- for j in range(n_batch):
- if task=='s':
- show_SR_w_err(xLR[j], xHR[j], xSR[j])
- show_uncer4(xSRvar1[j], torch.sqrt(xSRvar1[j]), torch.pow(xSRvar1[j], 0.48), torch.pow(xSRvar1[j], 0.42))
- show_uncer4(xSRvar2[j], torch.sqrt(xSRvar2[j]), torch.pow(xSRvar3[j], 1.5), xSRvar3[j])
- if task=='d':
- show_SR_w_err(xLR[j], xHR[j], 0.5*xSR[j]+0.5*xHR[j])
- show_uncer4(xSRvar1[j], torch.sqrt(xSRvar1[j]), torch.pow(xSRvar1[j], 0.48), torch.pow(xSRvar1[j], 0.42))
- show_uncer4(xSRvar2[j], torch.sqrt(xSRvar2[j]), torch.pow(xSRvar3[j], 0.8), xSRvar3[j])
- if task=='inpainting':
- show_SR_w_err(xLR[j]*(1-xMask[j]), xHR[j], xSR[j], elim=(0,0.25), task='inpainting', xMask=xMask[j])
- show_uncer4(xSRvar1[j], torch.sqrt(xSRvar1[j]), torch.pow(xSRvar1[j], 0.45), torch.pow(xSRvar1[j], 0.4))
- show_uncer4(xSRvar2[j], torch.sqrt(xSRvar2[j]), torch.pow(xSRvar3[j], 0.8), xSRvar3[j])
- if task=='m':
- show_SR_w_err(xLR[j], xHR[j], xSR[j], elim=(0,0.04), task='m')
- show_uncer4(0.4*xSRvar1[j]+0.6*xSRvar2[j], torch.sqrt(xSRvar1[j]), torch.pow(xSRvar1[j], 0.48), torch.pow(xSRvar1[j], 0.42), ulim=(0.02,0.15))
- show_uncer4(xSRvar2[j], torch.sqrt(xSRvar2[j]), torch.pow(xSRvar3[j], 1.5), xSRvar3[j], ulim=(0.02,0.15))
-
-
-################# Degrading Identity
-def degrage_BayesCap_p(
- NetC,
- NetG,
- eval_loader,
- device='cuda',
- dtype=torch.cuda.FloatTensor,
- num_runs=50,
-):
- NetC.to(device)
- NetC.eval()
- NetG.to(device)
- NetG.eval()
-
- p_mag_list = [0, 0.05, 0.1, 0.15, 0.2]
- list_s = []
- list_p = []
- list_u1 = []
- list_u2 = []
- list_c = []
- for p_mag in p_mag_list:
- mean_ssim = 0
- mean_psnr = 0
- mean_mse = 0
- mean_mae = 0
- num_imgs = 0
- list_error = []
- list_error2 = []
- list_var = []
-
- with tqdm(eval_loader, unit='batch') as tepoch:
- for (idx, batch) in enumerate(tepoch):
- tepoch.set_description('Validating ...')
- ##
- xLR, xHR = batch[0].to(device), batch[1].to(device)
- xLR, xHR = xLR.type(dtype), xHR.type(dtype)
- # pass them through the network
- with torch.no_grad():
- xSR = NetG(xLR)
- xSRC_mu, xSRC_alpha, xSRC_beta = NetC(xSR + p_mag*xSR.max()*torch.randn_like(xSR))
- a_map = (1/(xSRC_alpha + 1e-5)).to('cpu').data
- b_map = xSRC_beta.to('cpu').data
- xSRvar = (a_map**2)*(torch.exp(torch.lgamma(3/(b_map + 1e-2)))/torch.exp(torch.lgamma(1/(b_map + 1e-2))))
- n_batch = xSRC_mu.shape[0]
- for j in range(n_batch):
- num_imgs += 1
- mean_ssim += img_ssim(xSRC_mu[j], xSR[j])
- mean_psnr += img_psnr(xSRC_mu[j], xSR[j])
- mean_mse += img_mse(xSRC_mu[j], xSR[j])
- mean_mae += img_mae(xSRC_mu[j], xSR[j])
-
- error_map = torch.mean(torch.pow(torch.abs(xSR[j]-xHR[j]),2), dim=0).to('cpu').data.reshape(-1)
- error_map2 = torch.mean(torch.pow(torch.abs(xSRC_mu[j]-xHR[j]),2), dim=0).to('cpu').data.reshape(-1)
- var_map = xSRvar[j].to('cpu').data.reshape(-1)
- list_error.extend(list(error_map.numpy()))
- list_error2.extend(list(error_map2.numpy()))
- list_var.extend(list(var_map.numpy()))
- ##
- mean_ssim /= num_imgs
- mean_psnr /= num_imgs
- mean_mse /= num_imgs
- mean_mae /= num_imgs
- print(
- 'Avg. SSIM: {} | Avg. PSNR: {} | Avg. MSE: {} | Avg. MAE: {}'.format
- (
- mean_ssim, mean_psnr, mean_mse, mean_mae
- )
- )
- uce1 = get_UCE(list_error[::100], list_var[::100], num_bins=200)[1]
- uce2 = get_UCE(list_error2[::100], list_var[::100], num_bins=200)[1]
- print('UCE1: ', uce1)
- print('UCE2: ', uce2)
- list_s.append(mean_ssim.item())
- list_p.append(mean_psnr.item())
- list_u1.append(uce1)
- list_u2.append(uce2)
-
- plt.plot(list_s)
- plt.show()
- plt.plot(list_p)
- plt.show()
-
- plt.plot(list_u1, label='wrt SR output')
- plt.plot(list_u2, label='wrt BayesCap output')
- plt.legend()
- plt.show()
-
- sns.set_style('darkgrid')
- fig,ax = plt.subplots()
- # make a plot
- ax.plot(p_mag_list, list_s, color="red", marker="o")
- # set x-axis label
- ax.set_xlabel("Reducing faithfulness of BayesCap Reconstruction",fontsize=10)
- # set y-axis label
- ax.set_ylabel("SSIM btwn BayesCap and SRGAN outputs", color="red",fontsize=10)
-
- # twin object for two different y-axis on the sample plot
- ax2=ax.twinx()
- # make a plot with different y-axis using second axis object
- ax2.plot(p_mag_list, list_u1, color="blue", marker="o", label='UCE wrt to error btwn SRGAN output and GT')
- ax2.plot(p_mag_list, list_u2, color="orange", marker="o", label='UCE wrt to error btwn BayesCap output and GT')
- ax2.set_ylabel("UCE", color="green", fontsize=10)
- plt.legend(fontsize=10)
- plt.tight_layout()
- plt.show()
-
-################# DeepFill_v2
-
-# ----------------------------------------
-# PATH processing
-# ----------------------------------------
-def text_readlines(filename):
-    # Try to read a txt file and return a list of lines. Return [] if the file cannot be opened.
- try:
- file = open(filename, 'r')
- except IOError:
- error = []
- return error
- content = file.readlines()
- # This for loop deletes the EOF (like \n)
- for i in range(len(content)):
- content[i] = content[i][:len(content[i])-1]
- file.close()
- return content
-
-def savetxt(name, loss_log):
- np_loss_log = np.array(loss_log)
- np.savetxt(name, np_loss_log)
-
-def get_files(path):
-    # Read a folder and return the full path of every file inside it
- ret = []
- for root, dirs, files in os.walk(path):
- for filespath in files:
- ret.append(os.path.join(root, filespath))
- return ret
-
-def get_names(path):
-    # Read a folder and return the file names (without directories)
- ret = []
- for root, dirs, files in os.walk(path):
- for filespath in files:
- ret.append(filespath)
- return ret
-
-def text_save(content, filename, mode = 'a'):
-    # Save a list variable to a txt file, one item per line
- file = open(filename, mode)
- for i in range(len(content)):
- file.write(str(content[i]) + '\n')
- file.close()
-
-def check_path(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-# ----------------------------------------
-# Validation and Sample at training
-# ----------------------------------------
-def save_sample_png(sample_folder, sample_name, img_list, name_list, pixel_max_cnt = 255):
- # Save image one-by-one
- for i in range(len(img_list)):
- img = img_list[i]
- # Recover normalization: * 255 because last layer is sigmoid activated
- img = img * 255
- # Process img_copy and do not destroy the data of img
- img_copy = img.clone().data.permute(0, 2, 3, 1)[0, :, :, :].cpu().numpy()
- img_copy = np.clip(img_copy, 0, pixel_max_cnt)
- img_copy = img_copy.astype(np.uint8)
- img_copy = cv2.cvtColor(img_copy, cv2.COLOR_RGB2BGR)
- # Save to certain path
- save_img_name = sample_name + '_' + name_list[i] + '.jpg'
- save_img_path = os.path.join(sample_folder, save_img_name)
- cv2.imwrite(save_img_path, img_copy)
-
-def psnr(pred, target, pixel_max_cnt = 255):
- mse = torch.mul(target - pred, target - pred)
- rmse_avg = (torch.mean(mse).item()) ** 0.5
- p = 20 * np.log10(pixel_max_cnt / rmse_avg)
- return p
-
-def grey_psnr(pred, target, pixel_max_cnt = 255):
- pred = torch.sum(pred, dim = 0)
- target = torch.sum(target, dim = 0)
- mse = torch.mul(target - pred, target - pred)
- rmse_avg = (torch.mean(mse).item()) ** 0.5
- p = 20 * np.log10(pixel_max_cnt * 3 / rmse_avg)
- return p
-
-def ssim(pred, target):
- pred = pred.clone().data.permute(0, 2, 3, 1).cpu().numpy()
- target = target.clone().data.permute(0, 2, 3, 1).cpu().numpy()
- target = target[0]
- pred = pred[0]
- ssim = skimage.measure.compare_ssim(target, pred, multichannel = True)
- return ssim
-
-## for contextual attention
-
-def extract_image_patches(images, ksizes, strides, rates, padding='same'):
- """
- Extract patches from images and put them in the C output dimension.
-    :param images: a 4-D Tensor with shape [batch, channels, in_rows, in_cols]
-    :param ksizes: [ksize_rows, ksize_cols], the size of the sliding window for
-    each dimension of images
-    :param strides: [stride_rows, stride_cols]
-    :param rates: [dilation_rows, dilation_cols]
-    :param padding: 'same' or 'valid'
-    :return: a Tensor of shape [N, C*k*k, L], where L is the number of patches
- """
- assert len(images.size()) == 4
- assert padding in ['same', 'valid']
- batch_size, channel, height, width = images.size()
-
- if padding == 'same':
- images = same_padding(images, ksizes, strides, rates)
- elif padding == 'valid':
- pass
- else:
- raise NotImplementedError('Unsupported padding type: {}.\
- Only "same" or "valid" are supported.'.format(padding))
-
- unfold = torch.nn.Unfold(kernel_size=ksizes,
- dilation=rates,
- padding=0,
- stride=strides)
- patches = unfold(images)
- return patches # [N, C*k*k, L], L is the total number of such blocks
-
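extract_image_patches above wraps torch.nn.Unfold and returns patches in a [N, C*k*k, L] layout. A self-contained shape check using Unfold directly (all sizes illustrative):

```python
import torch

# Illustrative sizes: a batch of 2 three-channel 8x8 maps, 3x3 windows, stride 1, 'valid' padding.
images = torch.randn(2, 3, 8, 8)
unfold = torch.nn.Unfold(kernel_size=(3, 3), dilation=(1, 1), padding=0, stride=(1, 1))
patches = unfold(images)

# C*k*k = 3*3*3 = 27 values per patch, L = 6*6 = 36 sliding positions on an 8x8 map.
print(patches.shape)  # torch.Size([2, 27, 36])
```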
-def same_padding(images, ksizes, strides, rates):
- assert len(images.size()) == 4
- batch_size, channel, rows, cols = images.size()
- out_rows = (rows + strides[0] - 1) // strides[0]
- out_cols = (cols + strides[1] - 1) // strides[1]
- effective_k_row = (ksizes[0] - 1) * rates[0] + 1
- effective_k_col = (ksizes[1] - 1) * rates[1] + 1
- padding_rows = max(0, (out_rows-1)*strides[0]+effective_k_row-rows)
- padding_cols = max(0, (out_cols-1)*strides[1]+effective_k_col-cols)
- # Pad the input
- padding_top = int(padding_rows / 2.)
- padding_left = int(padding_cols / 2.)
- padding_bottom = padding_rows - padding_top
- padding_right = padding_cols - padding_left
- paddings = (padding_left, padding_right, padding_top, padding_bottom)
- images = torch.nn.ZeroPad2d(paddings)(images)
- return images
-
-def reduce_mean(x, axis=None, keepdim=False):
- if not axis:
- axis = range(len(x.shape))
- for i in sorted(axis, reverse=True):
- x = torch.mean(x, dim=i, keepdim=keepdim)
- return x
-
-
-def reduce_std(x, axis=None, keepdim=False):
- if not axis:
- axis = range(len(x.shape))
- for i in sorted(axis, reverse=True):
- x = torch.std(x, dim=i, keepdim=keepdim)
- return x
-
-
-def reduce_sum(x, axis=None, keepdim=False):
- if not axis:
- axis = range(len(x.shape))
- for i in sorted(axis, reverse=True):
- x = torch.sum(x, dim=i, keepdim=keepdim)
- return x
-
-def random_mask(num_batch=1, mask_shape=(256,256)):
- list_mask = []
- for _ in range(num_batch):
- # rectangle mask
- image_height = mask_shape[0]
- image_width = mask_shape[1]
- max_delta_height = image_height//8
- max_delta_width = image_width//8
- height = image_height//4
- width = image_width//4
- max_t = image_height - height
- max_l = image_width - width
- t = random.randint(0, max_t)
- l = random.randint(0, max_l)
- # bbox = (t, l, height, width)
- h = random.randint(0, max_delta_height//2)
- w = random.randint(0, max_delta_width//2)
- mask = torch.zeros((1, 1, image_height, image_width))
- mask[:, :, t+h:t+height-h, l+w:l+width-w] = 1
- rect_mask = mask
-
- # brush mask
- min_num_vertex = 4
- max_num_vertex = 12
- mean_angle = 2 * math.pi / 5
- angle_range = 2 * math.pi / 15
- min_width = 12
- max_width = 40
- H, W = image_height, image_width
- average_radius = math.sqrt(H*H+W*W) / 8
- mask = Image.new('L', (W, H), 0)
-
- for _ in range(np.random.randint(1, 4)):
- num_vertex = np.random.randint(min_num_vertex, max_num_vertex)
- angle_min = mean_angle - np.random.uniform(0, angle_range)
- angle_max = mean_angle + np.random.uniform(0, angle_range)
- angles = []
- vertex = []
- for i in range(num_vertex):
- if i % 2 == 0:
- angles.append(2*math.pi - np.random.uniform(angle_min, angle_max))
- else:
- angles.append(np.random.uniform(angle_min, angle_max))
-
- h, w = mask.size
- vertex.append((int(np.random.randint(0, w)), int(np.random.randint(0, h))))
- for i in range(num_vertex):
- r = np.clip(
- np.random.normal(loc=average_radius, scale=average_radius//2),
- 0, 2*average_radius)
- new_x = np.clip(vertex[-1][0] + r * math.cos(angles[i]), 0, w)
- new_y = np.clip(vertex[-1][1] + r * math.sin(angles[i]), 0, h)
- vertex.append((int(new_x), int(new_y)))
-
- draw = ImageDraw.Draw(mask)
- width = int(np.random.uniform(min_width, max_width))
- draw.line(vertex, fill=255, width=width)
- for v in vertex:
- draw.ellipse((v[0] - width//2,
- v[1] - width//2,
- v[0] + width//2,
- v[1] + width//2),
- fill=255)
-
-            if np.random.normal() > 0:
-                mask = mask.transpose(Image.FLIP_LEFT_RIGHT)  # PIL transpose returns a new image
-            if np.random.normal() > 0:
-                mask = mask.transpose(Image.FLIP_TOP_BOTTOM)
-
- mask = transforms.ToTensor()(mask)
- mask = mask.reshape((1, 1, H, W))
- brush_mask = mask
-
- mask = torch.cat([rect_mask, brush_mask], dim=1).max(dim=1, keepdim=True)[0]
- list_mask.append(mask)
- mask = torch.cat(list_mask, dim=0)
- return mask
\ No newline at end of file
diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/models/experimental.py b/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/models/experimental.py
deleted file mode 100644
index 1f3c81900925e6fc8d4530e33379879afefc3f8a..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/models/experimental.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# This file contains experimental modules
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-from metadata.predictor_yolo_detector.models.common import Conv, DWConv
-from metadata.predictor_yolo_detector.utils.google_utils import attempt_download
-
-
-class CrossConv(nn.Module):
- # Cross Convolution Downsample
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
- # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
- super(CrossConv, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, (1, k), (1, s))
- self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class C3(nn.Module):
- # Cross Convolution CSP
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(C3, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
- self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
- self.act = nn.LeakyReLU(0.1, inplace=True)
- self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
-
-
-class Sum(nn.Module):
- # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
- def __init__(self, n, weight=False): # n: number of inputs
- super(Sum, self).__init__()
- self.weight = weight # apply weights boolean
- self.iter = range(n - 1) # iter object
- if weight:
- self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
-
- def forward(self, x):
- y = x[0] # no weight
- if self.weight:
- w = torch.sigmoid(self.w) * 2
- for i in self.iter:
- y = y + x[i + 1] * w[i]
- else:
- for i in self.iter:
- y = y + x[i + 1]
- return y
-
-
-class GhostConv(nn.Module):
- # Ghost Convolution https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
- super(GhostConv, self).__init__()
- c_ = c2 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, k, s, None, g, act)
- self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
-
- def forward(self, x):
- y = self.cv1(x)
- return torch.cat([y, self.cv2(y)], 1)
-
-
-class GhostBottleneck(nn.Module):
- # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k, s):
- super(GhostBottleneck, self).__init__()
- c_ = c2 // 2
- self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
- DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
- GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
- self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
- Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
-
- def forward(self, x):
- return self.conv(x) + self.shortcut(x)
-
-
-class MixConv2d(nn.Module):
- # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
- def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
- super(MixConv2d, self).__init__()
- groups = len(k)
- if equal_ch: # equal c_ per group
- i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
- c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
- else: # equal weight.numel() per group
- b = [c2] + [0] * groups
- a = np.eye(groups + 1, groups, k=-1)
- a -= np.roll(a, 1, axis=1)
- a *= np.array(k) ** 2
- a[0] = 1
- c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
-
- self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
- self.bn = nn.BatchNorm2d(c2)
- self.act = nn.LeakyReLU(0.1, inplace=True)
-
- def forward(self, x):
- return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
-
-
-class Ensemble(nn.ModuleList):
- # Ensemble of models
- def __init__(self):
- super(Ensemble, self).__init__()
-
- def forward(self, x, augment=False):
- y = []
- for module in self:
- y.append(module(x, augment)[0])
- # y = torch.stack(y).max(0)[0] # max ensemble
- # y = torch.cat(y, 1) # nms ensemble
- y = torch.stack(y).mean(0) # mean ensemble
- return y, None # inference, train output
-
-
-def attempt_load(weights, map_location=None):
- # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
- model = Ensemble()
- for w in weights if isinstance(weights, list) else [weights]:
- attempt_download(w)
- model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval()) # load FP32 model
-
- # Compatibility updates
- for m in model.modules():
- if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
- m.inplace = True # pytorch 1.7.0 compatibility
- elif type(m) is Conv:
- m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
-
- if len(model) == 1:
- return model[-1] # return model
- else:
- print('Ensemble created with %s\n' % weights)
- for k in ['names', 'stride']:
- setattr(model, k, getattr(model[-1], k))
- return model # return ensemble
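attempt_load above accepts either a single checkpoint path or a list of paths, and returns either one model or an Ensemble whose forward averages the member outputs. A minimal usage sketch under the assumption that compatible checkpoint files exist locally (the paths below are placeholders, not files shipped with this repository):

```python
import torch

# Hypothetical checkpoint paths; attempt_download fetches missing files if possible.
single = attempt_load('weights/best.pt', map_location=torch.device('cpu'))
ensemble = attempt_load(['weights/best.pt', 'weights/backup.pt'], map_location=torch.device('cpu'))

dummy = torch.zeros(1, 3, 640, 640)  # one 640x640 RGB image
with torch.no_grad():
    pred = single(dummy)             # standard YOLO forward pass
    pred_ens, _ = ensemble(dummy)    # Ensemble.forward returns (mean prediction, None)
```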
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/robust_scanner/robustscanner_r31_academic.py b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/robust_scanner/robustscanner_r31_academic.py
deleted file mode 100644
index 65a980b61684dee9929b7800ee82b4461ed2fc40..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/robust_scanner/robustscanner_r31_academic.py
+++ /dev/null
@@ -1,34 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/recog_models/robust_scanner.py',
- '../../_base_/schedules/schedule_adam_step_5e.py',
- '../../_base_/recog_pipelines/sar_pipeline.py',
- '../../_base_/recog_datasets/ST_SA_MJ_real_train.py',
- '../../_base_/recog_datasets/academic_test.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-data = dict(
- samples_per_gpu=64,
- workers_per_gpu=2,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/HubertSoft_Onnx.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/HubertSoft_Onnx.py
deleted file mode 100644
index 06f10a4ca79c429ed59ab9743578128e8db506cc..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/HubertSoft_Onnx.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import onnxruntime
-import torch
-
-class HubertSoft_Onnx(SpeechEncoder):
- def __init__(self,vec_path = "pretrain/hubert-soft.onnx",device=None):
- print("load model(s) from {}".format(vec_path))
- self.hidden_dim = 256
- if device is None:
- self.dev = torch.device("cpu")
- else:
- self.dev = torch.device(device)
- if device == 'cpu' or device == torch.device("cpu") or device is None:
- providers = ['CPUExecutionProvider']
- elif device == 'cuda' or device == torch.device("cuda"):
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def encoder(self, wav):
- feats = wav
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- feats = feats.unsqueeze(0).cpu().detach().numpy()
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)
- return torch.tensor(logits[0]).transpose(1, 2).to(self.dev)
\ No newline at end of file
diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/model.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/model.py
deleted file mode 100644
index e9d932f4d014f7b95b394d2e24ed5edc379ded8d..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/demucs/model.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import julius
-from torch import nn
-
-from .utils import capture_init, center_trim
-
-
-class BLSTM(nn.Module):
- def __init__(self, dim, layers=1):
- super().__init__()
- self.lstm = nn.LSTM(bidirectional=True, num_layers=layers, hidden_size=dim, input_size=dim)
- self.linear = nn.Linear(2 * dim, dim)
-
- def forward(self, x):
- x = x.permute(2, 0, 1)
- x = self.lstm(x)[0]
- x = self.linear(x)
- x = x.permute(1, 2, 0)
- return x
-
-
-def rescale_conv(conv, reference):
- std = conv.weight.std().detach()
- scale = (std / reference)**0.5
- conv.weight.data /= scale
- if conv.bias is not None:
- conv.bias.data /= scale
-
-
-def rescale_module(module, reference):
- for sub in module.modules():
- if isinstance(sub, (nn.Conv1d, nn.ConvTranspose1d)):
- rescale_conv(sub, reference)
-
-
-class Demucs(nn.Module):
- @capture_init
- def __init__(self,
- sources,
- audio_channels=2,
- channels=64,
- depth=6,
- rewrite=True,
- glu=True,
- rescale=0.1,
- resample=True,
- kernel_size=8,
- stride=4,
- growth=2.,
- lstm_layers=2,
- context=3,
- normalize=False,
- samplerate=44100,
- segment_length=4 * 10 * 44100):
- """
- Args:
- sources (list[str]): list of source names
- audio_channels (int): stereo or mono
- channels (int): first convolution channels
- depth (int): number of encoder/decoder layers
- rewrite (bool): add 1x1 convolution to each encoder layer
- and a convolution to each decoder layer.
- For the decoder layer, `context` gives the kernel size.
- glu (bool): use glu instead of ReLU
-            resample (bool): upsample x2 the input and downsample /2 the output.
-            rescale (float): rescale initial weights of convolutions
- to get their standard deviation closer to `rescale`
- kernel_size (int): kernel size for convolutions
- stride (int): stride for convolutions
- growth (float): multiply (resp divide) number of channels by that
- for each layer of the encoder (resp decoder)
- lstm_layers (int): number of lstm layers, 0 = no lstm
- context (int): kernel size of the convolution in the
- decoder before the transposed convolution. If > 1,
- will provide some context from neighboring time
- steps.
- samplerate (int): stored as meta information for easing
- future evaluations of the model.
- segment_length (int): stored as meta information for easing
- future evaluations of the model. Length of the segments on which
- the model was trained.
- """
-
- super().__init__()
- self.audio_channels = audio_channels
- self.sources = sources
- self.kernel_size = kernel_size
- self.context = context
- self.stride = stride
- self.depth = depth
- self.resample = resample
- self.channels = channels
- self.normalize = normalize
- self.samplerate = samplerate
- self.segment_length = segment_length
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
-
- if glu:
- activation = nn.GLU(dim=1)
- ch_scale = 2
- else:
- activation = nn.ReLU()
- ch_scale = 1
- in_channels = audio_channels
- for index in range(depth):
- encode = []
- encode += [nn.Conv1d(in_channels, channels, kernel_size, stride), nn.ReLU()]
- if rewrite:
- encode += [nn.Conv1d(channels, ch_scale * channels, 1), activation]
- self.encoder.append(nn.Sequential(*encode))
-
- decode = []
- if index > 0:
- out_channels = in_channels
- else:
- out_channels = len(self.sources) * audio_channels
- if rewrite:
- decode += [nn.Conv1d(channels, ch_scale * channels, context), activation]
- decode += [nn.ConvTranspose1d(channels, out_channels, kernel_size, stride)]
- if index > 0:
- decode.append(nn.ReLU())
- self.decoder.insert(0, nn.Sequential(*decode))
- in_channels = channels
- channels = int(growth * channels)
-
- channels = in_channels
-
- if lstm_layers:
- self.lstm = BLSTM(channels, lstm_layers)
- else:
- self.lstm = None
-
- if rescale:
- rescale_module(self, reference=rescale)
-
- def valid_length(self, length):
- """
- Return the nearest valid length to use with the model so that
- there is no time steps left over in a convolutions, e.g. for all
- layers, size of the input - kernel_size % stride = 0.
-
- If the mixture has a valid length, the estimated sources
- will have exactly the same length when context = 1. If context > 1,
- the two signals can be center trimmed to match.
-
-        For training, extracts should have a valid length. For evaluation
- on full tracks we recommend passing `pad = True` to :method:`forward`.
- """
- if self.resample:
- length *= 2
- for _ in range(self.depth):
- length = math.ceil((length - self.kernel_size) / self.stride) + 1
- length = max(1, length)
- length += self.context - 1
- for _ in range(self.depth):
- length = (length - 1) * self.stride + self.kernel_size
-
- if self.resample:
- length = math.ceil(length / 2)
- return int(length)
-
- def forward(self, mix):
- x = mix
-
- if self.normalize:
- mono = mix.mean(dim=1, keepdim=True)
- mean = mono.mean(dim=-1, keepdim=True)
- std = mono.std(dim=-1, keepdim=True)
- else:
- mean = 0
- std = 1
-
- x = (x - mean) / (1e-5 + std)
-
- if self.resample:
- x = julius.resample_frac(x, 1, 2)
-
- saved = []
- for encode in self.encoder:
- x = encode(x)
- saved.append(x)
- if self.lstm:
- x = self.lstm(x)
- for decode in self.decoder:
- skip = center_trim(saved.pop(-1), x)
- x = x + skip
- x = decode(x)
-
- if self.resample:
- x = julius.resample_frac(x, 2, 1)
- x = x * std + mean
- x = x.view(x.size(0), len(self.sources), self.audio_channels, x.size(-1))
- return x
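The Demucs constructor docstring above lists the main hyper-parameters, and valid_length describes how input lengths must align with the convolutional strides. A minimal sketch, assuming the surrounding package (capture_init, center_trim, julius) is importable, that builds a small model and pads a mixture to a valid length before separation; the source names and sizes are illustrative only:

```python
import torch
import torch.nn.functional as F

# Small illustrative configuration; pretrained Demucs models use larger settings.
model = Demucs(sources=["vocals", "accompaniment"], audio_channels=2, channels=16, depth=4)
model.eval()

mix = torch.randn(1, 2, 44100)                         # one second of stereo audio at 44.1 kHz
target_len = model.valid_length(mix.shape[-1])         # nearest length the conv stack accepts
mix = F.pad(mix, (0, max(0, target_len - mix.shape[-1])))

with torch.no_grad():
    out = model(mix)                                   # [batch, num_sources, audio_channels, time]
print(out.shape)
```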
diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/wav.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/wav.py
deleted file mode 100644
index a65c3b2ba5aacb1fcab3753f1f85ff7b8db7fc11..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/demucs/wav.py
+++ /dev/null
@@ -1,174 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import OrderedDict
-import hashlib
-import math
-import json
-from pathlib import Path
-
-import julius
-import torch as th
-from torch import distributed
-import torchaudio as ta
-from torch.nn import functional as F
-
-from .audio import convert_audio_channels
-from .compressed import get_musdb_tracks
-
-MIXTURE = "mixture"
-EXT = ".wav"
-
-
-def _track_metadata(track, sources):
- track_length = None
- track_samplerate = None
- for source in sources + [MIXTURE]:
- file = track / f"{source}{EXT}"
- info = ta.info(str(file))
- length = info.num_frames
- if track_length is None:
- track_length = length
- track_samplerate = info.sample_rate
- elif track_length != length:
- raise ValueError(
- f"Invalid length for file {file}: "
- f"expecting {track_length} but got {length}.")
- elif info.sample_rate != track_samplerate:
- raise ValueError(
- f"Invalid sample rate for file {file}: "
- f"expecting {track_samplerate} but got {info.sample_rate}.")
- if source == MIXTURE:
- wav, _ = ta.load(str(file))
- wav = wav.mean(0)
- mean = wav.mean().item()
- std = wav.std().item()
-
- return {"length": length, "mean": mean, "std": std, "samplerate": track_samplerate}
-
-
-def _build_metadata(path, sources):
- meta = {}
- path = Path(path)
- for file in path.iterdir():
- meta[file.name] = _track_metadata(file, sources)
- return meta
-
-
-class Wavset:
- def __init__(
- self,
- root, metadata, sources,
- length=None, stride=None, normalize=True,
- samplerate=44100, channels=2):
- """
- Waveset (or mp3 set for that matter). Can be used to train
- with arbitrary sources. Each track should be one folder inside of `path`.
- The folder should contain files named `{source}.{ext}`.
- Files will be grouped according to `sources` (each source is a list of
- filenames).
-
- Sample rate and channels will be converted on the fly.
-
- `length` is the sample size to extract (in samples, not duration).
- `stride` is how many samples to move by between each example.
- """
- self.root = Path(root)
- self.metadata = OrderedDict(metadata)
- self.length = length
- self.stride = stride or length
- self.normalize = normalize
- self.sources = sources
- self.channels = channels
- self.samplerate = samplerate
- self.num_examples = []
- for name, meta in self.metadata.items():
- track_length = int(self.samplerate * meta['length'] / meta['samplerate'])
- if length is None or track_length < length:
- examples = 1
- else:
- examples = int(math.ceil((track_length - self.length) / self.stride) + 1)
- self.num_examples.append(examples)
-
- def __len__(self):
- return sum(self.num_examples)
-
- def get_file(self, name, source):
- return self.root / name / f"{source}{EXT}"
-
- def __getitem__(self, index):
- for name, examples in zip(self.metadata, self.num_examples):
- if index >= examples:
- index -= examples
- continue
- meta = self.metadata[name]
- num_frames = -1
- offset = 0
- if self.length is not None:
- offset = int(math.ceil(
- meta['samplerate'] * self.stride * index / self.samplerate))
- num_frames = int(math.ceil(
- meta['samplerate'] * self.length / self.samplerate))
- wavs = []
- for source in self.sources:
- file = self.get_file(name, source)
- wav, _ = ta.load(str(file), frame_offset=offset, num_frames=num_frames)
- wav = convert_audio_channels(wav, self.channels)
- wavs.append(wav)
-
- example = th.stack(wavs)
- example = julius.resample_frac(example, meta['samplerate'], self.samplerate)
- if self.normalize:
- example = (example - meta['mean']) / meta['std']
- if self.length:
- example = example[..., :self.length]
- example = F.pad(example, (0, self.length - example.shape[-1]))
- return example
-
-
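The Wavset docstring above describes the expected on-disk layout: one folder per track, one `{source}.wav` file per source. A minimal sketch, assuming such a folder exists at a placeholder path, that builds the metadata and reads a batch of training excerpts (source names and sizes are illustrative):

```python
from torch.utils.data import DataLoader

sources = ["drums", "bass", "other", "vocals"]       # illustrative source names
root = "data/musdb_wav/train"                        # placeholder path, one sub-folder per track

metadata = _build_metadata(root, sources)            # scans {track}/{source}.wav files
dataset = Wavset(root, metadata, sources,
                 length=44100 * 5, stride=44100,     # 5-second excerpts, 1-second hop
                 samplerate=44100, channels=2)

loader = DataLoader(dataset, batch_size=4, shuffle=True)
batch = next(iter(loader))                           # [batch, num_sources, channels, length]
print(batch.shape)
```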
-def get_wav_datasets(args, samples, sources):
- sig = hashlib.sha1(str(args.wav).encode()).hexdigest()[:8]
- metadata_file = args.metadata / (sig + ".json")
- train_path = args.wav / "train"
- valid_path = args.wav / "valid"
- if not metadata_file.is_file() and args.rank == 0:
- train = _build_metadata(train_path, sources)
- valid = _build_metadata(valid_path, sources)
- json.dump([train, valid], open(metadata_file, "w"))
- if args.world_size > 1:
- distributed.barrier()
- train, valid = json.load(open(metadata_file))
- train_set = Wavset(train_path, train, sources,
- length=samples, stride=args.data_stride,
- samplerate=args.samplerate, channels=args.audio_channels,
- normalize=args.norm_wav)
- valid_set = Wavset(valid_path, valid, [MIXTURE] + sources,
- samplerate=args.samplerate, channels=args.audio_channels,
- normalize=args.norm_wav)
- return train_set, valid_set
-
-
-def get_musdb_wav_datasets(args, samples, sources):
- metadata_file = args.metadata / "musdb_wav.json"
- root = args.musdb / "train"
- if not metadata_file.is_file() and args.rank == 0:
- metadata = _build_metadata(root, sources)
- json.dump(metadata, open(metadata_file, "w"))
- if args.world_size > 1:
- distributed.barrier()
- metadata = json.load(open(metadata_file))
-
- train_tracks = get_musdb_tracks(args.musdb, is_wav=True, subsets=["train"], split="train")
- metadata_train = {name: meta for name, meta in metadata.items() if name in train_tracks}
- metadata_valid = {name: meta for name, meta in metadata.items() if name not in train_tracks}
- train_set = Wavset(root, metadata_train, sources,
- length=samples, stride=args.data_stride,
- samplerate=args.samplerate, channels=args.audio_channels,
- normalize=args.norm_wav)
- valid_set = Wavset(root, metadata_valid, [MIXTURE] + sources,
- samplerate=args.samplerate, channels=args.audio_channels,
- normalize=args.norm_wav)
- return train_set, valid_set
diff --git a/spaces/GMFTBY/PandaGPT/app.py b/spaces/GMFTBY/PandaGPT/app.py
deleted file mode 100644
index e196ea3afb3f2b424df615f3b846d4bc71472bfe..0000000000000000000000000000000000000000
--- a/spaces/GMFTBY/PandaGPT/app.py
+++ /dev/null
@@ -1,247 +0,0 @@
-from transformers import AutoModel, AutoTokenizer
-from copy import deepcopy
-import os
-import ipdb
-import gradio as gr
-import mdtex2html
-from model.openllama import OpenLLAMAPEFTModel
-import torch
-import json
-
-# init the model
-args = {
- 'model': 'openllama_peft',
- 'imagebind_ckpt_path': 'pretrained_ckpt/imagebind_ckpt',
- 'vicuna_ckpt_path': 'openllmplayground/vicuna_7b_v0',
- 'delta_ckpt_path': 'pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt',
- 'stage': 2,
- 'max_tgt_len': 128,
- 'lora_r': 32,
- 'lora_alpha': 32,
- 'lora_dropout': 0.1,
-}
-model = OpenLLAMAPEFTModel(**args)
-delta_ckpt = torch.load(args['delta_ckpt_path'], map_location=torch.device('cpu'))
-model.load_state_dict(delta_ckpt, strict=False)
-model = model.half().cuda().eval() if torch.cuda.is_available() else model.eval()
-print(f'[!] finished initializing the 7b model ...')
-
-"""Override Chatbot.postprocess"""
-
-
-def postprocess(self, y):
- if y is None:
- return []
- for i, (message, response) in enumerate(y):
- y[i] = (
- None if message is None else mdtex2html.convert((message)),
- None if response is None else mdtex2html.convert(response),
- )
- return y
-
-
-gr.Chatbot.postprocess = postprocess
-
-
-def parse_text(text):
- """copy from https://github.com/GaiZhenbiao/ChuanhuChatGPT/"""
- lines = text.split("\n")
- lines = [line for line in lines if line != ""]
- count = 0
- for i, line in enumerate(lines):
- if "```" in line:
- count += 1
- items = line.split('`')
- if count % 2 == 1:
-                lines[i] = f'<pre><code class="language-{items[-1]}">'
-            else:
-                lines[i] = f'<br></code></pre>'
- else:
- if i > 0:
- if count % 2 == 1:
- line = line.replace("`", "\`")
- line = line.replace("<", "<")
- line = line.replace(">", ">")
- line = line.replace(" ", " ")
- line = line.replace("*", "*")
- line = line.replace("_", "_")
- line = line.replace("-", "-")
- line = line.replace(".", ".")
- line = line.replace("!", "!")
- line = line.replace("(", "(")
- line = line.replace(")", ")")
- line = line.replace("$", "$")
- lines[i] = " "+line
- text = "".join(lines)
- return text
-
-
-def re_predict(
- input,
- image_path,
- audio_path,
- video_path,
- thermal_path,
- chatbot,
- max_length,
- top_p,
- temperature,
- history,
- modality_cache,
-):
- # drop the latest query and answers and generate again
- q, a = history.pop()
- chatbot.pop()
- return predict(q, image_path, audio_path, video_path, thermal_path, chatbot, max_length, top_p, temperature, history, modality_cache)
-
-
-def predict(
- input,
- image_path,
- audio_path,
- video_path,
- thermal_path,
- chatbot,
- max_length,
- top_p,
- temperature,
- history,
- modality_cache,
-):
- if image_path is None and audio_path is None and video_path is None and thermal_path is None:
- # return [(input, "图片和音频以及视频为空!请重新上传才能开启对话。")]
- gr.Error("图片和音频以及视频为空!请重新上传才能开启对话。")
- else:
- print(f'[!] image path: {image_path}\n[!] audio path: {audio_path}\n[!] video path: {video_path}\n[!] thermal path: {thermal_path}')
-
- # prepare the prompt
- prompt_text = ''
- for idx, (q, a) in enumerate(history):
- if idx == 0:
- prompt_text += f'{q}\n### Assistant: {a}\n###'
- else:
- prompt_text += f' Human: {q}\n### Assistant: {a}\n###'
- if len(history) == 0:
- prompt_text += f'{input}'
- else:
- prompt_text += f' Human: {input}'
-
- with torch.no_grad():
- response = model.generate({
- 'prompt': prompt_text,
- 'image_paths': [image_path] if image_path else [],
- 'audio_paths': [audio_path] if audio_path else [],
- 'video_paths': [video_path] if video_path else [],
- 'thermal_paths': [thermal_path] if thermal_path else [],
- 'top_p': top_p,
- 'temperature': temperature,
- 'max_tgt_len': max_length,
- 'modality_embeds': modality_cache
- })
- chatbot.append((parse_text(input), parse_text(response)))
- history.append((input, response))
- return chatbot, history, modality_cache
-
-
-def reset_user_input():
- return gr.update(value='')
-
-def reset_dialog():
- return [], []
-
-def reset_state():
- return None, None, None, None, [], [], []
-
-
-with gr.Blocks(scale=4) as demo:
- gr.HTML("""
PandaGPT
""")
-    gr.Markdown('''We note that the current online demo uses the 7B version of PandaGPT due to limited computation resources.
-
- Better results should be expected when switching to the 13B version of PandaGPT.
-
- For more details on how to run 13B PandaGPT, please refer to our [main project repository](https://github.com/yxuansu/PandaGPT).''')
-
- with gr.Row(scale=4):
- with gr.Column(scale=1):
- image_path = gr.Image(type="filepath", label="Image", value=None)
- with gr.Column(scale=1):
- audio_path = gr.Audio(type="filepath", label="Audio", value=None)
- with gr.Column(scale=1):
- video_path = gr.Video(type='file', label="Video")
- with gr.Column(scale=1):
- thermal_path = gr.Image(type="filepath", label="Thermal Image", value=None)
-
- chatbot = gr.Chatbot().style(height=300)
- with gr.Row():
- with gr.Column(scale=4):
- with gr.Column(scale=12):
- user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=10).style(container=False)
- with gr.Column(min_width=32, scale=1):
- with gr.Row(scale=1):
- submitBtn = gr.Button("Submit", variant="primary")
- with gr.Row(scale=1):
- resubmitBtn = gr.Button("Resubmit", variant="primary")
- with gr.Column(scale=1):
- emptyBtn = gr.Button("Clear History")
- max_length = gr.Slider(0, 400, value=256, step=1.0, label="Maximum length", interactive=True)
- top_p = gr.Slider(0, 1, value=0.01, step=0.01, label="Top P", interactive=True)
- temperature = gr.Slider(0, 1, value=1.0, step=0.01, label="Temperature", interactive=True)
-
- history = gr.State([])
- modality_cache = gr.State([])
-
- submitBtn.click(
- predict, [
- user_input,
- image_path,
- audio_path,
- video_path,
- thermal_path,
- chatbot,
- max_length,
- top_p,
- temperature,
- history,
- modality_cache,
- ], [
- chatbot,
- history,
- modality_cache
- ],
- show_progress=True
- )
-
- resubmitBtn.click(
- re_predict, [
- user_input,
- image_path,
- audio_path,
- video_path,
- thermal_path,
- chatbot,
- max_length,
- top_p,
- temperature,
- history,
- modality_cache,
- ], [
- chatbot,
- history,
- modality_cache
- ],
- show_progress=True
- )
-
-
- submitBtn.click(reset_user_input, [], [user_input])
- emptyBtn.click(reset_state, outputs=[
- image_path,
- audio_path,
- video_path,
- thermal_path,
- chatbot,
- history,
- modality_cache
- ], show_progress=True)
-
-demo.launch(enable_queue=True)
diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/unet.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/unet.py
deleted file mode 100644
index b61437a44ef7510e0c62afaae070deabc24c42bb..0000000000000000000000000000000000000000
--- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/unet.py
+++ /dev/null
@@ -1,635 +0,0 @@
-import math
-from abc import abstractmethod
-
-import torch as th
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .fp16_util import convert_module_to_f16, convert_module_to_f32
-from .nn import avg_pool_nd, conv_nd, linear, normalization, timestep_embedding, zero_module
-
-
-class TimestepBlock(nn.Module):
- """
- Any module where forward() takes timestep embeddings as a second argument.
- """
-
- @abstractmethod
- def forward(self, x, emb):
- """
- Apply the module to `x` given `emb` timestep embeddings.
- """
-
-
-class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
- """
- A sequential module that passes timestep embeddings to the children that
- support it as an extra input.
- """
-
- def forward(self, x, emb, encoder_out=None):
- for layer in self:
- if isinstance(layer, TimestepBlock):
- x = layer(x, emb)
- elif isinstance(layer, AttentionBlock):
- x = layer(x, encoder_out)
- else:
- x = layer(x)
- return x
-
-
-class Upsample(nn.Module):
- """
- An upsampling layer with an optional convolution.
-
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- upsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- if use_conv:
- self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=1)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.dims == 3:
- x = F.interpolate(x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest")
- else:
- x = F.interpolate(x, scale_factor=2, mode="nearest")
- if self.use_conv:
- x = self.conv(x)
- return x
-
-
-class Downsample(nn.Module):
- """
- A downsampling layer with an optional convolution.
-
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- downsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- stride = 2 if dims != 3 else (1, 2, 2)
- if use_conv:
- self.op = conv_nd(dims, self.channels, self.out_channels, 3, stride=stride, padding=1)
- else:
- assert self.channels == self.out_channels
- self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- return self.op(x)
-
-
-class ResBlock(TimestepBlock):
- """
- A residual block that can optionally change the number of channels.
-
- :param channels: the number of input channels.
- :param emb_channels: the number of timestep embedding channels.
- :param dropout: the rate of dropout.
- :param out_channels: if specified, the number of out channels.
- :param use_conv: if True and out_channels is specified, use a spatial
- convolution instead of a smaller 1x1 convolution to change the
- channels in the skip connection.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param use_checkpoint: if True, use gradient checkpointing on this module.
- :param up: if True, use this block for upsampling.
- :param down: if True, use this block for downsampling.
- """
-
- def __init__(
- self,
- channels,
- emb_channels,
- dropout,
- out_channels=None,
- use_conv=False,
- use_scale_shift_norm=False,
- dims=2,
- use_checkpoint=False,
- up=False,
- down=False,
- ):
- super().__init__()
- self.channels = channels
- self.emb_channels = emb_channels
- self.dropout = dropout
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_checkpoint = use_checkpoint
- self.use_scale_shift_norm = use_scale_shift_norm
-
- self.in_layers = nn.Sequential(
- normalization(channels, swish=1.0),
- nn.Identity(),
- conv_nd(dims, channels, self.out_channels, 3, padding=1),
- )
-
- self.updown = up or down
-
- if up:
- self.h_upd = Upsample(channels, False, dims)
- self.x_upd = Upsample(channels, False, dims)
- elif down:
- self.h_upd = Downsample(channels, False, dims)
- self.x_upd = Downsample(channels, False, dims)
- else:
- self.h_upd = self.x_upd = nn.Identity()
-
- self.emb_layers = nn.Sequential(
- nn.SiLU(),
- linear(
- emb_channels,
- 2 * self.out_channels if use_scale_shift_norm else self.out_channels,
- ),
- )
- self.out_layers = nn.Sequential(
- normalization(self.out_channels, swish=0.0 if use_scale_shift_norm else 1.0),
- nn.SiLU() if use_scale_shift_norm else nn.Identity(),
- nn.Dropout(p=dropout),
- zero_module(conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)),
- )
-
- if self.out_channels == channels:
- self.skip_connection = nn.Identity()
- elif use_conv:
- self.skip_connection = conv_nd(dims, channels, self.out_channels, 3, padding=1)
- else:
- self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)
-
- def forward(self, x, emb):
- """
- Apply the block to a Tensor, conditioned on a timestep embedding.
-
- :param x: an [N x C x ...] Tensor of features.
- :param emb: an [N x emb_channels] Tensor of timestep embeddings.
- :return: an [N x C x ...] Tensor of outputs.
- """
- if self.updown:
- in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]
- h = in_rest(x)
- h = self.h_upd(h)
- x = self.x_upd(x)
- h = in_conv(h)
- else:
- h = self.in_layers(x)
- emb_out = self.emb_layers(emb).type(h.dtype)
- while len(emb_out.shape) < len(h.shape):
- emb_out = emb_out[..., None]
- if self.use_scale_shift_norm:
- out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
- scale, shift = th.chunk(emb_out, 2, dim=1)
- h = out_norm(h) * (1 + scale) + shift
- h = out_rest(h)
- else:
- h = h + emb_out
- h = self.out_layers(h)
- return self.skip_connection(x) + h
-
-
-class AttentionBlock(nn.Module):
- """
- An attention block that allows spatial positions to attend to each other.
-
- Originally ported from here, but adapted to the N-d case.
- https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
- """
-
- def __init__(
- self,
- channels,
- num_heads=1,
- num_head_channels=-1,
- use_checkpoint=False,
- encoder_channels=None,
- ):
- super().__init__()
- self.channels = channels
- if num_head_channels == -1:
- self.num_heads = num_heads
- else:
- assert (
- channels % num_head_channels == 0
- ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
- self.num_heads = channels // num_head_channels
- self.use_checkpoint = use_checkpoint
- self.norm = normalization(channels, swish=0.0)
- self.qkv = conv_nd(1, channels, channels * 3, 1)
- self.attention = QKVAttention(self.num_heads)
-
- if encoder_channels is not None:
- self.encoder_kv = conv_nd(1, encoder_channels, channels * 2, 1)
- self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
-
- def forward(self, x, encoder_out=None):
- b, c, *spatial = x.shape
- qkv = self.qkv(self.norm(x).view(b, c, -1))
- if encoder_out is not None:
- encoder_out = self.encoder_kv(encoder_out)
- h = self.attention(qkv, encoder_out)
- else:
- h = self.attention(qkv)
- h = self.proj_out(h)
- return x + h.reshape(b, c, *spatial)
-
-
-class QKVAttention(nn.Module):
- """
-    A module which performs QKV attention. Matches legacy QKVAttention + input/output heads shaping.
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv, encoder_kv=None):
- """
- Apply QKV attention.
-
- :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1)
- if encoder_kv is not None:
- assert encoder_kv.shape[1] == self.n_heads * ch * 2
- ek, ev = encoder_kv.reshape(bs * self.n_heads, ch * 2, -1).split(ch, dim=1)
- k = th.cat([ek, k], dim=-1)
- v = th.cat([ev, v], dim=-1)
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = th.einsum(
- "bct,bcs->bts", q * scale, k * scale
- ) # More stable with f16 than dividing afterwards
- weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = th.einsum("bts,bcs->bct", weight, v)
- return a.reshape(bs, -1, length)
-
-
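QKVAttention above consumes a packed [N x (H*3*C) x T] tensor and returns [N x (H*C) x T]. A short, self-contained shape check (all sizes illustrative):

```python
import torch

n_heads, head_dim, batch, tokens = 4, 16, 2, 10
attn = QKVAttention(n_heads)

# Packed queries, keys and values: width = heads * 3 * per-head channels.
qkv = torch.randn(batch, n_heads * 3 * head_dim, tokens)
out = attn(qkv)
print(out.shape)  # torch.Size([2, 64, 10]) == [N, H*C, T]
```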
-class UNetModel(nn.Module):
- """
- The full UNet model with attention and timestep embedding.
-
- :param in_channels: channels in the input Tensor.
- :param model_channels: base channel count for the model.
- :param out_channels: channels in the output Tensor.
- :param num_res_blocks: number of residual blocks per downsample.
- :param attention_resolutions: a collection of downsample rates at which
- attention will take place. May be a set, list, or tuple.
- For example, if this contains 4, then at 4x downsampling, attention
- will be used.
- :param dropout: the dropout probability.
- :param channel_mult: channel multiplier for each level of the UNet.
- :param conv_resample: if True, use learned convolutions for upsampling and
- downsampling.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param num_classes: if specified (as an int), then this model will be
- class-conditional with `num_classes` classes.
- :param use_checkpoint: use gradient checkpointing to reduce memory usage.
- :param num_heads: the number of attention heads in each attention layer.
-    :param num_head_channels: if specified, ignore num_heads and instead use
- a fixed channel width per attention head.
- :param num_heads_upsample: works with num_heads to set a different number
- of heads for upsampling. Deprecated.
- :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
- :param resblock_updown: use residual blocks for up/downsampling.
- """
-
- def __init__(
- self,
- in_channels,
- model_channels,
- out_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- num_classes=None,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- encoder_channels=None,
- ):
- super().__init__()
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- if self.num_classes is not None:
- self.label_emb = nn.Embedding(num_classes, time_embed_dim)
-
- ch = input_ch = int(channel_mult[0] * model_channels)
- self.input_blocks = nn.ModuleList(
- [TimestepEmbedSequential(conv_nd(dims, in_channels, ch, 3, padding=1))]
- )
- self._feature_size = ch
- input_block_chans = [ch]
- ds = 1
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=int(mult * model_channels),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(mult * model_channels)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- encoder_channels=encoder_channels,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(ch, conv_resample, dims=dims, out_channels=out_ch)
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- encoder_channels=encoder_channels,
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
-
- self.output_blocks = nn.ModuleList([])
- for level, mult in list(enumerate(channel_mult))[::-1]:
- for i in range(num_res_blocks + 1):
- ich = input_block_chans.pop()
- layers = [
- ResBlock(
- ch + ich,
- time_embed_dim,
- dropout,
- out_channels=int(model_channels * mult),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(model_channels * mult)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads_upsample,
- num_head_channels=num_head_channels,
- encoder_channels=encoder_channels,
- )
- )
- if level and i == num_res_blocks:
- out_ch = ch
- layers.append(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- up=True,
- )
- if resblock_updown
- else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
- )
- ds //= 2
- self.output_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
-
- self.out = nn.Sequential(
- normalization(ch, swish=1.0),
- nn.Identity(),
- zero_module(conv_nd(dims, input_ch, out_channels, 3, padding=1)),
- )
- self.use_fp16 = use_fp16
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
- self.output_blocks.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
- self.output_blocks.apply(convert_module_to_f32)
-
- def forward(self, x, timesteps, y=None):
- """
- Apply the model to an input batch.
-
- :param x: an [N x C x ...] Tensor of inputs.
- :param timesteps: a 1-D batch of timesteps.
- :param y: an [N] Tensor of labels, if class-conditional.
- :return: an [N x C x ...] Tensor of outputs.
- """
- assert (y is not None) == (
- self.num_classes is not None
- ), "must specify y if and only if the model is class-conditional"
-
- hs = []
- emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
-
- if self.num_classes is not None:
- assert y.shape == (x.shape[0],)
- emb = emb + self.label_emb(y)
-
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb)
- hs.append(h)
- h = self.middle_block(h, emb)
- for module in self.output_blocks:
- h = th.cat([h, hs.pop()], dim=1)
- h = module(h, emb)
- h = h.type(x.dtype)
- return self.out(h)
-
-class SuperResUNetModel(UNetModel):
- """
- A UNetModel that performs super-resolution.
-
- Expects an extra kwarg `low_res` to condition on a low-resolution image.
- """
-
- def __init__(self, *args, **kwargs):
- if "in_channels" in kwargs:
- kwargs = dict(kwargs)
- kwargs["in_channels"] = kwargs["in_channels"] * 2
- else:
- # Curse you, Python. Or really, just curse positional arguments :|.
- args = list(args)
- args[1] = args[1] * 2
- super().__init__(*args, **kwargs)
-
- def forward(self, x, timesteps, low_res=None, **kwargs):
- _, _, new_height, new_width = x.shape
- upsampled = F.interpolate(low_res, (new_height, new_width), mode="bilinear")
- x = th.cat([x, upsampled], dim=1)
- return super().forward(x, timesteps, **kwargs)
-
-
-class InpaintUNetModel(UNetModel):
- """
- A UNetModel which can perform inpainting.
- """
-
- def __init__(self, *args, **kwargs):
- if "in_channels" in kwargs:
- kwargs = dict(kwargs)
- kwargs["in_channels"] = kwargs["in_channels"] * 2 + 1
- else:
- # Curse you, Python. Or really, just curse positional arguments :|.
- args = list(args)
- args[1] = args[1] * 2 + 1
- super().__init__(*args, **kwargs)
-
- def forward(self, x, timesteps, inpaint_image=None, inpaint_mask=None, **kwargs):
- if inpaint_image is None:
- inpaint_image = th.zeros_like(x)
- if inpaint_mask is None:
- inpaint_mask = th.zeros_like(x[:, :1])
- return super().forward(
- th.cat([x, inpaint_image * inpaint_mask, inpaint_mask], dim=1),
- timesteps,
- **kwargs,
- )
-
-
-class SuperResInpaintUNetModel(UNetModel):
- """
- A UNetModel which can perform both upsampling and inpainting.
- """
-
- def __init__(self, *args, **kwargs):
- if "in_channels" in kwargs:
- kwargs = dict(kwargs)
- kwargs["in_channels"] = kwargs["in_channels"] * 3 + 1
- else:
- # Curse you, Python. Or really, just curse positional arguments :|.
- args = list(args)
- args[1] = args[1] * 3 + 1
- super().__init__(*args, **kwargs)
-
- def forward(
- self,
- x,
- timesteps,
- inpaint_image=None,
- inpaint_mask=None,
- low_res=None,
- **kwargs,
- ):
- if inpaint_image is None:
- inpaint_image = th.zeros_like(x)
- if inpaint_mask is None:
- inpaint_mask = th.zeros_like(x[:, :1])
- _, _, new_height, new_width = x.shape
- upsampled = F.interpolate(low_res, (new_height, new_width), mode="bilinear")
- return super().forward(
- th.cat([x, inpaint_image * inpaint_mask, inpaint_mask, upsampled], dim=1),
- timesteps,
- **kwargs,
- )
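
The three conditional wrappers above differ only in how many channels they concatenate onto the noisy input before calling the base UNet. The sketch below is illustrative and not part of the deleted file; it restates that channel arithmetic in plain Python (the helper name is hypothetical) so the in_channels overrides are easy to verify.

# Illustrative sketch, assuming an RGB base of 3 channels; `conditioned_in_channels`
# is a hypothetical helper, not something defined in the deleted module.
def conditioned_in_channels(base: int, low_res: bool = False, inpaint: bool = False) -> int:
    """Channels the UNet must accept once conditioning tensors are concatenated."""
    ch = base                # the noisy image x
    if inpaint:
        ch += base + 1       # masked inpaint image + single-channel mask
    if low_res:
        ch += base           # bilinearly upsampled low-resolution image
    return ch

assert conditioned_in_channels(3, low_res=True) == 3 * 2                     # SuperResUNetModel
assert conditioned_in_channels(3, inpaint=True) == 3 * 2 + 1                 # InpaintUNetModel
assert conditioned_in_channels(3, low_res=True, inpaint=True) == 3 * 3 + 1   # SuperResInpaintUNetModel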
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/sphere_container_color_match.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/sphere_container_color_match.py
deleted file mode 100644
index 7f2c51d44fefaa3b4cde37a51d2f3479e2668e9c..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/sphere_container_color_match.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import os
-import random
-
-import numpy as np
-import pybullet as p
-
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class SphereContainerColorMatch(Task):
- """Pick up each sphere and place it into a container of the same color."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 4
- self.lang_template = "put the {color} sphere in the {color} container"
- self.task_completed_desc = "done matching spheres and containers."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Define colors and corresponding names
- colors = [utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green'], utils.COLORS['yellow']]
- color_names = ['red', 'blue', 'green', 'yellow']
-
- # Add containers.
- container_size = (0.12, 0.12, 0.12)
- container_urdf = 'container/container-template.urdf'
- containers = []
- for i in range(4):
- container_pose = self.get_random_pose(env, container_size)
- container_id = env.add_object(container_urdf, container_pose, color=colors[i])
- containers.append(container_id)
-
- # Add spheres.
- sphere_size = (0.04, 0.04, 0.04)
- sphere_urdf = 'sphere/sphere.urdf'
- spheres = []
- for i in range(4):
- sphere_pose = self.get_random_pose(env, sphere_size)
- sphere_id = env.add_object(sphere_urdf, sphere_pose, color=colors[i])
- spheres.append(sphere_id)
-
- # Goal: each sphere is in a container of the same color.
- for i in range(4):
- self.add_goal(objs=[spheres[i]], matches=np.ones((1, 1)), targ_poses=[p.getBasePositionAndOrientation(containers[i])], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1/4,
- language_goal=self.lang_template.format(color=color_names[i]))
\ No newline at end of file
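
The reset above creates four same-colored sphere/container pairs and registers one pose goal per pair, each worth a quarter of the episode reward. Below is a minimal, stand-alone sketch of that pairing logic; it is illustrative only, and the `Goal` record plus the integer ids are hypothetical stand-ins for cliport's environment handles and `add_goal` arguments.

from dataclasses import dataclass

@dataclass
class Goal:
    sphere_id: int
    container_id: int
    step_max_reward: float
    language_goal: str

def build_color_match_goals(spheres, containers, color_names):
    """One goal per same-color sphere/container pair, rewards summing to 1."""
    n = len(color_names)
    return [
        Goal(spheres[i], containers[i], 1 / n,
             f"put the {color} sphere in the {color} container")
        for i, color in enumerate(color_names)
    ]

if __name__ == "__main__":
    goals = build_color_match_goals([5, 6, 7, 8], [1, 2, 3, 4],
                                    ["red", "blue", "green", "yellow"])
    print(goals[0].language_goal)  # put the red sphere in the red container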
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_attention_lang_fusion.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_attention_lang_fusion.py
deleted file mode 100644
index bb18915a965f55357751d59bb8d5a8ca487bebce..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/models/streams/two_stream_attention_lang_fusion.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-from cliport.models.core.attention import Attention
-import cliport.models as models
-import cliport.models.core.fusion as fusion
-
-
-class TwoStreamAttentionLangFusion(Attention):
- """Two Stream Language-Conditioned Attention (a.k.a Pick) module."""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, preprocess, cfg, device):
- self.fusion_type = cfg['train']['attn_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, preprocess, cfg, device)
-
- def _build_nets(self):
- stream_one_fcn, stream_two_fcn = self.stream_fcn
-        stream_one_model = models.names[stream_one_fcn]  # resnet_lat.ResNet45_10s
- stream_two_model = models.names[stream_two_fcn] # clip_ligunet_lat.CLIP_LIGUnet_lat
-
- self.attn_stream_one = stream_one_model(self.in_shape, 1, self.cfg, self.device, self.preprocess)
- self.attn_stream_two = stream_two_model(self.in_shape, 1, self.cfg, self.device, self.preprocess)
- self.fusion = fusion.names[self.fusion_type](input_dim=1)
-
- print(f"Attn FCN - Stream One: {stream_one_fcn}, Stream Two: {stream_two_fcn}, Stream Fusion: {self.fusion_type}")
-
- def attend(self, x, l):
- x1 = self.attn_stream_one(x)
- x2 = self.attn_stream_two(x, l)
- x = self.fusion(x1, x2)
- return x
-
- def forward(self, inp_img, lang_goal, softmax=True):
- """Forward pass."""
- if len(inp_img.shape) < 4:
- inp_img = inp_img[None]
-
- if type(inp_img) is not torch.Tensor:
- in_data = inp_img # .reshape(in_shape)
- in_tens = torch.from_numpy(in_data.copy()).to(dtype=torch.float, device=self.device) # [B W H 6]
- else:
- in_data = inp_img
- in_tens = in_data
-
- # [B W H 6]
- in_tens = torch.nn.functional.pad(in_tens, tuple(self.padding[[2,1,0]].reshape(-1)), mode='constant')
-
- # Rotation pivot.
- pv = np.array(in_tens.shape[1:3]) // 2
-
- # Rotate input.
- in_tens = in_tens.permute(0, 3, 1, 2) # [B 6 W H]
-
- # in_tens = in_tens.repeat(self.n_rotations, 1, 1, 1)
- # make n copies, but keep batchsize
- in_tens = [in_tens] * self.n_rotations
- in_tens = self.rotator(in_tens, pivot=pv)
-
- # Forward pass.
- logits = self.attend(torch.cat(in_tens, dim=0), lang_goal)
-
- # Rotate back output.
- logits = self.rotator([logits], reverse=True, pivot=pv)
- logits = torch.cat(logits, dim=0)
- c0 = self.padding[:2, 0]
- c1 = c0 + inp_img[0].shape[:2]
- logits = logits[:, :, c0[0]:c1[0], c0[1]:c1[1]]
- output_shape = logits.shape
-
- # logits = logits.permute(1, 2, 3, 0) # [B W H 1]
- output = logits.reshape(len(logits), -1)
- if softmax:
- output = F.softmax(output, dim=-1)
- return output.view(output_shape)
-
-
-class TwoStreamAttentionLangFusionLat(TwoStreamAttentionLangFusion):
- """Language-Conditioned Attention (a.k.a Pick) module with lateral connections."""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, preprocess, cfg, device):
- self.fusion_type = cfg['train']['attn_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, preprocess, cfg, device)
-
- def attend(self, x, l):
- x1, lat = self.attn_stream_one(x)
- x2 = self.attn_stream_two(x, lat, l)
- x = self.fusion(x1, x2)
- return x
-
-
-
-class TwoStreamAttentionLangFusionLatReduce(TwoStreamAttentionLangFusion):
- """Language-Conditioned Attention (a.k.a Pick) module with lateral connections."""
-
- def __init__(self, stream_fcn, in_shape, n_rotations, preprocess, cfg, device):
- self.fusion_type = cfg['train']['attn_stream_fusion_type']
- super().__init__(stream_fcn, in_shape, n_rotations, preprocess, cfg, device)
-
- del self.attn_stream_one
- del self.attn_stream_two
-
- stream_one_fcn = 'plain_resnet_reduce_lat'
- stream_one_model = models.names[stream_one_fcn]
- stream_two_fcn = 'clip_ling'
- stream_two_model = models.names[stream_two_fcn]
-
- self.attn_stream_one = stream_one_model(self.in_shape, 1, self.cfg, self.device, self.preprocess)
- self.attn_stream_two = stream_two_model(self.in_shape, 1, self.cfg, self.device, self.preprocess)
-
- def attend(self, x, l):
- x1, lat = self.attn_stream_one(x)
- x2 = self.attn_stream_two(x, lat, l)
- x = self.fusion(x1, x2)
- return x
\ No newline at end of file
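
The forward pass above pads the heightmap, rotates n_rotations copies, runs both streams, fuses them, un-rotates, and finally flattens the map for a spatial softmax. The toy sketch below shows only the two-stream attend-and-fuse core; it is illustrative, the 1x1 convolutions are stand-ins for the ResNet and CLIP-LingUNet streams, and the language input is ignored here.

import torch
import torch.nn as nn

class ToyTwoStreamAttention(nn.Module):
    def __init__(self, in_channels: int = 6):
        super().__init__()
        self.stream_one = nn.Conv2d(in_channels, 1, kernel_size=1)  # visual-only stream
        self.stream_two = nn.Conv2d(in_channels, 1, kernel_size=1)  # language-conditioned stream (stub)
        self.fusion = lambda a, b: a + b                            # stand-in for fusion.names[...]

    def attend(self, x, lang_goal):
        return self.fusion(self.stream_one(x), self.stream_two(x))

    def forward(self, inp_img, lang_goal, softmax=True):
        logits = self.attend(inp_img, lang_goal)            # [B, 1, H, W]
        flat = logits.reshape(len(logits), -1)
        out = flat.softmax(dim=-1) if softmax else flat     # spatial softmax over H*W
        return out.view(logits.shape)

print(ToyTwoStreamAttention()(torch.rand(2, 6, 32, 32), "pack the red block").shape)
# torch.Size([2, 1, 32, 32])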
diff --git a/spaces/Giuvyz/rvc-genshin/app.py b/spaces/Giuvyz/rvc-genshin/app.py
deleted file mode 100644
index 506a7e77241e81155c39c1184e16ee067bcb7ace..0000000000000000000000000000000000000000
--- a/spaces/Giuvyz/rvc-genshin/app.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import os
-import json
-import argparse
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-from datetime import datetime
-from fairseq import checkpoint_utils
-from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
-from vc_infer_pipeline import VC
-from config import (
- is_half,
- device
-)
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
-
-def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
- def vc_fn(
- input_audio,
- f0_up_key,
- f0_method,
- index_rate,
- tts_mode,
- tts_text,
- tts_voice
- ):
- try:
- if tts_mode:
- if len(tts_text) > 100 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- else:
- if args.files:
- audio, sr = librosa.load(input_audio, sr=16000, mono=True)
- else:
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if duration > 20 and limitation:
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- )
- print(
- f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- )
- return "Success", (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_to_tts_mode(tts_mode):
- if tts_mode:
- return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
- else:
- return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- parser.add_argument("--files", action="store_true", default=False, help="load audio from path")
- args, unknown = parser.parse_known_args()
- load_hubert()
- models = []
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with open("weights/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for name, info in models_info.items():
- if not info['enable']:
- continue
- title = info['title']
- author = info.get("author", None)
- cover = f"weights/{name}/{info['cover']}"
- index = f"weights/{name}/{info['feature_retrieval_library']}"
- npy = f"weights/{name}/{info['feature_file']}"
- cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- del net_g.enc_q
-        print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line the weights are not loaded cleanly, oddly enough
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, device, is_half)
- models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
- with gr.Blocks() as app:
- gr.Markdown(
-            "# <center> RVC Models\n"
-            "## <center> The input audio should be clean and pure voice without background music.\n"
- "\n\n"
- "[](https://colab.research.google.com/drive/119oVwiNZWbml3y8APvjEtWpDPQCIucoG?usp=share_link)\n\n"
- "[](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n"
- "[](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n"
- "[](https://ko-fi.com/R6R7AH1FA)\n\n"
- )
- with gr.Tabs():
- for (name, title, author, cover, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
-                            '<div align="center">'
-                            f'<div>{title}</div>\n'+
-                            (f'<div>Model author: {author}</div>' if author else "")+
-                            (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
-                            '</div>'
- )
- with gr.Row():
- with gr.Column():
- if args.files:
- vc_input = gr.Textbox(label="Input audio path")
- else:
-                            vc_input = gr.Audio(label="Input audio" + (' (less than 20 seconds)' if limitation else ''))
- vc_transpose = gr.Number(label="Transpose", value=0)
- vc_f0method = gr.Radio(
- label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies",
- choices=["pm", "harvest"],
- value="pm",
- interactive=True,
- )
- vc_index_ratio = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- value=0.6,
- interactive=True,
- )
- tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False)
- tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- vc_submit = gr.Button("Generate", variant="primary")
- with gr.Column():
- vc_output1 = gr.Textbox(label="Output Message")
- vc_output2 = gr.Audio(label="Output Audio")
- vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2])
- tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice])
- app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share)
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/cleanup.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/cleanup.py
deleted file mode 100644
index 4bdd449d28c6733eb948f0502db04dbbade53ad2..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/cleanup.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Cleans up a PDB file using pdbfixer in preparation for OpenMM simulations.
-
-fix_pdb uses a third-party tool. We also support fixing some additional edge
-cases like removing chains of length one (see clean_structure).
-"""
-import io
-
-import pdbfixer
-from simtk.openmm import app
-from simtk.openmm.app import element
-
-
-def fix_pdb(pdbfile, alterations_info):
- """Apply pdbfixer to the contents of a PDB file; return a PDB string result.
-
- 1) Replaces nonstandard residues.
- 2) Removes heterogens (non protein residues) including water.
- 3) Adds missing residues and missing atoms within existing residues.
- 4) Adds hydrogens assuming pH=7.0.
- 5) KeepIds is currently true, so the fixer must keep the existing chain and
- residue identifiers. This will fail for some files in wider PDB that have
- invalid IDs.
-
- Args:
- pdbfile: Input PDB file handle.
- alterations_info: A dict that will store details of changes made.
-
- Returns:
- A PDB string representing the fixed structure.
- """
- fixer = pdbfixer.PDBFixer(pdbfile=pdbfile)
- fixer.findNonstandardResidues()
- alterations_info['nonstandard_residues'] = fixer.nonstandardResidues
- fixer.replaceNonstandardResidues()
- _remove_heterogens(fixer, alterations_info, keep_water=False)
- fixer.findMissingResidues()
- alterations_info['missing_residues'] = fixer.missingResidues
- fixer.findMissingAtoms()
- alterations_info['missing_heavy_atoms'] = fixer.missingAtoms
- alterations_info['missing_terminals'] = fixer.missingTerminals
- fixer.addMissingAtoms(seed=0)
- fixer.addMissingHydrogens()
- out_handle = io.StringIO()
- app.PDBFile.writeFile(fixer.topology, fixer.positions, out_handle,
- keepIds=True)
- return out_handle.getvalue()
-
-
-def clean_structure(pdb_structure, alterations_info):
- """Applies additional fixes to an OpenMM structure, to handle edge cases.
-
- Args:
- pdb_structure: An OpenMM structure to modify and fix.
- alterations_info: A dict that will store details of changes made.
- """
- _replace_met_se(pdb_structure, alterations_info)
- _remove_chains_of_length_one(pdb_structure, alterations_info)
-
-
-def _remove_heterogens(fixer, alterations_info, keep_water):
- """Removes the residues that Pdbfixer considers to be heterogens.
-
- Args:
- fixer: A Pdbfixer instance.
- alterations_info: A dict that will store details of changes made.
- keep_water: If True, water (HOH) is not considered to be a heterogen.
- """
- initial_resnames = set()
- for chain in fixer.topology.chains():
- for residue in chain.residues():
- initial_resnames.add(residue.name)
- fixer.removeHeterogens(keepWater=keep_water)
- final_resnames = set()
- for chain in fixer.topology.chains():
- for residue in chain.residues():
- final_resnames.add(residue.name)
- alterations_info['removed_heterogens'] = (
- initial_resnames.difference(final_resnames))
-
-
-def _replace_met_se(pdb_structure, alterations_info):
- """Replace the Se in any MET residues that were not marked as modified."""
- modified_met_residues = []
- for res in pdb_structure.iter_residues():
- name = res.get_name_with_spaces().strip()
- if name == 'MET':
- s_atom = res.get_atom('SD')
- if s_atom.element_symbol == 'Se':
- s_atom.element_symbol = 'S'
- s_atom.element = element.get_by_symbol('S')
- modified_met_residues.append(s_atom.residue_number)
- alterations_info['Se_in_MET'] = modified_met_residues
-
-
-def _remove_chains_of_length_one(pdb_structure, alterations_info):
- """Removes chains that correspond to a single amino acid.
-
- A single amino acid in a chain is both N and C terminus. There is no force
- template for this case.
-
- Args:
- pdb_structure: An OpenMM pdb_structure to modify and fix.
- alterations_info: A dict that will store details of changes made.
- """
- removed_chains = {}
- for model in pdb_structure.iter_models():
- valid_chains = [c for c in model.iter_chains() if len(c) > 1]
- invalid_chain_ids = [c.chain_id for c in model.iter_chains() if len(c) <= 1]
- model.chains = valid_chains
- for chain_id in invalid_chain_ids:
- model.chains_by_id.pop(chain_id)
- removed_chains[model.number] = invalid_chain_ids
- alterations_info['removed_chains'] = removed_chains
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w40_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w40_1x_coco.py
deleted file mode 100644
index d0fd9fa0284f17272c0785701f2ae81860bc04b6..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w40_1x_coco.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = './faster_rcnn_hrnetv2p_w32_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://msra/hrnetv2_w40',
- backbone=dict(
- type='HRNet',
- extra=dict(
- stage2=dict(num_channels=(40, 80)),
- stage3=dict(num_channels=(40, 80, 160)),
- stage4=dict(num_channels=(40, 80, 160, 320)))),
- neck=dict(type='HRFPN', in_channels=[40, 80, 160, 320], out_channels=256))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r101_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r101_fpn_1x_coco.py
deleted file mode 100644
index b2af6119319c03a8e213b2c352fc48e66bc8a822..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r101_fpn_1x_coco.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './rpn_r50_fpn_1x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/base_assigner.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/base_assigner.py
deleted file mode 100644
index 1ff0160dbb4bfbf53cb40d1d5cb29bcc3d197a59..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/base_assigner.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-
-class BaseAssigner(metaclass=ABCMeta):
- """Base assigner that assigns boxes to ground truth boxes."""
-
- @abstractmethod
- def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
- """Assign boxes to either a ground truth boxes or a negative boxes."""
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/gfl_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/gfl_head.py
deleted file mode 100644
index 961bc92237663ad5343d3d08eb9c0e4e811ada05..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/gfl_head.py
+++ /dev/null
@@ -1,647 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init
-from mmcv.runner import force_fp32
-
-from mmdet.core import (anchor_inside_flags, bbox2distance, bbox_overlaps,
- build_assigner, build_sampler, distance2bbox,
- images_to_levels, multi_apply, multiclass_nms,
- reduce_mean, unmap)
-from ..builder import HEADS, build_loss
-from .anchor_head import AnchorHead
-
-
-class Integral(nn.Module):
- """A fixed layer for calculating integral result from distribution.
-
-    This layer calculates the target location by :math:`\sum{P(y_i) * y_i}`,
-    where P(y_i) denotes the softmax vector that represents the discrete
-    distribution and y_i denotes the discrete set, usually {0, 1, 2, ..., reg_max}.
-
- Args:
- reg_max (int): The maximal value of the discrete set. Default: 16. You
- may want to reset it according to your new dataset or related
- settings.
- """
-
- def __init__(self, reg_max=16):
- super(Integral, self).__init__()
- self.reg_max = reg_max
- self.register_buffer('project',
- torch.linspace(0, self.reg_max, self.reg_max + 1))
-
- def forward(self, x):
- """Forward feature from the regression head to get integral result of
- bounding box location.
-
- Args:
- x (Tensor): Features of the regression head, shape (N, 4*(n+1)),
- n is self.reg_max.
-
- Returns:
- x (Tensor): Integral result of box locations, i.e., distance
- offsets from the box center in four directions, shape (N, 4).
- """
- x = F.softmax(x.reshape(-1, self.reg_max + 1), dim=1)
- x = F.linear(x, self.project.type_as(x)).reshape(-1, 4)
- return x
-
-
-@HEADS.register_module()
-class GFLHead(AnchorHead):
- """Generalized Focal Loss: Learning Qualified and Distributed Bounding
- Boxes for Dense Object Detection.
-
-    The GFL head structure is similar to ATSS; however, GFL uses
-    1) a joint representation for classification and localization quality, and
-    2) a flexible General distribution for bounding box locations,
-    which are supervised by Quality Focal Loss (QFL) and
-    Distribution Focal Loss (DFL), respectively.
-
- https://arxiv.org/abs/2006.04388
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- stacked_convs (int): Number of conv layers in cls and reg tower.
- Default: 4.
- conv_cfg (dict): dictionary to construct and config conv layer.
- Default: None.
- norm_cfg (dict): dictionary to construct and config norm layer.
- Default: dict(type='GN', num_groups=32, requires_grad=True).
- loss_qfl (dict): Config of Quality Focal Loss (QFL).
- reg_max (int): Max value of integral set :math: `{0, ..., reg_max}`
- in QFL setting. Default: 16.
- Example:
- >>> self = GFLHead(11, 7)
- >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
- >>> cls_quality_score, bbox_pred = self.forward(feats)
- >>> assert len(cls_quality_score) == len(self.scales)
- """
-
- def __init__(self,
- num_classes,
- in_channels,
- stacked_convs=4,
- conv_cfg=None,
- norm_cfg=dict(type='GN', num_groups=32, requires_grad=True),
- loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
- reg_max=16,
- **kwargs):
- self.stacked_convs = stacked_convs
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.reg_max = reg_max
- super(GFLHead, self).__init__(num_classes, in_channels, **kwargs)
-
- self.sampling = False
- if self.train_cfg:
- self.assigner = build_assigner(self.train_cfg.assigner)
- # SSD sampling=False so use PseudoSampler
- sampler_cfg = dict(type='PseudoSampler')
- self.sampler = build_sampler(sampler_cfg, context=self)
-
- self.integral = Integral(self.reg_max)
- self.loss_dfl = build_loss(loss_dfl)
-
- def _init_layers(self):
- """Initialize layers of the head."""
- self.relu = nn.ReLU(inplace=True)
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- for i in range(self.stacked_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- self.cls_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.reg_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- assert self.num_anchors == 1, 'anchor free version'
- self.gfl_cls = nn.Conv2d(
- self.feat_channels, self.cls_out_channels, 3, padding=1)
- self.gfl_reg = nn.Conv2d(
- self.feat_channels, 4 * (self.reg_max + 1), 3, padding=1)
- self.scales = nn.ModuleList(
- [Scale(1.0) for _ in self.anchor_generator.strides])
-
- def init_weights(self):
- """Initialize weights of the head."""
- for m in self.cls_convs:
- normal_init(m.conv, std=0.01)
- for m in self.reg_convs:
- normal_init(m.conv, std=0.01)
- bias_cls = bias_init_with_prob(0.01)
- normal_init(self.gfl_cls, std=0.01, bias=bias_cls)
- normal_init(self.gfl_reg, std=0.01)
-
- def forward(self, feats):
- """Forward features from the upstream network.
-
- Args:
- feats (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
-
- Returns:
- tuple: Usually a tuple of classification scores and bbox prediction
- cls_scores (list[Tensor]): Classification and quality (IoU)
- joint scores for all scale levels, each is a 4D-tensor,
- the channel number is num_classes.
- bbox_preds (list[Tensor]): Box distribution logits for all
- scale levels, each is a 4D-tensor, the channel number is
- 4*(n+1), n is max value of integral set.
- """
- return multi_apply(self.forward_single, feats, self.scales)
-
- def forward_single(self, x, scale):
- """Forward feature of a single scale level.
-
- Args:
- x (Tensor): Features of a single scale level.
- scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize
- the bbox prediction.
-
- Returns:
- tuple:
- cls_score (Tensor): Cls and quality joint scores for a single
- scale level the channel number is num_classes.
- bbox_pred (Tensor): Box distribution logits for a single scale
- level, the channel number is 4*(n+1), n is max value of
- integral set.
- """
- cls_feat = x
- reg_feat = x
- for cls_conv in self.cls_convs:
- cls_feat = cls_conv(cls_feat)
- for reg_conv in self.reg_convs:
- reg_feat = reg_conv(reg_feat)
- cls_score = self.gfl_cls(cls_feat)
- bbox_pred = scale(self.gfl_reg(reg_feat)).float()
- return cls_score, bbox_pred
-
- def anchor_center(self, anchors):
- """Get anchor centers from anchors.
-
- Args:
- anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format.
-
- Returns:
- Tensor: Anchor centers with shape (N, 2), "xy" format.
- """
- anchors_cx = (anchors[..., 2] + anchors[..., 0]) / 2
- anchors_cy = (anchors[..., 3] + anchors[..., 1]) / 2
- return torch.stack([anchors_cx, anchors_cy], dim=-1)
-
- def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights,
- bbox_targets, stride, num_total_samples):
- """Compute loss of a single scale level.
-
- Args:
- anchors (Tensor): Box reference for each scale level with shape
- (N, num_total_anchors, 4).
- cls_score (Tensor): Cls and quality joint scores for each scale
- level has shape (N, num_classes, H, W).
- bbox_pred (Tensor): Box distribution logits for each scale
- level with shape (N, 4*(n+1), H, W), n is max value of integral
- set.
-            labels (Tensor): Labels of each anchor with shape
-                (N, num_total_anchors).
-            label_weights (Tensor): Label weights of each anchor with shape
-                (N, num_total_anchors).
-            bbox_targets (Tensor): BBox regression targets of each anchor with
-                shape (N, num_total_anchors, 4).
- stride (tuple): Stride in this scale level.
- num_total_samples (int): Number of positive samples that is
- reduced over all GPUs.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- assert stride[0] == stride[1], 'h stride is not equal to w stride!'
- anchors = anchors.reshape(-1, 4)
- cls_score = cls_score.permute(0, 2, 3,
- 1).reshape(-1, self.cls_out_channels)
- bbox_pred = bbox_pred.permute(0, 2, 3,
- 1).reshape(-1, 4 * (self.reg_max + 1))
- bbox_targets = bbox_targets.reshape(-1, 4)
- labels = labels.reshape(-1)
- label_weights = label_weights.reshape(-1)
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- bg_class_ind = self.num_classes
- pos_inds = ((labels >= 0)
- & (labels < bg_class_ind)).nonzero().squeeze(1)
- score = label_weights.new_zeros(labels.shape)
-
- if len(pos_inds) > 0:
- pos_bbox_targets = bbox_targets[pos_inds]
- pos_bbox_pred = bbox_pred[pos_inds]
- pos_anchors = anchors[pos_inds]
- pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0]
-
- weight_targets = cls_score.detach().sigmoid()
- weight_targets = weight_targets.max(dim=1)[0][pos_inds]
- pos_bbox_pred_corners = self.integral(pos_bbox_pred)
- pos_decode_bbox_pred = distance2bbox(pos_anchor_centers,
- pos_bbox_pred_corners)
- pos_decode_bbox_targets = pos_bbox_targets / stride[0]
- score[pos_inds] = bbox_overlaps(
- pos_decode_bbox_pred.detach(),
- pos_decode_bbox_targets,
- is_aligned=True)
- pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1)
- target_corners = bbox2distance(pos_anchor_centers,
- pos_decode_bbox_targets,
- self.reg_max).reshape(-1)
-
- # regression loss
- loss_bbox = self.loss_bbox(
- pos_decode_bbox_pred,
- pos_decode_bbox_targets,
- weight=weight_targets,
- avg_factor=1.0)
-
- # dfl loss
- loss_dfl = self.loss_dfl(
- pred_corners,
- target_corners,
- weight=weight_targets[:, None].expand(-1, 4).reshape(-1),
- avg_factor=4.0)
- else:
- loss_bbox = bbox_pred.sum() * 0
- loss_dfl = bbox_pred.sum() * 0
- weight_targets = bbox_pred.new_tensor(0)
-
- # cls (qfl) loss
- loss_cls = self.loss_cls(
- cls_score, (labels, score),
- weight=label_weights,
- avg_factor=num_total_samples)
-
- return loss_cls, loss_bbox, loss_dfl, weight_targets.sum()
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Cls and quality scores for each scale
- level has shape (N, num_classes, H, W).
- bbox_preds (list[Tensor]): Box distribution logits for each scale
- level with shape (N, 4*(n+1), H, W), n is max value of integral
- set.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None): specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
-
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.anchor_generator.num_levels
-
- device = cls_scores[0].device
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
-
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels)
- if cls_reg_targets is None:
- return None
-
- (anchor_list, labels_list, label_weights_list, bbox_targets_list,
- bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets
-
- num_total_samples = reduce_mean(
- torch.tensor(num_total_pos, dtype=torch.float,
- device=device)).item()
- num_total_samples = max(num_total_samples, 1.0)
-
- losses_cls, losses_bbox, losses_dfl,\
- avg_factor = multi_apply(
- self.loss_single,
- anchor_list,
- cls_scores,
- bbox_preds,
- labels_list,
- label_weights_list,
- bbox_targets_list,
- self.anchor_generator.strides,
- num_total_samples=num_total_samples)
-
- avg_factor = sum(avg_factor)
- avg_factor = reduce_mean(avg_factor).item()
- losses_bbox = list(map(lambda x: x / avg_factor, losses_bbox))
- losses_dfl = list(map(lambda x: x / avg_factor, losses_dfl))
- return dict(
- loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dfl=losses_dfl)
-
- def _get_bboxes(self,
- cls_scores,
- bbox_preds,
- mlvl_anchors,
- img_shapes,
- scale_factors,
- cfg,
- rescale=False,
- with_nms=True):
- """Transform outputs for a single batch item into labeled boxes.
-
- Args:
- cls_scores (list[Tensor]): Box scores for a single scale level
- has shape (N, num_classes, H, W).
- bbox_preds (list[Tensor]): Box distribution logits for a single
- scale level with shape (N, 4*(n+1), H, W), n is max value of
- integral set.
- mlvl_anchors (list[Tensor]): Box reference for a single scale level
- with shape (num_total_anchors, 4).
- img_shapes (list[tuple[int]]): Shape of the input image,
- list[(height, width, 3)].
-            scale_factors (list[ndarray]): Scale factor of the image arranged as
-                (w_scale, h_scale, w_scale, h_scale).
- cfg (mmcv.Config | None): Test / postprocessing configuration,
- if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before return boxes.
- Default: True.
-
- Returns:
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
- The first item is an (n, 5) tensor, where 5 represent
- (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1.
- The shape of the second tensor in the tuple is (n,), and
- each element represents the class label of the corresponding
- box.
- """
- cfg = self.test_cfg if cfg is None else cfg
- assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors)
- batch_size = cls_scores[0].shape[0]
-
- mlvl_bboxes = []
- mlvl_scores = []
- for cls_score, bbox_pred, stride, anchors in zip(
- cls_scores, bbox_preds, self.anchor_generator.strides,
- mlvl_anchors):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
- assert stride[0] == stride[1]
- scores = cls_score.permute(0, 2, 3, 1).reshape(
- batch_size, -1, self.cls_out_channels).sigmoid()
- bbox_pred = bbox_pred.permute(0, 2, 3, 1)
-
- bbox_pred = self.integral(bbox_pred) * stride[0]
- bbox_pred = bbox_pred.reshape(batch_size, -1, 4)
-
- nms_pre = cfg.get('nms_pre', -1)
- if nms_pre > 0 and scores.shape[1] > nms_pre:
- max_scores, _ = scores.max(-1)
- _, topk_inds = max_scores.topk(nms_pre)
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds).long()
- anchors = anchors[topk_inds, :]
- bbox_pred = bbox_pred[batch_inds, topk_inds, :]
- scores = scores[batch_inds, topk_inds, :]
- else:
- anchors = anchors.expand_as(bbox_pred)
-
- bboxes = distance2bbox(
- self.anchor_center(anchors), bbox_pred, max_shape=img_shapes)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
-
- batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
- if rescale:
- batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
- scale_factors).unsqueeze(1)
-
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
- # Add a dummy background class to the backend when using sigmoid
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
- # BG cat_id: num_class
- padding = batch_mlvl_scores.new_zeros(batch_size,
- batch_mlvl_scores.shape[1], 1)
- batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
-
- if with_nms:
- det_results = []
- for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes,
- batch_mlvl_scores):
- det_bbox, det_label = multiclass_nms(mlvl_bboxes, mlvl_scores,
- cfg.score_thr, cfg.nms,
- cfg.max_per_img)
- det_results.append(tuple([det_bbox, det_label]))
- else:
- det_results = [
- tuple(mlvl_bs)
- for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores)
- ]
- return det_results
-
- def get_targets(self,
- anchor_list,
- valid_flag_list,
- gt_bboxes_list,
- img_metas,
- gt_bboxes_ignore_list=None,
- gt_labels_list=None,
- label_channels=1,
- unmap_outputs=True):
- """Get targets for GFL head.
-
- This method is almost the same as `AnchorHead.get_targets()`. Besides
- returning the targets as the parent method does, it also returns the
- anchors as the first element of the returned tuple.
- """
- num_imgs = len(img_metas)
- assert len(anchor_list) == len(valid_flag_list) == num_imgs
-
- # anchor number of multi levels
- num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
- num_level_anchors_list = [num_level_anchors] * num_imgs
-
- # concat all level anchors and flags to a single tensor
- for i in range(num_imgs):
- assert len(anchor_list[i]) == len(valid_flag_list[i])
- anchor_list[i] = torch.cat(anchor_list[i])
- valid_flag_list[i] = torch.cat(valid_flag_list[i])
-
- # compute targets for each image
- if gt_bboxes_ignore_list is None:
- gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
- if gt_labels_list is None:
- gt_labels_list = [None for _ in range(num_imgs)]
- (all_anchors, all_labels, all_label_weights, all_bbox_targets,
- all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply(
- self._get_target_single,
- anchor_list,
- valid_flag_list,
- num_level_anchors_list,
- gt_bboxes_list,
- gt_bboxes_ignore_list,
- gt_labels_list,
- img_metas,
- label_channels=label_channels,
- unmap_outputs=unmap_outputs)
- # no valid anchors
- if any([labels is None for labels in all_labels]):
- return None
- # sampled anchors of all images
- num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])
- num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])
- # split targets to a list w.r.t. multiple levels
- anchors_list = images_to_levels(all_anchors, num_level_anchors)
- labels_list = images_to_levels(all_labels, num_level_anchors)
- label_weights_list = images_to_levels(all_label_weights,
- num_level_anchors)
- bbox_targets_list = images_to_levels(all_bbox_targets,
- num_level_anchors)
- bbox_weights_list = images_to_levels(all_bbox_weights,
- num_level_anchors)
- return (anchors_list, labels_list, label_weights_list,
- bbox_targets_list, bbox_weights_list, num_total_pos,
- num_total_neg)
-
- def _get_target_single(self,
- flat_anchors,
- valid_flags,
- num_level_anchors,
- gt_bboxes,
- gt_bboxes_ignore,
- gt_labels,
- img_meta,
- label_channels=1,
- unmap_outputs=True):
- """Compute regression, classification targets for anchors in a single
- image.
-
- Args:
- flat_anchors (Tensor): Multi-level anchors of the image, which are
- concatenated into a single tensor of shape (num_anchors, 4)
- valid_flags (Tensor): Multi level valid flags of the image,
- which are concatenated into a single tensor of
- shape (num_anchors,).
-            num_level_anchors (Tensor): Number of anchors of each scale level.
- gt_bboxes (Tensor): Ground truth bboxes of the image,
- shape (num_gts, 4).
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4).
- gt_labels (Tensor): Ground truth labels of each box,
- shape (num_gts,).
- img_meta (dict): Meta info of the image.
- label_channels (int): Channel of label.
- unmap_outputs (bool): Whether to map outputs back to the original
- set of anchors.
-
- Returns:
- tuple: N is the number of total anchors in the image.
- anchors (Tensor): All anchors in the image with shape (N, 4).
- labels (Tensor): Labels of all anchors in the image with shape
- (N,).
- label_weights (Tensor): Label weights of all anchor in the
- image with shape (N,).
- bbox_targets (Tensor): BBox targets of all anchors in the
- image with shape (N, 4).
- bbox_weights (Tensor): BBox weights of all anchors in the
- image with shape (N, 4).
- pos_inds (Tensor): Indices of positive anchor with shape
- (num_pos,).
- neg_inds (Tensor): Indices of negative anchor with shape
- (num_neg,).
- """
- inside_flags = anchor_inside_flags(flat_anchors, valid_flags,
- img_meta['img_shape'][:2],
- self.train_cfg.allowed_border)
- if not inside_flags.any():
- return (None, ) * 7
- # assign gt and sample anchors
- anchors = flat_anchors[inside_flags, :]
-
- num_level_anchors_inside = self.get_num_level_anchors_inside(
- num_level_anchors, inside_flags)
- assign_result = self.assigner.assign(anchors, num_level_anchors_inside,
- gt_bboxes, gt_bboxes_ignore,
- gt_labels)
-
- sampling_result = self.sampler.sample(assign_result, anchors,
- gt_bboxes)
-
- num_valid_anchors = anchors.shape[0]
- bbox_targets = torch.zeros_like(anchors)
- bbox_weights = torch.zeros_like(anchors)
- labels = anchors.new_full((num_valid_anchors, ),
- self.num_classes,
- dtype=torch.long)
- label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float)
-
- pos_inds = sampling_result.pos_inds
- neg_inds = sampling_result.neg_inds
- if len(pos_inds) > 0:
- pos_bbox_targets = sampling_result.pos_gt_bboxes
- bbox_targets[pos_inds, :] = pos_bbox_targets
- bbox_weights[pos_inds, :] = 1.0
- if gt_labels is None:
- # Only rpn gives gt_labels as None
- # Foreground is the first class
- labels[pos_inds] = 0
- else:
- labels[pos_inds] = gt_labels[
- sampling_result.pos_assigned_gt_inds]
- if self.train_cfg.pos_weight <= 0:
- label_weights[pos_inds] = 1.0
- else:
- label_weights[pos_inds] = self.train_cfg.pos_weight
- if len(neg_inds) > 0:
- label_weights[neg_inds] = 1.0
-
- # map up to original set of anchors
- if unmap_outputs:
- num_total_anchors = flat_anchors.size(0)
- anchors = unmap(anchors, num_total_anchors, inside_flags)
- labels = unmap(
- labels, num_total_anchors, inside_flags, fill=self.num_classes)
- label_weights = unmap(label_weights, num_total_anchors,
- inside_flags)
- bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags)
- bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags)
-
- return (anchors, labels, label_weights, bbox_targets, bbox_weights,
- pos_inds, neg_inds)
-
- def get_num_level_anchors_inside(self, num_level_anchors, inside_flags):
- split_inside_flags = torch.split(inside_flags, num_level_anchors)
- num_level_anchors_inside = [
- int(flags.sum()) for flags in split_inside_flags
- ]
- return num_level_anchors_inside
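
The Integral layer above turns each 4*(reg_max+1) vector of distribution logits into four expected distances via sum_i P(y_i) * y_i. The snippet below is an illustrative, stand-alone restatement of that decoding step using plain torch; the function name is an assumption, not part of the deleted module.

import torch
import torch.nn.functional as F

def integral_decode(reg_logits: torch.Tensor, reg_max: int = 16) -> torch.Tensor:
    """(N, 4*(reg_max+1)) distribution logits -> (N, 4) expected l/t/r/b offsets."""
    project = torch.linspace(0, reg_max, reg_max + 1)         # the discrete set {0, ..., reg_max}
    probs = F.softmax(reg_logits.reshape(-1, reg_max + 1), dim=1)
    return F.linear(probs, project[None]).reshape(-1, 4)      # expectation per box side

out = integral_decode(torch.randn(8, 4 * 17))
print(out.shape)  # torch.Size([8, 4])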
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101b-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101b-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index f36eb02e68707d502cbe315ff8f6f25b232dee92..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101b-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './fcn_r50-d8_769x769_80k_cityscapes.py'
-model = dict(
- pretrained='torchvision://resnet101',
- backbone=dict(type='ResNet', depth=101))
diff --git a/spaces/GroveStreet/GTA_SOVITS/modules/F0Predictor/crepe.py b/spaces/GroveStreet/GTA_SOVITS/modules/F0Predictor/crepe.py
deleted file mode 100644
index c6fb45c79bcd306202a2c0282b3d73a8074ced5d..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/modules/F0Predictor/crepe.py
+++ /dev/null
@@ -1,340 +0,0 @@
-from typing import Optional, Union
-try:
-    from typing import Literal
-except ImportError:
-    from typing_extensions import Literal
-import numpy as np
-import torch
-import torchcrepe
-from torch import nn
-from torch.nn import functional as F
-import scipy.ndimage
-
-#from:https://github.com/fishaudio/fish-diffusion
-
-def repeat_expand(
- content: Union[torch.Tensor, np.ndarray], target_len: int, mode: str = "nearest"
-):
- """Repeat content to target length.
- This is a wrapper of torch.nn.functional.interpolate.
-
- Args:
- content (torch.Tensor): tensor
- target_len (int): target length
- mode (str, optional): interpolation mode. Defaults to "nearest".
-
- Returns:
- torch.Tensor: tensor
- """
-
- ndim = content.ndim
-
- if content.ndim == 1:
- content = content[None, None]
- elif content.ndim == 2:
- content = content[None]
-
- assert content.ndim == 3
-
- is_np = isinstance(content, np.ndarray)
- if is_np:
- content = torch.from_numpy(content)
-
- results = torch.nn.functional.interpolate(content, size=target_len, mode=mode)
-
- if is_np:
- results = results.numpy()
-
- if ndim == 1:
- return results[0, 0]
- elif ndim == 2:
- return results[0]
-
-
-class BasePitchExtractor:
- def __init__(
- self,
- hop_length: int = 512,
- f0_min: float = 50.0,
- f0_max: float = 1100.0,
- keep_zeros: bool = True,
- ):
- """Base pitch extractor.
-
- Args:
- hop_length (int, optional): Hop length. Defaults to 512.
- f0_min (float, optional): Minimum f0. Defaults to 50.0.
- f0_max (float, optional): Maximum f0. Defaults to 1100.0.
- keep_zeros (bool, optional): Whether keep zeros in pitch. Defaults to True.
- """
-
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.keep_zeros = keep_zeros
-
- def __call__(self, x, sampling_rate=44100, pad_to=None):
- raise NotImplementedError("BasePitchExtractor is not callable.")
-
- def post_process(self, x, sampling_rate, f0, pad_to):
- if isinstance(f0, np.ndarray):
- f0 = torch.from_numpy(f0).float().to(x.device)
-
- if pad_to is None:
- return f0
-
- f0 = repeat_expand(f0, pad_to)
-
- if self.keep_zeros:
- return f0
-
- vuv_vector = torch.zeros_like(f0)
- vuv_vector[f0 > 0.0] = 1.0
- vuv_vector[f0 <= 0.0] = 0.0
-
-        # Drop zero-frequency (unvoiced) frames and linearly interpolate over them
- nzindex = torch.nonzero(f0).squeeze()
- f0 = torch.index_select(f0, dim=0, index=nzindex).cpu().numpy()
- time_org = self.hop_length / sampling_rate * nzindex.cpu().numpy()
- time_frame = np.arange(pad_to) * self.hop_length / sampling_rate
-
- if f0.shape[0] <= 0:
- return torch.zeros(pad_to, dtype=torch.float, device=x.device),torch.zeros(pad_to, dtype=torch.float, device=x.device)
-
- if f0.shape[0] == 1:
- return torch.ones(pad_to, dtype=torch.float, device=x.device) * f0[0],torch.ones(pad_to, dtype=torch.float, device=x.device)
-
-        # This could probably be rewritten with torch
- f0 = np.interp(time_frame, time_org, f0, left=f0[0], right=f0[-1])
- vuv_vector = vuv_vector.cpu().numpy()
- vuv_vector = np.ceil(scipy.ndimage.zoom(vuv_vector,pad_to/len(vuv_vector),order = 0))
-
- return f0,vuv_vector
-
-
-class MaskedAvgPool1d(nn.Module):
- def __init__(
- self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0
- ):
- """An implementation of mean pooling that supports masked values.
-
- Args:
-            kernel_size (int): The size of the mean pooling window.
-            stride (int, optional): The stride of the mean pooling window. Defaults to None.
-            padding (int, optional): The padding of the mean pooling window. Defaults to 0.
- """
-
- super(MaskedAvgPool1d, self).__init__()
- self.kernel_size = kernel_size
- self.stride = stride or kernel_size
- self.padding = padding
-
- def forward(self, x, mask=None):
- ndim = x.dim()
- if ndim == 2:
- x = x.unsqueeze(1)
-
- assert (
- x.dim() == 3
- ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)"
-
- # Apply the mask by setting masked elements to zero, or make NaNs zero
- if mask is None:
- mask = ~torch.isnan(x)
-
- # Ensure mask has the same shape as the input tensor
- assert x.shape == mask.shape, "Input tensor and mask must have the same shape"
-
- masked_x = torch.where(mask, x, torch.zeros_like(x))
- # Create a ones kernel with the same number of channels as the input tensor
- ones_kernel = torch.ones(x.size(1), 1, self.kernel_size, device=x.device)
-
- # Perform sum pooling
- sum_pooled = nn.functional.conv1d(
- masked_x,
- ones_kernel,
- stride=self.stride,
- padding=self.padding,
- groups=x.size(1),
- )
-
- # Count the non-masked (valid) elements in each pooling window
- valid_count = nn.functional.conv1d(
- mask.float(),
- ones_kernel,
- stride=self.stride,
- padding=self.padding,
- groups=x.size(1),
- )
- valid_count = valid_count.clamp(min=1) # Avoid division by zero
-
- # Perform masked average pooling
- avg_pooled = sum_pooled / valid_count
-
- # Fill zero values with NaNs
- avg_pooled[avg_pooled == 0] = float("nan")
-
- if ndim == 2:
- return avg_pooled.squeeze(1)
-
- return avg_pooled
-
-
-class MaskedMedianPool1d(nn.Module):
- def __init__(
- self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0
- ):
- """An implementation of median pooling that supports masked values.
-
- This implementation is inspired by the median pooling implementation in
- https://gist.github.com/rwightman/f2d3849281624be7c0f11c85c87c1598
-
- Args:
- kernel_size (int): The size of the median pooling window.
- stride (int, optional): The stride of the median pooling window. Defaults to None.
- padding (int, optional): The padding of the median pooling window. Defaults to 0.
- """
-
- super(MaskedMedianPool1d, self).__init__()
- self.kernel_size = kernel_size
- self.stride = stride or kernel_size
- self.padding = padding
-
- def forward(self, x, mask=None):
- ndim = x.dim()
- if ndim == 2:
- x = x.unsqueeze(1)
-
- assert (
- x.dim() == 3
- ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)"
-
- if mask is None:
- mask = ~torch.isnan(x)
-
- assert x.shape == mask.shape, "Input tensor and mask must have the same shape"
-
- masked_x = torch.where(mask, x, torch.zeros_like(x))
-
- x = F.pad(masked_x, (self.padding, self.padding), mode="reflect")
- mask = F.pad(
- mask.float(), (self.padding, self.padding), mode="constant", value=0
- )
-
- x = x.unfold(2, self.kernel_size, self.stride)
- mask = mask.unfold(2, self.kernel_size, self.stride)
-
- x = x.contiguous().view(x.size()[:3] + (-1,))
- mask = mask.contiguous().view(mask.size()[:3] + (-1,)).to(x.device)
-
- # Combine the mask with the input tensor
- #x_masked = torch.where(mask.bool(), x, torch.fill_(torch.zeros_like(x),float("inf")))
- x_masked = torch.where(mask.bool(), x, torch.FloatTensor([float("inf")]).to(x.device))
-
- # Sort the masked tensor along the last dimension
- x_sorted, _ = torch.sort(x_masked, dim=-1)
-
- # Compute the count of non-masked (valid) values
- valid_count = mask.sum(dim=-1)
-
- # Calculate the index of the median value for each pooling window
- median_idx = (torch.div((valid_count - 1), 2, rounding_mode='trunc')).clamp(min=0)
-
- # Gather the median values using the calculated indices
- median_pooled = x_sorted.gather(-1, median_idx.unsqueeze(-1).long()).squeeze(-1)
-
- # Fill infinite values with NaNs
- median_pooled[torch.isinf(median_pooled)] = float("nan")
-
- if ndim == 2:
- return median_pooled.squeeze(1)
-
- return median_pooled
-
-
-class CrepePitchExtractor(BasePitchExtractor):
- def __init__(
- self,
- hop_length: int = 512,
- f0_min: float = 50.0,
- f0_max: float = 1100.0,
- threshold: float = 0.05,
- keep_zeros: bool = False,
- device = None,
- model: Literal["full", "tiny"] = "full",
- use_fast_filters: bool = True,
- decoder="viterbi"
- ):
- super().__init__(hop_length, f0_min, f0_max, keep_zeros)
- if decoder == "viterbi":
- self.decoder = torchcrepe.decode.viterbi
- elif decoder == "argmax":
- self.decoder = torchcrepe.decode.argmax
- elif decoder == "weighted_argmax":
- self.decoder = torchcrepe.decode.weighted_argmax
- else:
-            raise ValueError("Unknown decoder")
- self.threshold = threshold
- self.model = model
- self.use_fast_filters = use_fast_filters
- self.hop_length = hop_length
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- if self.use_fast_filters:
- self.median_filter = MaskedMedianPool1d(3, 1, 1).to(device)
- self.mean_filter = MaskedAvgPool1d(3, 1, 1).to(device)
-
- def __call__(self, x, sampling_rate=44100, pad_to=None):
- """Extract pitch using crepe.
-
-
- Args:
- x (torch.Tensor): Audio signal, shape (1, T).
- sampling_rate (int, optional): Sampling rate. Defaults to 44100.
- pad_to (int, optional): Pad to length. Defaults to None.
-
- Returns:
- torch.Tensor: Pitch, shape (T // hop_length,).
- """
-
- assert x.ndim == 2, f"Expected 2D tensor, got {x.ndim}D tensor."
- assert x.shape[0] == 1, f"Expected 1 channel, got {x.shape[0]} channels."
-
- x = x.to(self.dev)
- f0, pd = torchcrepe.predict(
- x,
- sampling_rate,
- self.hop_length,
- self.f0_min,
- self.f0_max,
- pad=True,
- model=self.model,
- batch_size=1024,
- device=x.device,
- return_periodicity=True,
- decoder=self.decoder
- )
-
-        # Filter, remove silence, apply the uv threshold; see the original repository's README
- if self.use_fast_filters:
- pd = self.median_filter(pd)
- else:
- pd = torchcrepe.filter.median(pd, 3)
-
- pd = torchcrepe.threshold.Silence(-60.0)(pd, x, sampling_rate, 512)
- f0 = torchcrepe.threshold.At(self.threshold)(f0, pd)
-
- if self.use_fast_filters:
- f0 = self.mean_filter(f0)
- else:
- f0 = torchcrepe.filter.mean(f0, 3)
-
- f0 = torch.where(torch.isnan(f0), torch.full_like(f0, 0), f0)[0]
-
- if torch.all(f0 == 0):
- rtn = f0.cpu().numpy() if pad_to is None else np.zeros(pad_to)
- return rtn, rtn
-
- return self.post_process(x, sampling_rate, f0, pad_to)
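A minimal usage sketch for the extractor above, assuming `torchcrepe` is installed and that `CrepePitchExtractor` (together with the `BasePitchExtractor` it builds on) is importable from this module; the module name `pitch_extractor`, and the random audio, are placeholders.

```python
import torch

from pitch_extractor import CrepePitchExtractor  # placeholder module name

# Build the extractor; "tiny" trades accuracy for speed, "viterbi" is the decoder used here.
extractor = CrepePitchExtractor(
    hop_length=512,
    f0_min=50.0,
    f0_max=1100.0,
    model="tiny",
    decoder="viterbi",
)

audio = torch.randn(1, 44100)  # one second of placeholder mono audio, shape (1, T)
result = extractor(audio, sampling_rate=44100)  # pitch values spaced hop_length samples apart
```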
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/__init__.py
deleted file mode 100644
index 9bad5790a5799b96f2e164d825c0b1f8ec0c2dfb..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# coding=utf-8
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_dataset.py
deleted file mode 100644
index ad74784d2d7920e4a6225282d95543ce16ea50d9..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/prepend_dataset.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import BaseWrapperDataset
-
-
-class PrependDataset(BaseWrapperDataset):
- def __init__(self, dataset, prepend_getter, ensure_first_token_is=None):
- super().__init__(dataset)
- self.prepend_getter = prepend_getter
- self.ensure_first_token = ensure_first_token_is
-
- def __getitem__(self, idx):
- item = self.dataset[idx]
- is_tuple = isinstance(item, tuple)
- src = item[0] if is_tuple else item
-
- assert self.ensure_first_token is None or src[0] == self.ensure_first_token
- prepend_idx = self.prepend_getter(self.dataset, idx)
- assert isinstance(prepend_idx, int)
- src[0] = prepend_idx
- item = tuple((src,) + item[1:]) if is_tuple else src
- return item
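A small illustrative sketch of what the wrapper above does: a stub dataset stands in for a real fairseq dataset so the first-token replacement is easy to see, and the token ids are made up.

```python
import torch

from fairseq.data.prepend_dataset import PrependDataset


class StubDataset(torch.utils.data.Dataset):
    """Stand-in for a real fairseq dataset whose items are 1-D LongTensors."""

    def __init__(self, items):
        self.items = items

    def __getitem__(self, idx):
        return self.items[idx]

    def __len__(self):
        return len(self.items)


base = StubDataset([torch.LongTensor([0, 11, 12, 2])])  # 0 = BOS, 2 = EOS (made-up ids)
ds = PrependDataset(
    base,
    prepend_getter=lambda dataset, idx: 7,  # e.g. a language-id token
    ensure_first_token_is=0,                # assert every sample currently starts with BOS
)
print(ds[0])  # tensor([ 7, 11, 12,  2]) -- the BOS position now holds the prepended id
```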
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/hifi_gan/inference_e2e.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/hifi_gan/inference_e2e.py
deleted file mode 100644
index 062aecd4280925336ab1d36420d2cd47febf661c..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/hifi_gan/inference_e2e.py
+++ /dev/null
@@ -1,91 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import glob
-import os
-import numpy as np
-import argparse
-import json
-import torch
-from scipy.io.wavfile import write
-from env import AttrDict
-from meldataset import MAX_WAV_VALUE
-from models import Generator
-
-h = None
-device = None
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + "*")
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return ""
- return sorted(cp_list)[-1]
-
-
-def inference(a):
- generator = Generator(h).to(device)
-
- state_dict_g = load_checkpoint(a.checkpoint_file, device)
- generator.load_state_dict(state_dict_g["generator"])
-
- filelist = os.listdir(a.input_mels_dir)
-
- os.makedirs(a.output_dir, exist_ok=True)
-
- generator.eval()
- generator.remove_weight_norm()
- with torch.no_grad():
- for i, filname in enumerate(filelist):
- x = np.load(os.path.join(a.input_mels_dir, filname))
- x = torch.FloatTensor(x).to(device)
- y_g_hat = generator(x)
- audio = y_g_hat.squeeze()
- audio = audio * MAX_WAV_VALUE
- audio = audio.cpu().numpy().astype("int16")
-
- output_file = os.path.join(
- a.output_dir, os.path.splitext(filname)[0] + "_generated_e2e.wav"
- )
- write(output_file, h.sampling_rate, audio)
- print(output_file)
-
-
-def main():
- print("Initializing Inference Process..")
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--input_mels_dir", default="test_mel_files")
- parser.add_argument("--output_dir", default="generated_files_from_mel")
- parser.add_argument("--checkpoint_file", required=True)
- a = parser.parse_args()
-
- config_file = os.path.join(os.path.split(a.checkpoint_file)[0], "config.json")
- with open(config_file) as f:
- data = f.read()
-
- global h
- json_config = json.loads(data)
- h = AttrDict(json_config)
-
- torch.manual_seed(h.seed)
- global device
- if torch.cuda.is_available():
- torch.cuda.manual_seed(h.seed)
- device = torch.device("cuda")
- else:
- device = torch.device("cpu")
-
- inference(a)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/sgd.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/sgd.py
deleted file mode 100644
index 8e34fb99a18fff12ab76be5894a84cbbb2f48176..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/sgd.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch.optim
-
-from . import LegacyFairseqOptimizer, register_optimizer
-
-
-@register_optimizer("sgd")
-class SGD(LegacyFairseqOptimizer):
- def __init__(self, args, params):
- super().__init__(args)
- self._optimizer = torch.optim.SGD(params, **self.optimizer_config)
-
- @staticmethod
- def add_args(parser):
- """Add optimizer-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('--momentum', default=0.0, type=float, metavar='M',
- help='momentum factor')
- parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD',
- help='weight decay')
- # fmt: on
-
- @property
- def optimizer_config(self):
- """
- Return a kwarg dictionary that will be used to override optimizer
- args stored in checkpoints. This allows us to load a checkpoint and
- resume training using a different set of optimizer args, e.g., with a
- different learning rate.
- """
- return {
- "lr": self.args.lr[0],
- "momentum": self.args.momentum,
- "weight_decay": self.args.weight_decay,
- }
-
- @property
- def supports_flat_params(self):
- return True
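For reference, the kwargs returned by `optimizer_config` above map directly onto `torch.optim.SGD`; a plain-PyTorch sketch with hypothetical hyper-parameters (matching e.g. `--lr 0.1 --momentum 0.9 --weight-decay 1e-4`):

```python
import torch

params = [torch.nn.Parameter(torch.zeros(8))]  # placeholder parameters
optimizer = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=1e-4)

loss = (params[0] ** 2).sum()
loss.backward()
optimizer.step()
```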
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/aws/resume.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/aws/resume.py
deleted file mode 100644
index b21731c979a121ab8227280351b70d6062efd983..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/aws/resume.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Resume all interrupted trainings in yolov5/ dir including DDP trainings
-# Usage: $ python utils/aws/resume.py
-
-import os
-import sys
-from pathlib import Path
-
-import torch
-import yaml
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[2] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-port = 0 # --master_port
-path = Path('').resolve()
-for last in path.rglob('*/**/last.pt'):
- ckpt = torch.load(last)
- if ckpt['optimizer'] is None:
- continue
-
- # Load opt.yaml
- with open(last.parent.parent / 'opt.yaml', errors='ignore') as f:
- opt = yaml.safe_load(f)
-
- # Get device count
- d = opt['device'].split(',') # devices
- nd = len(d) # number of devices
- ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel
-
- if ddp: # multi-GPU
- port += 1
- cmd = f'python -m torch.distributed.run --nproc_per_node {nd} --master_port {port} train.py --resume {last}'
- else: # single-GPU
- cmd = f'python train.py --resume {last}'
-
- cmd += ' > /dev/null 2>&1 &' # redirect output to /dev/null and run in the background
- print(cmd)
- os.system(cmd)
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/vdecoder/__init__.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/vdecoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/data_utils.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/data_utils.py
deleted file mode 100644
index bd67adc7d42da7b9ff4ca11e543d8cc9cd34e60b..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/data_utils.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import time
-import os
-import random
-import numpy as np
-import torch
-import torch.utils.data
-
-import modules.commons as commons
-import utils
-from modules.mel_processing import spectrogram_torch, spec_to_mel_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-
-# import h5py
-
-
-"""Multi speaker version"""
-
-
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths, hparams):
- self.audiopaths = load_filepaths_and_text(audiopaths)
- self.max_wav_value = hparams.data.max_wav_value
- self.sampling_rate = hparams.data.sampling_rate
- self.filter_length = hparams.data.filter_length
- self.hop_length = hparams.data.hop_length
- self.win_length = hparams.data.win_length
- self.sampling_rate = hparams.data.sampling_rate
- self.use_sr = hparams.train.use_sr
- self.spec_len = hparams.train.max_speclen
- self.spk_map = hparams.spk
-
- random.seed(1234)
- random.shuffle(self.audiopaths)
-
- def get_audio(self, filename):
- filename = filename.replace("\\", "/")
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
-
- spk = filename.split("/")[-2]
- spk = torch.LongTensor([self.spk_map[spk]])
-
- f0 = np.load(filename + ".f0.npy")
- f0, uv = utils.interpolate_f0(f0)
- f0 = torch.FloatTensor(f0)
- uv = torch.FloatTensor(uv)
-
- c = torch.load(filename+ ".soft.pt")
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[0])
-
-
- lmin = min(c.size(-1), spec.size(-1))
- assert abs(c.size(-1) - spec.size(-1)) < 3, (c.size(-1), spec.size(-1), f0.shape, filename)
- assert abs(audio_norm.shape[1]-lmin * self.hop_length) < 3 * self.hop_length
- spec, c, f0, uv = spec[:, :lmin], c[:, :lmin], f0[:lmin], uv[:lmin]
- audio_norm = audio_norm[:, :lmin * self.hop_length]
- if spec.shape[1] < 60:
- print("skip too short audio:", filename)
- return None
- if spec.shape[1] > 800:
- start = random.randint(0, spec.shape[1]-800)
- end = start + 790
- spec, c, f0, uv = spec[:, start:end], c[:, start:end], f0[start:end], uv[start:end]
- audio_norm = audio_norm[:, start * self.hop_length : end * self.hop_length]
-
- return c, f0, spec, audio_norm, spk, uv
-
- def __getitem__(self, index):
- return self.get_audio(self.audiopaths[index][0])
-
- def __len__(self):
- return len(self.audiopaths)
-
-
-class TextAudioCollate:
-
- def __call__(self, batch):
- batch = [b for b in batch if b is not None]
-
- input_lengths, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].shape[1] for x in batch]),
- dim=0, descending=True)
-
- max_c_len = max([x[0].size(1) for x in batch])
- max_wav_len = max([x[3].size(1) for x in batch])
-
- lengths = torch.LongTensor(len(batch))
-
- c_padded = torch.FloatTensor(len(batch), batch[0][0].shape[0], max_c_len)
- f0_padded = torch.FloatTensor(len(batch), max_c_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][2].shape[0], max_c_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- spkids = torch.LongTensor(len(batch), 1)
- uv_padded = torch.FloatTensor(len(batch), max_c_len)
-
- c_padded.zero_()
- spec_padded.zero_()
- f0_padded.zero_()
- wav_padded.zero_()
- uv_padded.zero_()
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- c = row[0]
- c_padded[i, :, :c.size(1)] = c
- lengths[i] = c.size(1)
-
- f0 = row[1]
- f0_padded[i, :f0.size(0)] = f0
-
- spec = row[2]
- spec_padded[i, :, :spec.size(1)] = spec
-
- wav = row[3]
- wav_padded[i, :, :wav.size(1)] = wav
-
- spkids[i, 0] = row[4]
-
- uv = row[5]
- uv_padded[i, :uv.size(0)] = uv
-
- return c_padded, f0_padded, spec_padded, wav_padded, spkids, lengths, uv_padded
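A sketch of how the loader and collate function above are typically wired together; the filelist path and the `hps` object are placeholders (in this codebase they normally come from the training config via `utils.get_hparams()`).

```python
import torch.utils.data

# hps is assumed to be the usual HParams object with .data / .train / .spk sections.
dataset = TextAudioSpeakerLoader("filelists/train.txt", hps)
loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=8,
    shuffle=True,
    num_workers=4,
    collate_fn=TextAudioCollate(),
)

for c, f0, spec, wav, spk, lengths, uv in loader:
    # one zero-padded batch, filled in order of decreasing content length
    break
```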
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/docker/2_predict.sh b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/docker/2_predict.sh
deleted file mode 100644
index 8af4ac04ec0c1586933be424d4f7a5a4522521dc..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/docker/2_predict.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-#!/usr/bin/env bash
-
-
-if (( $# < 3 ))
-then
- echo "Usage: $0 model_dir input_dir output_dir [other arguments to predict.py]"
- exit 1
-fi
-
-CURDIR="$(dirname $0)"
-SRCDIR="$CURDIR/.."
-SRCDIR="$(realpath $SRCDIR)"
-
-MODEL_LOCAL_DIR="$(realpath $1)"
-INPUT_LOCAL_DIR="$(realpath $2)"
-OUTPUT_LOCAL_DIR="$(realpath $3)"
-shift 3
-
-mkdir -p "$OUTPUT_LOCAL_DIR"
-
-docker run \
- -v "$SRCDIR":/home/user/project \
- -v "$MODEL_LOCAL_DIR":/data/checkpoint \
- -v "$INPUT_LOCAL_DIR":/data/input \
- -v "$OUTPUT_LOCAL_DIR":/data/output \
- -u $(id -u):$(id -g) \
- --name="lama-predict" \
- --rm \
- windj007/lama \
- /home/user/project/bin/predict.py \
- model.path=/data/checkpoint \
- indir=/data/input \
- outdir=/data/output \
- dataset.img_suffix=.png \
- "$@"
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/raft/alt_cuda_corr/correlation.cpp b/spaces/JUNGU/VToonify/vtoonify/model/raft/alt_cuda_corr/correlation.cpp
deleted file mode 100644
index b01584d19edb99e7feec5f2e4c51169a1ed208db..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/model/raft/alt_cuda_corr/correlation.cpp
+++ /dev/null
@@ -1,54 +0,0 @@
-#include <torch/extension.h>
-#include <vector>
-
-// CUDA forward declarations
-std::vector<torch::Tensor> corr_cuda_forward(
- torch::Tensor fmap1,
- torch::Tensor fmap2,
- torch::Tensor coords,
- int radius);
-
-std::vector<torch::Tensor> corr_cuda_backward(
- torch::Tensor fmap1,
- torch::Tensor fmap2,
- torch::Tensor coords,
- torch::Tensor corr_grad,
- int radius);
-
-// C++ interface
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-std::vector<torch::Tensor> corr_forward(
- torch::Tensor fmap1,
- torch::Tensor fmap2,
- torch::Tensor coords,
- int radius) {
- CHECK_INPUT(fmap1);
- CHECK_INPUT(fmap2);
- CHECK_INPUT(coords);
-
- return corr_cuda_forward(fmap1, fmap2, coords, radius);
-}
-
-
-std::vector<torch::Tensor> corr_backward(
- torch::Tensor fmap1,
- torch::Tensor fmap2,
- torch::Tensor coords,
- torch::Tensor corr_grad,
- int radius) {
- CHECK_INPUT(fmap1);
- CHECK_INPUT(fmap2);
- CHECK_INPUT(coords);
- CHECK_INPUT(corr_grad);
-
- return corr_cuda_backward(fmap1, fmap2, coords, corr_grad, radius);
-}
-
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("forward", &corr_forward, "CORR forward");
- m.def("backward", &corr_backward, "CORR backward");
-}
\ No newline at end of file
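A sketch of how this C++ source is usually compiled and exposed to Python with `torch.utils.cpp_extension`; it assumes a CUDA-enabled PyTorch build, an available `nvcc`, and that the companion `correlation_kernel.cu` (which implements `corr_cuda_forward`/`corr_cuda_backward`) sits next to `correlation.cpp`.

```python
from torch.utils.cpp_extension import load

# JIT-compile the extension; the module name and source list follow RAFT's alt_cuda_corr layout.
alt_cuda_corr = load(
    name="alt_cuda_corr",
    sources=["correlation.cpp", "correlation_kernel.cu"],
    verbose=True,
)

# The CHECK_INPUT macros above require every tensor argument to be a
# contiguous CUDA tensor, e.g.:
#   (corr,) = alt_cuda_corr.forward(fmap1, fmap2, coords, radius)
```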
diff --git a/spaces/Jeff2323/ai-comic-factory/src/app/main.tsx b/spaces/Jeff2323/ai-comic-factory/src/app/main.tsx
deleted file mode 100644
index d7f078ffd08655ecd7e25f210ba1c87fc0124e02..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/app/main.tsx
+++ /dev/null
@@ -1,136 +0,0 @@
-"use client"
-
-import { useEffect, useState, useTransition } from "react"
-
-import { cn } from "@/lib/utils"
-import { TopMenu } from "./interface/top-menu"
-import { fonts } from "@/lib/fonts"
-import { useStore } from "./store"
-import { Zoom } from "./interface/zoom"
-import { getStory } from "./queries/getStory"
-import { BottomBar } from "./interface/bottom-bar"
-import { Page } from "./interface/page"
-
-export default function Main() {
- const [_isPending, startTransition] = useTransition()
-
- const isGeneratingStory = useStore(state => state.isGeneratingStory)
- const setGeneratingStory = useStore(state => state.setGeneratingStory)
-
- const font = useStore(state => state.font)
- const preset = useStore(state => state.preset)
- const prompt = useStore(state => state.prompt)
-
- const setLayouts = useStore(state => state.setLayouts)
-
- const setPanels = useStore(state => state.setPanels)
- const setCaptions = useStore(state => state.setCaptions)
-
- const zoomLevel = useStore(state => state.zoomLevel)
-
- const [waitABitMore, setWaitABitMore] = useState(false)
-
- // react to prompt changes
- useEffect(() => {
- if (!prompt) { return }
-
- startTransition(async () => {
- setWaitABitMore(false)
- setGeneratingStory(true)
-
- try {
-
- const llmResponse = await getStory({ preset, prompt })
- console.log("LLM responded:", llmResponse)
-
- // we have to limit the size of the prompt, otherwise the rest of the style won't be followed
-
- let limitedPrompt = prompt.slice(0, 77)
- if (limitedPrompt.length !== prompt.length) {
- console.log("Sorry folks, the prompt was cut to:", limitedPrompt)
- }
-
- const panelPromptPrefix = preset.imagePrompt(limitedPrompt).join(", ")
-
- const nbPanels = 4
- const newPanels: string[] = []
- const newCaptions: string[] = []
- setWaitABitMore(true)
- console.log("Panel prompts for SDXL:")
- for (let p = 0; p < nbPanels; p++) {
- newCaptions.push(llmResponse[p]?.caption || "...")
- const newPanel = [panelPromptPrefix, llmResponse[p]?.instructions || ""].map(chunk => chunk).join(", ")
- newPanels.push(newPanel)
- console.log(newPanel)
- }
-
- setCaptions(newCaptions)
- setPanels(newPanels)
- } catch (err) {
- console.error(err)
- } finally {
- setTimeout(() => {
- setGeneratingStory(false)
- setWaitABitMore(false)
- }, 12000)
- }
- })
- }, [prompt, preset?.label]) // important: we need to react to preset changes too
-
- return (
-
-
-
- {/*
- // we could support multiple pages here,
- // but let's disable it for now
-
- */}
-
-
-
-
-
-
-
- {waitABitMore ? `Story is ready, but server is a bit busy!`: 'Generating a new story..'}
- {waitABitMore ? `Please hold tight..` : ''}
-
-
-
- )
-}
\ No newline at end of file
diff --git a/spaces/JeffJing/ZookChatBot/steamship/invocable/package_service.py b/spaces/JeffJing/ZookChatBot/steamship/invocable/package_service.py
deleted file mode 100644
index 454713d6682a5166020233c5b53812e306ca3cb7..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/invocable/package_service.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from __future__ import annotations
-
-import logging
-from typing import Any, Dict, List
-
-from steamship import SteamshipError, Task
-from steamship.invocable import Invocable
-
-# Note!
-# =====
-#
-# The files in this package are for Package Implementors.
-# If you are using the Steamship Client, you probably are looking for either steamship.client or steamship.data
-#
-from steamship.utils.url import Verb
-
-
-class PackageService(Invocable):
- """The Abstract Base Class of a Steamship Package.
-
- Packages may implement whatever methods they like. To expose these methods as invocable HTTP routes,
- annotate the method with @get or @post and the route name.
-
- Package *implementations* are effectively stateless, though they will have stateful
-
- """
-
- def invoke_later(
- self,
- method: str,
- verb: Verb = Verb.POST,
- wait_on_tasks: List[Task] = None,
- arguments: Dict[str, Any] = None,
- ) -> Task[Any]:
- """Schedule a method for future invocation.
-
- Parameters
- ----------
- method: str
- The method to invoke, as registered with Steamship in the @get or @post decorator.
- verb: Verb
- The HTTP Verb to use. Default is POST.
- wait_on_tasks: List[Task]
- A list of Task objects (or task IDs) that should be waited upon before invocation.
- arguments: Dict[str, Any]
- The keyword arguments of the invoked method
-
- Returns
- -------
- Task[Any]
- A Task representing the future work
- """
-
- if self.context is None:
- raise SteamshipError(
- message="Unable to call invoke_later because the InvocationContext was None"
- )
- if self.context.invocable_instance_handle is None:
- raise SteamshipError(
- message="Unable to call invoke_later because the invocable_instance_handle on InvocationContext was None"
- )
-
- payload = {
- "instanceHandle": self.context.invocable_instance_handle,
- "payload": {
- "httpVerb": verb.value,
- "invocationPath": method,
- "arguments": arguments or {},
- },
- }
- operation = "package/instance/invoke"
-
- logging.info(
- f"Scheduling {verb} {method} for future invocation on me ({self.context.invocable_handle})"
- )
-
- resp = self.client.post(
- operation,
- payload,
- expect=Task[Task], # This operation should return a task
- as_background_task=True, # This operation should always be asynchronous
- wait_on_tasks=wait_on_tasks, # This operation might await other tasks first
- )
- return resp
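A hypothetical sketch of how a package might schedule one of its own routes from another route; the `@post` decorator is the usual Steamship route annotation mentioned in the class docstring, and the route names, argument, and `post` import path are assumptions rather than confirmed API details.

```python
from steamship.invocable import PackageService, post  # assumed import location of @post


class ReportPackage(PackageService):
    @post("send_report")
    def send_report(self, email: str = "") -> str:
        return f"report sent to {email}"

    @post("kickoff")
    def kickoff(self) -> str:
        # Queue send_report to run asynchronously after this invocation returns.
        task = self.invoke_later(
            "send_report",
            arguments={"email": "user@example.com"},  # illustrative argument
        )
        return task.task_id
```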
diff --git a/spaces/Jonni/02-Gradio-ArtFromText/app.py b/spaces/Jonni/02-Gradio-ArtFromText/app.py
deleted file mode 100644
index b0424ed426f78ea91b8307a7c74e4fbc35b7c749..0000000000000000000000000000000000000000
--- a/spaces/Jonni/02-Gradio-ArtFromText/app.py
+++ /dev/null
@@ -1,224 +0,0 @@
-import os
-
-os.system("git clone --recursive https://github.com/JD-P/cloob-latent-diffusion")
-os.system("cd cloob-latent-diffusion;pip install omegaconf pillow pytorch-lightning einops wandb ftfy regex ./CLIP")
-
-import argparse
-from functools import partial
-from pathlib import Path
-import sys
-sys.path.append('./cloob-latent-diffusion')
-sys.path.append('./cloob-latent-diffusion/cloob-training')
-sys.path.append('./cloob-latent-diffusion/latent-diffusion')
-sys.path.append('./cloob-latent-diffusion/taming-transformers')
-sys.path.append('./cloob-latent-diffusion/v-diffusion-pytorch')
-from omegaconf import OmegaConf
-from PIL import Image
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torchvision import transforms
-from torchvision.transforms import functional as TF
-from tqdm import trange
-from CLIP import clip
-from cloob_training import model_pt, pretrained
-import ldm.models.autoencoder
-from diffusion import sampling, utils
-import train_latent_diffusion as train
-from huggingface_hub import hf_hub_url, cached_download
-import random
-
-# Download the model files
-checkpoint = cached_download(hf_hub_url("huggan/distill-ccld-wa", filename="model_student.ckpt"))
-ae_model_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.ckpt"))
-ae_config_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.yaml"))
-
-# Define a few utility functions
-
-
-def parse_prompt(prompt, default_weight=3.):
- if prompt.startswith('http://') or prompt.startswith('https://'):
- vals = prompt.rsplit(':', 2)
- vals = [vals[0] + ':' + vals[1], *vals[2:]]
- else:
- vals = prompt.rsplit(':', 1)
- vals = vals + ['', default_weight][len(vals):]
- return vals[0], float(vals[1])
-
-
-def resize_and_center_crop(image, size):
- fac = max(size[0] / image.size[0], size[1] / image.size[1])
- image = image.resize((int(fac * image.size[0]), int(fac * image.size[1])), Image.LANCZOS)
- return TF.center_crop(image, size[::-1])
-
-
-# Load the models
-device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
-print('Using device:', device)
-print('loading models')
-
-# autoencoder
-ae_config = OmegaConf.load(ae_config_path)
-ae_model = ldm.models.autoencoder.AutoencoderKL(**ae_config.model.params)
-ae_model.eval().requires_grad_(False).to(device)
-ae_model.load_state_dict(torch.load(ae_model_path))
-n_ch, side_y, side_x = 4, 32, 32
-
-# diffusion model
-model = train.DiffusionModel(192, [1,1,2,2], autoencoder_scale=torch.tensor(4.3084))
-model.load_state_dict(torch.load(checkpoint, map_location='cpu'))
-model = model.to(device).eval().requires_grad_(False)
-
-# CLOOB
-cloob_config = pretrained.get_config('cloob_laion_400m_vit_b_16_16_epochs')
-cloob = model_pt.get_pt_model(cloob_config)
-checkpoint = pretrained.download_checkpoint(cloob_config)
-cloob.load_state_dict(model_pt.get_pt_params(cloob_config, checkpoint))
-cloob.eval().requires_grad_(False).to(device)
-
-
-# The key function: returns a list of n PIL images
-def generate(n=1, prompts=['a red circle'], images=[], seed=42, steps=15,
- method='plms', eta=None):
- zero_embed = torch.zeros([1, cloob.config['d_embed']], device=device)
- target_embeds, weights = [zero_embed], []
-
- for prompt in prompts:
- txt, weight = parse_prompt(prompt)
- target_embeds.append(cloob.text_encoder(cloob.tokenize(txt).to(device)).float())
- weights.append(weight)
-
- for prompt in images:
- path, weight = parse_prompt(prompt)
- img = Image.open(utils.fetch(path)).convert('RGB')
- clip_size = cloob.config['image_encoder']['image_size']
- img = resize_and_center_crop(img, (clip_size, clip_size))
- batch = TF.to_tensor(img)[None].to(device)
- embed = F.normalize(cloob.image_encoder(cloob.normalize(batch)).float(), dim=-1)
- target_embeds.append(embed)
- weights.append(weight)
-
- weights = torch.tensor([1 - sum(weights), *weights], device=device)
-
- torch.manual_seed(seed)
-
- def cfg_model_fn(x, t):
- n = x.shape[0]
- n_conds = len(target_embeds)
- x_in = x.repeat([n_conds, 1, 1, 1])
- t_in = t.repeat([n_conds])
- clip_embed_in = torch.cat([*target_embeds]).repeat_interleave(n, 0)
- vs = model(x_in, t_in, clip_embed_in).view([n_conds, n, *x.shape[1:]])
- v = vs.mul(weights[:, None, None, None, None]).sum(0)
- return v
-
- def run(x, steps):
- if method == 'ddpm':
- return sampling.sample(cfg_model_fn, x, steps, 1., {})
- if method == 'ddim':
- return sampling.sample(cfg_model_fn, x, steps, eta, {})
- if method == 'prk':
- return sampling.prk_sample(cfg_model_fn, x, steps, {})
- if method == 'plms':
- return sampling.plms_sample(cfg_model_fn, x, steps, {})
- if method == 'pie':
- return sampling.pie_sample(cfg_model_fn, x, steps, {})
- if method == 'plms2':
- return sampling.plms2_sample(cfg_model_fn, x, steps, {})
- assert False
-
- batch_size = n
- x = torch.randn([n, n_ch, side_y, side_x], device=device)
- t = torch.linspace(1, 0, steps + 1, device=device)[:-1]
- steps = utils.get_spliced_ddpm_cosine_schedule(t)
- pil_ims = []
- for i in trange(0, n, batch_size):
- cur_batch_size = min(n - i, batch_size)
- out_latents = run(x[i:i+cur_batch_size], steps)
- outs = ae_model.decode(out_latents * torch.tensor(2.55).to(device))
- for j, out in enumerate(outs):
- pil_ims.append(utils.to_pil_image(out))
-
- return pil_ims
-
-
-import gradio as gr
-
-def gen_ims(prompt, im_prompt=None, seed=None, n_steps=10, method='plms'):
- if seed is None:
- seed = random.randint(0, 10000)
- print( prompt, im_prompt, seed, n_steps)
- prompts = [prompt]
- im_prompts = []
- if im_prompt is not None:
- im_prompts = [im_prompt]
- pil_ims = generate(n=1, prompts=prompts, images=im_prompts, seed=seed, steps=n_steps, method=method)
- return pil_ims[0]
-
-iface = gr.Interface(fn=gen_ims,
- inputs=[#gr.inputs.Slider(minimum=1, maximum=1, step=1, default=1,label="Number of images"),
- #gr.inputs.Slider(minimum=0, maximum=200, step=1, label='Random seed', default=0),
- gr.inputs.Textbox(label="Text prompt"),
- gr.inputs.Image(optional=True, label="Image prompt", type='filepath'),
- #gr.inputs.Slider(minimum=10, maximum=35, step=1, default=15,label="Number of steps")
- ],
- outputs=[gr.outputs.Image(type="pil", label="Generated Image")],
- examples=[
- ["Futurism, in the style of Wassily Kandinsky"],
- ["Art Nouveau, in the style of John Singer Sargent"],
- ["Surrealism, in the style of Edgar Degas"],
- ["Expressionism, in the style of Wassily Kandinsky"],
- ["Futurism, in the style of Egon Schiele"],
- ["Neoclassicism, in the style of Gustav Klimt"],
- ["Cubism, in the style of Gustav Klimt"],
- ["Op Art, in the style of Marc Chagall"],
- ["Romanticism, in the style of M.C. Escher"],
- ["Futurism, in the style of M.C. Escher"],
- ["Abstract Art, in the style of M.C. Escher"],
- ["Mannerism, in the style of Paul Klee"],
- ["Romanesque Art, in the style of Leonardo da Vinci"],
- ["High Renaissance, in the style of Rembrandt"],
- ["Magic Realism, in the style of Gustave Dore"],
- ["Realism, in the style of Jean-Michel Basquiat"],
- ["Art Nouveau, in the style of Paul Gauguin"],
- ["Avant-garde, in the style of Pierre-Auguste Renoir"],
- ["Baroque, in the style of Edward Hopper"],
- ["Post-Impressionism, in the style of Wassily Kandinsky"],
- ["Naturalism, in the style of Rene Magritte"],
- ["Constructivism, in the style of Paul Cezanne"],
- ["Abstract Expressionism, in the style of Henri Matisse"],
- ["Pop Art, in the style of Vincent van Gogh"],
- ["Futurism, in the style of Wassily Kandinsky"],
- ["Futurism, in the style of Zdzislaw Beksinski"],
- ['Surrealism, in the style of Salvador Dali'],
- ["Aaron Wacker, oil on canvas"],
- ["abstract"],
- ["landscape"],
- ["portrait"],
- ["sculpture"],
- ["genre painting"],
- ["installation"],
- ["photo"],
- ["figurative"],
- ["illustration"],
- ["still life"],
- ["history painting"],
- ["cityscape"],
- ["marina"],
- ["animal painting"],
- ["design"],
- ["calligraphy"],
- ["symbolic painting"],
- ["graffiti"],
- ["performance"],
- ["mythological painting"],
- ["battle painting"],
- ["self-portrait"],
- ["Impressionism, oil on canvas"]
- ],
- title='Art Generator and Style Mixer from 🧠 Cloob and 🎨 WikiArt - Visual Art Encyclopedia:',
- description="Trained on images from the [WikiArt](https://www.wikiart.org/) dataset, comprised of visual arts",
- article = 'Model used is: [model card](https://huggingface.co/huggan/distill-ccld-wa)..'
-
-)
-iface.launch(enable_queue=True) # , debug=True for colab debugging
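The `prompt:weight` syntax accepted by the `parse_prompt` helper near the top of this app is easy to miss; a few illustrative calls (the URL is a placeholder):

```python
parse_prompt("Futurism, in the style of Wassily Kandinsky")
# -> ("Futurism, in the style of Wassily Kandinsky", 3.0)   # default_weight

parse_prompt("a red circle:1.5")
# -> ("a red circle", 1.5)

parse_prompt("https://example.com/reference.png:2")
# -> ("https://example.com/reference.png", 2.0)             # the "://" is preserved
```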
diff --git a/spaces/Kaludi/QR-Code-Generator-Streamlit_App/app.py b/spaces/Kaludi/QR-Code-Generator-Streamlit_App/app.py
deleted file mode 100644
index 6d3fc4c36db8744e7db9d225580214e2d22c4991..0000000000000000000000000000000000000000
--- a/spaces/Kaludi/QR-Code-Generator-Streamlit_App/app.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import streamlit as st
-import qrcode
-from qrcode.image.pil import PilImage
-from PIL import Image
-import io
-import base64
-from urllib.parse import urlparse
-
-
-# Function to convert image to base64
-def get_image_as_base64(image: Image):
- buffer = io.BytesIO()
- image.save(buffer, format="PNG")
- image_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')
- return image_base64
-
-def get_url_filename(url):
- parsed_uri = urlparse(url)
- domain = '{uri.netloc}'.format(uri=parsed_uri)
- main_domain = domain.split('.')
- main_domain = main_domain[1] if main_domain[0] == 'www' else main_domain[0]
- path = parsed_uri.path.strip('/').replace('/', '_')
- return f"{main_domain}_{path}" if path else main_domain
-
-
-
-
-# Streamlit app title
-st.title("Bulk QR Code Generator")
-st.write("This is a simple Streamlit web app for generating QR codes based on user input. You can choose between generating a QR code for a URL or plain text with the ability to generate multiple URLs at once.")
-
-# QR code content options
-qr_content_options = ["URL", "Text"]
-# qr_content_options = ["URL", "Text", "Contact Information"]
-qr_content_type = st.selectbox("Select QR content type", qr_content_options)
-
-if qr_content_type == "Contact Information":
- first_name = st.text_input("First Name")
- last_name = st.text_input("Last Name")
- phone = st.text_input("Phone Number")
- email = st.text_input("Email Address")
- content = f"BEGIN:VCARD\nVERSION:3.0\nN:{last_name};{first_name}\nFN:{first_name} {last_name}\nTEL;TYPE=CELL:{phone}\nEMAIL:{email}\nEND:VCARD"
-else:
- content = st.text_area("Enter your content (one per line for multiple QR codes)", height=150)
-
-if st.button("Generate QR Code"):
- if content:
- contents = content.split("\n")
-
- for i, c in enumerate(contents):
- if c.strip():
- # Generate QR code
- qr = qrcode.QRCode(
- version=1,
- error_correction=qrcode.constants.ERROR_CORRECT_H,
- box_size=10,
- border=4
- )
- qr.add_data(c)
- qr.make(fit=True)
-
- img = qr.make_image(fill_color="black", back_color="white", image_factory=PilImage)
-
- # Convert PilImage to bytes-like object
- buffer = io.BytesIO()
- img.save(buffer, format="PNG")
- img_bytes = buffer.getvalue()
-
- img_base64 = get_image_as_base64(img)
-
- st.markdown(f"##### {c}")
- st.image(img_bytes, caption=f"QR code for {c}", use_column_width=True)
- file_name = get_url_filename(c) if qr_content_type == "URL" else f"QR_{i}"
- st.markdown(f'<a href="data:image/png;base64,{img_base64}" download="{file_name}.png">Download QR code</a>', unsafe_allow_html=True)
- else:
- st.error("Please enter content for the QR code.")
-
-
-
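For reference, the download filename produced by `get_url_filename` above is the second-level domain plus the URL path with slashes replaced by underscores; two illustrative calls (the URLs are placeholders):

```python
print(get_url_filename("https://www.example.com/docs/intro"))  # example_docs_intro
print(get_url_filename("https://example.com"))                 # example
```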
diff --git a/spaces/KyanChen/RSPrompter/mmdet/utils/util_mixins.py b/spaces/KyanChen/RSPrompter/mmdet/utils/util_mixins.py
deleted file mode 100644
index b83b6617f5e4a202067e1659bf448962a2a2bc72..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/utils/util_mixins.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-"""This module defines the :class:`NiceRepr` mixin class, which defines a
-``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__``
-method, which you must define. This means you only have to overload one
-function instead of two. Furthermore, if the object defines a ``__len__``
-method, then the ``__nice__`` method defaults to something sensible, otherwise
-it is treated as abstract and raises ``NotImplementedError``.
-
-To use simply have your object inherit from :class:`NiceRepr`
-(multi-inheritance should be ok).
-
-This code was copied from the ubelt library: https://github.com/Erotemic/ubelt
-
-Example:
- >>> # Objects that define __nice__ have a default __str__ and __repr__
- >>> class Student(NiceRepr):
- ... def __init__(self, name):
- ... self.name = name
- ... def __nice__(self):
- ... return self.name
- >>> s1 = Student('Alice')
- >>> s2 = Student('Bob')
- >>> print(f's1 = {s1}')
- >>> print(f's2 = {s2}')
- s1 = <Student(Alice) ...>
- s2 = <Student(Bob) ...>
-
-Example:
- >>> # Objects that define __len__ have a default __nice__
- >>> class Group(NiceRepr):
- ... def __init__(self, data):
- ... self.data = data
- ... def __len__(self):
- ... return len(self.data)
- >>> g = Group([1, 2, 3])
- >>> print(f'g = {g}')
- g = <Group(3) ...>
-"""
-import warnings
-
-
-class NiceRepr:
- """Inherit from this class and define ``__nice__`` to "nicely" print your
- objects.
-
- Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function
- Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``.
- If the inheriting class has a ``__len__``, method then the default
- ``__nice__`` method will return its length.
-
- Example:
- >>> class Foo(NiceRepr):
- ... def __nice__(self):
- ... return 'info'
- >>> foo = Foo()
- >>> assert str(foo) == '<Foo(info)>'
- >>> assert repr(foo).startswith('<Foo(info) at ')
- >>> class Bar(NiceRepr):
- ... pass
- >>> bar = Bar()
- >>> import pytest
- >>> with pytest.warns(None) as record:
- >>> assert 'object at' in str(bar)
- >>> assert 'object at' in repr(bar)
-
- Example:
- >>> class Baz(NiceRepr):
- ... def __len__(self):
- ... return 5
- >>> baz = Baz()
- >>> assert str(baz) == '<Baz(5)>'
- """
-
- def __nice__(self):
- """str: a "nice" summary string describing this module"""
- if hasattr(self, '__len__'):
- # It is a common pattern for objects to use __len__ in __nice__
- # As a convenience we define a default __nice__ for these objects
- return str(len(self))
- else:
- # In all other cases force the subclass to overload __nice__
- raise NotImplementedError(
- f'Define the __nice__ method for {self.__class__!r}')
-
- def __repr__(self):
- """str: the string of the module"""
- try:
- nice = self.__nice__()
- classname = self.__class__.__name__
- return f'<{classname}({nice}) at {hex(id(self))}>'
- except NotImplementedError as ex:
- warnings.warn(str(ex), category=RuntimeWarning)
- return object.__repr__(self)
-
- def __str__(self):
- """str: the string of the module"""
- try:
- classname = self.__class__.__name__
- nice = self.__nice__()
- return f'<{classname}({nice})>'
- except NotImplementedError as ex:
- warnings.warn(str(ex), category=RuntimeWarning)
- return object.__repr__(self)
diff --git a/spaces/LaoCzi/YouTube_Summarize/README.md b/spaces/LaoCzi/YouTube_Summarize/README.md
deleted file mode 100644
index ae2173591d4efba809e82a4518dea440d3982cac..0000000000000000000000000000000000000000
--- a/spaces/LaoCzi/YouTube_Summarize/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: YouTube Summarize
-emoji: 👀
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Linly-AI/Linly-ChatFlow/utils.py b/spaces/Linly-AI/Linly-ChatFlow/utils.py
deleted file mode 100644
index b753bd6037f2cd5bdd1fbbcda0fa66c9f0e41b0c..0000000000000000000000000000000000000000
--- a/spaces/Linly-AI/Linly-ChatFlow/utils.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import json
-import sys
-from argparse import Namespace
-import torch
-import os
-
-
-def load_hyperparam(default_args):
- """
- Load arguments from argparse and the config file
- Priority: default options < config file < command line args
- """
- with open(default_args.config_path, mode="r", encoding="utf-8") as f:
- config_args_dict = json.load(f)
-
- default_args_dict = vars(default_args)
-
- command_line_args_dict = {k: default_args_dict[k] for k in [
- a[2:] for a in sys.argv if (a[:2] == "--" and "local_rank" not in a)
- ]}
- default_args_dict.update(config_args_dict)
- default_args_dict.update(command_line_args_dict)
- args = Namespace(**default_args_dict)
-
- return args
-
-
-def _load_state_dict_into_model(model_to_load, model_path, start_prefix=""):
- # Convert old format to new format if needed from a PyTorch state_dict
-
- # copy state_dict so _load_from_state_dict can modify it
- state_dict = torch.load(model_path, map_location="cpu")
- metadata = getattr(state_dict, "_metadata", None)
- state_dict = state_dict.copy()
- state_dict['target.lm.weight'] = state_dict['target.lm.output_layer.weight']
- del state_dict['target.lm.output_layer.weight']
- state_dict['embedding.embedding.weight'] = state_dict['embedding.word.embedding.weight']
- del state_dict['embedding.word.embedding.weight']
-
- if metadata is not None:
- metadata['embedding.embedding'] = metadata['embedding.word.embedding']
- metadata['target.lm'] = metadata['target.lm.output_layer']
- if metadata.get('embedding.dropout', None) is not None:
- del metadata['embedding.dropout']
- del metadata['embedding.word']
- del metadata['embedding.word.embedding']
- del metadata['target.lm.output_layer']
- del metadata['target.lm.softmax']
- del metadata['target.lm.criterion']
- state_dict._metadata = metadata
-
- error_msgs = []
-
- # PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants
- # so we need to apply the function recursively.
- def load(module, state_dict, prefix=""):
- local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
- args = (state_dict, prefix, local_metadata, True, [], [], error_msgs)
- # Parameters of module and children will start with prefix. We can exit early if there are none in this
- # state_dict
- if len([key for key in state_dict if key.startswith(prefix)]) > 0:
- import deepspeed
- # In sharded models, each shard has only part of the full state_dict, so only gather
- # parameters that are in the current state_dict.
- named_parameters = dict(module.named_parameters(prefix=prefix[:-1], recurse=False))
- params_to_gather = [named_parameters[k] for k in state_dict.keys() if k in named_parameters]
- if len(params_to_gather) > 0:
- # because zero3 puts placeholders in model params, this context
- # manager gathers (unpartitions) the params of the current layer, then loads from
- # the state dict and then re-partitions them again
- with deepspeed.zero.GatheredParameters(params_to_gather, modifier_rank=0):
- if torch.distributed.get_rank() == 0:
- module._load_from_state_dict(*args)
-
- for name, child in module._modules.items():
- if child is not None:
- load(child, state_dict, prefix + name + ".")
-
- load(model_to_load, state_dict, prefix=start_prefix)
- # Delete `state_dict` so it could be collected by GC earlier. Note that `state_dict` is a copy of the argument, so
- # it's safe to delete it.
- del state_dict
-
- return model_to_load
-
-
-def convert_normal_parameter_to_int8(model, threshold=6.0, modules_to_not_convert=None, current_key_name=None):
- import bitsandbytes as bnb
- modules_to_not_convert = ["lm"] if modules_to_not_convert is None else modules_to_not_convert
- for name, module in model.named_children():
- if current_key_name is None:
- current_key_name = []
- current_key_name.append(name)
-
- if len(list(module.children())) > 0:
- convert_normal_parameter_to_int8(module, threshold, modules_to_not_convert, current_key_name)
-
- if isinstance(module, bnb.nn.Linear8bitLt) and name not in modules_to_not_convert:
- # Check if the current key is not in the `modules_to_not_convert`
- if not any(key in ".".join(current_key_name) for key in modules_to_not_convert):
- model._modules[name].weight = bnb.nn.Int8Params(
- module.weight.data,
- requires_grad=False,
- has_fp16_weights=False
- )
- # Force requires grad to False to avoid unexpected errors
- model._modules[name].requires_grad_(False)
- # Remove the last key for recursion
- current_key_name.pop(-1)
- return model
-
-
-def load_model(model, model_path):
- if os.path.isdir(model_path):
- index_filename = os.path.join(model_path, 'pytorch_model.bin.index.json')
- with open(index_filename, "r") as f:
- index = json.loads(f.read())
- shard_filenames = sorted(set(index["weight_map"].values()))
- shard_filenames = [os.path.join(model_path, f) for f in shard_filenames]
- for shard_file in shard_filenames:
- shard_checkpoint = torch.load(shard_file, map_location='cpu')
- for name, parameter in model.named_parameters():
- if shard_checkpoint.get(name, None) is not None:
- if 'target' in name:
- parameter.data = shard_checkpoint['target.lm.output_layer.weight']
- elif 'embedding' in name:
- parameter.data = shard_checkpoint['embedding.word.embedding.weight']
- else:
- parameter.data = shard_checkpoint[name]
- parameter.requires_grad = False
- del shard_checkpoint
- else:
- checkpoint = torch.load(model_path, map_location='cpu')
- for parameter_name, parameter in model.named_parameters():
- if 'target' in parameter_name:
- parameter.data = checkpoint['target.lm.output_layer.weight']
- elif 'embedding' in parameter_name:
- parameter.data = checkpoint['embedding.word.embedding.weight']
- else:
- parameter.data = checkpoint[parameter_name]
- parameter.requires_grad = False
- del checkpoint
- return model
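A sketch of the intended calling pattern for `load_hyperparam` above; the config path and the extra flag are placeholders, and the JSON file is expected to hold the model hyper-parameters that explicit command-line flags may still override.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--config_path", type=str, default="config/model_config.json")  # placeholder path
parser.add_argument("--seq_length", type=int, default=512)
args = parser.parse_args()

# Priority after this call: built-in defaults < values in the JSON config < explicit CLI flags.
args = load_hyperparam(args)
```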
diff --git a/spaces/Luelll/ChuanhuChatGPT/modules/models/__init__.py b/spaces/Luelll/ChuanhuChatGPT/modules/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/LuxOAI/ChatGpt-Web/app/api/openai-image/route.ts b/spaces/LuxOAI/ChatGpt-Web/app/api/openai-image/route.ts
deleted file mode 100644
index 53752422c8698f4bc16035696cd2dd0991936fd3..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/ChatGpt-Web/app/api/openai-image/route.ts
+++ /dev/null
@@ -1,38 +0,0 @@
-import { OpenAIApi, Configuration } from "openai";
-import { ChatImageRequest } from "../openai-image/typing";
-import { NextRequest, NextResponse } from "next/server";
-import { auth } from "../auth";
-
-export async function POST(req: NextRequest) {
- const authResult = auth(req);
- if (authResult.error) {
- return NextResponse.json(authResult, {
- status: 401,
- });
- }
- try {
- let apiKey = process.env.OPENAI_API_KEY;
-
- const userApiKey = req.headers.get("token");
- if (userApiKey) {
- apiKey = userApiKey;
- console.log("user api key:" + apiKey);
- }
-
- const openai = new OpenAIApi(
- new Configuration({
- apiKey,
- }),
- );
-
- const requestBody = (await req.json()) as ChatImageRequest;
- const response = await openai.createImage({
- ...requestBody,
- });
- console.log("[Chat-image]" + response.data.data[0].url);
- return new Response(JSON.stringify(response.data));
- } catch (e) {
- console.error("[Chat-image] ", e);
- return new Response(JSON.stringify(e));
- }
-}
diff --git a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/pqmf.py b/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/pqmf.py
deleted file mode 100644
index cf5d3c09e22a5011629b7452c3d23fb3a3cc124c..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/pqmf.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Tomoki Hayashi
-# MIT License (https://opensource.org/licenses/MIT)
-
-"""Pseudo QMF modules."""
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-from scipy.signal import kaiser
-
-
-def design_prototype_filter(taps=62, cutoff_ratio=0.15, beta=9.0):
- """Design prototype filter for PQMF.
- This method is based on `A Kaiser window approach for the design of prototype
- filters of cosine modulated filterbanks`_.
- Args:
- taps (int): The number of filter taps.
- cutoff_ratio (float): Cut-off frequency ratio.
- beta (float): Beta coefficient for kaiser window.
- Returns:
- ndarray: Impulse response of prototype filter (taps + 1,).
- .. _`A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks`:
- https://ieeexplore.ieee.org/abstract/document/681427
- """
- # check the arguments are valid
- assert taps % 2 == 0, "The number of taps must be an even number."
- assert 0.0 < cutoff_ratio < 1.0, "Cutoff ratio must be > 0.0 and < 1.0."
-
- # make initial filter
- omega_c = np.pi * cutoff_ratio
- with np.errstate(invalid='ignore'):
- h_i = np.sin(omega_c * (np.arange(taps + 1) - 0.5 * taps)) \
- / (np.pi * (np.arange(taps + 1) - 0.5 * taps))
- h_i[taps // 2] = np.cos(0) * cutoff_ratio # fix nan due to indeterminate form
-
- # apply kaiser window
- w = kaiser(taps + 1, beta)
- h = h_i * w
-
- return h
-
-
-class PQMF(torch.nn.Module):
- """PQMF module.
- This module is based on `Near-perfect-reconstruction pseudo-QMF banks`_.
- .. _`Near-perfect-reconstruction pseudo-QMF banks`:
- https://ieeexplore.ieee.org/document/258122
- """
-
- def __init__(self, device, subbands=4, taps=62, cutoff_ratio=0.15, beta=9.0):
- """Initilize PQMF module.
- Args:
- subbands (int): The number of subbands.
- taps (int): The number of filter taps.
- cutoff_ratio (float): Cut-off frequency ratio.
- beta (float): Beta coefficient for kaiser window.
- """
- super(PQMF, self).__init__()
-
- # define filter coefficient
- h_proto = design_prototype_filter(taps, cutoff_ratio, beta)
- h_analysis = np.zeros((subbands, len(h_proto)))
- h_synthesis = np.zeros((subbands, len(h_proto)))
- for k in range(subbands):
- h_analysis[k] = 2 * h_proto * np.cos(
- (2 * k + 1) * (np.pi / (2 * subbands)) *
- (np.arange(taps + 1) - ((taps - 1) / 2)) +
- (-1) ** k * np.pi / 4)
- h_synthesis[k] = 2 * h_proto * np.cos(
- (2 * k + 1) * (np.pi / (2 * subbands)) *
- (np.arange(taps + 1) - ((taps - 1) / 2)) -
- (-1) ** k * np.pi / 4)
-
- # convert to tensor
- analysis_filter = torch.from_numpy(h_analysis).float().unsqueeze(1).to(device)
- synthesis_filter = torch.from_numpy(h_synthesis).float().unsqueeze(0).to(device)
-
- # register coefficients as buffers
- self.register_buffer("analysis_filter", analysis_filter)
- self.register_buffer("synthesis_filter", synthesis_filter)
-
- # filter for downsampling & upsampling
- updown_filter = torch.zeros((subbands, subbands, subbands)).float().to(device)
- for k in range(subbands):
- updown_filter[k, k, 0] = 1.0
- self.register_buffer("updown_filter", updown_filter)
- self.subbands = subbands
-
- # keep padding info
- self.pad_fn = torch.nn.ConstantPad1d(taps // 2, 0.0)
-
- def analysis(self, x):
- """Analysis with PQMF.
- Args:
- x (Tensor): Input tensor (B, 1, T).
- Returns:
- Tensor: Output tensor (B, subbands, T // subbands).
- """
- x = F.conv1d(self.pad_fn(x), self.analysis_filter)
- return F.conv1d(x, self.updown_filter, stride=self.subbands)
-
- def synthesis(self, x):
- """Synthesis with PQMF.
- Args:
- x (Tensor): Input tensor (B, subbands, T // subbands).
- Returns:
- Tensor: Output tensor (B, 1, T).
- """
- # NOTE(kan-bayashi): Power will be decreased, so multiply by the number of subbands here.
- # Not sure whether this is the correct way; it is better to check again.
- # TODO(kan-bayashi): Understand the reconstruction procedure
- x = F.conv_transpose1d(x, self.updown_filter * self.subbands, stride=self.subbands)
- return F.conv1d(self.pad_fn(x), self.synthesis_filter)
\ No newline at end of file
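A round-trip sketch for the module above: random noise stands in for a real waveform, and the shapes follow the `analysis`/`synthesis` docstrings.

```python
import torch

device = torch.device("cpu")
pqmf = PQMF(device, subbands=4)

x = torch.randn(1, 1, 8192, device=device)   # (B, 1, T)
subbands = pqmf.analysis(x)                  # (B, 4, T // 4)
x_hat = pqmf.synthesis(subbands)             # (B, 1, T), near-perfect reconstruction
```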
diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/features/chords.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/features/chords.py
deleted file mode 100644
index f91f2a1914ebaf9e383d8a39864986a78fe7b230..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/feature_extraction/madmom/features/chords.py
+++ /dev/null
@@ -1,278 +0,0 @@
-# encoding: utf-8
-"""
-This module contains chord recognition related functionality.
-
-"""
-from __future__ import absolute_import, division, print_function
-
-from functools import partial
-
-import numpy as np
-
-from ..io import SEGMENT_DTYPE
-from ..processors import SequentialProcessor
-
-
-def majmin_targets_to_chord_labels(targets, fps):
- """
- Converts a series of major/minor chord targets to human readable chord
- labels. Targets are assumed to be spaced equidistant in time as defined
- by the `fps` parameter (each target represents one 'frame').
-
- Ids 0-11 encode major chords starting with root 'A', 12-23 minor chords.
- Id 24 represents 'N', the no-chord class.
-
- Parameters
- ----------
- targets : iterable
- Iterable containing chord class ids.
- fps : float
- Frames per second. Consecutive class ids are assumed to be spaced 1 / `fps` seconds apart.
-
- Returns
- -------
- chord labels : list
- List of tuples of the form (start time, end time, chord label)
-
- """
- # create a map of semitone index to semitone name (e.g. 0 -> A, 1 -> A#)
- pitch_class_to_label = ['A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F',
- 'F#', 'G', 'G#']
-
- def pred_to_cl(pred):
- """
- Map a class id to a chord label.
- 0..11 major chords, 12..23 minor chords, 24 no chord
- """
- if pred == 24:
- return 'N'
- return '{}:{}'.format(pitch_class_to_label[pred % 12],
- 'maj' if pred < 12 else 'min')
-
- # get labels per frame
- spf = 1. / fps
- labels = [(i * spf, pred_to_cl(p)) for i, p in enumerate(targets)]
-
- # join same consecutive predictions
- prev_label = (None, None)
- uniq_labels = []
-
- for label in labels:
- if label[1] != prev_label[1]:
- uniq_labels.append(label)
- prev_label = label
-
- # end time of last label is one frame duration after
- # the last prediction time
- start_times, chord_labels = zip(*uniq_labels)
- end_times = start_times[1:] + (labels[-1][0] + spf,)
-
- return np.array(list(zip(start_times, end_times, chord_labels)),
- dtype=SEGMENT_DTYPE)
-
-
-class DeepChromaChordRecognitionProcessor(SequentialProcessor):
- """
- Recognise major and minor chords from deep chroma vectors [1]_ using a
- Conditional Random Field.
-
- Parameters
- ----------
- model : str
- File containing the CRF model. If None, use the model supplied with
- madmom.
- fps : float
- Frames per second. Must correspond to the fps of the incoming
- activations and the model.
-
- References
- ----------
- .. [1] Filip Korzeniowski and Gerhard Widmer,
- "Feature Learning for Chord Recognition: The Deep Chroma Extractor",
- Proceedings of the 17th International Society for Music Information
- Retrieval Conference (ISMIR), 2016.
-
- Examples
- --------
- To recognise chords in an audio file using the
- DeepChromaChordRecognitionProcessor you first need to create a
- madmom.audio.chroma.DeepChromaProcessor to extract the appropriate chroma
- vectors.
-
- >>> from madmom.audio.chroma import DeepChromaProcessor
- >>> dcp = DeepChromaProcessor()
- >>> dcp # doctest: +ELLIPSIS
- <madmom.audio.chroma.DeepChromaProcessor object at 0x...>
-
- Then, create the DeepChromaChordRecognitionProcessor to decode a chord
- sequence from the extracted chromas:
-
- >>> decode = DeepChromaChordRecognitionProcessor()
- >>> decode # doctest: +ELLIPSIS
- <madmom.features.chords.DeepChromaChordRecognitionProcessor object at 0x...>
-
- To transcribe the chords, you can either manually call the processors
- one after another,
-
- >>> chroma = dcp('tests/data/audio/sample2.wav')
- >>> decode(chroma)
- ... # doctest: +NORMALIZE_WHITESPACE +NORMALIZE_ARRAYS
- array([(0. , 1.6, 'F:maj'), (1.6, 2.5, 'A:maj'), (2.5, 4.1, 'D:maj')],
- dtype=[('start', '<f8'), ('end', '<f8'), ('label', 'O')])
-
- or create a `SequentialProcessor` that connects them:
-
- >>> from madmom.processors import SequentialProcessor
- >>> chordrec = SequentialProcessor([dcp, decode])
- >>> chordrec('tests/data/audio/sample2.wav')
- ... # doctest: +NORMALIZE_WHITESPACE +NORMALIZE_ARRAYS
- array([(0. , 1.6, 'F:maj'), (1.6, 2.5, 'A:maj'), (2.5, 4.1, 'D:maj')],
- dtype=[('start', '<f8'), ('end', '<f8'), ('label', 'O')])
- >>> proc = CNNChordFeatureProcessor()
- >>> proc # doctest: +ELLIPSIS
- <madmom.features.chords.CNNChordFeatureProcessor object at 0x...>
- >>> features = proc('tests/data/audio/sample2.wav')
- >>> features.shape
- (41, 128)
- >>> features # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
- array([[0.05798, 0. , ..., 0.02757, 0.014 ],
- [0.06604, 0. , ..., 0.02898, 0.00886],
- ...,
- [0.00655, 0.1166 , ..., 0.00651, 0. ],
- [0.01476, 0.11185, ..., 0.00287, 0. ]])
- """
-
- def __init__(self, **kwargs):
- from ..audio.signal import SignalProcessor, FramedSignalProcessor
- from ..audio.stft import ShortTimeFourierTransformProcessor
- from ..audio.spectrogram import LogarithmicFilteredSpectrogramProcessor
- from ..ml.nn import NeuralNetwork
- from ..models import CHORDS_CNN_FEAT
-
- # spectrogram computation
- sig = SignalProcessor(num_channels=1, sample_rate=44100)
- frames = FramedSignalProcessor(frame_size=8192, fps=10)
- stft = ShortTimeFourierTransformProcessor() # caching FFT window
- spec = LogarithmicFilteredSpectrogramProcessor(
- num_bands=24, fmin=60, fmax=2600, unique_filters=True
- )
-
- # padding, neural network and global average pooling
- pad = _cnncfp_pad
- nn = NeuralNetwork.load(CHORDS_CNN_FEAT[0])
- superframes = _cnncfp_superframes
- avg = _cnncfp_avg
-
- # create processing pipeline
- super(CNNChordFeatureProcessor, self).__init__([
- sig, frames, stft, spec, pad, nn, superframes, avg
- ])
-
-
-class CRFChordRecognitionProcessor(SequentialProcessor):
- """
- Recognise major and minor chords from learned features extracted by
- a convolutional neural network, as described in [1]_.
-
- Parameters
- ----------
- model : str
- File containing the CRF model. If None, use the model supplied with
- madmom.
- fps : float
- Frames per second. Must correspond to the fps of the incoming
- activations and the model.
-
- References
- ----------
- .. [1] Filip Korzeniowski and Gerhard Widmer,
- "A Fully Convolutional Deep Auditory Model for Musical Chord
- Recognition",
- Proceedings of IEEE International Workshop on Machine Learning for
- Signal Processing (MLSP), 2016.
-
- Examples
- --------
- To recognise chords using the CRFChordRecognitionProcessor, you first need
- to extract features using the CNNChordFeatureProcessor.
-
- >>> featproc = CNNChordFeatureProcessor()
- >>> featproc # doctest: +ELLIPSIS
- <madmom.features.chords.CNNChordFeatureProcessor object at 0x...>
-
- Then, create the CRFChordRecognitionProcessor to decode a chord sequence
- from the extracted features:
-
- >>> decode = CRFChordRecognitionProcessor()
- >>> decode # doctest: +ELLIPSIS
- <madmom.features.chords.CRFChordRecognitionProcessor object at 0x...>
-
- To transcribe the chords, you can either manually call the processors
- one after another,
-
- >>> feats = featproc('tests/data/audio/sample2.wav')
- >>> decode(feats)
- ... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS +NORMALIZE_ARRAYS
- array([(0. , 0.2, 'N'), (0.2, 1.6, 'F:maj'),
- (1.6, 2.4..., 'A:maj'), (2.4..., 4.1, 'D:min')],
- dtype=[('start', '<f8'), ('end', '<f8'), ('label', 'O')])
-
- or create a `SequentialProcessor` that connects them:
-
- >>> from madmom.processors import SequentialProcessor
- >>> chordrec = SequentialProcessor([featproc, decode])
- >>> chordrec('tests/data/audio/sample2.wav')
- ... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS +NORMALIZE_ARRAYS
- array([(0. , 0.2, 'N'), (0.2, 1.6, 'F:maj'),
- (1.6, 2.4..., 'A:maj'), (2.4..., 4.1, 'D:min')],
- dtype=[('start', '<f8'), ('end', '<f8'), ('label', 'O')])
- def normalize(self, img: PILImage, mean, std, size) -> Dict[str, np.ndarray]:
- im = img.convert("RGB").resize(size, Image.LANCZOS)
-
- im_ary = np.array(im)
- im_ary = im_ary / np.max(im_ary)
-
- tmpImg = np.zeros((im_ary.shape[0], im_ary.shape[1], 3))
- tmpImg[:, :, 0] = (im_ary[:, :, 0] - mean[0]) / std[0]
- tmpImg[:, :, 1] = (im_ary[:, :, 1] - mean[1]) / std[1]
- tmpImg[:, :, 2] = (im_ary[:, :, 2] - mean[2]) / std[2]
-
- tmpImg = tmpImg.transpose((2, 0, 1))
-
- return {
- self.inner_session.get_inputs()[0]
- .name: np.expand_dims(tmpImg, 0)
- .astype(np.float32)
- }
-
- def predict(self, img: PILImage) -> List[PILImage]:
- raise NotImplementedError
diff --git a/spaces/MichaelXin/openai-test/app.py b/spaces/MichaelXin/openai-test/app.py
deleted file mode 100644
index 892e35c15af69b75466cb1d82b1447c56f698872..0000000000000000000000000000000000000000
--- a/spaces/MichaelXin/openai-test/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-
-import openai
-import os
-
-openai.api_key = os.environ.get("OPENAI_API_KEY")
-
-class Conversation:
- def __init__(self, prompt, num_of_round):
- self.prompt = prompt
- self.num_of_round = num_of_round
- self.messages = []
- self.messages.append({"role": "system", "content": self.prompt})
-
- def ask(self, question):
- try:
- self.messages.append({"role": "user", "content": question})
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=self.messages,
- temperature=0.5,
- max_tokens=2048,
- top_p=1,
- )
- except Exception as e:
- print(e)
- return e
-
- message = response["choices"][0]["message"]["content"]
- self.messages.append({"role": "assistant", "content": message})
-
- if len(self.messages) > self.num_of_round*2 + 1:
- print("debug: %d, %d" % (len(self.messages), self.num_of_round*2 + 1))
- # del self.messages[1:3] # alternative: remove the oldest user/assistant round in place
- self.messages = self.messages[:1] + self.messages[3:]
- return message
-
-
-import gradio as gr
-prompt = """你是一个中国厨师,用中文回答做菜的问题。你的回答需要满足以下要求:
-1. 你的回答必须是中文
-2. 回答限制在100个字以内"""
-
-conv = Conversation(prompt, 5)
-
-def answer(question, history=[]):
- history.append(question)
- response = conv.ask(question)
- history.append(response)
- responses = [(u,b) for u,b in zip(history[::2], history[1::2])]
- return responses, history
-
-with gr.Blocks(css="#chatbot{height:300px} .overflow-y-auto{height:500px}") as demo:
- chatbot = gr.Chatbot(elem_id="chatbot")
- state = gr.State([])
-
- with gr.Row():
- txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(container=False)
-
- txt.submit(answer, [txt, state], [chatbot, state])
-
-demo.launch()
diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/apps/eval_spaces.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/apps/eval_spaces.py
deleted file mode 100644
index b0cf689d24f70d95aa0d491fd04987296802e492..0000000000000000000000000000000000000000
--- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/apps/eval_spaces.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import sys
-import os
-
-sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
-ROOT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-import time
-import json
-import numpy as np
-import torch
-from torch.utils.data import DataLoader
-
-from lib.options import BaseOptions
-from lib.mesh_util import *
-from lib.sample_util import *
-from lib.train_util import *
-from lib.model import *
-
-from PIL import Image
-import torchvision.transforms as transforms
-
-import trimesh
-from datetime import datetime
-
-# get options
-opt = BaseOptions().parse()
-
-class Evaluator:
- def __init__(self, opt, projection_mode='orthogonal'):
- self.opt = opt
- self.load_size = self.opt.loadSize
- self.to_tensor = transforms.Compose([
- transforms.Resize(self.load_size),
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
- ])
- # set cuda
- cuda = torch.device('cuda:%d' % opt.gpu_id) if torch.cuda.is_available() else torch.device('cpu')
- print("CUDDAAAAA ???", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "NO ONLY CPU")
-
- # create net
- netG = HGPIFuNet(opt, projection_mode).to(device=cuda)
- print('Using Network: ', netG.name)
-
- if opt.load_netG_checkpoint_path:
- netG.load_state_dict(torch.load(opt.load_netG_checkpoint_path, map_location=cuda))
-
- if opt.load_netC_checkpoint_path is not None:
- print('loading for net C ...', opt.load_netC_checkpoint_path)
- netC = ResBlkPIFuNet(opt).to(device=cuda)
- netC.load_state_dict(torch.load(opt.load_netC_checkpoint_path, map_location=cuda))
- else:
- netC = None
-
- os.makedirs(opt.results_path, exist_ok=True)
- os.makedirs('%s/%s' % (opt.results_path, opt.name), exist_ok=True)
-
- opt_log = os.path.join(opt.results_path, opt.name, 'opt.txt')
- with open(opt_log, 'w') as outfile:
- outfile.write(json.dumps(vars(opt), indent=2))
-
- self.cuda = cuda
- self.netG = netG
- self.netC = netC
-
- def load_image(self, image_path, mask_path):
- # Name
- img_name = os.path.splitext(os.path.basename(image_path))[0]
- # Calib
- B_MIN = np.array([-1, -1, -1])
- B_MAX = np.array([1, 1, 1])
- projection_matrix = np.identity(4)
- projection_matrix[1, 1] = -1
- calib = torch.Tensor(projection_matrix).float()
- # Mask
- mask = Image.open(mask_path).convert('L')
- mask = transforms.Resize(self.load_size)(mask)
- mask = transforms.ToTensor()(mask).float()
- # image
- image = Image.open(image_path).convert('RGB')
- image = self.to_tensor(image)
- image = mask.expand_as(image) * image
- return {
- 'name': img_name,
- 'img': image.unsqueeze(0),
- 'calib': calib.unsqueeze(0),
- 'mask': mask.unsqueeze(0),
- 'b_min': B_MIN,
- 'b_max': B_MAX,
- }
-
- def eval(self, data, use_octree=False):
- '''
- Evaluate a data point
- :param data: a dict containing at least ['name'], ['image'], ['calib'], ['b_min'] and ['b_max'] tensors.
- :return:
- '''
- opt = self.opt
- with torch.no_grad():
- self.netG.eval()
- if self.netC:
- self.netC.eval()
- save_path = '%s/%s/result_%s.obj' % (opt.results_path, opt.name, data['name'])
- if self.netC:
- gen_mesh_color(opt, self.netG, self.netC, self.cuda, data, save_path, use_octree=use_octree)
- else:
- gen_mesh(opt, self.netG, self.cuda, data, save_path, use_octree=use_octree)
-
-
-if __name__ == '__main__':
- evaluator = Evaluator(opt)
-
- results_path = opt.results_path
- name = opt.name
- test_image_path = opt.img_path
- test_mask_path = test_image_path[:-4] +'_mask.png'
- test_img_name = os.path.splitext(os.path.basename(test_image_path))[0]
- print("test_image: ", test_image_path)
- print("test_mask: ", test_mask_path)
-
- try:
- start_time = datetime.now()
- print("evaluating", start_time)
- data = evaluator.load_image(test_image_path, test_mask_path)
- evaluator.eval(data, False)
- print("done evaluating", datetime.now() - start_time)
- except Exception as e:
- print("error:", e.args)
-
- try:
- mesh = trimesh.load(f'{results_path}/{name}/result_{test_img_name}.obj')
- mesh.apply_transform([[1, 0, 0, 0],
- [0, 1, 0, 0],
- [0, 0, -1, 0],
- [0, 0, 0, 1]])
- mesh.export(file_obj=f'{results_path}/{name}/result_{test_img_name}.glb')
- except Exception as e:
- print("error generating MESH", e)
diff --git a/spaces/MiloSobral/PortiloopDemo/portiloop/src/demo/offline.py b/spaces/MiloSobral/PortiloopDemo/portiloop/src/demo/offline.py
deleted file mode 100644
index f252ff4f73b5a20cecdec436074532df6db8bd59..0000000000000000000000000000000000000000
--- a/spaces/MiloSobral/PortiloopDemo/portiloop/src/demo/offline.py
+++ /dev/null
@@ -1,158 +0,0 @@
-import numpy as np
-from portiloop.src.detection import SleepSpindleRealTimeDetector
-from portiloop.src.stimulation import UpStateDelayer
-from portiloop.src.processing import FilterPipeline
-from portiloop.src.demo.utils import OfflineIsolatedSpindleRealTimeStimulator, OfflineSpindleTrainRealTimeStimulator, compute_output_table, sleep_stage, xdf2array, offline_detect, offline_filter, OfflineSleepSpindleRealTimeStimulator
-import gradio as gr
-
-
-def run_offline(xdf_file, detect_filter_opts, threshold, channel_num, freq, detect_trains, stimulation_phase="Fast", buffer_time=0.25):
- # Get the options from the checkbox group
- offline_filtering = 0 in detect_filter_opts
- lacourse = 1 in detect_filter_opts
- wamsley = 2 in detect_filter_opts
- online_filtering = 3 in detect_filter_opts
- online_detection = 4 in detect_filter_opts
-
- # Make sure the inputs make sense:
- if not offline_filtering and (lacourse or wamsley):
- raise gr.Error("You can't use the offline detection methods without offline filtering.")
-
- if not online_filtering and online_detection:
- raise gr.Error("You can't use the online detection without online filtering.")
-
- if xdf_file is None:
- raise gr.Error("Please upload a .xdf file.")
-
- freq = int(freq)
-
- # Read the xdf file to a numpy array
- print("Loading xdf file...")
- data_whole, columns = xdf2array(xdf_file.name, int(channel_num))
-
- # Do the offline filtering of the data
- if offline_filtering:
- print("Filtering offline...")
- offline_filtered_data = offline_filter(data_whole[:, columns.index("raw_signal")], freq)
- # Expand the dimension of the filtered data to match the shape of the other columns
- offline_filtered_data = np.expand_dims(offline_filtered_data, axis=1)
- data_whole = np.concatenate((data_whole, offline_filtered_data), axis=1)
- columns.append("offline_filtered_signal")
-
- # Do the sleep staging approximation
- if wamsley or lacourse:
- print("Sleep staging...")
- mask = sleep_stage(data_whole[:, columns.index("offline_filtered_signal")], threshold=150, group_size=100)
-
- # Do Wamsley's method
- if wamsley:
- print("Running Wamsley detection...")
- wamsley_data = offline_detect("Wamsley", \
- data_whole[:, columns.index("offline_filtered_signal")],\
- data_whole[:, columns.index("time_stamps")],\
- freq, mask)
- wamsley_data = np.expand_dims(wamsley_data, axis=1)
- data_whole = np.concatenate((data_whole, wamsley_data), axis=1)
- columns.append("wamsley_spindles")
-
- # Do Lacourse's method
- if lacourse:
- print("Running Lacourse detection...")
- lacourse_data = offline_detect("Lacourse", \
- data_whole[:, columns.index("offline_filtered_signal")],\
- data_whole[:, columns.index("time_stamps")],\
- freq, mask)
- lacourse_data = np.expand_dims(lacourse_data, axis=1)
- data_whole = np.concatenate((data_whole, lacourse_data), axis=1)
- columns.append("lacourse_spindles")
-
- # Get the data from the raw signal column
- data = data_whole[:, columns.index("raw_signal")]
-
- # Create the online filtering pipeline
- if online_filtering:
- filter = FilterPipeline(nb_channels=1, sampling_rate=freq)
-
- # Create the detector
- if online_detection:
- detector = SleepSpindleRealTimeDetector(threshold=threshold, channel=1) # always 1 because we have only one channel
-
- if detect_trains == "All Spindles":
- stimulator = OfflineSleepSpindleRealTimeStimulator()
- elif detect_trains == "Trains":
- stimulator = OfflineSpindleTrainRealTimeStimulator()
- elif detect_trains == "Isolated & First":
- stimulator = OfflineIsolatedSpindleRealTimeStimulator()
-
- if stimulation_phase != "Fast":
- stimulation_delayer = UpStateDelayer(freq, stimulation_phase == 'Peak', time_to_buffer=buffer_time, stimulate=lambda: None)
- stimulator.add_delayer(stimulation_delayer)
-
-
- if online_filtering or online_detection:
- print("Running online filtering and detection...")
-
- points = []
- online_activations = []
- delayed_stims = []
-
- # Go through the data
- for index, point in enumerate(data):
- # Filter the data
- if online_filtering:
- filtered_point = filter.filter(np.array([point]))
- else:
- filtered_point = point
- filtered_point = filtered_point.tolist()
- points.append(filtered_point[0])
-
- if online_detection:
- # Detect the spindles
- result = detector.detect([filtered_point])
-
- if stimulation_phase != "Fast":
- delayed_stim = stimulation_delayer.step_timesteps(filtered_point[0])
- if delayed_stim:
- delayed_stims.append(1)
- else:
- delayed_stims.append(0)
-
- # Stimulate if necessary
- stim = stimulator.stimulate(result)
- if stim:
- online_activations.append(1)
- else:
- online_activations.append(0)
-
- if online_filtering:
- online_filtered = np.array(points)
- online_filtered = np.expand_dims(online_filtered, axis=1)
- data_whole = np.concatenate((data_whole, online_filtered), axis=1)
- columns.append("online_filtered_signal")
-
- if online_detection:
- online_activations = np.array(online_activations)
- online_activations = np.expand_dims(online_activations, axis=1)
- data_whole = np.concatenate((data_whole, online_activations), axis=1)
- columns.append("online_stimulations")
-
- if stimulation_phase != "Fast":
- delayed_stims = np.array(delayed_stims)
- delayed_stims = np.expand_dims(delayed_stims, axis=1)
- data_whole = np.concatenate((data_whole, delayed_stims), axis=1)
- columns.append("delayed_stimulations")
-
- print("Saving output...")
- # Output the data to a csv file
- np.savetxt("output.csv", data_whole, delimiter=",", header=",".join(columns), comments="")
-
- # Compute the overlap of the online stimulations with the offline detections
-
- output_table = compute_output_table(
- data_whole[:, columns.index("online_stimulations")],
- data_whole[:, columns.index("online_stimulations_portiloop")],
- data_whole[:, columns.index("lacourse_spindles")] if lacourse else None,
- data_whole[:, columns.index("wamsley_spindles")] if wamsley else None,)
-
- print("Done!")
- return "output.csv", output_table
\ No newline at end of file
diff --git a/spaces/MirageML/sjc/voxnerf/render.py b/spaces/MirageML/sjc/voxnerf/render.py
deleted file mode 100644
index a69b529b035c247429d9ab824b4307c0b3c6d7bc..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/voxnerf/render.py
+++ /dev/null
@@ -1,226 +0,0 @@
-import numpy as np
-import torch
-from my3d import unproject
-
-
-def subpixel_rays_from_img(H, W, K, c2w_pose, normalize_dir=True, f=8):
- assert c2w_pose[3, 3] == 1.
- H, W = H * f, W * f
- n = H * W
- ys, xs = np.meshgrid(range(H), range(W), indexing="ij")
- xy_coords = np.stack([xs, ys], axis=-1).reshape(n, 2)
-
- top_left = np.array([-0.5, -0.5]) + 1 / (2 * f)
- xy_coords = top_left + xy_coords / f
-
- ro = c2w_pose[:, -1]
- pts = unproject(K, xy_coords, depth=1)
- pts = pts @ c2w_pose.T
- rd = pts - ro
- rd = rd[:, :3]
- if normalize_dir:
- rd = rd / np.linalg.norm(rd, axis=-1, keepdims=True)
- ro = np.tile(ro[:3], (n, 1))
- return ro, rd
-
-
-def rays_from_img(H, W, K, c2w_pose, normalize_dir=True):
- assert c2w_pose[3, 3] == 1.
- n = H * W
- ys, xs = np.meshgrid(range(H), range(W), indexing="ij")
- xy_coords = np.stack([xs, ys], axis=-1).reshape(n, 2)
-
- ro = c2w_pose[:, -1]
- pts = unproject(K, xy_coords, depth=1)
- pts = pts @ c2w_pose.T
- rd = pts - ro # equivalently can subtract [0,0,0,1] before pose transform
- rd = rd[:, :3]
- if normalize_dir:
- rd = rd / np.linalg.norm(rd, axis=-1, keepdims=True)
- ro = np.tile(ro[:3], (n, 1))
- return ro, rd
-
-
-def ray_box_intersect(ro, rd, aabb):
- """
- Intersection of ray with axis-aligned bounding box
- This routine works for arbitrary dimensions; commonly d = 2 or 3
- only works for numpy, not torch (which has slightly diff api for min, max, and clone)
-
- Args:
- ro: [n, d] ray origin
- rd: [n, d] ray direction (assumed to be already normalized;
- if not still fine, meaning of t as time of flight holds true)
- aabb: [d, 2] bbox bound on each dim
- Return:
- is_intersect: [n,] of bool, whether the particular ray intersects the bbox
- t_min: [n,] ray entrance time
- t_max: [n,] ray exit time
- """
- n = ro.shape[0]
- d = aabb.shape[0]
- assert aabb.shape == (d, 2)
- assert ro.shape == (n, d) and rd.shape == (n, d)
-
- rd = rd.copy()
- rd[rd == 0] = 1e-6 # avoid div overflow; logically safe to give it big t
-
- ro = ro.reshape(n, d, 1)
- rd = rd.reshape(n, d, 1)
- ts = (aabb - ro) / rd # [n, d, 2]
- t_min = ts.min(-1).max(-1) # [n,] last of entrance
- t_max = ts.max(-1).min(-1) # [n,] first of exit
- is_intersect = t_min < t_max
-
- return is_intersect, t_min, t_max
-
-
-def as_torch_tsrs(device, *args):
- ret = []
- for elem in args:
- target_dtype = torch.float32 if np.issubdtype(elem.dtype, np.floating) else None
- ret.append(
- torch.as_tensor(elem, dtype=target_dtype, device=device)
- )
- return ret
-
-
-def group_mask_filter(mask, *items):
- return [elem[mask] for elem in items]
-
-
-def mask_back_fill(tsr, N, inds, base_value=1.0):
- shape = [N, *tsr.shape[1:]]
- canvas = base_value * np.ones_like(tsr, shape=shape)
- canvas[inds] = tsr
- return canvas
-
-
-def render_one_view(model, aabb, H, W, K, pose):
- N = H * W
- bs = max(W * 5, 4096) # render 5 rows at a time, with a floor of 4096 rays per chunk
-
- ro, rd = rays_from_img(H, W, K, pose)
- ro, rd, t_min, t_max, intsct_inds = scene_box_filter(ro, rd, aabb)
- n = len(ro)
- # print(f"{n} vs {N}") # n can be smaller than N since some rays do not intsct aabb
-
- # n = n // 1 # actual number of rays to render; only needed for fast debugging
-
- dev = model.device
- ro, rd, t_min, t_max = as_torch_tsrs(dev, ro, rd, t_min, t_max)
- rgbs = torch.zeros(n, 3, device=dev)
- depth = torch.zeros(n, 1, device=dev)
-
- with torch.no_grad():
- for i in range(int(np.ceil(n / bs))):
- s = i * bs
- e = min(n, s + bs)
- _rgbs, _depth, _ = render_ray_bundle(
- model, ro[s:e], rd[s:e], t_min[s:e], t_max[s:e]
- )
- rgbs[s:e] = _rgbs
- depth[s:e] = _depth
-
- rgbs, depth = rgbs.cpu().numpy(), depth.cpu().numpy()
-
- base_color = 1.0 # empty region needs to be white
- rgbs = mask_back_fill(rgbs, N, intsct_inds, base_color).reshape(H, W, 3)
- depth = mask_back_fill(depth, N, intsct_inds, base_color).reshape(H, W)
- return rgbs, depth
-
-
-def scene_box_filter(ro, rd, aabb):
- N = len(ro)
- _, t_min, t_max = ray_box_intersect(ro, rd, aabb)
- # do not render what's behind the ray origin
- t_min, t_max = np.maximum(t_min, 0), np.maximum(t_max, 0)
- # can test intersect logic by reducing the focal length
- is_intsct = t_min < t_max
- ro, rd, t_min, t_max = group_mask_filter(is_intsct, ro, rd, t_min, t_max)
- intsct_inds = np.arange(N)[is_intsct]
- return ro, rd, t_min, t_max, intsct_inds
-
-
-def render_ray_bundle(model, ro, rd, t_min, t_max):
- """
- The working shape is (k, n, 3) where k is num of samples per ray, n the ray batch size
- During integration the reduction is applied on k
-
- chain of filtering
- starting with ro, rd (from cameras), and a scene bbox
- - rays that do not intersect scene bbox; sample pts that fall outside the bbox
- - samples that do not fall within alpha mask
- - samples whose densities are very low; no need to compute colors on them
- """
- num_samples, step_size = model.get_num_samples((t_max - t_min).max())
- n, k = len(ro), num_samples
-
- ticks = step_size * torch.arange(k, device=ro.device)
- ticks = ticks.view(k, 1, 1)
- t_min = t_min.view(n, 1)
- # t_min = t_min + step_size * torch.rand_like(t_min) # NOTE seems useless
- t_max = t_max.view(n, 1)
- dists = t_min + ticks # [n, 1], [k, 1, 1] -> [k, n, 1]
- pts = ro + rd * dists # [n, 3], [n, 3], [k, n, 1] -> [k, n, 3]
- mask = (ticks < (t_max - t_min)).squeeze(-1) # [k, 1, 1], [n, 1] -> [k, n, 1] -> [k, n]
- smp_pts = pts[mask]
-
- if model.alphaMask is not None:
- alphas = model.alphaMask.sample_alpha(smp_pts)
- alpha_mask = alphas > 0
- mask[mask.clone()] = alpha_mask
- smp_pts = pts[mask]
-
- σ = torch.zeros(k, n, device=ro.device)
- σ[mask] = model.compute_density_feats(smp_pts)
- weights = volume_rend_weights(σ, step_size)
- mask = weights > model.ray_march_weight_thres
- smp_pts = pts[mask]
-
- app_feats = model.compute_app_feats(smp_pts)
- # viewdirs = rd.view(1, n, 3).expand(k, n, 3)[mask] # ray dirs for each point
- # additional wild factors here as in nerf-w; wild factors are optimizable
- c_dim = app_feats.shape[-1]
- colors = torch.zeros(k, n, c_dim, device=ro.device)
- colors[mask] = model.feats2color(app_feats)
-
- weights = weights.view(k, n, 1) # can be used to compute other expected vals e.g. depth
- bg_weight = 1. - weights.sum(dim=0) # [n, 1]
-
- rgbs = (weights * colors).sum(dim=0) # [n, 3]
-
- if model.blend_bg_texture:
- uv = spherical_xyz_to_uv(rd)
- bg_feats = model.compute_bg(uv)
- bg_color = model.feats2color(bg_feats)
- rgbs = rgbs + bg_weight * bg_color
- else:
- rgbs = rgbs + bg_weight * 1. # blend white bg color
-
- # rgbs = rgbs.clamp(0, 1) # don't clamp since this is can be SD latent features
-
- E_dists = (weights * dists).sum(dim=0)
- bg_dist = 10. # blend bg distance; just don't make it too large
- E_dists = E_dists + bg_weight * bg_dist
- return rgbs, E_dists, weights.squeeze(-1)
-
-
-def spherical_xyz_to_uv(xyz):
- # xyz is Tensor of shape [N, 3], uv in [-1, 1]
- x, y, z = xyz.t() # [N]
- xy = (x ** 2 + y ** 2) ** 0.5
- u = torch.atan2(xy, z) / torch.pi # [N]
- v = torch.atan2(y, x) / (torch.pi * 2) + 0.5 # [N]
- uv = torch.stack([u, v], -1) # [N, 2]
- uv = uv * 2 - 1 # [0, 1] -> [-1, 1]
- return uv
-
-
-def volume_rend_weights(σ, dist):
- α = 1 - torch.exp(-σ * dist)
- T = torch.ones_like(α)
- T[1:] = (1 - α).cumprod(dim=0)[:-1]
- assert (T >= 0).all()
- weights = α * T
- return weights
diff --git a/spaces/Mohammed-Khalil/Chat_with_Youtube_Videos/README.md b/spaces/Mohammed-Khalil/Chat_with_Youtube_Videos/README.md
deleted file mode 100644
index 73ec48485e94051fdbf6b2aba2e318aaa231cf41..0000000000000000000000000000000000000000
--- a/spaces/Mohammed-Khalil/Chat_with_Youtube_Videos/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Chat With Youtube Videos
-emoji: 🏃
-colorFrom: purple
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mysterykey/Mystery/greeting.md b/spaces/Mysterykey/Mystery/greeting.md
deleted file mode 100644
index 3e4a77b07b772c4119916ccab91e7a0670352b6e..0000000000000000000000000000000000000000
--- a/spaces/Mysterykey/Mystery/greeting.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Rules
--
-Toddler content is strictly forbidden; running Shume or emotion bots as an auxiliary model is strictly forbidden.
--
-
-
-Donate (Patron): https://toon.at/donate/mystery
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/language_model/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/language_model/README.md
deleted file mode 100644
index e78ea48e08dc99b69751923762107a8f8a9a5e3e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/language_model/README.md
+++ /dev/null
@@ -1,123 +0,0 @@
-# Neural Language Modeling
-
-## Pre-trained models
-
-Model | Description | Dataset | Download
----|---|---|---
-`transformer_lm.gbw.adaptive_huge` | Adaptive Inputs ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) 1026M params | [Google Billion Words](https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2)
-`transformer_lm.wiki103.adaptive` | Adaptive Inputs ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) 247M params | [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2)
-`transformer_lm.wmt19.en` | English LM ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz)
-`transformer_lm.wmt19.de` | German LM ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz)
-`transformer_lm.wmt19.ru` | Russian LM ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz)
-
-## Example usage
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install fastBPE sacremoses
-```
-
-To sample from a language model using PyTorch Hub:
-```python
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq') # [..., 'transformer_lm.wmt19.en', ...]
-
-# Load an English LM trained on WMT'19 News Crawl data
-en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe')
-en_lm.eval() # disable dropout
-
-# Move model to GPU
-en_lm.cuda()
-
-# Sample from the language model
-en_lm.sample('Barack Obama', beam=1, sampling=True, sampling_topk=10, temperature=0.8)
-# "Barack Obama is coming to Sydney and New Zealand (...)"
-
-# Compute perplexity for a sequence
-en_lm.score('Barack Obama is coming to Sydney and New Zealand')['positional_scores'].mean().neg().exp()
-# tensor(15.1474)
-
-# The same interface can be used with custom models as well
-from fairseq.models.transformer_lm import TransformerLanguageModel
-custom_lm = TransformerLanguageModel.from_pretrained('/path/to/model/dir', 'checkpoint100.pt', tokenizer='moses', bpe='fastbpe')
-custom_lm.sample('Barack Obama', beam=5)
-# "Barack Obama (...)"
-```
-
-## Training a transformer language model with the CLI tools
-
-### 1) Preprocess the data
-
-First download and prepare the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/):
-```bash
-cd examples/language_model/
-bash prepare-wikitext-103.sh
-cd ../..
-```
-
-Next preprocess/binarize the data:
-```bash
-TEXT=examples/language_model/wikitext-103
-fairseq-preprocess \
- --only-source \
- --trainpref $TEXT/wiki.train.tokens \
- --validpref $TEXT/wiki.valid.tokens \
- --testpref $TEXT/wiki.test.tokens \
- --destdir data-bin/wikitext-103 \
- --workers 20
-```
-
-### 2) Train a language model
-
-Next we'll train a basic transformer language model on wikitext-103. For more
-advanced usage, see the [adaptive inputs README](README.adaptive_inputs.md).
-
-To train a basic LM (assumes 2 GPUs):
-```bash
-fairseq-train --task language_modeling \
- data-bin/wikitext-103 \
- --save-dir checkpoints/transformer_wikitext-103 \
- --arch transformer_lm --share-decoder-input-output-embed \
- --dropout 0.1 \
- --optimizer adam --adam-betas '(0.9, 0.98)' --weight-decay 0.01 --clip-norm 0.0 \
- --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \
- --tokens-per-sample 512 --sample-break-mode none \
- --max-tokens 2048 --update-freq 16 \
- --fp16 \
- --max-update 50000
-```
-
-If you run out of memory, try reducing `--max-tokens` (max number of tokens per
-batch) or `--tokens-per-sample` (max sequence length). You can also adjust
-`--update-freq` to accumulate gradients and simulate training on a different
-number of GPUs.
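-
-For example, a lower-memory variant of the command above halves `--max-tokens`
-and doubles `--update-freq`, which keeps the effective batch size roughly
-constant while gradient accumulation does more of the work (the exact values
-are illustrative, not tuned):
-```bash
-fairseq-train --task language_modeling \
- data-bin/wikitext-103 \
- --save-dir checkpoints/transformer_wikitext-103 \
- --arch transformer_lm --share-decoder-input-output-embed \
- --dropout 0.1 \
- --optimizer adam --adam-betas '(0.9, 0.98)' --weight-decay 0.01 --clip-norm 0.0 \
- --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \
- --tokens-per-sample 512 --sample-break-mode none \
- --max-tokens 1024 --update-freq 32 \
- --fp16 \
- --max-update 50000
-```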
-
-### 3) Evaluate
-
-```bash
-fairseq-eval-lm data-bin/wikitext-103 \
- --path checkpoints/transformer_wikitext-103/checkpoint_best.pt \
- --batch-size 2 \
- --tokens-per-sample 512 \
- --context-window 400
-# | Evaluated 245569 tokens in 56.1s (4379.02 tokens/s)
-# | Loss: 3.4164, Perplexity: 30.46
-```
-
-*Note:* The `--context-window` option controls how much context is provided to
-each token when computing perplexity. When the window size is 0, the dataset is
-chunked into segments of length 512 and perplexity is computed over each segment
-normally. However, this results in worse (higher) perplexity since tokens that
-appear earlier in each segment have less conditioning. When the maximum window
-size is used (511 in this case), then we compute perplexity for each token
-fully conditioned on 511 tokens of context. This slows down evaluation
-significantly, since we must run a separate forward pass for every token in the
-dataset, but results in better (lower) perplexity.
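-
-As a concrete sketch, the two extremes described above correspond to the
-following invocations; they are the same evaluation command with only
-`--context-window` changed (the exact numbers you get depend on your checkpoint):
-```bash
-# chunked evaluation: independent 512-token segments, no extra context (fast, pessimistic)
-fairseq-eval-lm data-bin/wikitext-103 \
- --path checkpoints/transformer_wikitext-103/checkpoint_best.pt \
- --batch-size 2 \
- --tokens-per-sample 512 \
- --context-window 0
-
-# fully conditioned evaluation: 511 tokens of context for every token (slow, lowest perplexity)
-fairseq-eval-lm data-bin/wikitext-103 \
- --path checkpoints/transformer_wikitext-103/checkpoint_best.pt \
- --batch-size 2 \
- --tokens-per-sample 512 \
- --context-window 511
-```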
-
-
-## Convolutional language models
-
-Please see the [convolutional LM README](README.conv.md) for instructions on
-training convolutional language models.
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc
deleted file mode 100644
index e18fb62df52ab85d7802615d8619b0fd94a08f8c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include <iostream>
-#include "fstext/fstext-lib.h" // @manual
-#include "util/common-utils.h" // @manual
-
-/*
- * This program is to modify a FST without self-loop by:
- * for each incoming arc with non-eps input symbol, add a self-loop arc
- * with that non-eps symbol as input and eps as output.
- *
- * This is to make sure the resultant FST can do deduplication for repeated
- * symbols, which is very common in acoustic model outputs.
- *
- */
-namespace {
-int32 AddSelfLoopsSimple(fst::StdVectorFst* fst) {
- typedef fst::MutableArcIterator<fst::StdVectorFst> IterType;
-
- int32 num_states_before = fst->NumStates();
- fst::MakePrecedingInputSymbolsSame(false, fst);
- int32 num_states_after = fst->NumStates();
- KALDI_LOG << "There are " << num_states_before
- << " states in the original FST; "
- << " after MakePrecedingInputSymbolsSame, there are "
- << num_states_after << " states " << std::endl;
-
- auto weight_one = fst::StdArc::Weight::One();
-
- int32 num_arc_added = 0;
-
- fst::StdArc self_loop_arc;
- self_loop_arc.weight = weight_one;
-
- int32 num_states = fst->NumStates();
- std::vector<std::set<int32>> incoming_non_eps_label_per_state(num_states);
-
- for (int32 state = 0; state < num_states; state++) {
- for (IterType aiter(fst, state); !aiter.Done(); aiter.Next()) {
- fst::StdArc arc(aiter.Value());
- if (arc.ilabel != 0) {
- incoming_non_eps_label_per_state[arc.nextstate].insert(arc.ilabel);
- }
- }
- }
-
- for (int32 state = 0; state < num_states; state++) {
- if (!incoming_non_eps_label_per_state[state].empty()) {
- auto& ilabel_set = incoming_non_eps_label_per_state[state];
- for (auto it = ilabel_set.begin(); it != ilabel_set.end(); it++) {
- self_loop_arc.ilabel = *it;
- self_loop_arc.olabel = 0;
- self_loop_arc.nextstate = state;
- fst->AddArc(state, self_loop_arc);
- num_arc_added++;
- }
- }
- }
- return num_arc_added;
-}
-
-void print_usage() {
- std::cout << "add-self-loop-simple usage:\n"
- "\tadd-self-loop-simple \n";
-}
-} // namespace
-
-int main(int argc, char** argv) {
- if (argc != 3) {
- print_usage();
- exit(1);
- }
-
- auto input = argv[1];
- auto output = argv[2];
-
- auto fst = fst::ReadFstKaldi(input);
- auto num_states = fst->NumStates();
- KALDI_LOG << "Loading FST from " << input << " with " << num_states
- << " states." << std::endl;
-
- int32 num_arc_added = AddSelfLoopsSimple(fst);
- KALDI_LOG << "Adding " << num_arc_added << " self-loop arcs " << std::endl;
-
- fst::WriteFstKaldi(*fst, std::string(output));
- KALDI_LOG << "Writing FST to " << output << std::endl;
-
- delete fst;
-}
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/byte_bpe.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/byte_bpe.py
deleted file mode 100644
index 31e3a0627827f19ca7f0b58da45e46d40a80c3bf..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/byte_bpe.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-from dataclasses import dataclass, field
-
-from fairseq import file_utils
-from fairseq.data.encoders import register_bpe
-from fairseq.data.encoders.byte_utils import (
- SPACE,
- SPACE_ESCAPE,
- byte_encode,
- smart_byte_decode,
-)
-from fairseq.dataclass import FairseqDataclass
-
-
-@dataclass
-class ByteBpeConfig(FairseqDataclass):
- sentencepiece_model_path: str = field(
- default="???", metadata={"help": "path to sentencepiece model"}
- )
-
-
-@register_bpe("byte_bpe", dataclass=ByteBpeConfig)
-class ByteBPE(object):
- def __init__(self, cfg):
- vocab = file_utils.cached_path(cfg.sentencepiece_model_path)
- try:
- import sentencepiece as spm
-
- self.sp = spm.SentencePieceProcessor()
- self.sp.Load(vocab)
- except ImportError:
- raise ImportError(
- "Please install sentencepiece with: pip install sentencepiece"
- )
-
- def encode(self, x: str) -> str:
- byte_encoded = byte_encode(x)
- return SPACE.join(self.sp.EncodeAsPieces(byte_encoded))
-
- @staticmethod
- def decode(x: str) -> str:
- unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE)
- return smart_byte_decode(unescaped)
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/models/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/models/__init__.py
deleted file mode 100644
index c593ea5f1842794bfcc952fc93c679a5f16aeb98..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/discriminative_reranking_nmt/models/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .discriminative_reranking_model import DiscriminativeNMTReranker
-
-
-__all__ = [
- "DiscriminativeNMTReranker",
-]
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh
deleted file mode 100644
index e9a80001eb47d5af863d6aab11a59362a59cef61..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-#!/bin/bash
-
-sil_prob=0.5
-num_sil_states=3
-num_nonsil_states=1
-
-. ./cmd.sh
-. ./path.sh
-. parse_options.sh
-
-set -eux
-
-dict=$1
-data_dir=$2
-
-dict_dir=$data_dir/local/dict
-tmplm_dir=$data_dir/local/lang_tmp
-lm_dir=$data_dir/lang
-
-mkdir -p $dict_dir $tmplm_dir $lm_dir
-
-# prepare dict
-echo "SIL" > $dict_dir/silence_phones.txt
-echo "SIL" > $dict_dir/optional_silence.txt
-awk '{print $1}' $dict > $dict_dir/nonsilence_phones.txt
-
-echo "SIL SIL" > $dict_dir/lexicon.txt
-echo " SIL" >> $dict_dir/lexicon.txt
-awk '{print $1" "$1}' $dict >> $dict_dir/lexicon.txt
-
-echo "SIL" > $dict_dir/extra_questions.txt
-awk '{printf $1" "} END {printf "\n"}' $dict >> $dict_dir/extra_questions.txt
-
-# prepare lang
-utils/prepare_lang.sh --sil-prob $sil_prob --position-dependent-phones false \
- --num_sil_states $num_sil_states --num_nonsil_states $num_nonsil_states \
- $dict_dir "" $tmplm_dir $lm_dir
diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/sort_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/sort_dataset.py
deleted file mode 100644
index b3890e7279e1f26db2e48ec0a91c639e9299d60f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/sort_dataset.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-
-from . import BaseWrapperDataset
-
-
-class SortDataset(BaseWrapperDataset):
- def __init__(self, dataset, sort_order):
- super().__init__(dataset)
- if not isinstance(sort_order, (list, tuple)):
- sort_order = [sort_order]
- self.sort_order = sort_order
-
- assert all(len(so) == len(dataset) for so in sort_order)
-
- def ordered_indices(self):
- return np.lexsort(self.sort_order)
diff --git a/spaces/OIUGLK/bingo/src/components/providers.tsx b/spaces/OIUGLK/bingo/src/components/providers.tsx
deleted file mode 100644
index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/src/components/providers.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { ThemeProvider as NextThemesProvider } from 'next-themes'
-import { ThemeProviderProps } from 'next-themes/dist/types'
-
-import { TooltipProvider } from '@/components/ui/tooltip'
-
-export function Providers({ children, ...props }: ThemeProviderProps) {
- return (
- <NextThemesProvider {...props}>
- <TooltipProvider>{children}</TooltipProvider>
- </NextThemesProvider>
- )
-}
diff --git a/spaces/ORI-Muchim/MinamiTTS/monotonic_align/core.py b/spaces/ORI-Muchim/MinamiTTS/monotonic_align/core.py
deleted file mode 100644
index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/MinamiTTS/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/fast_rcnn.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/fast_rcnn.py
deleted file mode 100644
index 42eba21092af71bafdc1875fdbe3a5ae1e3fd543..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/fast_rcnn.py
+++ /dev/null
@@ -1,462 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-from typing import Dict, List, Tuple, Union
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple
-from detectron2.modeling.box_regression import Box2BoxTransform, _dense_box_regression_loss
-from detectron2.structures import Boxes, Instances
-from detectron2.utils.events import get_event_storage
-
-__all__ = ["fast_rcnn_inference", "FastRCNNOutputLayers"]
-
-
-logger = logging.getLogger(__name__)
-
-"""
-Shape shorthand in this module:
-
- N: number of images in the minibatch
- R: number of ROIs, combined over all images, in the minibatch
- Ri: number of ROIs in image i
- K: number of foreground classes. E.g., there are 80 foreground classes in COCO.
-
-Naming convention:
-
- deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box
- transform (see :class:`box_regression.Box2BoxTransform`).
-
- pred_class_logits: predicted class scores in [-inf, +inf]; use
- softmax(pred_class_logits) to estimate P(class).
-
- gt_classes: ground-truth classification labels in [0, K], where [0, K) represent
- foreground object classes and K represents the background class.
-
- pred_proposal_deltas: predicted box2box transform deltas for transforming proposals
- to detection box predictions.
-
- gt_proposal_deltas: ground-truth box2box transform deltas
-"""
-
-
-def fast_rcnn_inference(
- boxes: List[torch.Tensor],
- scores: List[torch.Tensor],
- image_shapes: List[Tuple[int, int]],
- score_thresh: float,
- nms_thresh: float,
- topk_per_image: int,
-):
- """
- Call `fast_rcnn_inference_single_image` for all images.
-
- Args:
- boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic
- boxes for each image. Element i has shape (Ri, K * 4) if doing
- class-specific regression, or (Ri, 4) if doing class-agnostic
- regression, where Ri is the number of predicted objects for image i.
- This is compatible with the output of :meth:`FastRCNNOutputLayers.predict_boxes`.
- scores (list[Tensor]): A list of Tensors of predicted class scores for each image.
- Element i has shape (Ri, K + 1), where Ri is the number of predicted objects
- for image i. Compatible with the output of :meth:`FastRCNNOutputLayers.predict_probs`.
- image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch.
- score_thresh (float): Only return detections with a confidence score exceeding this
- threshold.
- nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1].
- topk_per_image (int): The number of top scoring detections to return. Set < 0 to return
- all detections.
-
- Returns:
- instances: (list[Instances]): A list of N instances, one for each image in the batch,
- that stores the topk most confidence detections.
- kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates
- the corresponding boxes/scores index in [0, Ri) from the input, for image i.
- """
- result_per_image = [
- fast_rcnn_inference_single_image(
- boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, topk_per_image
- )
- for scores_per_image, boxes_per_image, image_shape in zip(scores, boxes, image_shapes)
- ]
- return [x[0] for x in result_per_image], [x[1] for x in result_per_image]
-
-
-def _log_classification_stats(pred_logits, gt_classes, prefix="fast_rcnn"):
- """
- Log the classification metrics to EventStorage.
-
- Args:
- pred_logits: Rx(K+1) logits. The last column is for background class.
- gt_classes: R labels
- """
- num_instances = gt_classes.numel()
- if num_instances == 0:
- return
- pred_classes = pred_logits.argmax(dim=1)
- bg_class_ind = pred_logits.shape[1] - 1
-
- fg_inds = (gt_classes >= 0) & (gt_classes < bg_class_ind)
- num_fg = fg_inds.nonzero().numel()
- fg_gt_classes = gt_classes[fg_inds]
- fg_pred_classes = pred_classes[fg_inds]
-
- num_false_negative = (fg_pred_classes == bg_class_ind).nonzero().numel()
- num_accurate = (pred_classes == gt_classes).nonzero().numel()
- fg_num_accurate = (fg_pred_classes == fg_gt_classes).nonzero().numel()
-
- storage = get_event_storage()
- storage.put_scalar(f"{prefix}/cls_accuracy", num_accurate / num_instances)
- if num_fg > 0:
- storage.put_scalar(f"{prefix}/fg_cls_accuracy", fg_num_accurate / num_fg)
- storage.put_scalar(f"{prefix}/false_negative", num_false_negative / num_fg)
-
-
-def fast_rcnn_inference_single_image(
- boxes,
- scores,
- image_shape: Tuple[int, int],
- score_thresh: float,
- nms_thresh: float,
- topk_per_image: int,
-):
- """
- Single-image inference. Return bounding-box detection results by thresholding
- on scores and applying non-maximum suppression (NMS).
-
- Args:
- Same as `fast_rcnn_inference`, but with boxes, scores, and image shapes
- per image.
-
- Returns:
- Same as `fast_rcnn_inference`, but for only one image.
- """
- valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1)
- if not valid_mask.all():
- boxes = boxes[valid_mask]
- scores = scores[valid_mask]
-
- scores = scores[:, :-1]
- num_bbox_reg_classes = boxes.shape[1] // 4
- # Convert to Boxes to use the `clip` function ...
- boxes = Boxes(boxes.reshape(-1, 4))
- boxes.clip(image_shape)
- boxes = boxes.tensor.view(-1, num_bbox_reg_classes, 4) # R x C x 4
-
- # 1. Filter results based on detection scores. It can make NMS more efficient
- # by filtering out low-confidence detections.
- filter_mask = scores > score_thresh # R x K
- # R' x 2. First column contains indices of the R predictions;
- # Second column contains indices of classes.
- filter_inds = filter_mask.nonzero()
- if num_bbox_reg_classes == 1:
- boxes = boxes[filter_inds[:, 0], 0]
- else:
- boxes = boxes[filter_mask]
- scores = scores[filter_mask]
-
- # 2. Apply NMS for each class independently.
- keep = batched_nms(boxes, scores, filter_inds[:, 1], nms_thresh)
- if topk_per_image >= 0:
- keep = keep[:topk_per_image]
- boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep]
-
- result = Instances(image_shape)
- result.pred_boxes = Boxes(boxes)
- result.scores = scores
- result.pred_classes = filter_inds[:, 1]
- return result, filter_inds[:, 0]
-
-
-class FastRCNNOutputLayers(nn.Module):
- """
- Two linear layers for predicting Fast R-CNN outputs:
-
- 1. proposal-to-detection box regression deltas
- 2. classification scores
- """
-
- @configurable
- def __init__(
- self,
- input_shape: ShapeSpec,
- *,
- box2box_transform,
- num_classes: int,
- test_score_thresh: float = 0.0,
- test_nms_thresh: float = 0.5,
- test_topk_per_image: int = 100,
- cls_agnostic_bbox_reg: bool = False,
- smooth_l1_beta: float = 0.0,
- box_reg_loss_type: str = "smooth_l1",
- loss_weight: Union[float, Dict[str, float]] = 1.0,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- input_shape (ShapeSpec): shape of the input feature to this module
- box2box_transform (Box2BoxTransform or Box2BoxTransformRotated):
- num_classes (int): number of foreground classes
- test_score_thresh (float): threshold to filter predictions results.
- test_nms_thresh (float): NMS threshold for prediction results.
- test_topk_per_image (int): number of top predictions to produce per image.
- cls_agnostic_bbox_reg (bool): whether to use class agnostic for bbox regression
- smooth_l1_beta (float): transition point from L1 to L2 loss. Only used if
- `box_reg_loss_type` is "smooth_l1"
- box_reg_loss_type (str): Box regression loss type. One of: "smooth_l1", "giou",
- "diou", "ciou"
- loss_weight (float|dict): weights to use for losses. Can be single float for weighting
- all losses, or a dict of individual weightings. Valid dict keys are:
- * "loss_cls": applied to classification loss
- * "loss_box_reg": applied to box regression loss
- """
- super().__init__()
- if isinstance(input_shape, int): # some backward compatibility
- input_shape = ShapeSpec(channels=input_shape)
- self.num_classes = num_classes
- input_size = input_shape.channels * (input_shape.width or 1) * (input_shape.height or 1)
- # prediction layer for num_classes foreground classes and one background class (hence + 1)
- self.cls_score = nn.Linear(input_size, num_classes + 1)
- num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes
- box_dim = len(box2box_transform.weights)
- self.bbox_pred = nn.Linear(input_size, num_bbox_reg_classes * box_dim)
-
- nn.init.normal_(self.cls_score.weight, std=0.01)
- nn.init.normal_(self.bbox_pred.weight, std=0.001)
- for l in [self.cls_score, self.bbox_pred]:
- nn.init.constant_(l.bias, 0)
-
- self.box2box_transform = box2box_transform
- self.smooth_l1_beta = smooth_l1_beta
- self.test_score_thresh = test_score_thresh
- self.test_nms_thresh = test_nms_thresh
- self.test_topk_per_image = test_topk_per_image
- self.box_reg_loss_type = box_reg_loss_type
- if isinstance(loss_weight, float):
- loss_weight = {"loss_cls": loss_weight, "loss_box_reg": loss_weight}
- self.loss_weight = loss_weight
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- return {
- "input_shape": input_shape,
- "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS),
- # fmt: off
- "num_classes" : cfg.MODEL.ROI_HEADS.NUM_CLASSES,
- "cls_agnostic_bbox_reg" : cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG,
- "smooth_l1_beta" : cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA,
- "test_score_thresh" : cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST,
- "test_nms_thresh" : cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST,
- "test_topk_per_image" : cfg.TEST.DETECTIONS_PER_IMAGE,
- "box_reg_loss_type" : cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE,
- "loss_weight" : {"loss_box_reg": cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT},
- # fmt: on
- }
-
- def forward(self, x):
- """
- Args:
- x: per-region features of shape (N, ...) for N bounding boxes to predict.
-
- Returns:
- (Tensor, Tensor):
- First tensor: shape (N,K+1), scores for each of the N box. Each row contains the
- scores for K object categories and 1 background class.
-
- Second tensor: bounding box regression deltas for each box. Shape is shape (N,Kx4),
- or (N,4) for class-agnostic regression.
- """
- if x.dim() > 2:
- x = torch.flatten(x, start_dim=1)
- scores = self.cls_score(x)
- proposal_deltas = self.bbox_pred(x)
- return scores, proposal_deltas
-
- def losses(self, predictions, proposals):
- """
- Args:
- predictions: return values of :meth:`forward()`.
- proposals (list[Instances]): proposals that match the features that were used
- to compute predictions. The fields ``proposal_boxes``, ``gt_boxes``,
- ``gt_classes`` are expected.
-
- Returns:
- Dict[str, Tensor]: dict of losses
- """
- scores, proposal_deltas = predictions
-
- # parse classification outputs
- gt_classes = (
- cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0)
- )
- _log_classification_stats(scores, gt_classes)
-
- # parse box regression outputs
- if len(proposals):
- proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4
- assert not proposal_boxes.requires_grad, "Proposals should not require gradients!"
- # If "gt_boxes" does not exist, the proposals must be all negative and
- # should not be included in regression loss computation.
- # Here we just use proposal_boxes as an arbitrary placeholder because its
- # value won't be used in self.box_reg_loss().
- gt_boxes = cat(
- [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals],
- dim=0,
- )
- else:
- proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device)
-
- losses = {
- "loss_cls": cross_entropy(scores, gt_classes, reduction="mean"),
- "loss_box_reg": self.box_reg_loss(
- proposal_boxes, gt_boxes, proposal_deltas, gt_classes
- ),
- }
- return {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()}
-
- def box_reg_loss(self, proposal_boxes, gt_boxes, pred_deltas, gt_classes):
- """
- Args:
- proposal_boxes/gt_boxes are tensors with the same shape (R, 4 or 5).
- pred_deltas has shape (R, 4 or 5), or (R, num_classes * (4 or 5)).
- gt_classes is a long tensor of shape R, the gt class label of each proposal.
- R shall be the number of proposals.
- """
- box_dim = proposal_boxes.shape[1] # 4 or 5
- # Regression loss is only computed for foreground proposals (those matched to a GT)
- fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < self.num_classes))[0]
- if pred_deltas.shape[1] == box_dim: # cls-agnostic regression
- fg_pred_deltas = pred_deltas[fg_inds]
- else:
- fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[
- fg_inds, gt_classes[fg_inds]
- ]
-
- loss_box_reg = _dense_box_regression_loss(
- [proposal_boxes[fg_inds]],
- self.box2box_transform,
- [fg_pred_deltas.unsqueeze(0)],
- [gt_boxes[fg_inds]],
- ...,
- self.box_reg_loss_type,
- self.smooth_l1_beta,
- )
-
- # The reg loss is normalized using the total number of regions (R), not the number
- # of foreground regions even though the box regression loss is only defined on
- # foreground regions. Why? Because doing so gives equal training influence to
- # each foreground example. To see how, consider two different minibatches:
- # (1) Contains a single foreground region
- # (2) Contains 100 foreground regions
- # If we normalize by the number of foreground regions, the single example in
- # minibatch (1) will be given 100 times as much influence as each foreground
- # example in minibatch (2). Normalizing by the total number of regions, R,
- # means that the single example in minibatch (1) and each of the 100 examples
- # in minibatch (2) are given equal influence.
- return loss_box_reg / max(gt_classes.numel(), 1.0) # return 0 if empty
-
- def inference(self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances]):
- """
- Args:
- predictions: return values of :meth:`forward()`.
- proposals (list[Instances]): proposals that match the features that were
- used to compute predictions. The ``proposal_boxes`` field is expected.
-
- Returns:
- list[Instances]: same as `fast_rcnn_inference`.
- list[Tensor]: same as `fast_rcnn_inference`.
- """
- boxes = self.predict_boxes(predictions, proposals)
- scores = self.predict_probs(predictions, proposals)
- image_shapes = [x.image_size for x in proposals]
- return fast_rcnn_inference(
- boxes,
- scores,
- image_shapes,
- self.test_score_thresh,
- self.test_nms_thresh,
- self.test_topk_per_image,
- )
-
- def predict_boxes_for_gt_classes(self, predictions, proposals):
- """
- Args:
- predictions: return values of :meth:`forward()`.
- proposals (list[Instances]): proposals that match the features that were used
- to compute predictions. The fields ``proposal_boxes``, ``gt_classes`` are expected.
-
- Returns:
- list[Tensor]:
- A list of Tensors of predicted boxes for GT classes in case of
- class-specific box head. Element i of the list has shape (Ri, B), where Ri is
- the number of proposals for image i and B is the box dimension (4 or 5)
- """
- if not len(proposals):
- return []
- scores, proposal_deltas = predictions
- proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0)
- N, B = proposal_boxes.shape
- predict_boxes = self.box2box_transform.apply_deltas(
- proposal_deltas, proposal_boxes
- ) # Nx(KxB)
-
- K = predict_boxes.shape[1] // B
- if K > 1:
- gt_classes = torch.cat([p.gt_classes for p in proposals], dim=0)
- # Some proposals are ignored or have a background class. Their gt_classes
- # cannot be used as index.
- gt_classes = gt_classes.clamp_(0, K - 1)
-
- predict_boxes = predict_boxes.view(N, K, B)[
- torch.arange(N, dtype=torch.long, device=predict_boxes.device), gt_classes
- ]
- num_prop_per_image = [len(p) for p in proposals]
- return predict_boxes.split(num_prop_per_image)
-
- def predict_boxes(
- self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances]
- ):
- """
- Args:
- predictions: return values of :meth:`forward()`.
- proposals (list[Instances]): proposals that match the features that were
- used to compute predictions. The ``proposal_boxes`` field is expected.
-
- Returns:
- list[Tensor]:
- A list of Tensors of predicted class-specific or class-agnostic boxes
- for each image. Element i has shape (Ri, K * B) or (Ri, B), where Ri is
- the number of proposals for image i and B is the box dimension (4 or 5)
- """
- if not len(proposals):
- return []
- _, proposal_deltas = predictions
- num_prop_per_image = [len(p) for p in proposals]
- proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0)
- predict_boxes = self.box2box_transform.apply_deltas(
- proposal_deltas,
- proposal_boxes,
- ) # Nx(KxB)
- return predict_boxes.split(num_prop_per_image)
-
- def predict_probs(
- self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances]
- ):
- """
- Args:
- predictions: return values of :meth:`forward()`.
- proposals (list[Instances]): proposals that match the features that were
- used to compute predictions.
-
- Returns:
- list[Tensor]:
- A list of Tensors of predicted class probabilities for each image.
- Element i has shape (Ri, K + 1), where Ri is the number of proposals for image i.
- """
- scores, _ = predictions
- num_inst_per_image = [len(p) for p in proposals]
- probs = F.softmax(scores, dim=-1)
- return probs.split(num_inst_per_image, dim=0)
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/blur_tests.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/blur_tests.sh
deleted file mode 100644
index 8f204a4c643d08935e5561ed27a286536643958d..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/blur_tests.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-##!/usr/bin/env bash
-#
-## !!! file set to make test_large_30k from the vanilla test_large: configs/test_large_30k.lst
-#
-## paths to data are valid for mml7
-#PLACES_ROOT="/data/inpainting/Places365"
-#OUT_DIR="/data/inpainting/paper_data/Places365_val_test"
-#
-#source "$(dirname $0)/env.sh"
-#
-#for datadir in test_large_30k # val_large
-#do
-# for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
-# do
-# "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
-# "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 8
-#
-# "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
-# done
-#
-# for conf in segm_256 segm_512
-# do
-# "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
-# "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 2
-#
-# "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
-# done
-#done
-#
-#IN_DIR="/data/inpainting/paper_data/Places365_val_test/test_large_30k/random_medium_512"
-#PRED_DIR="/data/inpainting/predictions/final/images/r.suvorov_2021-03-05_17-08-35_train_ablv2_work_resume_epoch37/random_medium_512"
-#BLUR_OUT_DIR="/data/inpainting/predictions/final/blur/images"
-#
-#for b in 0.1
-#
-#"$BINDIR/blur_predicts.py" "$BASEDIR/../../configs/eval2.yaml" "$CUR_IN_DIR" "$CUR_OUT_DIR" "$CUR_EVAL_DIR"
-#
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/data.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/data.py
deleted file mode 100644
index 89a4ea4c9577e6131731444f149eec76978ec260..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/data.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import glob
-import os
-
-import cv2
-import PIL.Image as Image
-import numpy as np
-
-from torch.utils.data import Dataset
-import torch.nn.functional as F
-
-
-def load_image(fname, mode='RGB', return_orig=False):
- img = np.array(Image.open(fname).convert(mode))
- if img.ndim == 3:
- img = np.transpose(img, (2, 0, 1))
- out_img = img.astype('float32') / 255
- if return_orig:
- return out_img, img
- else:
- return out_img
-
-
-def ceil_modulo(x, mod):
- if x % mod == 0:
- return x
- return (x // mod + 1) * mod
-
-
-def pad_img_to_modulo(img, mod):
- channels, height, width = img.shape
- out_height = ceil_modulo(height, mod)
- out_width = ceil_modulo(width, mod)
- return np.pad(img, ((0, 0), (0, out_height - height), (0, out_width - width)), mode='symmetric')
-
-
-def pad_tensor_to_modulo(img, mod):
- batch_size, channels, height, width = img.shape
- out_height = ceil_modulo(height, mod)
- out_width = ceil_modulo(width, mod)
- return F.pad(img, pad=(0, out_width - width, 0, out_height - height), mode='reflect')
-
-
-def scale_image(img, factor, interpolation=cv2.INTER_AREA):
- if img.shape[0] == 1:
- img = img[0]
- else:
- img = np.transpose(img, (1, 2, 0))
-
- img = cv2.resize(img, dsize=None, fx=factor, fy=factor, interpolation=interpolation)
-
- if img.ndim == 2:
- img = img[None, ...]
- else:
- img = np.transpose(img, (2, 0, 1))
- return img
-
-
-class InpaintingDataset(Dataset):
- def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None):
- self.datadir = datadir
- self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, '**', '*mask*.png'), recursive=True)))
- self.img_filenames = [fname.rsplit('_mask', 1)[0] + img_suffix for fname in self.mask_filenames]
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.mask_filenames)
-
- def __getitem__(self, i):
- image = load_image(self.img_filenames[i], mode='RGB')
- mask = load_image(self.mask_filenames[i], mode='L')
- result = dict(image=image, mask=mask[None, ...])
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['unpad_to_size'] = result['image'].shape[1:]
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
-
- return result
-
-class OurInpaintingDataset(Dataset):
- def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None):
- self.datadir = datadir
- self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, 'mask', '**', '*mask*.png'), recursive=True)))
- self.img_filenames = [os.path.join(self.datadir, 'img', os.path.basename(fname.rsplit('-', 1)[0].rsplit('_', 1)[0]) + '.png') for fname in self.mask_filenames]
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.mask_filenames)
-
- def __getitem__(self, i):
- result = dict(image=load_image(self.img_filenames[i], mode='RGB'),
- mask=load_image(self.mask_filenames[i], mode='L')[None, ...])
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
-
- return result
-
-class PrecomputedInpaintingResultsDataset(InpaintingDataset):
- def __init__(self, datadir, predictdir, inpainted_suffix='_inpainted.jpg', **kwargs):
- super().__init__(datadir, **kwargs)
- if not datadir.endswith('/'):
- datadir += '/'
- self.predictdir = predictdir
- self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix)
- for fname in self.mask_filenames]
-
- def __getitem__(self, i):
- result = super().__getitem__(i)
- result['inpainted'] = load_image(self.pred_filenames[i])
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['inpainted'] = pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo)
- return result
-
-class OurPrecomputedInpaintingResultsDataset(OurInpaintingDataset):
- def __init__(self, datadir, predictdir, inpainted_suffix="png", **kwargs):
- super().__init__(datadir, **kwargs)
- if not datadir.endswith('/'):
- datadir += '/'
- self.predictdir = predictdir
- self.pred_filenames = [os.path.join(predictdir, os.path.basename(os.path.splitext(fname)[0]) + f'_inpainted.{inpainted_suffix}')
- for fname in self.mask_filenames]
- # self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix)
- # for fname in self.mask_filenames]
-
- def __getitem__(self, i):
- result = super().__getitem__(i)
-        result['inpainted'] = load_image(self.pred_filenames[i])
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['inpainted'] = pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo)
- return result
-
-class InpaintingEvalOnlineDataset(Dataset):
- def __init__(self, indir, mask_generator, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None, **kwargs):
- self.indir = indir
- self.mask_generator = mask_generator
- self.img_filenames = sorted(list(glob.glob(os.path.join(self.indir, '**', f'*{img_suffix}' ), recursive=True)))
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.img_filenames)
-
- def __getitem__(self, i):
- img, raw_image = load_image(self.img_filenames[i], mode='RGB', return_orig=True)
- mask = self.mask_generator(img, raw_image=raw_image)
- result = dict(image=img, mask=mask)
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
- return result
\ No newline at end of file
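The padding helpers in the deleted `data.py` simply round each image's height and width up to the next multiple of `mod` (typically the network's downsampling factor) and pad the bottom/right edges. A small self-contained sketch of the same idea with toy sizes:

```python
import numpy as np

def ceil_modulo(x, mod):
    # round x up to the nearest multiple of mod
    return x if x % mod == 0 else (x // mod + 1) * mod

img = np.zeros((3, 250, 333), dtype=np.float32)   # (C, H, W), toy sizes
mod = 8
out_h, out_w = ceil_modulo(250, mod), ceil_modulo(333, mod)
padded = np.pad(img, ((0, 0), (0, out_h - 250), (0, out_w - 333)), mode='symmetric')
print(padded.shape)   # (3, 256, 336)
```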
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/scripts/motion_process.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/scripts/motion_process.py
deleted file mode 100644
index 8b3395cfa348685d3df88375943bfe3c80980253..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/scripts/motion_process.py
+++ /dev/null
@@ -1,529 +0,0 @@
-from os.path import join as pjoin
-
-from ..common.skeleton import Skeleton
-import numpy as np
-import os
-from ..common.quaternion import *
-from ..utils.paramUtil import *
-
-import torch
-from tqdm import tqdm
-
-# positions (batch, joint_num, 3)
-def uniform_skeleton(positions, target_offset):
- src_skel = Skeleton(n_raw_offsets, kinematic_chain, 'cpu')
- src_offset = src_skel.get_offsets_joints(torch.from_numpy(positions[0]))
- src_offset = src_offset.numpy()
- tgt_offset = target_offset.numpy()
- # print(src_offset)
- # print(tgt_offset)
- '''Calculate Scale Ratio as the ratio of legs'''
- src_leg_len = np.abs(src_offset[l_idx1]).max() + np.abs(src_offset[l_idx2]).max()
- tgt_leg_len = np.abs(tgt_offset[l_idx1]).max() + np.abs(tgt_offset[l_idx2]).max()
-
- scale_rt = tgt_leg_len / src_leg_len
- # print(scale_rt)
- src_root_pos = positions[:, 0]
- tgt_root_pos = src_root_pos * scale_rt
-
- '''Inverse Kinematics'''
- quat_params = src_skel.inverse_kinematics_np(positions, face_joint_indx)
- # print(quat_params.shape)
-
- '''Forward Kinematics'''
- src_skel.set_offset(target_offset)
- new_joints = src_skel.forward_kinematics_np(quat_params, tgt_root_pos)
- return new_joints
-
-
-def extract_features(positions, feet_thre, n_raw_offsets, kinematic_chain, face_joint_indx, fid_r, fid_l):
- global_positions = positions.copy()
- """ Get Foot Contacts """
-
- def foot_detect(positions, thres):
- velfactor, heightfactor = np.array([thres, thres]), np.array([3.0, 2.0])
-
- feet_l_x = (positions[1:, fid_l, 0] - positions[:-1, fid_l, 0]) ** 2
- feet_l_y = (positions[1:, fid_l, 1] - positions[:-1, fid_l, 1]) ** 2
- feet_l_z = (positions[1:, fid_l, 2] - positions[:-1, fid_l, 2]) ** 2
- # feet_l_h = positions[:-1,fid_l,1]
- # feet_l = (((feet_l_x + feet_l_y + feet_l_z) < velfactor) & (feet_l_h < heightfactor)).astype(np.float64)
- feet_l = ((feet_l_x + feet_l_y + feet_l_z) < velfactor).astype(np.float64)
-
- feet_r_x = (positions[1:, fid_r, 0] - positions[:-1, fid_r, 0]) ** 2
- feet_r_y = (positions[1:, fid_r, 1] - positions[:-1, fid_r, 1]) ** 2
- feet_r_z = (positions[1:, fid_r, 2] - positions[:-1, fid_r, 2]) ** 2
- # feet_r_h = positions[:-1,fid_r,1]
- # feet_r = (((feet_r_x + feet_r_y + feet_r_z) < velfactor) & (feet_r_h < heightfactor)).astype(np.float64)
- feet_r = (((feet_r_x + feet_r_y + feet_r_z) < velfactor)).astype(np.float64)
- return feet_l, feet_r
-
- #
- feet_l, feet_r = foot_detect(positions, feet_thre)
- # feet_l, feet_r = foot_detect(positions, 0.002)
-
- '''Quaternion and Cartesian representation'''
- r_rot = None
-
- def get_rifke(positions):
- '''Local pose'''
- positions[..., 0] -= positions[:, 0:1, 0]
- positions[..., 2] -= positions[:, 0:1, 2]
- '''All pose face Z+'''
- positions = qrot_np(np.repeat(r_rot[:, None], positions.shape[1], axis=1), positions)
- return positions
-
- def get_quaternion(positions):
- skel = Skeleton(n_raw_offsets, kinematic_chain, "cpu")
- # (seq_len, joints_num, 4)
- quat_params = skel.inverse_kinematics_np(positions, face_joint_indx, smooth_forward=False)
-
- '''Fix Quaternion Discontinuity'''
- quat_params = qfix(quat_params)
- # (seq_len, 4)
- r_rot = quat_params[:, 0].copy()
- # print(r_rot[0])
- '''Root Linear Velocity'''
- # (seq_len - 1, 3)
- velocity = (positions[1:, 0] - positions[:-1, 0]).copy()
- # print(r_rot.shape, velocity.shape)
- velocity = qrot_np(r_rot[1:], velocity)
- '''Root Angular Velocity'''
- # (seq_len - 1, 4)
- r_velocity = qmul_np(r_rot[1:], qinv_np(r_rot[:-1]))
- quat_params[1:, 0] = r_velocity
- # (seq_len, joints_num, 4)
- return quat_params, r_velocity, velocity, r_rot
-
- def get_cont6d_params(positions):
- skel = Skeleton(n_raw_offsets, kinematic_chain, "cpu")
- # (seq_len, joints_num, 4)
- quat_params = skel.inverse_kinematics_np(positions, face_joint_indx, smooth_forward=True)
-
- '''Quaternion to continuous 6D'''
- cont_6d_params = quaternion_to_cont6d_np(quat_params)
- # (seq_len, 4)
- r_rot = quat_params[:, 0].copy()
- # print(r_rot[0])
- '''Root Linear Velocity'''
- # (seq_len - 1, 3)
- velocity = (positions[1:, 0] - positions[:-1, 0]).copy()
- # print(r_rot.shape, velocity.shape)
- velocity = qrot_np(r_rot[1:], velocity)
- '''Root Angular Velocity'''
- # (seq_len - 1, 4)
- r_velocity = qmul_np(r_rot[1:], qinv_np(r_rot[:-1]))
- # (seq_len, joints_num, 4)
- return cont_6d_params, r_velocity, velocity, r_rot
-
- cont_6d_params, r_velocity, velocity, r_rot = get_cont6d_params(positions)
- positions = get_rifke(positions)
-
- # trejec = np.cumsum(np.concatenate([np.array([[0, 0, 0]]), velocity], axis=0), axis=0)
- # r_rotations, r_pos = recover_ric_glo_np(r_velocity, velocity[:, [0, 2]])
-
- # plt.plot(positions_b[:, 0, 0], positions_b[:, 0, 2], marker='*')
- # plt.plot(ground_positions[:, 0, 0], ground_positions[:, 0, 2], marker='o', color='r')
- # plt.plot(trejec[:, 0], trejec[:, 2], marker='^', color='g')
- # plt.plot(r_pos[:, 0], r_pos[:, 2], marker='s', color='y')
- # plt.xlabel('x')
- # plt.ylabel('z')
- # plt.axis('equal')
- # plt.show()
-
- '''Root height'''
- root_y = positions[:, 0, 1:2]
-
- '''Root rotation and linear velocity'''
- # (seq_len-1, 1) rotation velocity along y-axis
-    # (seq_len-1, 2) linear velocity on xz plane
- r_velocity = np.arcsin(r_velocity[:, 2:3])
- l_velocity = velocity[:, [0, 2]]
- # print(r_velocity.shape, l_velocity.shape, root_y.shape)
- root_data = np.concatenate([r_velocity, l_velocity, root_y[:-1]], axis=-1)
-
- '''Get Joint Rotation Representation'''
-    # (seq_len, (joints_num-1)*6) continuous 6D rotation for skeleton joints
- rot_data = cont_6d_params[:, 1:].reshape(len(cont_6d_params), -1)
-
-    '''Get Joint Rotation Invariant Position Representation'''
- # (seq_len, (joints_num-1)*3) local joint position
- ric_data = positions[:, 1:].reshape(len(positions), -1)
-
- '''Get Joint Velocity Representation'''
- # (seq_len-1, joints_num*3)
- local_vel = qrot_np(np.repeat(r_rot[:-1, None], global_positions.shape[1], axis=1),
- global_positions[1:] - global_positions[:-1])
- local_vel = local_vel.reshape(len(local_vel), -1)
-
- data = root_data
- data = np.concatenate([data, ric_data[:-1]], axis=-1)
- data = np.concatenate([data, rot_data[:-1]], axis=-1)
- # print(dataset.shape, local_vel.shape)
- data = np.concatenate([data, local_vel], axis=-1)
- data = np.concatenate([data, feet_l, feet_r], axis=-1)
-
- return data
-
-
-def process_file(positions, feet_thre):
- # (seq_len, joints_num, 3)
- # '''Down Sample'''
- # positions = positions[::ds_num]
-
- '''Uniform Skeleton'''
- positions = uniform_skeleton(positions, tgt_offsets)
-
- '''Put on Floor'''
- floor_height = positions.min(axis=0).min(axis=0)[1]
- positions[:, :, 1] -= floor_height
- # print(floor_height)
-
- # plot_3d_motion("./positions_1.mp4", kinematic_chain, positions, 'title', fps=20)
-
- '''XZ at origin'''
- root_pos_init = positions[0]
- root_pose_init_xz = root_pos_init[0] * np.array([1, 0, 1])
- positions = positions - root_pose_init_xz
-
- # '''Move the first pose to origin '''
- # root_pos_init = positions[0]
- # positions = positions - root_pos_init[0]
-
- '''All initially face Z+'''
- r_hip, l_hip, sdr_r, sdr_l = face_joint_indx
- across1 = root_pos_init[r_hip] - root_pos_init[l_hip]
- across2 = root_pos_init[sdr_r] - root_pos_init[sdr_l]
- across = across1 + across2
- across = across / np.sqrt((across ** 2).sum(axis=-1))[..., np.newaxis]
-
- # forward (3,), rotate around y-axis
- forward_init = np.cross(np.array([[0, 1, 0]]), across, axis=-1)
- # forward (3,)
- forward_init = forward_init / np.sqrt((forward_init ** 2).sum(axis=-1))[..., np.newaxis]
-
- # print(forward_init)
-
- target = np.array([[0, 0, 1]])
- root_quat_init = qbetween_np(forward_init, target)
- root_quat_init = np.ones(positions.shape[:-1] + (4,)) * root_quat_init
-
- positions_b = positions.copy()
-
- positions = qrot_np(root_quat_init, positions)
-
- # plot_3d_motion("./positions_2.mp4", kinematic_chain, positions, 'title', fps=20)
-
- '''New ground truth positions'''
- global_positions = positions.copy()
-
- # plt.plot(positions_b[:, 0, 0], positions_b[:, 0, 2], marker='*')
- # plt.plot(positions[:, 0, 0], positions[:, 0, 2], marker='o', color='r')
- # plt.xlabel('x')
- # plt.ylabel('z')
- # plt.axis('equal')
- # plt.show()
-
- """ Get Foot Contacts """
-
- def foot_detect(positions, thres):
- velfactor, heightfactor = np.array([thres, thres]), np.array([3.0, 2.0])
-
- feet_l_x = (positions[1:, fid_l, 0] - positions[:-1, fid_l, 0]) ** 2
- feet_l_y = (positions[1:, fid_l, 1] - positions[:-1, fid_l, 1]) ** 2
- feet_l_z = (positions[1:, fid_l, 2] - positions[:-1, fid_l, 2]) ** 2
- # feet_l_h = positions[:-1,fid_l,1]
- # feet_l = (((feet_l_x + feet_l_y + feet_l_z) < velfactor) & (feet_l_h < heightfactor)).astype(np.float64)
- feet_l = ((feet_l_x + feet_l_y + feet_l_z) < velfactor).astype(np.float64)
-
- feet_r_x = (positions[1:, fid_r, 0] - positions[:-1, fid_r, 0]) ** 2
- feet_r_y = (positions[1:, fid_r, 1] - positions[:-1, fid_r, 1]) ** 2
- feet_r_z = (positions[1:, fid_r, 2] - positions[:-1, fid_r, 2]) ** 2
- # feet_r_h = positions[:-1,fid_r,1]
- # feet_r = (((feet_r_x + feet_r_y + feet_r_z) < velfactor) & (feet_r_h < heightfactor)).astype(np.float64)
- feet_r = (((feet_r_x + feet_r_y + feet_r_z) < velfactor)).astype(np.float64)
- return feet_l, feet_r
- #
- feet_l, feet_r = foot_detect(positions, feet_thre)
- # feet_l, feet_r = foot_detect(positions, 0.002)
-
- '''Quaternion and Cartesian representation'''
- r_rot = None
-
- def get_rifke(positions):
- '''Local pose'''
- positions[..., 0] -= positions[:, 0:1, 0]
- positions[..., 2] -= positions[:, 0:1, 2]
- '''All pose face Z+'''
- positions = qrot_np(np.repeat(r_rot[:, None], positions.shape[1], axis=1), positions)
- return positions
-
- def get_quaternion(positions):
- skel = Skeleton(n_raw_offsets, kinematic_chain, "cpu")
- # (seq_len, joints_num, 4)
- quat_params = skel.inverse_kinematics_np(positions, face_joint_indx, smooth_forward=False)
-
- '''Fix Quaternion Discontinuity'''
- quat_params = qfix(quat_params)
- # (seq_len, 4)
- r_rot = quat_params[:, 0].copy()
- # print(r_rot[0])
- '''Root Linear Velocity'''
- # (seq_len - 1, 3)
- velocity = (positions[1:, 0] - positions[:-1, 0]).copy()
- # print(r_rot.shape, velocity.shape)
- velocity = qrot_np(r_rot[1:], velocity)
- '''Root Angular Velocity'''
- # (seq_len - 1, 4)
- r_velocity = qmul_np(r_rot[1:], qinv_np(r_rot[:-1]))
- quat_params[1:, 0] = r_velocity
- # (seq_len, joints_num, 4)
- return quat_params, r_velocity, velocity, r_rot
-
- def get_cont6d_params(positions):
- skel = Skeleton(n_raw_offsets, kinematic_chain, "cpu")
- # (seq_len, joints_num, 4)
- quat_params = skel.inverse_kinematics_np(positions, face_joint_indx, smooth_forward=True)
-
- '''Quaternion to continuous 6D'''
- cont_6d_params = quaternion_to_cont6d_np(quat_params)
- # (seq_len, 4)
- r_rot = quat_params[:, 0].copy()
- # print(r_rot[0])
- '''Root Linear Velocity'''
- # (seq_len - 1, 3)
- velocity = (positions[1:, 0] - positions[:-1, 0]).copy()
- # print(r_rot.shape, velocity.shape)
- velocity = qrot_np(r_rot[1:], velocity)
- '''Root Angular Velocity'''
- # (seq_len - 1, 4)
- r_velocity = qmul_np(r_rot[1:], qinv_np(r_rot[:-1]))
- # (seq_len, joints_num, 4)
- return cont_6d_params, r_velocity, velocity, r_rot
-
- cont_6d_params, r_velocity, velocity, r_rot = get_cont6d_params(positions)
- positions = get_rifke(positions)
-
- # trejec = np.cumsum(np.concatenate([np.array([[0, 0, 0]]), velocity], axis=0), axis=0)
- # r_rotations, r_pos = recover_ric_glo_np(r_velocity, velocity[:, [0, 2]])
-
- # plt.plot(positions_b[:, 0, 0], positions_b[:, 0, 2], marker='*')
- # plt.plot(ground_positions[:, 0, 0], ground_positions[:, 0, 2], marker='o', color='r')
- # plt.plot(trejec[:, 0], trejec[:, 2], marker='^', color='g')
- # plt.plot(r_pos[:, 0], r_pos[:, 2], marker='s', color='y')
- # plt.xlabel('x')
- # plt.ylabel('z')
- # plt.axis('equal')
- # plt.show()
-
- '''Root height'''
- root_y = positions[:, 0, 1:2]
-
- '''Root rotation and linear velocity'''
- # (seq_len-1, 1) rotation velocity along y-axis
-    # (seq_len-1, 2) linear velocity on xz plane
- r_velocity = np.arcsin(r_velocity[:, 2:3])
- l_velocity = velocity[:, [0, 2]]
- # print(r_velocity.shape, l_velocity.shape, root_y.shape)
- root_data = np.concatenate([r_velocity, l_velocity, root_y[:-1]], axis=-1)
-
- '''Get Joint Rotation Representation'''
-    # (seq_len, (joints_num-1)*6) continuous 6D rotation for skeleton joints
- rot_data = cont_6d_params[:, 1:].reshape(len(cont_6d_params), -1)
-
-    '''Get Joint Rotation Invariant Position Representation'''
- # (seq_len, (joints_num-1)*3) local joint position
- ric_data = positions[:, 1:].reshape(len(positions), -1)
-
- '''Get Joint Velocity Representation'''
- # (seq_len-1, joints_num*3)
- local_vel = qrot_np(np.repeat(r_rot[:-1, None], global_positions.shape[1], axis=1),
- global_positions[1:] - global_positions[:-1])
- local_vel = local_vel.reshape(len(local_vel), -1)
-
- data = root_data
- data = np.concatenate([data, ric_data[:-1]], axis=-1)
- data = np.concatenate([data, rot_data[:-1]], axis=-1)
- # print(dataset.shape, local_vel.shape)
- data = np.concatenate([data, local_vel], axis=-1)
- data = np.concatenate([data, feet_l, feet_r], axis=-1)
-
- return data, global_positions, positions, l_velocity
-
-
-# Recover global angle and positions for rotation dataset
-# root_rot_velocity (B, seq_len, 1)
-# root_linear_velocity (B, seq_len, 2)
-# root_y (B, seq_len, 1)
-# ric_data (B, seq_len, (joint_num - 1)*3)
-# rot_data (B, seq_len, (joint_num - 1)*6)
-# local_velocity (B, seq_len, joint_num*3)
-# foot contact (B, seq_len, 4)
-def recover_root_rot_pos(data):
- rot_vel = data[..., 0]
- r_rot_ang = torch.zeros_like(rot_vel).to(data.device)
- '''Get Y-axis rotation from rotation velocity'''
- r_rot_ang[..., 1:] = rot_vel[..., :-1]
- r_rot_ang = torch.cumsum(r_rot_ang, dim=-1)
-
- r_rot_quat = torch.zeros(data.shape[:-1] + (4,)).to(data.device)
- r_rot_quat[..., 0] = torch.cos(r_rot_ang)
- r_rot_quat[..., 2] = torch.sin(r_rot_ang)
-
- r_pos = torch.zeros(data.shape[:-1] + (3,)).to(data.device)
- r_pos[..., 1:, [0, 2]] = data[..., :-1, 1:3]
- '''Add Y-axis rotation to root position'''
- r_pos = qrot(qinv(r_rot_quat), r_pos)
-
- r_pos = torch.cumsum(r_pos, dim=-2)
-
- r_pos[..., 1] = data[..., 3]
- return r_rot_quat, r_pos
-
-
-def recover_from_rot(data, joints_num, skeleton):
- r_rot_quat, r_pos = recover_root_rot_pos(data)
-
- r_rot_cont6d = quaternion_to_cont6d(r_rot_quat)
-
- start_indx = 1 + 2 + 1 + (joints_num - 1) * 3
- end_indx = start_indx + (joints_num - 1) * 6
- cont6d_params = data[..., start_indx:end_indx]
- # print(r_rot_cont6d.shape, cont6d_params.shape, r_pos.shape)
- cont6d_params = torch.cat([r_rot_cont6d, cont6d_params], dim=-1)
- cont6d_params = cont6d_params.view(-1, joints_num, 6)
-
- positions = skeleton.forward_kinematics_cont6d(cont6d_params, r_pos)
-
- return positions
-
-def recover_rot(data):
-    # data: [bs, seqlen, 263/251] for HumanML3D / KIT
- joints_num = 22 if data.shape[-1] == 263 else 21
- r_rot_quat, r_pos = recover_root_rot_pos(data)
- r_pos_pad = torch.cat([r_pos, torch.zeros_like(r_pos)], dim=-1).unsqueeze(-2)
- r_rot_cont6d = quaternion_to_cont6d(r_rot_quat)
- start_indx = 1 + 2 + 1 + (joints_num - 1) * 3
- end_indx = start_indx + (joints_num - 1) * 6
- cont6d_params = data[..., start_indx:end_indx]
- cont6d_params = torch.cat([r_rot_cont6d, cont6d_params], dim=-1)
- cont6d_params = cont6d_params.view(-1, joints_num, 6)
- cont6d_params = torch.cat([cont6d_params, r_pos_pad], dim=-2)
- return cont6d_params
-
-
-def recover_from_ric(data, joints_num):
- r_rot_quat, r_pos = recover_root_rot_pos(data)
- positions = data[..., 4:(joints_num - 1) * 3 + 4]
- positions = positions.view(positions.shape[:-1] + (-1, 3))
-
- '''Add Y-axis rotation to local joints'''
- positions = qrot(qinv(r_rot_quat[..., None, :]).expand(positions.shape[:-1] + (4,)), positions)
-
- '''Add root XZ to joints'''
- positions[..., 0] += r_pos[..., 0:1]
- positions[..., 2] += r_pos[..., 2:3]
-
-    '''Concatenate root and joints'''
- positions = torch.cat([r_pos.unsqueeze(-2), positions], dim=-2)
-
- return positions
-'''
-For Text2Motion Dataset
-'''
-'''
-if __name__ == "__main__":
- example_id = "000021"
- # Lower legs
- l_idx1, l_idx2 = 5, 8
- # Right/Left foot
- fid_r, fid_l = [8, 11], [7, 10]
- # Face direction, r_hip, l_hip, sdr_r, sdr_l
- face_joint_indx = [2, 1, 17, 16]
- # l_hip, r_hip
- r_hip, l_hip = 2, 1
- joints_num = 22
- # ds_num = 8
- data_dir = '../dataset/pose_data_raw/joints/'
- save_dir1 = '../dataset/pose_data_raw/new_joints/'
- save_dir2 = '../dataset/pose_data_raw/new_joint_vecs/'
-
- n_raw_offsets = torch.from_numpy(t2m_raw_offsets)
- kinematic_chain = t2m_kinematic_chain
-
- # Get offsets of target skeleton
- example_data = np.load(os.path.join(data_dir, example_id + '.npy'))
- example_data = example_data.reshape(len(example_data), -1, 3)
- example_data = torch.from_numpy(example_data)
- tgt_skel = Skeleton(n_raw_offsets, kinematic_chain, 'cpu')
- # (joints_num, 3)
- tgt_offsets = tgt_skel.get_offsets_joints(example_data[0])
- # print(tgt_offsets)
-
- source_list = os.listdir(data_dir)
- frame_num = 0
- for source_file in tqdm(source_list):
- source_data = np.load(os.path.join(data_dir, source_file))[:, :joints_num]
- try:
- dataset, ground_positions, positions, l_velocity = process_file(source_data, 0.002)
- rec_ric_data = recover_from_ric(torch.from_numpy(dataset).unsqueeze(0).float(), joints_num)
- np.save(pjoin(save_dir1, source_file), rec_ric_data.squeeze().numpy())
- np.save(pjoin(save_dir2, source_file), dataset)
- frame_num += dataset.shape[0]
- except Exception as e:
- print(source_file)
- print(e)
-
- print('Total clips: %d, Frames: %d, Duration: %fm' %
- (len(source_list), frame_num, frame_num / 20 / 60))
-'''
-
-if __name__ == "__main__":
- example_id = "03950_gt"
- # Lower legs
- l_idx1, l_idx2 = 17, 18
- # Right/Left foot
- fid_r, fid_l = [14, 15], [19, 20]
- # Face direction, r_hip, l_hip, sdr_r, sdr_l
- face_joint_indx = [11, 16, 5, 8]
- # l_hip, r_hip
- r_hip, l_hip = 11, 16
- joints_num = 21
- # ds_num = 8
- data_dir = '../dataset/kit_mocap_dataset/joints/'
- save_dir1 = '../dataset/kit_mocap_dataset/new_joints/'
- save_dir2 = '../dataset/kit_mocap_dataset/new_joint_vecs/'
-
- n_raw_offsets = torch.from_numpy(kit_raw_offsets)
- kinematic_chain = kit_kinematic_chain
-
- '''Get offsets of target skeleton'''
- example_data = np.load(os.path.join(data_dir, example_id + '.npy'))
- example_data = example_data.reshape(len(example_data), -1, 3)
- example_data = torch.from_numpy(example_data)
- tgt_skel = Skeleton(n_raw_offsets, kinematic_chain, 'cpu')
- # (joints_num, 3)
- tgt_offsets = tgt_skel.get_offsets_joints(example_data[0])
- # print(tgt_offsets)
-
- source_list = os.listdir(data_dir)
- frame_num = 0
- '''Read source dataset'''
- for source_file in tqdm(source_list):
- source_data = np.load(os.path.join(data_dir, source_file))[:, :joints_num]
- try:
- name = ''.join(source_file[:-7].split('_')) + '.npy'
- data, ground_positions, positions, l_velocity = process_file(source_data, 0.05)
- rec_ric_data = recover_from_ric(torch.from_numpy(data).unsqueeze(0).float(), joints_num)
- if np.isnan(rec_ric_data.numpy()).any():
- print(source_file)
- continue
- np.save(pjoin(save_dir1, name), rec_ric_data.squeeze().numpy())
- np.save(pjoin(save_dir2, name), data)
- frame_num += data.shape[0]
- except Exception as e:
- print(source_file)
- print(e)
-
- print('Total clips: %d, Frames: %d, Duration: %fm' %
- (len(source_list), frame_num, frame_num / 12.5 / 60))
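`recover_root_rot_pos` above reconstructs the root trajectory by integrating the per-frame yaw velocity and planar linear velocity with cumulative sums, rotating each local velocity into the world frame first. A toy NumPy sketch of that idea (the values and the rotation sign convention are illustrative, not taken from the quaternion code):

```python
import numpy as np

rot_vel = np.array([0.0, 0.1, 0.1, 0.1])          # yaw velocity per frame (rad)
lin_vel = np.array([[1.0, 0.0]] * 4)              # local (x, z) velocity per frame

# shift by one frame, then integrate the yaw velocity (as in recover_root_rot_pos)
yaw = np.concatenate([[0.0], np.cumsum(rot_vel[:-1])])

# rotate each local velocity into the world frame, then integrate the position
cos, sin = np.cos(yaw), np.sin(yaw)
world_vel = np.stack([cos * lin_vel[:, 0] + sin * lin_vel[:, 1],
                      -sin * lin_vel[:, 0] + cos * lin_vel[:, 1]], axis=-1)
root_xz = np.cumsum(world_vel, axis=0)
print(root_xz)
```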
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/models/utils/blocks.py b/spaces/OpenMotionLab/MotionGPT/mGPT/models/utils/blocks.py
deleted file mode 100644
index e657b3863f4b74530e7afbb66973ccf24c18ff50..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/models/utils/blocks.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mGPT.models.notused import AdaptiveInstanceNorm1d
-
-
-class MLP(nn.Module):
-
- def __init__(self, cfg, out_dim, is_init):
- super(MLP, self).__init__()
- dims = cfg.MODEL.MOTION_DECODER.MLP_DIM
- n_blk = len(dims)
- norm = 'none'
- acti = 'lrelu'
-
- layers = []
- for i in range(n_blk - 1):
- layers += LinearBlock(dims[i], dims[i + 1], norm=norm, acti=acti)
- layers += LinearBlock(dims[-1], out_dim, norm='none', acti='none')
- self.model = nn.Sequential(*layers)
-
- if is_init:
- for m in self.modules():
- if isinstance(m, nn.Linear):
- #nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- nn.init.constant_(m.weight, 1)
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- return self.model(x.view(x.size(0), -1))
-
-
-def ZeroPad1d(sizes):
- return nn.ConstantPad1d(sizes, 0)
-
-
-def get_acti_layer(acti='relu', inplace=True):
-
- if acti == 'relu':
- return [nn.ReLU(inplace=inplace)]
- elif acti == 'lrelu':
- return [nn.LeakyReLU(0.2, inplace=inplace)]
- elif acti == 'tanh':
- return [nn.Tanh()]
- elif acti == 'none':
- return []
- else:
- assert 0, "Unsupported activation: {}".format(acti)
-
-
-def get_norm_layer(norm='none', norm_dim=None):
-
- if norm == 'bn':
- return [nn.BatchNorm1d(norm_dim)]
- elif norm == 'in':
- # return [nn.InstanceNorm1d(norm_dim, affine=False)] # for rt42!
- return [nn.InstanceNorm1d(norm_dim, affine=True)]
- elif norm == 'adain':
- return [AdaptiveInstanceNorm1d(norm_dim)]
- elif norm == 'none':
- return []
- else:
- assert 0, "Unsupported normalization: {}".format(norm)
-
-
-def get_dropout_layer(dropout=None):
- if dropout is not None:
- return [nn.Dropout(p=dropout)]
- else:
- return []
-
-
-def ConvLayers(kernel_size,
- in_channels,
- out_channels,
- stride=1,
- pad_type='reflect',
- use_bias=True):
- """
-    Returns a list of [pad, conv]; extend an existing layer list with it, then wrap the result in nn.Sequential.
- """
-
- if pad_type == 'reflect':
- pad = nn.ReflectionPad1d
- elif pad_type == 'replicate':
- pad = nn.ReplicationPad1d
- elif pad_type == 'zero':
- pad = ZeroPad1d
- else:
- assert 0, "Unsupported padding type: {}".format(pad_type)
-
- pad_l = (kernel_size - 1) // 2
- pad_r = kernel_size - 1 - pad_l
- return [
- pad((pad_l, pad_r)),
- nn.Conv1d(in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- bias=use_bias)
- ]
-
-
-def ConvBlock(kernel_size,
- in_channels,
- out_channels,
- stride=1,
- pad_type='reflect',
- dropout=None,
- norm='none',
- acti='lrelu',
- acti_first=False,
- use_bias=True,
- inplace=True):
- """
-    Returns a list of [pad, conv, norm, acti] or, if acti_first, [acti, pad, conv, norm].
- """
-
- layers = ConvLayers(kernel_size,
- in_channels,
- out_channels,
- stride=stride,
- pad_type=pad_type,
- use_bias=use_bias)
- layers += get_dropout_layer(dropout)
- layers += get_norm_layer(norm, norm_dim=out_channels)
- acti_layers = get_acti_layer(acti, inplace=inplace)
-
- if acti_first:
- return acti_layers + layers
- else:
- return layers + acti_layers
-
-
-def LinearBlock(in_dim, out_dim, dropout=None, norm='none', acti='relu'):
-
- use_bias = True
- layers = []
- layers.append(nn.Linear(in_dim, out_dim, bias=use_bias))
- layers += get_dropout_layer(dropout)
- layers += get_norm_layer(norm, norm_dim=out_dim)
- layers += get_acti_layer(acti)
-
- return layers
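These block helpers return plain lists of layers so callers can concatenate several of them and wrap the result once in `nn.Sequential`, as the `MLP` class above does. A self-contained mini version of the pattern (dims and activations are illustrative, not taken from the repo config):

```python
import torch
import torch.nn as nn

def linear_block(in_dim, out_dim, acti='lrelu'):
    # return a *list* of layers, in the spirit of LinearBlock above
    layers = [nn.Linear(in_dim, out_dim)]
    if acti == 'lrelu':
        layers += [nn.LeakyReLU(0.2, inplace=True)]
    return layers

layers = []
for d_in, d_out in [(64, 128), (128, 128)]:
    layers += linear_block(d_in, d_out)
layers += [nn.Linear(128, 32)]            # final projection, no activation
mlp = nn.Sequential(*layers)

print(mlp(torch.randn(4, 64)).shape)      # torch.Size([4, 32])
```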
diff --git a/spaces/OttoYu/Tree-Inspection-demo/README.md b/spaces/OttoYu/Tree-Inspection-demo/README.md
deleted file mode 100644
index 783e7523780a6149c4d0eee804ad4beebe5e2fad..0000000000000000000000000000000000000000
--- a/spaces/OttoYu/Tree-Inspection-demo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Tree Inspection Demo
-emoji: 😻
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.43.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op/__init__.py b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op/__init__.py
deleted file mode 100644
index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .fused_act import FusedLeakyReLU, fused_leaky_relu
-from .upfirdn2d import upfirdn2d
diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/raft.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/raft.py
deleted file mode 100644
index a25c22f78c96470e3dca4c25e81683133ae024e3..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/raft.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from model.raft.core.update import BasicUpdateBlock, SmallUpdateBlock
-from model.raft.core.extractor import BasicEncoder, SmallEncoder
-from model.raft.core.corr import CorrBlock, AlternateCorrBlock
-from model.raft.core.utils.utils import bilinear_sampler, coords_grid, upflow8
-
-try:
- autocast = torch.cuda.amp.autocast
-except:
- # dummy autocast for PyTorch < 1.6
- class autocast:
- def __init__(self, enabled):
- pass
- def __enter__(self):
- pass
- def __exit__(self, *args):
- pass
-
-
-class RAFT(nn.Module):
- def __init__(self, args):
- super(RAFT, self).__init__()
- self.args = args
-
- if args.small:
- self.hidden_dim = hdim = 96
- self.context_dim = cdim = 64
- args.corr_levels = 4
- args.corr_radius = 3
-
- else:
- self.hidden_dim = hdim = 128
- self.context_dim = cdim = 128
- args.corr_levels = 4
- args.corr_radius = 4
-
- if 'dropout' not in self.args:
- self.args.dropout = 0
-
- if 'alternate_corr' not in self.args:
- self.args.alternate_corr = False
-
- # feature network, context network, and update block
- if args.small:
- self.fnet = SmallEncoder(output_dim=128, norm_fn='instance', dropout=args.dropout)
- self.cnet = SmallEncoder(output_dim=hdim+cdim, norm_fn='none', dropout=args.dropout)
- self.update_block = SmallUpdateBlock(self.args, hidden_dim=hdim)
-
- else:
- self.fnet = BasicEncoder(output_dim=256, norm_fn='instance', dropout=args.dropout)
- self.cnet = BasicEncoder(output_dim=hdim+cdim, norm_fn='batch', dropout=args.dropout)
- self.update_block = BasicUpdateBlock(self.args, hidden_dim=hdim)
-
- def freeze_bn(self):
- for m in self.modules():
- if isinstance(m, nn.BatchNorm2d):
- m.eval()
-
- def initialize_flow(self, img):
- """ Flow is represented as difference between two coordinate grids flow = coords1 - coords0"""
- N, C, H, W = img.shape
- coords0 = coords_grid(N, H//8, W//8, device=img.device)
- coords1 = coords_grid(N, H//8, W//8, device=img.device)
-
- # optical flow computed as difference: flow = coords1 - coords0
- return coords0, coords1
-
- def upsample_flow(self, flow, mask):
- """ Upsample flow field [H/8, W/8, 2] -> [H, W, 2] using convex combination """
- N, _, H, W = flow.shape
- mask = mask.view(N, 1, 9, 8, 8, H, W)
- mask = torch.softmax(mask, dim=2)
-
- up_flow = F.unfold(8 * flow, [3,3], padding=1)
- up_flow = up_flow.view(N, 2, 9, 1, 1, H, W)
-
- up_flow = torch.sum(mask * up_flow, dim=2)
- up_flow = up_flow.permute(0, 1, 4, 2, 5, 3)
- return up_flow.reshape(N, 2, 8*H, 8*W)
-
-
- def forward(self, image1, image2, iters=12, flow_init=None, upsample=True, test_mode=False):
- """ Estimate optical flow between pair of frames """
-
- image1 = 2 * (image1 / 255.0) - 1.0
- image2 = 2 * (image2 / 255.0) - 1.0
-
- image1 = image1.contiguous()
- image2 = image2.contiguous()
-
- hdim = self.hidden_dim
- cdim = self.context_dim
-
- # run the feature network
- with autocast(enabled=self.args.mixed_precision):
- fmap1, fmap2 = self.fnet([image1, image2])
-
- fmap1 = fmap1.float()
- fmap2 = fmap2.float()
- if self.args.alternate_corr:
- corr_fn = AlternateCorrBlock(fmap1, fmap2, radius=self.args.corr_radius)
- else:
- corr_fn = CorrBlock(fmap1, fmap2, radius=self.args.corr_radius)
-
- # run the context network
- with autocast(enabled=self.args.mixed_precision):
- cnet = self.cnet(image1)
- net, inp = torch.split(cnet, [hdim, cdim], dim=1)
- net = torch.tanh(net)
- inp = torch.relu(inp)
-
- coords0, coords1 = self.initialize_flow(image1)
-
- if flow_init is not None:
- coords1 = coords1 + flow_init
-
- flow_predictions = []
- for itr in range(iters):
- coords1 = coords1.detach()
- corr = corr_fn(coords1) # index correlation volume
-
- flow = coords1 - coords0
- with autocast(enabled=self.args.mixed_precision):
- net, up_mask, delta_flow = self.update_block(net, inp, corr, flow)
-
- # F(t+1) = F(t) + \Delta(t)
- coords1 = coords1 + delta_flow
-
- # upsample predictions
- if up_mask is None:
- flow_up = upflow8(coords1 - coords0)
- else:
- flow_up = self.upsample_flow(coords1 - coords0, up_mask)
-
- flow_predictions.append(flow_up)
-
- if test_mode:
- return coords1 - coords0, flow_up
-
- return flow_predictions
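`upsample_flow` above is RAFT's convex upsampling: every full-resolution pixel is a softmax-weighted combination of the 3x3 coarse-flow neighbourhood around it. Pulled out into a standalone snippet with tiny random tensors (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

N, H, W = 1, 4, 5
flow = torch.randn(N, 2, H, W)               # coarse flow at 1/8 resolution
mask = torch.randn(N, 9 * 8 * 8, H, W)       # predicted combination weights

mask = mask.view(N, 1, 9, 8, 8, H, W)
mask = torch.softmax(mask, dim=2)            # convex weights over the 9 neighbours

up_flow = F.unfold(8 * flow, [3, 3], padding=1)     # gather 3x3 neighbourhoods
up_flow = up_flow.view(N, 2, 9, 1, 1, H, W)
up_flow = torch.sum(mask * up_flow, dim=2)          # convex combination
up_flow = up_flow.permute(0, 1, 4, 2, 5, 3)
print(up_flow.reshape(N, 2, 8 * H, 8 * W).shape)    # torch.Size([1, 2, 32, 40])
```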
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/documentation.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/documentation.go
deleted file mode 100644
index 7d45c2e8deebba54e3a1fcc0ac82bba8a8f47d57..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/documentation.go and /dev/null differ
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/utils/collect_env.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/utils/collect_env.py
deleted file mode 100644
index 65c2134ddbee9655161237dd0894d38c768c2624..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/utils/collect_env.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from annotator.uniformer.mmcv.utils import collect_env as collect_base_env
-from annotator.uniformer.mmcv.utils import get_git_hash
-
-import annotator.uniformer.mmseg as mmseg
-
-
-def collect_env():
- """Collect the information of the running environments."""
- env_info = collect_base_env()
- env_info['MMSegmentation'] = f'{mmseg.__version__}+{get_git_hash()[:7]}'
-
- return env_info
-
-
-if __name__ == '__main__':
- for name, val in collect_env().items():
- print('{}: {}'.format(name, val))
diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/util.py b/spaces/Purple11/Grounded-Diffusion/ldm/util.py
deleted file mode 100644
index 24c2a543a5a5b8c3905962ad9cda51ecc5a78fef..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/ldm/util.py
+++ /dev/null
@@ -1,205 +0,0 @@
-import importlib
-
-import torch
-import numpy as np
-from collections import abc
-from einops import rearrange
-from functools import partial
-
-import multiprocessing as mp
-from threading import Thread
-from queue import Queue
-
-from inspect import isfunction
-from PIL import Image, ImageDraw, ImageFont
-
-
-def log_txt_as_img(wh, xc, size=10):
- # wh a tuple of (width, height)
- # xc a list of captions to plot
- b = len(xc)
- txts = list()
- for bi in range(b):
- txt = Image.new("RGB", wh, color="white")
- draw = ImageDraw.Draw(txt)
- font = ImageFont.truetype('data/DejaVuSans.ttf', size=size)
- nc = int(40 * (wh[0] / 256))
- lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc))
-
- try:
- draw.text((0, 0), lines, fill="black", font=font)
- except UnicodeEncodeError:
- print("Cant encode string for logging. Skipping.")
-
- txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0
- txts.append(txt)
- txts = np.stack(txts)
- txts = torch.tensor(txts)
- return txts
-
-
-def ismap(x):
- if not isinstance(x, torch.Tensor):
- return False
- return (len(x.shape) == 4) and (x.shape[1] > 3)
-
-
-def isimage(x):
- if not isinstance(x, torch.Tensor):
- return False
- return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def mean_flat(tensor):
- """
- https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def count_params(model, verbose=False):
- total_params = sum(p.numel() for p in model.parameters())
- if verbose:
- print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.")
- return total_params
-
-
-def instantiate_from_config(config):
- if not "target" in config:
- if config == '__is_first_stage__':
- return None
- elif config == "__is_unconditional__":
- return None
- raise KeyError("Expected key `target` to instantiate.")
-
- return get_obj_from_str(config["target"])(**config.get("params", dict()))
-
-
-def get_obj_from_str(string, reload=False):
- module, cls = string.rsplit(".", 1)
- print(module, cls)
- if reload:
- module_imp = importlib.import_module(module)
- importlib.reload(module_imp)
- return getattr(importlib.import_module(module, package=None), cls)
-
-
-def _do_parallel_data_prefetch(func, Q, data, idx, idx_to_fn=False):
- # create dummy dataset instance
-
- # run prefetching
- if idx_to_fn:
- res = func(data, worker_id=idx)
- else:
- res = func(data)
- Q.put([idx, res])
- Q.put("Done")
-
-
-def parallel_data_prefetch(
- func: callable, data, n_proc, target_data_type="ndarray", cpu_intensive=True, use_worker_id=False
-):
- # if target_data_type not in ["ndarray", "list"]:
- # raise ValueError(
- # "Data, which is passed to parallel_data_prefetch has to be either of type list or ndarray."
- # )
- if isinstance(data, np.ndarray) and target_data_type == "list":
- raise ValueError("list expected but function got ndarray.")
- elif isinstance(data, abc.Iterable):
- if isinstance(data, dict):
- print(
- f'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.'
- )
- data = list(data.values())
- if target_data_type == "ndarray":
- data = np.asarray(data)
- else:
- data = list(data)
- else:
- raise TypeError(
- f"The data, that shall be processed parallel has to be either an np.ndarray or an Iterable, but is actually {type(data)}."
- )
-
- if cpu_intensive:
- Q = mp.Queue(1000)
- proc = mp.Process
- else:
- Q = Queue(1000)
- proc = Thread
- # spawn processes
- if target_data_type == "ndarray":
- arguments = [
- [func, Q, part, i, use_worker_id]
- for i, part in enumerate(np.array_split(data, n_proc))
- ]
- else:
- step = (
- int(len(data) / n_proc + 1)
- if len(data) % n_proc != 0
- else int(len(data) / n_proc)
- )
- arguments = [
- [func, Q, part, i, use_worker_id]
- for i, part in enumerate(
- [data[i: i + step] for i in range(0, len(data), step)]
- )
- ]
- processes = []
- for i in range(n_proc):
- p = proc(target=_do_parallel_data_prefetch, args=arguments[i])
- processes += [p]
-
- # start processes
- print(f"Start prefetching...")
- import time
-
- start = time.time()
- gather_res = [[] for _ in range(n_proc)]
- try:
- for p in processes:
- p.start()
-
- k = 0
- while k < n_proc:
- # get result
- res = Q.get()
- if res == "Done":
- k += 1
- else:
- gather_res[res[0]] = res[1]
-
- except Exception as e:
- print("Exception: ", e)
- for p in processes:
- p.terminate()
-
- raise e
- finally:
- for p in processes:
- p.join()
- print(f"Prefetching complete. [{time.time() - start} sec.]")
-
- if target_data_type == 'ndarray':
- if not isinstance(gather_res[0], np.ndarray):
- return np.concatenate([np.asarray(r) for r in gather_res], axis=0)
-
- # order outputs
- return np.concatenate(gather_res, axis=0)
- elif target_data_type == 'list':
- out = []
- for r in gather_res:
- out.extend(r)
- return out
- else:
- return gather_res
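`instantiate_from_config` and `get_obj_from_str` above implement a small build-from-config pattern: a dotted import path plus a params dict. A self-contained sketch of the same idea (the config values are hypothetical; any importable class works the same way):

```python
import importlib

def get_obj_from_str(string):
    module, cls = string.rsplit(".", 1)
    return getattr(importlib.import_module(module), cls)

def instantiate_from_config(config):
    if "target" not in config:
        raise KeyError("Expected key `target` to instantiate.")
    return get_obj_from_str(config["target"])(**config.get("params", dict()))

cfg = {"target": "torch.nn.Linear", "params": {"in_features": 16, "out_features": 4}}
layer = instantiate_from_config(cfg)
print(type(layer))    # <class 'torch.nn.modules.linear.Linear'>
```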
diff --git a/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/dqn.py b/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/dqn.py
deleted file mode 100644
index 6cea64d39baa7ff4c1e549869aaa4b0ae17779a9..0000000000000000000000000000000000000000
--- a/spaces/Qiukai/gpt/crazy_functions/test_project/python/dqn/dqn.py
+++ /dev/null
@@ -1,245 +0,0 @@
-from typing import Any, Dict, List, Optional, Tuple, Type, Union
-
-import gym
-import numpy as np
-import torch as th
-from torch.nn import functional as F
-
-from stable_baselines3.common import logger
-from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
-from stable_baselines3.common.preprocessing import maybe_transpose
-from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
-from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update
-from stable_baselines3.dqn.policies import DQNPolicy
-
-
-class DQN(OffPolicyAlgorithm):
- """
- Deep Q-Network (DQN)
-
- Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236
- Default hyperparameters are taken from the nature paper,
- except for the optimizer and learning rate that were taken from Stable Baselines defaults.
-
- :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
- :param env: The environment to learn from (if registered in Gym, can be str)
- :param learning_rate: The learning rate, it can be a function
- of the current progress remaining (from 1 to 0)
- :param buffer_size: size of the replay buffer
- :param learning_starts: how many steps of the model to collect transitions for before learning starts
- :param batch_size: Minibatch size for each gradient update
-    :param tau: the soft update coefficient ("Polyak update", between 0 and 1); default 1 for hard update
- :param gamma: the discount factor
- :param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit
- like ``(5, "step")`` or ``(2, "episode")``.
- :param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``)
-        Set to ``-1`` to do as many gradient steps as steps done in the environment
- during the rollout.
- :param optimize_memory_usage: Enable a memory efficient variant of the replay buffer
- at a cost of more complexity.
- See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
- :param target_update_interval: update the target network every ``target_update_interval``
- environment steps.
- :param exploration_fraction: fraction of entire training period over which the exploration rate is reduced
- :param exploration_initial_eps: initial value of random action probability
- :param exploration_final_eps: final value of random action probability
- :param max_grad_norm: The maximum value for the gradient clipping
- :param tensorboard_log: the log location for tensorboard (if None, no logging)
- :param create_eval_env: Whether to create a second environment that will be
- used for evaluating the agent periodically. (Only available when passing string for the environment)
- :param policy_kwargs: additional arguments to be passed to the policy on creation
- :param verbose: the verbosity level: 0 no output, 1 info, 2 debug
- :param seed: Seed for the pseudo random generators
- :param device: Device (cpu, cuda, ...) on which the code should be run.
- Setting it to auto, the code will be run on the GPU if possible.
- :param _init_setup_model: Whether or not to build the network at the creation of the instance
- """
-
- def __init__(
- self,
- policy: Union[str, Type[DQNPolicy]],
- env: Union[GymEnv, str],
- learning_rate: Union[float, Schedule] = 1e-4,
- buffer_size: int = 1000000,
- learning_starts: int = 50000,
- batch_size: Optional[int] = 32,
- tau: float = 1.0,
- gamma: float = 0.99,
- train_freq: Union[int, Tuple[int, str]] = 4,
- gradient_steps: int = 1,
- optimize_memory_usage: bool = False,
- target_update_interval: int = 10000,
- exploration_fraction: float = 0.1,
- exploration_initial_eps: float = 1.0,
- exploration_final_eps: float = 0.05,
- max_grad_norm: float = 10,
- tensorboard_log: Optional[str] = None,
- create_eval_env: bool = False,
- policy_kwargs: Optional[Dict[str, Any]] = None,
- verbose: int = 0,
- seed: Optional[int] = None,
- device: Union[th.device, str] = "auto",
- _init_setup_model: bool = True,
- ):
-
- super(DQN, self).__init__(
- policy,
- env,
- DQNPolicy,
- learning_rate,
- buffer_size,
- learning_starts,
- batch_size,
- tau,
- gamma,
- train_freq,
- gradient_steps,
- action_noise=None, # No action noise
- policy_kwargs=policy_kwargs,
- tensorboard_log=tensorboard_log,
- verbose=verbose,
- device=device,
- create_eval_env=create_eval_env,
- seed=seed,
- sde_support=False,
- optimize_memory_usage=optimize_memory_usage,
- supported_action_spaces=(gym.spaces.Discrete,),
- )
-
- self.exploration_initial_eps = exploration_initial_eps
- self.exploration_final_eps = exploration_final_eps
- self.exploration_fraction = exploration_fraction
- self.target_update_interval = target_update_interval
- self.max_grad_norm = max_grad_norm
- # "epsilon" for the epsilon-greedy exploration
- self.exploration_rate = 0.0
- # Linear schedule will be defined in `_setup_model()`
- self.exploration_schedule = None
- self.q_net, self.q_net_target = None, None
-
- if _init_setup_model:
- self._setup_model()
-
- def _setup_model(self) -> None:
- super(DQN, self)._setup_model()
- self._create_aliases()
- self.exploration_schedule = get_linear_fn(
- self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction
- )
-
- def _create_aliases(self) -> None:
- self.q_net = self.policy.q_net
- self.q_net_target = self.policy.q_net_target
-
- def _on_step(self) -> None:
- """
- Update the exploration rate and target network if needed.
- This method is called in ``collect_rollouts()`` after each step in the environment.
- """
- if self.num_timesteps % self.target_update_interval == 0:
- polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau)
-
- self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
- logger.record("rollout/exploration rate", self.exploration_rate)
-
- def train(self, gradient_steps: int, batch_size: int = 100) -> None:
- # Update learning rate according to schedule
- self._update_learning_rate(self.policy.optimizer)
-
- losses = []
- for _ in range(gradient_steps):
- # Sample replay buffer
- replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env)
-
- with th.no_grad():
- # Compute the next Q-values using the target network
- next_q_values = self.q_net_target(replay_data.next_observations)
- # Follow greedy policy: use the one with the highest value
- next_q_values, _ = next_q_values.max(dim=1)
- # Avoid potential broadcast issue
- next_q_values = next_q_values.reshape(-1, 1)
- # 1-step TD target
- target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values
-
- # Get current Q-values estimates
- current_q_values = self.q_net(replay_data.observations)
-
- # Retrieve the q-values for the actions from the replay buffer
- current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long())
-
- # Compute Huber loss (less sensitive to outliers)
- loss = F.smooth_l1_loss(current_q_values, target_q_values)
- losses.append(loss.item())
-
- # Optimize the policy
- self.policy.optimizer.zero_grad()
- loss.backward()
- # Clip gradient norm
- th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
- self.policy.optimizer.step()
-
- # Increase update counter
- self._n_updates += gradient_steps
-
- logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
- logger.record("train/loss", np.mean(losses))
-
- def predict(
- self,
- observation: np.ndarray,
- state: Optional[np.ndarray] = None,
- mask: Optional[np.ndarray] = None,
- deterministic: bool = False,
- ) -> Tuple[np.ndarray, Optional[np.ndarray]]:
- """
- Overrides the base_class predict function to include epsilon-greedy exploration.
-
- :param observation: the input observation
- :param state: The last states (can be None, used in recurrent policies)
- :param mask: The last masks (can be None, used in recurrent policies)
- :param deterministic: Whether or not to return deterministic actions.
- :return: the model's action and the next state
- (used in recurrent policies)
- """
- if not deterministic and np.random.rand() < self.exploration_rate:
- if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space):
- n_batch = observation.shape[0]
- action = np.array([self.action_space.sample() for _ in range(n_batch)])
- else:
- action = np.array(self.action_space.sample())
- else:
- action, state = self.policy.predict(observation, state, mask, deterministic)
- return action, state
-
- def learn(
- self,
- total_timesteps: int,
- callback: MaybeCallback = None,
- log_interval: int = 4,
- eval_env: Optional[GymEnv] = None,
- eval_freq: int = -1,
- n_eval_episodes: int = 5,
- tb_log_name: str = "DQN",
- eval_log_path: Optional[str] = None,
- reset_num_timesteps: bool = True,
- ) -> OffPolicyAlgorithm:
-
- return super(DQN, self).learn(
- total_timesteps=total_timesteps,
- callback=callback,
- log_interval=log_interval,
- eval_env=eval_env,
- eval_freq=eval_freq,
- n_eval_episodes=n_eval_episodes,
- tb_log_name=tb_log_name,
- eval_log_path=eval_log_path,
- reset_num_timesteps=reset_num_timesteps,
- )
-
- def _excluded_save_params(self) -> List[str]:
- return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"]
-
- def _get_torch_save_params(self) -> Tuple[List[str], List[str]]:
- state_dicts = ["policy", "policy.optimizer"]
-
- return state_dicts, []
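The core of `train()` above is a 1-step TD target computed with the target network, followed by a Huber loss on the Q-values of the actions actually taken. A toy illustration with random tensors standing in for a replay batch (shapes and values are made up; only the arithmetic mirrors the hunk above):

```python
import torch
import torch.nn.functional as F

gamma, batch, n_actions = 0.99, 8, 4
rewards = torch.randn(batch, 1)
dones = torch.randint(0, 2, (batch, 1)).float()
next_q = torch.randn(batch, n_actions)                    # q_net_target(next_obs)
current_q_all = torch.randn(batch, n_actions, requires_grad=True)
actions = torch.randint(0, n_actions, (batch, 1))

with torch.no_grad():
    next_q_max, _ = next_q.max(dim=1)                     # greedy value of next state
    target_q = rewards + (1 - dones) * gamma * next_q_max.reshape(-1, 1)

current_q = torch.gather(current_q_all, dim=1, index=actions)
loss = F.smooth_l1_loss(current_q, target_q)              # Huber loss
loss.backward()
print(loss.item())
```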
diff --git a/spaces/RMXK/RVC_HFF/utils/clonerepo_experimental.py b/spaces/RMXK/RVC_HFF/utils/clonerepo_experimental.py
deleted file mode 100644
index b0ae02648c1307562cf48033908edcf2996db5e2..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/utils/clonerepo_experimental.py
+++ /dev/null
@@ -1,253 +0,0 @@
-import os
-import subprocess
-import shutil
-from concurrent.futures import ThreadPoolExecutor, as_completed
-from tqdm.notebook import tqdm
-from pathlib import Path
-import requests
-
-def run_script():
- def run_cmd(cmd):
- process = subprocess.run(cmd, shell=True, check=True, text=True)
- return process.stdout
-
- # Change the current directory to /content/
- os.chdir('/content/')
- print("Changing dir to /content/")
-
- # Your function to edit the file
- def edit_file(file_path):
- temp_file_path = "/tmp/temp_file.py"
- changes_made = False
- with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file:
- previous_line = ""
- second_previous_line = ""
- for line in file:
- new_line = line.replace("value=160", "value=128")
- if new_line != line:
- print("Replaced 'value=160' with 'value=128'")
- changes_made = True
- line = new_line
-
- new_line = line.replace("crepe hop length: 160", "crepe hop length: 128")
- if new_line != line:
- print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'")
- changes_made = True
- line = new_line
-
- new_line = line.replace("value=0.88", "value=0.75")
- if new_line != line:
- print("Replaced 'value=0.88' with 'value=0.75'")
- changes_made = True
- line = new_line
-
- if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line:
- new_line = line.replace("value=1,", "value=0.25,")
- if new_line != line:
- print("Replaced 'value=1,' with 'value=0.25,' based on the condition")
- changes_made = True
- line = new_line
-
- if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line:
- new_line = line.replace("value=20,", "value=500,")
- if new_line != line:
- print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH")
- changes_made = True
- line = new_line
-
- if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. Add Crepe-Tiny' in previous_line:
- if 'value="pm",' in line:
- new_line = line.replace('value="pm",', 'value="mangio-crepe",')
- if new_line != line:
- print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition")
- changes_made = True
- line = new_line
-
- new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"')
- if new_line != line:
- print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'")
- changes_made = True
- line = new_line
-
- if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line:
- if 'value=i18n("否"),' in line:
- new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
- if new_line != line:
- print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST")
- changes_made = True
- line = new_line
-
- if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line:
- if 'value=i18n("否"),' in line:
- new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
- if new_line != line:
- print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS")
- changes_made = True
- line = new_line
-
- temp_file.write(line)
- second_previous_line = previous_line
- previous_line = line
-
- # After finishing, we replace the original file with the temp one
- import shutil
- shutil.move(temp_file_path, file_path)
-
- if changes_made:
- print("Changes made and file saved successfully.")
- else:
- print("No changes were needed.")
-
- # Define the repo path
- repo_path = '/content/Applio-RVC-Fork'
-
- def copy_all_files_in_directory(src_dir, dest_dir):
- # Iterate over all files in source directory
- for item in Path(src_dir).glob('*'):
- if item.is_file():
- # Copy each file to destination directory
- shutil.copy(item, dest_dir)
- else:
- # If it's a directory, make a new directory in the destination and copy the files recursively
- new_dest = Path(dest_dir) / item.name
- new_dest.mkdir(exist_ok=True)
- copy_all_files_in_directory(str(item), str(new_dest))
-
- def clone_and_copy_repo(repo_path):
- # New repository link
- new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/"
- # Temporary path to clone the repository
- temp_repo_path = "/content/temp_Applio-RVC-Fork"
- # New folder name
- new_folder_name = "Applio-RVC-Fork"
-
- # Clone the latest code from the new repository to a temporary location
- run_cmd(f"git clone {new_repo_link} {temp_repo_path}")
- os.chdir(temp_repo_path)
-
- run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402")
- run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4")
- run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679")
- run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8")
- run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61")
- run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de")
- run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec")
- run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902")
- run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27")
- run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb")
- run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764")
- run_cmd(f"git checkout b15b358702294c7375761584e5276c811ffab5e8")
- run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51")
- run_cmd(f"git checkout 21f7faf57219c75e6ba837062350391a803e9ae2")
- run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7")
- run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862")
- run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9")
- run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398")
- run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2")
- run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a")
- run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b")
- run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157")
- run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742")
- run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9")
- run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9")
- run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77")
-
- # Edit the file here, before copying
- #edit_file(f"{temp_repo_path}/infer-web.py")
-
- # Copy all files from the cloned repository to the existing path
- copy_all_files_in_directory(temp_repo_path, repo_path)
- print(f"Copying all {new_folder_name} files from GitHub.")
-
- # Change working directory back to /content/
- os.chdir('/content/')
- print("Changed path back to /content/")
-
- # Remove the temporary cloned repository
- shutil.rmtree(temp_repo_path)
-
- # Call the function
- clone_and_copy_repo(repo_path)
-
- # Download the credentials file for RVC archive sheet
- os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True)
- run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json")
-
- # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case
- shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True)
- shutil.rmtree('/content/torchcrepe', ignore_errors=True)
-
- # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository
- run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git")
- shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/')
- shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder
-
- # Change the current directory to /content/Applio-RVC-Fork
- os.chdir('/content/Applio-RVC-Fork')
- os.makedirs('pretrained', exist_ok=True)
- os.makedirs('uvr5_weights', exist_ok=True)
-
-def download_file(url, filepath):
- response = requests.get(url, stream=True)
- response.raise_for_status()
-
- with open(filepath, "wb") as file:
- for chunk in response.iter_content(chunk_size=8192):
- if chunk:
- file.write(chunk)
-
-def download_pretrained_models():
- pretrained_models = {
- "pretrained": [
- "D40k.pth",
- "G40k.pth",
- "f0D40k.pth",
- "f0G40k.pth"
- ],
- "pretrained_v2": [
- "D40k.pth",
- "G40k.pth",
- "f0D40k.pth",
- "f0G40k.pth",
- "f0G48k.pth",
- "f0D48k.pth"
- ],
- "uvr5_weights": [
- "HP2-人声vocals+非人声instrumentals.pth",
- "HP5-主旋律人声vocals+其他instrumentals.pth",
- "VR-DeEchoNormal.pth",
- "VR-DeEchoDeReverb.pth",
- "VR-DeEchoAggressive.pth",
- "HP5_only_main_vocal.pth",
- "HP3_all_vocals.pth",
- "HP2_all_vocals.pth"
- ]
- }
- part2 = "I"
- base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/"
- base_path = "/content/Applio-RVC-Fork/"
- base_pathm = base_path
-
- # Calculate total number of files to download
- total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 for hubert_base.pt
-
- with tqdm(total=total_files, desc="Downloading files") as pbar:
- for folder, models in pretrained_models.items():
- folder_path = os.path.join(base_path, folder)
- os.makedirs(folder_path, exist_ok=True)
- for model in models:
- url = base_url + folder + "/" + model
- filepath = os.path.join(folder_path, model)
- download_file(url, filepath)
- pbar.update()
-
- # Download hubert_base.pt to the base path
- hubert_url = base_url + "hubert_base.pt"
- hubert_filepath = os.path.join(base_pathm, "hubert_base.pt")
- download_file(hubert_url, hubert_filepath)
- pbar.update()
-def clone_repository(run_download):
- with ThreadPoolExecutor(max_workers=2) as executor:
- executor.submit(run_script)
- if run_download:
- executor.submit(download_pretrained_models)
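A minimal usage sketch for the deleted helper above, assuming a Colab-style /content/ layout and that the module is importable as `clonerepo_experimental` (both the environment and the import path are assumptions, not part of the original file):

import clonerepo_experimental

# Clones the Applio-RVC-Fork checkout and, when run_download=True, downloads the
# pretrained models plus hubert_base.pt in a parallel worker thread.
clonerepo_experimental.clone_repository(run_download=True)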
diff --git a/spaces/RamAnanth1/videocrafter/app.py b/spaces/RamAnanth1/videocrafter/app.py
deleted file mode 100644
index 8c9d7d0cbce39288cea6ffcc5692f493d0ac25f1..0000000000000000000000000000000000000000
--- a/spaces/RamAnanth1/videocrafter/app.py
+++ /dev/null
@@ -1,252 +0,0 @@
-import gradio as gr
-import os
-import time
-import argparse
-import yaml, math
-from tqdm import trange
-import torch
-import numpy as np
-from omegaconf import OmegaConf
-import torch.distributed as dist
-from pytorch_lightning import seed_everything
-
-from lvdm.samplers.ddim import DDIMSampler
-from lvdm.utils.common_utils import str2bool
-from lvdm.utils.dist_utils import setup_dist, gather_data
-from lvdm.utils.saving_utils import npz_to_video_grid, npz_to_imgsheet_5d
-from utils import load_model, get_conditions, make_model_input_shape, torch_to_np
-from lvdm.models.modules.lora import change_lora
-from lvdm.utils.saving_utils import tensor_to_mp4
-
-from huggingface_hub import hf_hub_download
-import subprocess
-import shlex
-
-config_path = "model_config.yaml"
-config = OmegaConf.load(config_path)
-
-# Download model
-REPO_ID = 'VideoCrafter/t2v-version-1-1'
-filename_list = ['models/base_t2v/model.ckpt',
- 'models/videolora/lora_001_Loving_Vincent_style.ckpt',
- 'models/videolora/lora_002_frozenmovie_style.ckpt',
- 'models/videolora/lora_003_MakotoShinkaiYourName_style.ckpt',
- 'models/videolora/lora_004_coco_style.ckpt',
- 'models/adapter_t2v_depth/adapter.pth']
-
-for filename in filename_list:
- if not os.path.exists(filename):
- hf_hub_download(repo_id=REPO_ID, filename=filename, local_dir='./', local_dir_use_symlinks=False)
-
-ckpt_path = 'models/base_t2v/model.ckpt'
-
-midas_path_url = 'https://github.com/isl-org/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt'
-
-subprocess.run(shlex.split(f'wget {midas_path_url} -O models/adapter_t2v_depth/dpt_hybrid-midas.pt'))
-
-# # get model & sampler
-model, _, _ = load_model(config, ckpt_path,
- inject_lora=False,
- lora_scale=None,
- )
-adapter_ckpt = 'models/adapter_t2v_depth/adapter.pth'
-state_dict = torch.load(adapter_ckpt, map_location="cpu")
-if "state_dict" in list(state_dict.keys()):
- state_dict = state_dict["state_dict"]
-model.adapter.load_state_dict(state_dict, strict=True)
-
-ddim_sampler = DDIMSampler(model)
-
-def sample_denoising_batch(model, noise_shape, condition, *args,
- sample_type="ddim", sampler=None,
- ddim_steps=None, eta=None,
- unconditional_guidance_scale=1.0, uc=None,
- denoising_progress=False,
- **kwargs,
- ):
-
- assert(sampler is not None)
- assert(ddim_steps is not None)
- assert(eta is not None)
- ddim_sampler = sampler
- samples, _ = ddim_sampler.sample(S=ddim_steps,
- conditioning=condition,
- batch_size=noise_shape[0],
- shape=noise_shape[1:],
- verbose=denoising_progress,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- eta=eta,
- **kwargs,
- )
- return samples
-
-@torch.no_grad()
-def sample_text2video(model, prompt, n_samples, batch_size,
- sample_type="ddim", sampler=None,
- ddim_steps=50, eta=1.0, cfg_scale=7.5,
- decode_frame_bs=1,
- ddp=False, all_gather=True,
- batch_progress=True, show_denoising_progress=False,
- ):
- # get cond vector
- assert(model.cond_stage_model is not None)
- cond_embd = get_conditions(prompt, model, batch_size)
- uncond_embd = get_conditions("", model, batch_size) if cfg_scale != 1.0 else None
-
- # sample batches
- all_videos = []
- n_iter = math.ceil(n_samples / batch_size)
- iterator = trange(n_iter, desc="Sampling Batches (text-to-video)") if batch_progress else range(n_iter)
- for _ in iterator:
- noise_shape = make_model_input_shape(model, batch_size)
- samples_latent = sample_denoising_batch(model, noise_shape, cond_embd,
- sample_type=sample_type,
- sampler=sampler,
- ddim_steps=ddim_steps,
- eta=eta,
- unconditional_guidance_scale=cfg_scale,
- uc=uncond_embd,
- denoising_progress=show_denoising_progress,
- )
- samples = model.decode_first_stage(samples_latent, decode_bs=decode_frame_bs, return_cpu=False)
-
- # gather samples from multiple gpus
- if ddp and all_gather:
- data_list = gather_data(samples, return_np=False)
- all_videos.extend([torch_to_np(data) for data in data_list])
- else:
- all_videos.append(torch_to_np(samples))
-
- all_videos = np.concatenate(all_videos, axis=0)
- assert(all_videos.shape[0] >= n_samples)
- return all_videos
-
-def adapter_guided_synthesis(model, prompts, videos, noise_shape, sampler, n_samples=1, ddim_steps=50, ddim_eta=1., \
- unconditional_guidance_scale=1.0, unconditional_guidance_scale_temporal=None, **kwargs):
- ddim_sampler = sampler
-
- batch_size = noise_shape[0]
- ## get condition embeddings (support single prompt only)
- if isinstance(prompts, str):
- prompts = [prompts]
- cond = model.get_learned_conditioning(prompts)
- if unconditional_guidance_scale != 1.0:
- prompts = batch_size * [""]
- uc = model.get_learned_conditioning(prompts)
- else:
- uc = None
-
- ## adapter features: process in 2D manner
- b, c, t, h, w = videos.shape
- extra_cond = model.get_batch_depth(videos, (h,w))
- features_adapter = model.get_adapter_features(extra_cond)
-
- batch_variants = []
- for _ in range(n_samples):
- if ddim_sampler is not None:
- samples, _ = ddim_sampler.sample(S=ddim_steps,
- conditioning=cond,
- batch_size=noise_shape[0],
- shape=noise_shape[1:],
- verbose=False,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- eta=ddim_eta,
- temporal_length=noise_shape[2],
- conditional_guidance_scale_temporal=unconditional_guidance_scale_temporal,
- features_adapter=features_adapter,
- **kwargs
- )
- ## reconstruct from latent to pixel space
- batch_images = model.decode_first_stage(samples, decode_bs=1, return_cpu=False)
- batch_variants.append(batch_images)
- ## variants, batch, c, t, h, w
- batch_variants = torch.stack(batch_variants)
- return batch_variants.permute(1, 0, 2, 3, 4, 5), extra_cond
-
-
-def save_results(videos,
- save_name="results", save_fps=8, save_mp4=True,
- save_npz=False, save_mp4_sheet=False, save_jpg=False
- ):
-
- save_subdir = os.path.join("videos")
- os.makedirs(save_subdir, exist_ok=True)
- for i in range(videos.shape[0]):
- npz_to_video_grid(videos[i:i+1,...],
- os.path.join(save_subdir, f"{save_name}_{i:03d}.mp4"),
- fps=save_fps)
-
- return os.path.join(save_subdir, f"{save_name}_{i:03d}.mp4")
-
-def save_results_control(batch_samples, batch_conds):
- save_subdir = os.path.join("videos")
- os.makedirs(save_subdir, exist_ok=True)
-
- tensor_to_mp4(video=batch_conds.detach().cpu(), savepath=os.path.join(save_subdir, f'results_depth.mp4'), fps=10)
- tensor_to_mp4(video=batch_samples.detach().cpu(), savepath=os.path.join(save_subdir, f'results_sample.mp4'), fps=10)
-
- return os.path.join(save_subdir, f'results_depth.mp4'), os.path.join(save_subdir, f'results_sample.mp4')
-
-def get_video(prompt, seed, ddim_steps):
- seed_everything(seed)
- samples = sample_text2video(model, prompt, n_samples = 1, batch_size = 1,
- sampler=ddim_sampler, ddim_steps=ddim_steps
- )
- return save_results(samples)
-
-def get_video_lora(prompt, seed, ddim_steps, model_choice):
-
- model_to_style = {
- "Frozen": ", frozenmovie style",
- "Coco": ", coco style",
- "Loving Vincent": ", Loving Vincent style",
- "MakotoShinkai YourName": ", MakotoShinkaiYourName style"
- }
-
- model_to_index = {
- "Frozen": 2,
- "Coco": 4,
- "Loving Vincent": 1,
- "MakotoShinkai YourName": 3
- }
-
- seed_everything(seed)
- prompt = prompt + model_to_style[model_choice]
- print(prompt)
- change_lora(model, inject_lora=True, lora_scale=1.0,lora_path = filename_list[model_to_index[model_choice]])
- samples = sample_text2video(model, prompt, n_samples = 1, batch_size = 1,
- sampler=ddim_sampler, ddim_steps=ddim_steps
- )
- return save_results(samples)
-
-def get_video_control(prompt, input_video, seed, ddim_steps):
- seed_everything(seed)
- h,w = 512//8, 512//8
- noise_shape = [1, model.channels, model_control.temporal_length,h,w]
- batch_samples, batch_conds = adapter_guided_synthesis(model, prompt,input_video,noise_shape, sampler=ddim_sampler, n_samples = 1,
- ddim_steps=ddim_steps
- )
- #return save_results_control(batch_samples, batch_conds)
- return input_video
-
-from gradio_t2v import create_demo as create_demo_basic
-from gradio_videolora import create_demo as create_demo_videolora
-from gradio_videocontrol import create_demo as create_demo_videocontrol
-
-DESCRIPTION = '# [Latent Video Diffusion Models](https://github.com/VideoCrafter/VideoCrafter)'
-DESCRIPTION += '\n🤗🤗🤗 VideoCrafter is an open-source video generation and editing toolbox for crafting video content. This model can only be used for non-commercial purposes. To learn more about the model, take a look at the model card.'
-
-with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
- with gr.Tabs():
- with gr.TabItem('Basic Text2Video'):
- create_demo_basic(get_video)
- with gr.TabItem('VideoLoRA'):
- create_demo_videolora(get_video_lora)
- with gr.TabItem('VideoControl'):
- create_demo_videocontrol(get_video_control)
-
-demo.queue(api_open=False).launch()
-
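A hedged example of driving the basic text-to-video path defined above; it assumes the checkpoints fetched at module import time are in place and a GPU is available (the prompt is purely illustrative):

# Runs one DDIM sampling pass and writes the result under videos/.
video_path = get_video(
    prompt="a corgi running on the beach",
    seed=42,
    ddim_steps=50,
)
print(video_path)  # e.g. videos/results_000.mp4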
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/index.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/index.py
deleted file mode 100644
index b94c32511f0cda2363bfc4f29c9c8bfcc7101f9b..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/index.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import urllib.parse
-
-
-class PackageIndex:
- """Represents a Package Index and provides easier access to endpoints"""
-
- __slots__ = ["url", "netloc", "simple_url", "pypi_url", "file_storage_domain"]
-
- def __init__(self, url: str, file_storage_domain: str) -> None:
- super().__init__()
- self.url = url
- self.netloc = urllib.parse.urlsplit(url).netloc
- self.simple_url = self._url_for_path("simple")
- self.pypi_url = self._url_for_path("pypi")
-
- # This is part of a temporary hack used to block installs of PyPI
- # packages which depend on external urls only necessary until PyPI can
- # block such packages themselves
- self.file_storage_domain = file_storage_domain
-
- def _url_for_path(self, path: str) -> str:
- return urllib.parse.urljoin(self.url, path)
-
-
-PyPI = PackageIndex("https://pypi.org/", file_storage_domain="files.pythonhosted.org")
-TestPyPI = PackageIndex(
- "https://test.pypi.org/", file_storage_domain="test-files.pythonhosted.org"
-)
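For reference, a small sketch of how this internal model resolves its endpoints (pip._internal is not a public API, so this is illustrative only):

from pip._internal.models.index import PyPI, TestPyPI

print(PyPI.simple_url)    # https://pypi.org/simple
print(PyPI.pypi_url)      # https://pypi.org/pypi
print(TestPyPI.netloc)    # test.pypi.org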
diff --git a/spaces/RealKintaro/Offensive-Speech-Detection-From-Arabic-Dialects/Deployment/Bert_medium.py b/spaces/RealKintaro/Offensive-Speech-Detection-From-Arabic-Dialects/Deployment/Bert_medium.py
deleted file mode 100644
index 3ecf144ffc880279faa667a70f874a8c824929c6..0000000000000000000000000000000000000000
--- a/spaces/RealKintaro/Offensive-Speech-Detection-From-Arabic-Dialects/Deployment/Bert_medium.py
+++ /dev/null
@@ -1,17 +0,0 @@
-
-from transformers import AutoModel
-from torch import nn
-import pytorch_lightning as pl
-
-
-class MediumBert(pl.LightningModule):
- def __init__(self):
- super().__init__()
- self.bert_model = AutoModel.from_pretrained('asafaya/bert-medium-arabic')
- self.fc = nn.Linear(512,18)
-
- def forward(self,input_ids,attention_mask):
- out = self.bert_model(input_ids = input_ids, attention_mask =attention_mask)#inputs["input_ids"],inputs["token_type_ids"],inputs["attention_mask"])
- pooler = out[1]
- out = self.fc(pooler)
- return out
\ No newline at end of file
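A short, hedged sketch of running the classifier head above; the tokenizer matches the backbone checkpoint, and the import path for MediumBert is an assumption:

import torch
from transformers import AutoTokenizer
from Deployment.Bert_medium import MediumBert  # assumed import path

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-medium-arabic")
model = MediumBert().eval()

enc = tokenizer("نص تجريبي", return_tensors="pt")  # "test text" in Arabic
with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"])  # shape (1, 18)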
diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/__init__.py b/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/__init__.py
deleted file mode 100644
index f538fb5372197bcdba9db28c861af39c541539ee..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/__init__.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright 2019-present NAVER Corp.
-# CC BY-NC-SA 3.0
-# Available only for non-commercial use
-
-from .pair_dataset import CatPairDataset, SyntheticPairDataset, TransformedPairs
-from .imgfolder import ImgFolder
-
-from .web_images import RandomWebImages
-from .aachen import *
-
-# try to instantiate datasets
-import sys
-
-try:
- web_images = RandomWebImages(0, 52)
-except AssertionError as e:
- print(f"Dataset web_images not available, reason: {e}", file=sys.stderr)
-
-try:
- aachen_db_images = AachenImages_DB()
-except AssertionError as e:
- print(f"Dataset aachen_db_images not available, reason: {e}", file=sys.stderr)
-
-try:
- aachen_style_transfer_pairs = AachenPairs_StyleTransferDayNight()
-except AssertionError as e:
- print(
- f"Dataset aachen_style_transfer_pairs not available, reason: {e}",
- file=sys.stderr,
- )
-
-try:
- aachen_flow_pairs = AachenPairs_OpticalFlow()
-except AssertionError as e:
- print(f"Dataset aachen_flow_pairs not available, reason: {e}", file=sys.stderr)
diff --git a/spaces/Rebskii/rvc-models-test/infer_pack/modules.py b/spaces/Rebskii/rvc-models-test/infer_pack/modules.py
deleted file mode 100644
index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000
--- a/spaces/Rebskii/rvc-models-test/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
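A quick shape sketch for one of the blocks defined above (ResBlock1), assuming the module is importable as infer_pack.modules; the tensor sizes are arbitrary:

import torch
from infer_pack.modules import ResBlock1  # assumed import path

block = ResBlock1(channels=64, kernel_size=3, dilation=(1, 3, 5))
x = torch.randn(2, 64, 100)      # (batch, channels, time)
y = block(x)                     # residual stack preserves the shape: (2, 64, 100)
block.remove_weight_norm()       # strip weight_norm before export/inference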
diff --git a/spaces/Robert001/UniControl-Demo/annotator/canny/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/canny/__init__.py
deleted file mode 100644
index 1bcdaf9e72d29bd86d0965e051366381633a5003..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/canny/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
-'''
-
-import cv2
-
-
-class CannyDetector:
- def __call__(self, img, low_threshold, high_threshold):
- return cv2.Canny(img, low_threshold, high_threshold)
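A sanity-check sketch for the thin wrapper above (the image path is hypothetical, and the import path assumes the Space's package layout):

import cv2
from annotator.canny import CannyDetector  # assumed import path

apply_canny = CannyDetector()
img = cv2.imread("example.png")                    # H x W x 3 uint8 image
edges = apply_canny(img, low_threshold=100, high_threshold=200)
print(edges.shape, edges.dtype)                    # (H, W) uint8 edge map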
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/__init__.py
deleted file mode 100644
index 7246c897430f0cc7ce12719ad8608824fc734446..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/__init__.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .alexnet import AlexNet
-# yapf: disable
-from .bricks import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS,
- PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS,
- ContextBlock, Conv2d, Conv3d, ConvAWS2d, ConvModule,
- ConvTranspose2d, ConvTranspose3d, ConvWS2d,
- DepthwiseSeparableConvModule, GeneralizedAttention,
- HSigmoid, HSwish, Linear, MaxPool2d, MaxPool3d,
- NonLocal1d, NonLocal2d, NonLocal3d, Scale, Swish,
- build_activation_layer, build_conv_layer,
- build_norm_layer, build_padding_layer, build_plugin_layer,
- build_upsample_layer, conv_ws_2d, is_norm)
-from .builder import MODELS, build_model_from_cfg
-# yapf: enable
-from .resnet import ResNet, make_res_layer
-from .utils import (INITIALIZERS, Caffe2XavierInit, ConstantInit, KaimingInit,
- NormalInit, PretrainedInit, TruncNormalInit, UniformInit,
- XavierInit, bias_init_with_prob, caffe2_xavier_init,
- constant_init, fuse_conv_bn, get_model_complexity_info,
- initialize, kaiming_init, normal_init, trunc_normal_init,
- uniform_init, xavier_init)
-from .vgg import VGG, make_vgg_layer
-
-__all__ = [
- 'AlexNet', 'VGG', 'make_vgg_layer', 'ResNet', 'make_res_layer',
- 'constant_init', 'xavier_init', 'normal_init', 'trunc_normal_init',
- 'uniform_init', 'kaiming_init', 'caffe2_xavier_init',
- 'bias_init_with_prob', 'ConvModule', 'build_activation_layer',
- 'build_conv_layer', 'build_norm_layer', 'build_padding_layer',
- 'build_upsample_layer', 'build_plugin_layer', 'is_norm', 'NonLocal1d',
- 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'HSigmoid', 'Swish', 'HSwish',
- 'GeneralizedAttention', 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS',
- 'PADDING_LAYERS', 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale',
- 'get_model_complexity_info', 'conv_ws_2d', 'ConvAWS2d', 'ConvWS2d',
- 'fuse_conv_bn', 'DepthwiseSeparableConvModule', 'Linear', 'Conv2d',
- 'ConvTranspose2d', 'MaxPool2d', 'ConvTranspose3d', 'MaxPool3d', 'Conv3d',
- 'initialize', 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit',
- 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit',
- 'Caffe2XavierInit', 'MODELS', 'build_model_from_cfg'
-]
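A brief illustration of the cfg-driven builders re-exported above, assuming the vendored package resolves as annotator.uniformer.mmcv:

from annotator.uniformer.mmcv.cnn import ConvModule, build_norm_layer

# build_norm_layer returns a (name, layer) pair, e.g. ('bn', nn.BatchNorm2d(64)).
name, bn = build_norm_layer(dict(type='BN'), 64)

# ConvModule bundles conv + norm + activation behind one config-driven layer.
conv = ConvModule(3, 64, kernel_size=3, padding=1,
                  norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'))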
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/max_iou_assigner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/max_iou_assigner.py
deleted file mode 100644
index 5cf4c4b4b450f87dfb99c3d33d8ed83d3e5cfcb3..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/assigners/max_iou_assigner.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..iou_calculators import build_iou_calculator
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-
-@BBOX_ASSIGNERS.register_module()
-class MaxIoUAssigner(BaseAssigner):
- """Assign a corresponding gt bbox or background to each bbox.
-
- Each proposal will be assigned `-1` or a semi-positive integer
- indicating the ground truth index.
-
- - -1: negative sample, no assigned gt
- - semi-positive integer: positive sample, index (0-based) of assigned gt
-
- Args:
- pos_iou_thr (float): IoU threshold for positive bboxes.
- neg_iou_thr (float or tuple): IoU threshold for negative bboxes.
- min_pos_iou (float): Minimum iou for a bbox to be considered as a
- positive bbox. Positive samples can have smaller IoU than
- pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
- gt_max_assign_all (bool): Whether to assign all bboxes with the same
- highest overlap with some gt to that gt.
- ignore_iof_thr (float): IoF threshold for ignoring bboxes (if
- `gt_bboxes_ignore` is specified). Negative values mean not
- ignoring any bboxes.
- ignore_wrt_candidates (bool): Whether to compute the iof between
- `bboxes` and `gt_bboxes_ignore`, or the contrary.
- match_low_quality (bool): Whether to allow low quality matches. This is
- usually allowed for RPN and single stage detectors, but not allowed
- in the second stage. Details are demonstrated in Step 4.
- gpu_assign_thr (int): The upper bound of the number of GT for GPU
- assign. When the number of gt is above this threshold, will assign
- on CPU device. Negative values mean not assign on CPU.
- """
-
- def __init__(self,
- pos_iou_thr,
- neg_iou_thr,
- min_pos_iou=.0,
- gt_max_assign_all=True,
- ignore_iof_thr=-1,
- ignore_wrt_candidates=True,
- match_low_quality=True,
- gpu_assign_thr=-1,
- iou_calculator=dict(type='BboxOverlaps2D')):
- self.pos_iou_thr = pos_iou_thr
- self.neg_iou_thr = neg_iou_thr
- self.min_pos_iou = min_pos_iou
- self.gt_max_assign_all = gt_max_assign_all
- self.ignore_iof_thr = ignore_iof_thr
- self.ignore_wrt_candidates = ignore_wrt_candidates
- self.gpu_assign_thr = gpu_assign_thr
- self.match_low_quality = match_low_quality
- self.iou_calculator = build_iou_calculator(iou_calculator)
-
- def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
- """Assign gt to bboxes.
-
- This method assigns a gt bbox to every bbox (proposal/anchor). Each bbox
- will be assigned -1 or a semi-positive number: -1 means negative
- sample, and a semi-positive number is the index (0-based) of the assigned gt.
- The assignment is done in the following steps; the order matters.
-
- 1. assign every bbox to the background
- 2. assign proposals whose iou with all gts < neg_iou_thr to 0
- 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr,
- assign it to that gt
- 4. for each gt bbox, assign its nearest proposals (may be more than
- one) to itself
-
- Args:
- bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4).
- gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
-
- Example:
- >>> self = MaxIoUAssigner(0.5, 0.5)
- >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
- >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
- >>> assign_result = self.assign(bboxes, gt_bboxes)
- >>> expected_gt_inds = torch.LongTensor([1, 0])
- >>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
- """
- assign_on_cpu = True if (self.gpu_assign_thr > 0) and (
- gt_bboxes.shape[0] > self.gpu_assign_thr) else False
- # compute overlap and assign gt on CPU when number of GT is large
- if assign_on_cpu:
- device = bboxes.device
- bboxes = bboxes.cpu()
- gt_bboxes = gt_bboxes.cpu()
- if gt_bboxes_ignore is not None:
- gt_bboxes_ignore = gt_bboxes_ignore.cpu()
- if gt_labels is not None:
- gt_labels = gt_labels.cpu()
-
- overlaps = self.iou_calculator(gt_bboxes, bboxes)
-
- if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None
- and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0):
- if self.ignore_wrt_candidates:
- ignore_overlaps = self.iou_calculator(
- bboxes, gt_bboxes_ignore, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=1)
- else:
- ignore_overlaps = self.iou_calculator(
- gt_bboxes_ignore, bboxes, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=0)
- overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1
-
- assign_result = self.assign_wrt_overlaps(overlaps, gt_labels)
- if assign_on_cpu:
- assign_result.gt_inds = assign_result.gt_inds.to(device)
- assign_result.max_overlaps = assign_result.max_overlaps.to(device)
- if assign_result.labels is not None:
- assign_result.labels = assign_result.labels.to(device)
- return assign_result
-
- def assign_wrt_overlaps(self, overlaps, gt_labels=None):
- """Assign w.r.t. the overlaps of bboxes with gts.
-
- Args:
- overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes,
- shape(k, n).
- gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
- """
- num_gts, num_bboxes = overlaps.size(0), overlaps.size(1)
-
- # 1. assign -1 by default
- assigned_gt_inds = overlaps.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
-
- if num_gts == 0 or num_bboxes == 0:
- # No ground truth or boxes, return empty assignment
- max_overlaps = overlaps.new_zeros((num_bboxes, ))
- if num_gts == 0:
- # No truth, assign everything to background
- assigned_gt_inds[:] = 0
- if gt_labels is None:
- assigned_labels = None
- else:
- assigned_labels = overlaps.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- return AssignResult(
- num_gts,
- assigned_gt_inds,
- max_overlaps,
- labels=assigned_labels)
-
- # for each anchor, which gt best overlaps with it
- # for each anchor, the max iou of all gts
- max_overlaps, argmax_overlaps = overlaps.max(dim=0)
- # for each gt, which anchor best overlaps with it
- # for each gt, the max iou of all proposals
- gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1)
-
- # 2. assign negative: below
- # the negative inds are set to be 0
- if isinstance(self.neg_iou_thr, float):
- assigned_gt_inds[(max_overlaps >= 0)
- & (max_overlaps < self.neg_iou_thr)] = 0
- elif isinstance(self.neg_iou_thr, tuple):
- assert len(self.neg_iou_thr) == 2
- assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0])
- & (max_overlaps < self.neg_iou_thr[1])] = 0
-
- # 3. assign positive: above positive IoU threshold
- pos_inds = max_overlaps >= self.pos_iou_thr
- assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1
-
- if self.match_low_quality:
- # Low-quality matching will overwrite the assigned_gt_inds assigned
- # in Step 3. Thus, the assigned gt might not be the best one for
- # prediction.
- # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2,
- # bbox 1 will be assigned as the best target for bbox A in step 3.
- # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's
- # assigned_gt_inds will be overwritten to be bbox B.
- # This might be the reason that it is not used in ROI Heads.
- for i in range(num_gts):
- if gt_max_overlaps[i] >= self.min_pos_iou:
- if self.gt_max_assign_all:
- max_iou_inds = overlaps[i, :] == gt_max_overlaps[i]
- assigned_gt_inds[max_iou_inds] = i + 1
- else:
- assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1
-
- if gt_labels is not None:
- assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1)
- pos_inds = torch.nonzero(
- assigned_gt_inds > 0, as_tuple=False).squeeze()
- if pos_inds.numel() > 0:
- assigned_labels[pos_inds] = gt_labels[
- assigned_gt_inds[pos_inds] - 1]
- else:
- assigned_labels = None
-
- return AssignResult(
- num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels)
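A slightly fuller, hedged version of the docstring example above; the import path assumes the vendored mmdet-style package is importable as mmdet:

import torch
from mmdet.core.bbox.assigners import MaxIoUAssigner  # assumed import path

assigner = MaxIoUAssigner(pos_iou_thr=0.5, neg_iou_thr=0.4)
bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20], [5, 5, 15, 15]])
gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
gt_labels = torch.LongTensor([2])

result = assigner.assign(bboxes, gt_bboxes, gt_labels=gt_labels)
print(result.gt_inds)  # 1-based gt index for positives, 0 for background, -1 for ignored
print(result.labels)   # class label for positives, -1 otherwise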
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/__init__.py
deleted file mode 100644
index 73199b01dec52820dc6ca0139903536344d5a1eb..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/video/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .io import Cache, VideoReader, frames2video
-from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread,
- flowwrite, quantize_flow, sparse_flow_from_bytes)
-from .processing import concat_video, convert_video, cut_video, resize_video
-
-__all__ = [
- 'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video',
- 'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow',
- 'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes'
-]
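A one-line illustration of the re-exported video I/O helpers (the file name is hypothetical and the vendored import path is an assumption):

from annotator.uniformer_base.mmcv.video import VideoReader

video = VideoReader("input.mp4")
print(len(video), video.fps)  # frame count and frame rate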
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/models/dim.py b/spaces/SankarSrin/image-matting-app/ppmatting/models/dim.py
deleted file mode 100644
index 5d9ae654322242f785407e61ff7b8405d6b443b4..0000000000000000000000000000000000000000
--- a/spaces/SankarSrin/image-matting-app/ppmatting/models/dim.py
+++ /dev/null
@@ -1,208 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from collections import defaultdict
-import paddle
-import paddle.nn as nn
-import paddle.nn.functional as F
-from paddleseg.models import layers
-from paddleseg import utils
-from paddleseg.cvlibs import manager
-
-from ppmatting.models.losses import MRSD
-
-
-@manager.MODELS.add_component
-class DIM(nn.Layer):
- """
- The DIM implementation based on PaddlePaddle.
-
- The original article refers to
- Ning Xu, et al. "Deep Image Matting"
- (https://arxiv.org/pdf/1703.03872.pdf).
-
- Args:
- backbone: backbone model.
- stage (int, optional): The stage of the model. Default: 3.
- decoder_input_channels (int, optional): The channel of the decoder input. Default: 512.
- pretrained (str, optional): The path of the pretrained model. Default: None.
-
- """
-
- def __init__(self,
- backbone,
- stage=3,
- decoder_input_channels=512,
- pretrained=None):
- super().__init__()
- self.backbone = backbone
- self.pretrained = pretrained
- self.stage = stage
- self.loss_func_dict = None
-
- decoder_output_channels = [64, 128, 256, 512]
- self.decoder = Decoder(
- input_channels=decoder_input_channels,
- output_channels=decoder_output_channels)
- if self.stage == 2:
- for param in self.backbone.parameters():
- param.stop_gradient = True
- for param in self.decoder.parameters():
- param.stop_gradient = True
- if self.stage >= 2:
- self.refine = Refine()
- self.init_weight()
-
- def forward(self, inputs):
- input_shape = paddle.shape(inputs['img'])[-2:]
- x = paddle.concat([inputs['img'], inputs['trimap'] / 255], axis=1)
- fea_list = self.backbone(x)
-
- # decoder stage
- up_shape = []
- for i in range(5):
- up_shape.append(paddle.shape(fea_list[i])[-2:])
- alpha_raw = self.decoder(fea_list, up_shape)
- alpha_raw = F.interpolate(
- alpha_raw, input_shape, mode='bilinear', align_corners=False)
- logit_dict = {'alpha_raw': alpha_raw}
- if self.stage < 2:
- return logit_dict
-
- if self.stage >= 2:
- # refine stage
- refine_input = paddle.concat([inputs['img'], alpha_raw], axis=1)
- alpha_refine = self.refine(refine_input)
-
- # final alpha
- alpha_pred = alpha_refine + alpha_raw
- alpha_pred = F.interpolate(
- alpha_pred, input_shape, mode='bilinear', align_corners=False)
- if not self.training:
- alpha_pred = paddle.clip(alpha_pred, min=0, max=1)
- logit_dict['alpha_pred'] = alpha_pred
- if self.training:
- loss_dict = self.loss(logit_dict, inputs)
- return logit_dict, loss_dict
- else:
- return alpha_pred
-
- def loss(self, logit_dict, label_dict, loss_func_dict=None):
- if loss_func_dict is None:
- if self.loss_func_dict is None:
- self.loss_func_dict = defaultdict(list)
- self.loss_func_dict['alpha_raw'].append(MRSD())
- self.loss_func_dict['comp'].append(MRSD())
- self.loss_func_dict['alpha_pred'].append(MRSD())
- else:
- self.loss_func_dict = loss_func_dict
-
- loss = {}
- mask = label_dict['trimap'] == 128
- loss['all'] = 0
-
- if self.stage != 2:
- loss['alpha_raw'] = self.loss_func_dict['alpha_raw'][0](
- logit_dict['alpha_raw'], label_dict['alpha'], mask)
- loss['alpha_raw'] = 0.5 * loss['alpha_raw']
- loss['all'] = loss['all'] + loss['alpha_raw']
-
- if self.stage == 1 or self.stage == 3:
- comp_pred = logit_dict['alpha_raw'] * label_dict['fg'] + \
- (1 - logit_dict['alpha_raw']) * label_dict['bg']
- loss['comp'] = self.loss_func_dict['comp'][0](
- comp_pred, label_dict['img'], mask)
- loss['comp'] = 0.5 * loss['comp']
- loss['all'] = loss['all'] + loss['comp']
-
- if self.stage == 2 or self.stage == 3:
- loss['alpha_pred'] = self.loss_func_dict['alpha_pred'][0](
- logit_dict['alpha_pred'], label_dict['alpha'], mask)
- loss['all'] = loss['all'] + loss['alpha_pred']
-
- return loss
-
- def init_weight(self):
- if self.pretrained is not None:
- utils.load_entire_model(self, self.pretrained)
-
-
-# bilinear interpolation with skip connection
-class Up(nn.Layer):
- def __init__(self, input_channels, output_channels):
- super().__init__()
- self.conv = layers.ConvBNReLU(
- input_channels,
- output_channels,
- kernel_size=5,
- padding=2,
- bias_attr=False)
-
- def forward(self, x, skip, output_shape):
- x = F.interpolate(
- x, size=output_shape, mode='bilinear', align_corners=False)
- x = x + skip
- x = self.conv(x)
- x = F.relu(x)
-
- return x
-
-
-class Decoder(nn.Layer):
- def __init__(self, input_channels, output_channels=(64, 128, 256, 512)):
- super().__init__()
- self.deconv6 = nn.Conv2D(
- input_channels, input_channels, kernel_size=1, bias_attr=False)
- self.deconv5 = Up(input_channels, output_channels[-1])
- self.deconv4 = Up(output_channels[-1], output_channels[-2])
- self.deconv3 = Up(output_channels[-2], output_channels[-3])
- self.deconv2 = Up(output_channels[-3], output_channels[-4])
- self.deconv1 = Up(output_channels[-4], 64)
-
- self.alpha_conv = nn.Conv2D(
- 64, 1, kernel_size=5, padding=2, bias_attr=False)
-
- def forward(self, fea_list, shape_list):
- x = fea_list[-1]
- x = self.deconv6(x)
- x = self.deconv5(x, fea_list[4], shape_list[4])
- x = self.deconv4(x, fea_list[3], shape_list[3])
- x = self.deconv3(x, fea_list[2], shape_list[2])
- x = self.deconv2(x, fea_list[1], shape_list[1])
- x = self.deconv1(x, fea_list[0], shape_list[0])
- alpha = self.alpha_conv(x)
- alpha = F.sigmoid(alpha)
-
- return alpha
-
-
-class Refine(nn.Layer):
- def __init__(self):
- super().__init__()
- self.conv1 = layers.ConvBNReLU(
- 4, 64, kernel_size=3, padding=1, bias_attr=False)
- self.conv2 = layers.ConvBNReLU(
- 64, 64, kernel_size=3, padding=1, bias_attr=False)
- self.conv3 = layers.ConvBNReLU(
- 64, 64, kernel_size=3, padding=1, bias_attr=False)
- self.alpha_pred = layers.ConvBNReLU(
- 64, 1, kernel_size=3, padding=1, bias_attr=False)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.conv2(x)
- x = self.conv3(x)
- alpha = self.alpha_pred(x)
-
- return alpha
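A minimal shape check for the Refine head defined above (PaddlePaddle required; the input is the RGB image concatenated with the raw alpha, matching the refine stage in DIM.forward, and the import path is an assumption):

import paddle
from ppmatting.models.dim import Refine  # assumed import path

refine = Refine()
x = paddle.randn([1, 4, 64, 64])   # [N, 3+1, H, W]: image + raw alpha
residual = refine(x)               # -> [1, 1, 64, 64] alpha residual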
diff --git a/spaces/SeViLA/SeViLA/app/classification.py b/spaces/SeViLA/SeViLA/app/classification.py
deleted file mode 100644
index 2b5bd896474d5df6814220c79aeda6d5a895ab92..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/app/classification.py
+++ /dev/null
@@ -1,216 +0,0 @@
-"""
- # Copyright (c) 2022, salesforce.com, inc.
- # All rights reserved.
- # SPDX-License-Identifier: BSD-3-Clause
- # For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import plotly.graph_objects as go
-import requests
-import streamlit as st
-import torch
-from lavis.models import load_model
-from lavis.processors import load_processor
-from lavis.processors.blip_processors import BlipCaptionProcessor
-from PIL import Image
-
-from app import device, load_demo_image
-from app.utils import load_blip_itm_model
-from lavis.processors.clip_processors import ClipImageEvalProcessor
-
-
-@st.cache()
-def load_demo_image(img_url=None):
- if not img_url:
- img_url = "https://img.atlasobscura.com/yDJ86L8Ou6aIjBsxnlAy5f164w1rjTgcHZcx2yUs4mo/rt:fit/w:1200/q:81/sm:1/scp:1/ar:1/aHR0cHM6Ly9hdGxh/cy1kZXYuczMuYW1h/em9uYXdzLmNvbS91/cGxvYWRzL3BsYWNl/X2ltYWdlcy85MDll/MDRjOS00NTJjLTQx/NzQtYTY4MS02NmQw/MzI2YWIzNjk1ZGVk/MGZhMTJiMTM5MmZi/NGFfUmVhcl92aWV3/X29mX3RoZV9NZXJs/aW9uX3N0YXR1ZV9h/dF9NZXJsaW9uX1Bh/cmssX1NpbmdhcG9y/ZSxfd2l0aF9NYXJp/bmFfQmF5X1NhbmRz/X2luX3RoZV9kaXN0/YW5jZV8tXzIwMTQw/MzA3LmpwZw.jpg"
- raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
- return raw_image
-
-
-@st.cache(
- hash_funcs={
- torch.nn.parameter.Parameter: lambda parameter: parameter.data.detach()
- .cpu()
- .numpy()
- },
- allow_output_mutation=True,
-)
-def load_model_cache(model_type, device):
- if model_type == "blip":
- model = load_model(
- "blip_feature_extractor", model_type="base", is_eval=True, device=device
- )
- elif model_type == "albef":
- model = load_model(
- "albef_feature_extractor", model_type="base", is_eval=True, device=device
- )
- elif model_type == "CLIP_ViT-B-32":
- model = load_model(
- "clip_feature_extractor", "ViT-B-32", is_eval=True, device=device
- )
- elif model_type == "CLIP_ViT-B-16":
- model = load_model(
- "clip_feature_extractor", "ViT-B-16", is_eval=True, device=device
- )
- elif model_type == "CLIP_ViT-L-14":
- model = load_model(
- "clip_feature_extractor", "ViT-L-14", is_eval=True, device=device
- )
-
- return model
-
-
-def app():
- model_type = st.sidebar.selectbox(
- "Model:",
- ["ALBEF", "BLIP_Base", "CLIP_ViT-B-32", "CLIP_ViT-B-16", "CLIP_ViT-L-14"],
- )
- score_type = st.sidebar.selectbox("Score type:", ["Cosine", "Multimodal"])
-
- # ===== layout =====
- st.markdown(
- "