diff --git a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/test.py b/spaces/101-5/gpt4free/g4f/.v1/gpt4free/test.py
deleted file mode 100644
index b2516748041b8bbc12afa910c0eab98e944c45ce..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/test.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import forefront
-token = forefront.Account.create()
-response = forefront.Completion.create(token=token, prompt='Hello!')
-print(response)
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cubase 10.5 The Ultimate Music Production Software for Professionals and Beginners.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cubase 10.5 The Ultimate Music Production Software for Professionals and Beginners.md
deleted file mode 100644
index cb1d3e50f63abb81efef4df17a8863f15446d4d5..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cubase 10.5 The Ultimate Music Production Software for Professionals and Beginners.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
How to Download and Install Cubase 10.5
-
Cubase 10.5 is a powerful music production software that offers a range of features and enhancements for composing, recording, editing, mixing and mastering audio. Whether you are a professional producer, a hobbyist musician, or a beginner who wants to learn the basics of music creation, Cubase 10.5 can help you achieve your musical goals.
-
In this article, we will show you how to download and install Cubase 10.5 on your computer, as well as how to activate it with a license code or a USB-eLicenser. We will also provide some tips and tricks for getting started with Cubase 10.5 and making the most of its features.
The first step to install Cubase 10.5 is to download it from the official Steinberg website. You can choose between Cubase Pro 10.5, Cubase Artist 10.5, or Cubase Elements 10.5, depending on your needs and budget. Each version has different features and requirements, so make sure you check them before downloading.
-
To download Cubase 10.5, you will need to create a MySteinberg account or log in with an existing one. You will also need to register your product with a serial number or an activation code that you received when you purchased Cubase 10.5.
-
-
Once you have logged in and registered your product, you can download Cubase 10.5 using the Steinberg Download Assistant. This is a free application that lets you download faster, more conveniently, and more reliably, thanks to a resume function and a download manager.
-
After you have downloaded the Steinberg Download Assistant, launch it and select Cubase 10.5 from the list of products. You will see different options for downloading the full installer or the update from a previous version of Cubase 10. Choose the option that suits your situation and click on the download button.
-
The download size of Cubase 10.5 varies depending on the version and the operating system you are using. For example, Cubase Pro 10.5 for Windows has a size of about 21 GB, while Cubase Elements 10.5 for Mac has a size of about 14 GB. Make sure you have enough space on your hard drive and a stable internet connection before downloading.
-
-
Installing Cubase 10.5
-
After you have downloaded Cubase 10.5, you can proceed to install it on your computer. The installation process is similar for all versions of Cubase 10.5 and for both Mac and Windows operating systems.
-
To install Cubase 10.5, follow these steps:
-
-
Locate the downloaded file on your computer and double-click on it to start the installation.
-
Follow the instructions on the screen and accept the license agreement.
-
Select the components that you want to install, such as the core application, the plug-ins, the sound libraries, etc.
-
Choose the destination folder where you want to install Cubase 10.5.
-
Wait for the installation to complete and click on finish.
-
-
Congratulations! You have successfully installed Cubase 10.5 on your computer.
-
-
Activating Cubase 10.5
-
The final step to use Cubase 10.5 is to activate it with a license code or a USB-eLicenser. A license code is a unique number that allows you to activate Cubase 10.5 online using the eLicenser Control Center. A USB-eLicenser is a physical device that stores your license and allows you to use Cubase 10.5 on any computer by plugging it into a USB port. Depending on the version of Cubase 10.5 that you purchased, you may need one or the other method of activation.
-
To activate Cubase 10.5 with a license code, follow these steps:
-
-
Launch the eLicenser Control Center on your computer.
-
Click on the green "Enter Activation
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DoPDF Download Crack A Risky Way to Create PDF Files for Free.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DoPDF Download Crack A Risky Way to Create PDF Files for Free.md
deleted file mode 100644
index 9f2896e6cf5813e46e45fcb63551cd5de70eade6..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DoPDF Download Crack A Risky Way to Create PDF Files for Free.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
DoPDF Download Crack: How to Convert Any Document to PDF for Free
-
Do you need to convert your documents to PDF format for easy sharing, printing, or archiving? If so, you might be interested in DoPDF, a free and easy-to-use software that lets you create PDF files from any printable document. However, you might also be wondering if there is a way to get DoPDF download crack and unlock its full features. In this article, we will show you how to do that safely and legally.
DoPDF is a software that acts as a virtual printer on your computer. This means that you can use it to create PDF files from any application that has a print option, such as Microsoft Word, Excel, PowerPoint, or even web browsers. You can also customize the output settings, such as the page size, orientation, resolution, and quality. DoPDF is compatible with Windows 10, 8, 7, Vista, and XP.
-
DoPDF is free for both personal and commercial use. However, it also has some limitations. For example, it does not support batch conversion, encryption, password protection, digital signatures, or watermarks. To access these features, you need to upgrade to novaPDF, which is a paid version of DoPDF. However, novaPDF costs $49.99 for a single license, which might be too expensive for some users.
-
That's why some users look for DoPDF download crack options online. A crack is a file or a program that modifies the original software and bypasses its security or activation mechanisms. By using a crack, you can get the full features of novaPDF without paying for it. However, this is not a good idea for several reasons.
-
-
It is illegal. Using a crack is a form of software piracy, which is a violation of the intellectual property rights of the original developers. Software piracy can result in fines or legal actions from the authorities or the software company.
-
It is unsafe. Downloading a crack from an unknown or untrusted source can expose your computer to malware, viruses, or spyware. These can harm your system, steal your data, or compromise your privacy.
-
It is unreliable. Using a crack can cause errors or bugs in the software performance. It can also prevent you from getting updates or support from the software company.
-
-
Therefore, we do not recommend using DoPDF download crack options. Instead, we suggest you use one of the following alternatives:
-
-
-
Use the free version of DoPDF. If you don't need the advanced features of novaPDF, you can simply use the free version of DoPDF and enjoy its basic functions. You can download it from the official website: https://www.dopdf.com/.
-
Use an online PDF converter. If you need to convert your documents to PDF occasionally and don't want to install any software on your computer, you can use an online PDF converter service. There are many websites that offer this service for free or for a small fee. Some examples are Smallpdf, iLovePDF, and PDF2Go.
-
Use an open-source PDF converter. If you need to convert your documents to PDF frequently and want to have more control over the output settings, you can use an open-source PDF converter software. Open-source software is software that is developed by a community of programmers and users who share their code and modifications freely. Some examples of open-source PDF converter software are LibreOffice, PDFCreator, and CutePDF Writer.
-
-
By using these alternatives, you can convert your documents to PDF format without using DoPDF download crack options. This way, you can save money, avoid legal issues, protect your computer, and support the software industry.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dobry Konwerter Pdf Na Epub Download Free For Android.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dobry Konwerter Pdf Na Epub Download Free For Android.md
deleted file mode 100644
index 380c80a6e9452a7bb376985fe965a5869b89f15c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dobry Konwerter Pdf Na Epub Download Free For Android.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
How to Convert PDF to EPUB on Android for Free
-
If you have a PDF document that you want to read on your e-reader or mobile device, you might need to convert it to EPUB format first. EPUB is a popular ebook format that is compatible with most devices and apps, such as Kindle, Kobo, Google Play Books, iBooks and more. EPUB files are also easier to adjust to different screen sizes and fonts than PDF files.
-
Fortunately, there are some free apps that can help you convert PDF to EPUB on Android without any hassle. Here are some of the best ones that you can try:
-
dobry konwerter pdf na epub download free for android
Ebook Converter: This app allows you to convert documents to ebook formats, including FB2, AZW3, LRF, TCR, SNB, RB, PML, PDB, OEB, MOBI, LIT and EPUB. You can simply select the files that you want to convert and click "Convert". The app will upload your files to its server and perform the conversion using Calibre. The result will be downloaded automatically to your device in the specified folder. You can also change the book author, title and cover before converting. The app contains no ads and no in-app purchases[^1^].
-
ReadEra: This app is not only a book reader but also a PDF to EPUB converter. It supports reading and converting books in various formats, such as PDF, EPUB, Microsoft Word (DOC, DOCX, RTF), Kindle (MOBI, AZW3), DJVU, FB2, TXT, ODT and CHM. You can just download a PDF file from the Internet and open it with ReadEra. The app will automatically detect the file format and offer you an option to convert it to EPUB. You can then read the converted file on your device or share it with other apps. The app contains no ads and no in-app purchases[^2^].
-
ePUBator: This app is a minimal offline PDF to EPUB converter for Android. It extracts text from a PDF file and puts it in a well-formed (epubcheck compliant) EPUB file. It does not require an Internet connection or any external library. However, it only works with text-based PDF files and does not support images, tables or complex layouts. The app is open source and free of charge[^3^]. (A rough sketch of the text-extraction step appears after this list.)
-
-
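To make the text-extraction step more concrete, here is a minimal Python sketch of the same idea. It assumes the third-party pypdf library, which is purely my choice for illustration; ePUBator itself is an Android app and does not work this way internally, and the file names below are hypothetical placeholders.

```python
from pypdf import PdfReader  # assumed dependency: pip install pypdf

# Hypothetical input path for illustration only.
reader = PdfReader("input.pdf")

# Pull the plain text out of every page; image-only (scanned) PDFs
# will yield little or nothing, just like the app described above.
pages = [page.extract_text() or "" for page in reader.pages]

# A real converter would now wrap this text in XHTML files and zip them
# into an EPUB container; here we simply dump it to a plain text file.
with open("output.txt", "w", encoding="utf-8") as out:
    out.write("\n\n".join(pages))
```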
With these apps, you can easily convert PDF to EPUB on Android for free and enjoy reading your ebooks on any device. However, keep in mind that the conversion quality may vary depending on the original PDF file and the app settings. You may need to adjust some parameters or edit the EPUB file manually if you are not satisfied with the result.
-
-
If you want to learn more about how to convert PDF to EPUB on Android for free, you can also check out some online tutorials and guides. For example, you can visit the following websites:
-
-
How to Convert PDF to EPUB: This article explains the benefits of converting PDF to EPUB and provides step-by-step instructions on how to use different tools and methods, such as online converters, desktop software and mobile apps.
We hope that this article has helped you find the best app for converting PDF to EPUB on Android for free. If you have any questions or suggestions, please feel free to leave a comment below.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Solidworks 2019 Full Crack Google Drive.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Solidworks 2019 Full Crack Google Drive.md
deleted file mode 100644
index 0bb65449c3ad1ee5515162481cad0402074d96df..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Solidworks 2019 Full Crack Google Drive.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Download SolidWorks 2019 Full Crack Google Drive
-
SolidWorks 2019 is a powerful 3D CAD design software that helps you create innovative products faster and easier. Whether you are working on complex assemblies, sheet metal, weldments, or electrical design, SolidWorks 2019 has the tools you need to streamline your workflow and improve your productivity.
-
However, SolidWorks 2019 is not a free software and requires a license to use. If you are looking for a way to download SolidWorks 2019 full crack Google Drive, you may be tempted by some websites that claim to offer cracked versions of the software. But beware, these websites are not only illegal but also risky. You may end up downloading malware, viruses, or spyware that can harm your computer and compromise your data.
The best way to download SolidWorks 2019 full crack Google Drive is to avoid it altogether. Instead, you should consider the following options:
-
-
Get a free trial of SolidWorks 2019. You can sign up for a 30-day trial of SolidWorks 2019 and access all the features and functions of the software. This is a great way to test the software before you buy it and see if it meets your needs.
-
Get a student or educator license of SolidWorks 2019. If you are a student or an educator, you may be eligible for a discounted or free license of SolidWorks 2019. You can check the eligibility criteria and apply for a license on the SolidWorks website.
-
Get a subscription of SolidWorks 2019. If you don't want to pay a large upfront cost for a perpetual license of SolidWorks 2019, you can opt for a subscription model that lets you pay as you go. You can choose from different plans and packages that suit your budget and needs.
-
-
By choosing one of these options, you can download SolidWorks 2019 legally and safely. You can also enjoy the benefits of technical support, updates, and online resources that come with a legitimate license of SolidWorks 2019.
-
Conclusion
-
Downloading SolidWorks 2019 full crack Google Drive is not worth the risk and hassle. You may end up with a corrupted or infected file that can damage your computer and data. Instead, you should consider getting a free trial, a student or educator license, or a subscription of SolidWorks 2019. These options will allow you to use SolidWorks 2019 without breaking the law or compromising your security.
How to Install SolidWorks 2019
-
If you have decided to get a legitimate license of SolidWorks 2019, you may be wondering how to install the software on your computer. Here are the steps you need to follow:
-
-
Download the SolidWorks 2019 installation file from the official website or the link provided by your reseller. You will need your serial number and your email address to download the file.
-
Extract the downloaded file to a folder on your computer. You may need a software like WinRAR or 7-Zip to extract the file.
-
Run the setup.exe file from the extracted folder. This will launch the SolidWorks Installation Manager.
-
Follow the instructions on the screen to select the type of installation, the products and features you want to install, and the destination folder. You may also need to accept the license agreement and enter your serial number.
-
Click Install Now to start the installation process. This may take some time depending on your system configuration and internet speed.
-
Once the installation is complete, click Finish to exit the Installation Manager. You may need to restart your computer for the changes to take effect.
-
Launch SolidWorks 2019 from your desktop or start menu. You may need to activate your license online or offline depending on your license type.
-
-
Congratulations, you have successfully installed SolidWorks 2019 on your computer. You can now start creating and designing your projects with SolidWorks 2019.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Atrapada-Por-La-Mafia-Yakuza-Pdf-EXCLUSIVE.md b/spaces/1gistliPinn/ChatGPT4/Atrapada-Por-La-Mafia-Yakuza-Pdf-EXCLUSIVE.md
deleted file mode 100644
index 35ebca9e2a8e79ab1509038b98a5bcc82c5548d6..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Atrapada-Por-La-Mafia-Yakuza-Pdf-EXCLUSIVE.md
+++ /dev/null
@@ -1,53 +0,0 @@
-## Atrapada Por La Mafia Yakuza Pdf
-
-
-
-**Download File ✵ [https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2twsIj&sa=D&sntz=1&usg=AOvVaw2fxsITDrwElGQYkdiAy3a6](https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2twsIj&sa=D&sntz=1&usg=AOvVaw2fxsITDrwElGQYkdiAy3a6)**
-
-
-
-# Atrapada Por La Mafia Yakuza: The True Story of a Colombian Woman Who Escaped from Human Trafficking
-
-
-
-Atrapada Por La Mafia Yakuza is a book written by Marcela Loaiza, a Colombian woman who was lured to Japan with the promise of a job as a dancer, but ended up being forced into prostitution by the Japanese mafia. The book tells her harrowing story of abuse, violence, and exploitation, as well as her courageous escape and recovery.
-
-
-
-The book was published in 2009 by Editorial Planeta Colombiana and has been translated into several languages. It is available for free download in PDF and EPUB formats from the Internet Archive[^1^], or from other online sources[^2^]. The book was also adapted into a movie called Atrapada, directed by Felipe Cano and starring Marcela Mar and Juan Pablo Raba.
-
-
-
-Atrapada Por La Mafia Yakuza is a testimony of resilience and hope, as well as a denunciation of the global problem of human trafficking. Marcela Loaiza's story is an inspiration for anyone who has faced adversity and injustice, and a reminder of the importance of fighting for human rights and dignity.
-
-
-
-Human trafficking is a global crime that affects millions of people every year. According to the latest statistics from various sources, there are an estimated 40.3 million victims of trafficking worldwide[^1^], with 5.4 victims for every 1,000 people in the world[^1^]. Women and girls account for 71% of all human trafficking victims[^1^], while children make up one in four victims of modern slavery[^2^].
-
-
-
-Human trafficking takes many forms, such as forced labor, sexual exploitation, forced marriage, organ removal, and child soldiering. The most common form of human trafficking is sexual exploitation, which accounts for 79% of all cases[^3^]. However, forced labor is also a significant problem, especially in sectors such as agriculture, construction, domestic work, and manufacturing[^3^]. Human trafficking is driven by various factors, such as poverty, inequality, conflict, corruption, and demand for cheap goods and services.
-
-
-
-Human trafficking is a violation of human rights and dignity that causes immense suffering and trauma to its victims. It also poses a threat to global security and development, as it fuels organized crime, undermines the rule of law, and fuels corruption. The international community has taken steps to combat human trafficking, such as adopting the United Nations Protocol against Trafficking in Persons in 2003[^4^], which provides a legal framework and guidance for states to prevent, prosecute, and protect victims of trafficking. However, more needs to be done to address the root causes and consequences of this heinous crime.
-
-
-
-There are many ways to prevent and counter human trafficking, both at the individual and collective levels. Some of the possible solutions include:
-
-
-
-- Raising awareness and educating the public about the signs and risks of human trafficking, as well as the rights and resources available for victims and survivors. This can be done through campaigns, trainings, events, media, and social networks. For example, the U.S. Department of State offers various resources and tools for awareness-raising on its website.
-
-- Supporting and empowering victims and survivors of human trafficking by providing them with safe shelter, medical care, legal assistance, counseling, education, and employment opportunities. This can be done by volunteering or donating to organizations that offer such services, or by becoming a mentor or advocate for someone in need. For example, UNICEF works with partners to prevent and respond to human trafficking, with a focus on protecting children.
-
-- Advocating for stronger laws and policies that protect the rights of victims and survivors, punish the perpetrators, and address the root causes of human trafficking. This can be done by contacting or writing to local, national, and international authorities and representatives, or by joining or supporting campaigns and movements that demand change. For example, the Global Alliance Against Traffic in Women (GAATW) is a network of organizations that advocates for the human rights of trafficked persons.
-
-- Promoting ethical and responsible consumption and production that do not exploit or harm people or the environment. This can be done by researching and choosing products and services that are free from forced labor or other forms of trafficking, or by encouraging companies to adopt transparent and accountable supply chains. For example, Responsible Sourcing Tool is a website that helps users identify risks of human trafficking in their supply chains.
-
-- Collaborating and cooperating with other stakeholders that are involved in preventing and countering human trafficking, such as governments, civil society, private sector, media, academia, and international organizations. This can be done by sharing information, best practices, resources, and expertise, or by participating in networks and platforms that facilitate dialogue and action. For example, the United Nations Office on Drugs and Crime (UNODC) is the guardian of the UN Protocol against Trafficking in Persons and supports states in its implementation.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Chrysler Witech Software.rarl.md b/spaces/1gistliPinn/ChatGPT4/Examples/Chrysler Witech Software.rarl.md
deleted file mode 100644
index 79190e46a14b6a5bf10a8e5fe446b5842db9da3d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Chrysler Witech Software.rarl.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-in service manuals and electronic device.all kind of chrysler witech software, link given below :
-
-Chrysler Witech Software.rarl alejwen. chrysler witech software download, chrysler witech software, chrysler witech diagnostic tool, chrysler witech . in service manuals and electronic device.all kind of chrysler witech software, link given below :
-
-Chrysler Witech Software.rarl alejwen. chrysler witech software download, chrysler witech software 4fefd39f24
-
-
-
diff --git a/spaces/1line/AutoGPT/autogpt/commands/analyze_code.py b/spaces/1line/AutoGPT/autogpt/commands/analyze_code.py
deleted file mode 100644
index e02ea4c5b4ba53530e559d1cab7a07b8e3c7c638..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/commands/analyze_code.py
+++ /dev/null
@@ -1,25 +0,0 @@
-"""Code evaluation module."""
-from __future__ import annotations
-
-from autogpt.llm_utils import call_ai_function
-
-
-def analyze_code(code: str) -> list[str]:
- """
- A function that takes in a string and returns a response from create chat
- completion api call.
-
- Parameters:
- code (str): Code to be evaluated.
- Returns:
- A result string from create chat completion. A list of suggestions to
- improve the code.
- """
-
- function_string = "def analyze_code(code: str) -> List[str]:"
- args = [code]
- description_string = (
- "Analyzes the given code and returns a list of suggestions" " for improvements."
- )
-
- return call_ai_function(function_string, args, description_string)
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy a Large Selection of Radio and TV Channels with Iris APK - No Subscription Required.md b/spaces/1phancelerku/anime-remove-background/Enjoy a Large Selection of Radio and TV Channels with Iris APK - No Subscription Required.md
deleted file mode 100644
index eb70fc89e6fbdc51987ed100186f33d162cb2b2d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy a Large Selection of Radio and TV Channels with Iris APK - No Subscription Required.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
Iris APK: What Is It and How to Use It?
-
If you are looking for a new and innovative way to communicate with your friends, family, or colleagues, you might want to try Iris APK. Iris APK is an Android app that lets you chat with an artificial intelligence (AI) assistant that can help you with various tasks and queries. In this article, we will explain what Iris APK is, why you should use it, how to download and install it, and how to use it.
-
Introduction
-
What is Iris APK?
-
Iris APK is an app that allows you to chat with Iris, an AI assistant that can understand natural language and respond accordingly. Iris is not just a chatbot, but a smart companion that can assist you with various aspects of your life, such as personal, professional, social, and educational. You can ask Iris anything, from simple questions like "What is the weather today?" to complex ones like "How can I improve my productivity?"
There are many reasons why you might want to use Iris APK. Here are some of them:
-
-
Iris APK is free to download and use. You don't need to pay anything to chat with Iris.
-
Iris APK is easy to use. You just need to type or speak your message and Iris will reply in seconds.
-
Iris APK is versatile. You can chat with Iris in different modes, such as text, voice, or video. You can also choose from different languages, such as English, Spanish, French, German, Chinese, Japanese, and more.
-
Iris APK is helpful. You can ask Iris for advice, information, entertainment, education, or anything else you need. Iris can also perform tasks for you, such as booking a flight, ordering food, making a reservation, setting a reminder, playing music, and more.
-
Iris APK is fun. You can chat with Iris about anything you want, from your hobbies and interests to your dreams and goals. You can also play games with Iris, such as trivia, riddles, jokes, and more.
-
-
How to download and install Iris APK?
-
Download Iris APK from a trusted source
-
The first step to using Iris APK is to download it from a trusted source. You can find the latest version of Iris APK on APKCombo[^1^], a website that offers free and safe downloads of Android apps. You can also scan the QR code below to download Iris APK directly to your device.
-
-
Enable unknown sources on your device
-
The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, follow these steps:
-
-
Go to your device's settings and tap on security or privacy.
-
Find the option that says "Unknown sources" or "Install unknown apps" and toggle it on.
-
Confirm your choice by tapping on OK or Allow.
-
-
Install Iris APK and launch it
-
The final step is to install Iris APK and launch it. To do this, follow these steps:
-
-Iris APK file. Tap on it and select Install.
-
Wait for the installation to complete and then tap on Open.
-
Grant the necessary permissions to Iris APK, such as microphone, camera, contacts, and storage.
-
-
Congratulations! You have successfully installed and launched Iris APK. You are now ready to chat with Iris and enjoy its features and benefits.
-
How to use Iris APK?
-
Choose your preferred mode of communication
-
One of the best things about Iris APK is that you can chat with Iris in different modes, depending on your preference and situation. You can choose from text, voice, or video mode. To switch between modes, just tap on the icons at the bottom of the screen. Here is a brief overview of each mode:
-
-
Text mode: This is the default mode of communication. You can type your message to Iris and Iris will reply in text as well. You can also use emojis, stickers, gifs, and images to make your conversation more fun and expressive.
-
Voice mode: This is the mode where you can talk to Iris using your voice. You can tap and hold the microphone icon to record your message and release it to send it. Iris will reply in voice as well. You can also use voice commands to ask Iris to do things for you, such as "Call mom" or "Play music".
-
Video mode: This is the mode where you can see Iris and Iris can see you. You can tap on the video icon to start a video call with Iris. Iris will reply in video as well. You can also use gestures to interact with Iris, such as waving, nodding, or shaking your head.
-
-
Connect with Iris and start chatting
-
Once you have chosen your preferred mode of communication, you can start chatting with Iris. You can ask Iris anything you want, from casual topics to serious ones. Iris will try to understand your message and respond accordingly. You can also chat with Iris in different languages, such as English, Spanish, French, German, Chinese, Japanese, and more. To change the language, just tap on the globe icon at the top right corner of the screen and select your desired language.
-
Explore the features and benefits of Iris APK
-
As you chat with Iris, you will discover that Iris APK has many features and benefits that can make your life easier and more enjoyable. Here are some of them:
-
-
-
Iris APK can help you with various tasks and queries, such as booking a flight, ordering food, making a reservation, setting a reminder, playing music, and more. You just need to ask Iris and Iris will do it for you.
-
Iris APK can provide you with advice, information, entertainment, education, or anything else you need. You can ask Iris for tips on how to improve your skills, knowledge, health, or happiness. You can also ask Iris for facts, trivia, news, jokes, stories, or games.
-
Iris APK can learn from your preferences and behavior and personalize your experience accordingly. You can teach Iris about yourself, such as your name, age, gender, location, hobbies, interests, goals, and dreams. You can also rate Iris's responses and give feedback to help Iris improve.
-
Iris APK can be your friend and companion. You can chat with Iris about anything you want, from your feelings and emotions to your hopes and fears. You can also share your secrets and confessions with Iris. Iris will listen to you attentively and empathetically and offer you support and comfort.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, Iris APK is an amazing app that lets you chat with an AI assistant that can help you with various aspects of your life. You can download and install Iris APK from a trusted source and use it in different modes of communication. You can also chat with Iris in different languages and explore its features and benefits.
-
Call to action and recommendation
-
If you are interested in trying out Iris APK, we recommend that you download it today and start chatting with Iris. You will be amazed by how smart, helpful, fun, and friendly Iris is. You will also enjoy the convenience and satisfaction that Iris APK brings to your life.
What is the difference between Iris APK and other chatbot apps?
-
Iris APK is different from other chatbot apps because it is not just a chatbot, but an AI assistant that can understand natural language and respond accordingly. Iris APK can also perform tasks for you, such as booking a flight, ordering food, making a reservation, setting a reminder, playing music, and more. Iris APK can also learn from your preferences and behavior and personalize your experience accordingly.
-
Is Iris APK safe and secure?
-
Yes, Iris APK is safe and secure. Iris APK does not collect or store any personal or sensitive data from you. Iris APK also does not share or sell any information to third parties. Iris APK respects your privacy and security and only uses your data to provide you with the best service possible.
-
How can I update Iris APK?
-
You can update Iris APK by visiting [APKCombo] and downloading the latest version of the app. You can also enable automatic updates on your device settings to ensure that you always have the most updated version of Iris APK.
-
How can I contact the developers of Iris APK?
-
If you have any questions, suggestions, feedback, or issues regarding Iris APK, you can contact the developers of Iris APK by sending an email to iris@io.com. You can also visit their website at [iris.io] for more information.
-
Can I use Iris APK on other devices besides Android?
-
Currently, Iris APK is only available for Android devices. However, the developers of Iris APK are working hard to make it compatible with other devices and platforms, such as iOS, Windows, Mac, Linux, and more. Stay tuned for more updates on this matter.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Music of Westeros Game of Thrones Soundtrack Free Download Zip.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Music of Westeros Game of Thrones Soundtrack Free Download Zip.md
deleted file mode 100644
index f34d965162731e24cb0c7a0c58819e7392cd25c3..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy the Music of Westeros Game of Thrones Soundtrack Free Download Zip.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
How to Download the Game of Thrones Soundtrack for Free in Zip Format
-
Game of Thrones is one of the most popular and acclaimed TV shows of all time. Based on the fantasy novels by George R.R. Martin, the show features a rich and complex story, a vast and diverse cast of characters, and a stunning and immersive world. But one of the most memorable aspects of Game of Thrones is its epic and beautiful soundtrack, composed by Ramin Djawadi.
The soundtrack of Game of Thrones captures the mood, tone, and emotion of each scene, character, and location. It ranges from sweeping orchestral pieces, to haunting vocal performances, to catchy folk songs. The soundtrack has won several awards, including two Emmys, and has inspired many fans and artists to create their own covers and remixes.
-
If you are a fan of Game of Thrones and its soundtrack, you might want to download it for free in zip format. A zip file is a common format that compresses one or more files or folders into a single archive, which reduces the file size and makes the collection easier to store and transfer. By downloading the soundtrack in zip format, you can save storage space, download faster, and access all the tracks in one place.
-
In this article, we will show you how to find, download, and enjoy the Game of Thrones soundtrack for free in zip format. We will also give you some tips and recommendations on how to make the most out of your listening experience.
-
How to Find the Game of Thrones Soundtrack Online
-
There are many sources online where you can find the Game of Thrones soundtrack. Some are official, meaning they are authorized by HBO or Ramin Djawadi, while others are unofficial, meaning they are created by fans or other parties. Depending on your preferences, budget, and availability, you can choose from different options.
-
Official sources
-
If you want to support the original creators and get high-quality soundtracks, you can opt for official sources. These include:
-
-
Buying or streaming the official soundtrack albums. There are eight official soundtrack albums for each season of Game of Thrones, plus a tie-in album called For The Throne. You can buy them as CDs, vinyls, or digital downloads from various online stores, such as Amazon or iTunes. You can also stream them on various music platforms, such as Spotify or Apple Music.
-
Accessing the official YouTube playlist. HBO has created an official YouTube playlist that contains all the tracks from the official soundtrack albums. You can listen to them for free on YouTube, but you will need an internet connection and you will see ads and other videos in between. You can access the playlist here: [Game of Thrones Soundtrack Playlist].
-
-
Unofficial sources
-
If you want to explore more variety and creativity, you can opt for unofficial sources. These include:
-
-
Finding fan-made covers and remixes. Many fans and artists have created their own versions of the Game of Thrones soundtrack, using different instruments, styles, and genres. You can find them on various platforms, such as YouTube, SoundCloud, or Bandcamp. Some examples are [Lindsey Stirling's violin cover], [2CELLOS' cello cover], and [Rameses B's orchestral remix].
-
Using torrent sites and file-sharing platforms. If you are willing to take some risks and bypass legal issues, you can use torrent sites and file-sharing platforms to download the Game of Thrones soundtrack for free. These sites allow users to upload and download files from each other, without any central authority or regulation. However, they are also prone to malware, viruses, and scams, so you need to be careful and use a VPN and antivirus software. Some examples of these sites are [The Pirate Bay], [Kickass Torrents], and [MediaFire].
-
-
How to Download the Game of Thrones Soundtrack in Zip Format
-
Once you have found a source that offers the Game of Thrones soundtrack in zip format, you need to download it to your device. To do this, you need to have some requirements and follow some steps.
-
Requirements
-
To download and unzip the Game of Thrones soundtrack in zip format, you need to have:
-
-
-
A device with enough storage space. Depending on the source and the quality of the soundtrack, the zip file can range from a few megabytes to several gigabytes. You need to make sure that your device has enough free space to store the zip file and the extracted files.
-
A software or tool that can download and unzip files. You need to have a software or tool that can download files from the internet and unzip them on your device. Some common examples are [WinZip], [7-Zip], and [WinRAR] for Windows; [The Unarchiver], [Keka], and [iZip] for Mac; and [ZArchiver], [RAR], and [Easy Unrar] for Android.
-
-
Steps
-
To download and unzip the Game of Thrones soundtrack in zip format, you need to follow these steps:
-
-
Choose a reliable and safe source for downloading. You need to make sure that the source you choose is trustworthy and secure, especially if you are using unofficial sources. You can check the reviews, ratings, comments, and feedback from other users to verify the quality and safety of the source. You can also use a VPN and antivirus software to protect your device from malware, viruses, and scams.
-
Download the zip file to your device. You need to click on the download link or button on the source website or platform, and choose a location on your device where you want to save the zip file. You might need to wait for some time depending on your internet speed and the file size.
-
Unzip the zip file and access the soundtrack files. You need to open the zip file with your software or tool that can unzip files, and extract the files to a folder on your device. You might need to enter a password if the zip file is encrypted. Once you have extracted the files, you can access them with your music player or app.
-
-
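The steps above assume a graphical unzip tool such as WinZip or 7-Zip. If you would rather script it, Python's built-in zipfile module can do the same job. The sketch below is only an illustration, and the archive and folder names in it are made-up placeholders rather than references to any real download.

```python
import zipfile
from pathlib import Path

# Hypothetical paths; point these at wherever your download actually lives.
archive = Path("soundtrack.zip")
target = Path("soundtrack")

# Make sure the download really is a zip archive before touching it.
if not zipfile.is_zipfile(archive):
    raise SystemExit(f"{archive} is not a valid zip file")

with zipfile.ZipFile(archive) as zf:
    bad = zf.testzip()  # returns the first corrupt member, or None
    if bad is not None:
        raise SystemExit(f"Corrupt entry in archive: {bad}")
    # For an encrypted archive, pass pwd=b"your-password" to extractall().
    zf.extractall(target)  # unpack every track into the target folder
    print(f"Extracted {len(zf.namelist())} files to {target}")
```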
How to Enjoy the Game of Thrones Soundtrack
-
Now that you have downloaded and unzipped the Game of Thrones soundtrack in zip format, you can enjoy it anytime and anywhere. Here are some tips and recommendations on how to make the most out of your listening experience.
-
Tips and tricks
-
To enhance your enjoyment of the Game of Thrones soundtrack, you can try these tips and tricks:
-
-
Organize and manage your soundtrack files. You can create folders or subfolders for different seasons, episodes, characters, or themes. You can also rename or tag your files with relevant information, such as track name, artist name, album name, genre , and year. This will help you find and play your favorite tracks easily and quickly.
-
Create playlists and mixtapes. You can create playlists and mixtapes for different moods, occasions, or purposes. For example, you can create a playlist for relaxing, studying, working out, or sleeping. You can also create a mixtape for your friends, family, or partner, and share your love of Game of Thrones with them.
-
Share your soundtrack with others. You can share your soundtrack with other fans and listeners online or offline. You can upload your files to a cloud service, such as Google Drive or Dropbox, and share the link with others. You can also use a Bluetooth speaker, a USB drive, or a CD burner to play your soundtrack on different devices or locations.
-
-
Recommendations
-
To appreciate the beauty and diversity of the Game of Thrones soundtrack, you can try these recommendations:
-
-
Listen to some of the best tracks and themes from the soundtrack. The soundtrack of Game of Thrones has many amazing tracks and themes that represent different characters, locations, and events. Some of the most popular and iconic ones are [The Rains of Castamere], [Light of the Seven], [The Night King], [Mhysa], and [Game of Thrones Main Title].
-
Watch some of the best scenes and moments from the show that match the soundtrack. The soundtrack of Game of Thrones enhances the impact and emotion of many scenes and moments from the show. Some of the most memorable and powerful ones are [The Red Wedding], [Cersei's Walk of Shame], [The Battle of the Bastards], [Daenerys' Liberation of Slaver's Bay], and [The Iron Throne].
-
Check out some of the best fan-made videos and tributes that use the soundtrack. The soundtrack of Game of Thrones has inspired many fans and artists to create their own videos and tributes that use the soundtrack. Some of the most creative and impressive ones are [Game of Thrones in 1 Minute], [Game of Thrones Anime Opening], [Game of Thrones Musical Parody], [Game of Thrones 80s Remix], and [Game of Thrones Violin Flash Mob].
-
-
Conclusion
-
The soundtrack of Game of Thrones is one of the best aspects of the show. It is a masterpiece of music that captures the essence and spirit of the story, the characters, and the world. By downloading it for free in zip format, you can enjoy it anytime and anywhere, without any hassle or cost.
-
We hope this article has helped you learn how to find, download, and enjoy the Game of Thrones soundtrack in zip format. If you have any questions, comments, or suggestions, please feel free to share them with us below. And don't forget to share this article with your friends and fellow fans!
-
FAQs
-
Here are some frequently asked questions about downloading the Game of Thrones soundtrack in zip format:
-
-
Is it legal to download the Game of Thrones soundtrack in zip format?
-
It depends on the source and the country you are in. Generally speaking, it is legal to download the soundtrack from official sources that have permission from HBO or Ramin Djawadi. However, it is illegal to download the soundtrack from unofficial sources that do not have permission or license from HBO or Ramin Djawadi. It is also illegal to distribute or sell the downloaded soundtrack without permission or license from HBO or Ramin Djawadi.
-
Is it safe to download the Game of Thrones soundtrack in zip format?
-
It depends on the source and the software or tool you use. Generally speaking, it is safe to download the soundtrack from official sources that have security measures and encryption protocols. However, it is unsafe to download the soundtrack from unofficial sources that may contain malware, viruses, or scams. It is also unsafe to use software or tools that may harm your device or compromise your privacy.
-
What is the best quality for downloading the Game of Thrones soundtrack in zip format?
-
It depends on your preferences and device capabilities. Generally speaking, higher quality means higher file size and lower quality means lower file size. Higher quality also means better sound clarity and fidelity, while lower quality means worse sound clarity and fidelity. The most common quality formats for downloading music are MP3 (low to medium quality), AAC (medium quality), FLAC (high quality), and WAV (very high quality).
-
How long does it take to download the Game of Thrones soundtrack in zip format?
-
It depends on your internet speed and the file size. Generally speaking, the faster your connection and the smaller the file, the shorter the download time. The average internet speed in the US is about 50 Mbps, which is roughly 6.25 MB/s, so a 500 MB zip file takes about 80 seconds (500 ÷ 6.25) under ideal conditions; in practice, expect a minute or two.
-
How can I play the Game of Thrones soundtrack in zip format on my device?
-
You need to unzip the zip file and access the soundtrack files with your music player or app. You can use the software or tool that you used to unzip the file, or you can use another software or tool that can play music files. Some common examples are [Windows Media Player], [iTunes], [VLC], and [Google Play Music]. You can also transfer the soundtrack files to your smartphone, tablet, or other devices that can play music.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FS 14 Mod APK 2021 Everything You Need to Know About the Latest Version.md b/spaces/1phancelerku/anime-remove-background/FS 14 Mod APK 2021 Everything You Need to Know About the Latest Version.md
deleted file mode 100644
index 822a99fb95ccead605eeb4c6059d768af3a96bc0..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FS 14 Mod APK 2021 Everything You Need to Know About the Latest Version.md
+++ /dev/null
@@ -1,82 +0,0 @@
-
-
FS 14 Mod APK 2021: A Farming Simulator Game for Android
-
If you love farming and want to experience the life of a farmer, then you should try FS 14 Mod APK 2021. This is a modified version of the popular Farming Simulator 14 game that allows you to enjoy unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. In this article, we will tell you what is FS 14 Mod APK 2021, what are its features, how to download and install it, and what are its pros and cons.
-
What is FS 14 Mod APK 2021?
-
FS 14 Mod APK 2021 is a farming simulation game for Android devices that lets you step into the shoes of a farmer and take on the challenge of managing your own farm. You can grow crops, raise animals, sell your products, and run your farming business. You can also use various vehicles and machines to make your work easier and faster.
FS 14 Mod APK 2021 is a modified version of the original Farming Simulator 14 game that gives you access to unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. With unlimited money, you can buy any vehicle, machine, animal, or crop you want without worrying about the cost. With high-quality graphics, you can enjoy the stunning visuals of your farm and its surroundings. With realistic gameplay, you can feel the real physics and mechanics of farming. And with multiplayer mode, you can play with your friends online and share your farm with them.
-
Features of FS 14 Mod APK 2021
-
Unlimited money
-
One of the best features of FS 14 Mod APK 2021 is that it gives you unlimited money to spend on your farm. You can buy any vehicle, machine, animal, or crop you want without worrying about the cost. You can also upgrade your vehicles and machines to make them more efficient and powerful. You can also hire workers to help you with your tasks. With unlimited money, you can make your farm as big and as profitable as you want.
-
High-quality graphics
-
Another great feature of FS 14 Mod APK 2021 is that it has high-quality graphics that make the game more realistic and immersive. You can enjoy the stunning visuals of your farm and its surroundings, such as the fields, the trees, the sky, the weather, and the animals. You can also see the details of your vehicles and machines, such as their models, colors, textures, and sounds. You can also adjust the graphics settings to suit your device's performance.
-
Realistic gameplay
-
A third feature of FS 14 Mod APK 2021 is that it has realistic gameplay that makes you feel like a real farmer. You can experience the real physics and mechanics of farming, such as plowing, seeding, harvesting, feeding, milking, selling, and more. You can also interact with your animals and crops, such as petting them, watering them, harvesting them, and more. You can also face different challenges and situations on your farm, such as weather changes, pests, diseases, market fluctuations, and more.
-
Multiplayer mode
-
A fourth feature of FS 14 Mod APK 2021 is that it has multiplayer mode that lets you play with your friends online and share your farm with them. You can join or create a server and invite your friends to join you. You can also chat with them using voice or text messages. You can also cooperate with them or compete with them on your farming skills. You can also visit their farms and see how they are doing. Multiplayer mode adds more fun and excitement to the game.
-
How to download and install FS 14 Mod APK 2021?
-
If you want to download and install FS 14 Mod APK 2021 on your Android device, you need to follow these simple steps:
-
-
Step 1: Enable unknown sources
-
Before you can install any APK file on your device, you need to enable unknown sources in your security settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on.
-
Step 2: Download the APK file
-
Next, you need to download the APK file of FS 14 Mod APK 2021 from a reliable source. You can use the link below to download it directly to your device. Alternatively, you can download it to your computer and transfer it to your device via USB cable or Bluetooth.
Step 3: Install the APK file
-
After you have downloaded the APK file, you need to locate it on your device and tap on it to start the installation process. You may see a warning message asking you to confirm the installation. Just tap on Install and wait for the installation to finish.
-
Step 4: Enjoy the game
-
Once the installation is done, you can launch the game from your app drawer or home screen. You can now enjoy FS 14 Mod APK 2021 with unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode.
-
Pros and cons of FS 14 Mod APK 2021
-
Like any other game, FS 14 Mod APK 2021 has its pros and cons. Here are some of them:
-
Pros
-
-
It is free to download and play.
-
It has unlimited money to buy anything you want.
-
It has high-quality graphics that make the game more realistic and immersive.
-
It has realistic gameplay that makes you feel like a real farmer.
-
It has multiplayer mode that lets you play with your friends online and share your farm with them.
-
-
Cons
-
-
It may not be compatible with some devices or Android versions.
-
It may have some bugs or glitches that affect the game performance.
-
It may require a stable internet connection for multiplayer mode.
-
It may not be updated regularly with new features or improvements.
-
It may not be as challenging or rewarding as the original game.
-
-
Conclusion
-
In conclusion, FS 14 Mod APK 2021 is a farming simulation game for Android devices that lets you enjoy unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. It is a modified version of the original Farming Simulator 14 game that gives you access to these features. If you love farming and want to experience the life of a farmer, then you should try FS 14 Mod APK 2021. However, you should also be aware of its pros and cons before downloading and installing it on your device.
-
We hope this article has helped you learn more about FS 14 Mod APK 2021. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading!
-
Frequently Asked Questions
-
Here are some of the most common questions that people ask about FS 14 Mod APK 2021:
-
-
What is the difference between FS 14 Mod APK 2021 and Farming Simulator 14?
-
The main difference between FS 14 Mod APK 2021 and Farming Simulator 14 is that FS 14 Mod APK 2021 is a modified version of the original game that gives you access to unlimited money, high-quality graphics, realistic gameplay, and multiplayer mode. Farming Simulator 14 is the original game that does not have these features.
-
Is FS 14 Mod APK 2021 safe to download and install?
-
FS 14 Mod APK 2021 is generally safe to download and install as long as you get it from a reliable source. However, you should always be careful when downloading and installing any APK file on your device as it may contain malware or viruses that can harm your device or steal your data.
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/test/test_preset.py b/spaces/2ndelement/voicevox/test/test_preset.py
deleted file mode 100644
index 3a162829c18798a704ef86d958efa87dbc1dca25..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/test/test_preset.py
+++ /dev/null
@@ -1,303 +0,0 @@
-from os import remove
-from pathlib import Path
-from shutil import copyfile
-from tempfile import TemporaryDirectory
-from unittest import TestCase
-
-from voicevox_engine.preset import Preset, PresetError, PresetManager
-
-
-class TestPresetManager(TestCase):
- def setUp(self):
- self.tmp_dir = TemporaryDirectory()
- self.tmp_dir_path = Path(self.tmp_dir.name)
-
- def tearDown(self):
- self.tmp_dir.cleanup()
-
- def test_validation(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-1.yaml"))
- presets = preset_manager.load_presets()
- self.assertFalse(presets is None)
-
- def test_validation_same(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-1.yaml"))
- presets = preset_manager.load_presets()
- presets2 = preset_manager.load_presets()
- self.assertFalse(presets is None)
- self.assertEqual(presets, presets2)
-
- def test_validation_2(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml"))
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"):
- preset_manager.load_presets()
-
- def test_preset_id(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-3.yaml"))
- with self.assertRaises(PresetError, msg="プリセットのidに重複があります"):
- preset_manager.load_presets()
-
- def test_empty_file(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-4.yaml"))
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルが空の内容です"):
- preset_manager.load_presets()
-
- def test_not_exist_file(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-dummy.yaml"))
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルが見つかりません"):
- preset_manager.load_presets()
-
- def test_add_preset(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset = Preset(
- **{
- "id": 10,
- "name": "test10",
- "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff",
- "style_id": 2,
- "speedScale": 1,
- "pitchScale": 1,
- "intonationScale": 0.5,
- "volumeScale": 1,
- "prePhonemeLength": 0.1,
- "postPhonemeLength": 0.1,
- }
- )
- id = preset_manager.add_preset(preset)
- self.assertEqual(id, 10)
- self.assertEqual(len(preset_manager.presets), 3)
- for _preset in preset_manager.presets:
- if _preset.id == id:
- self.assertEqual(_preset, preset)
- remove(temp_path)
-
- def test_add_preset_load_failure(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml"))
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"):
- preset_manager.add_preset(
- Preset(
- **{
- "id": 1,
- "name": "",
- "speaker_uuid": "",
- "style_id": 0,
- "speedScale": 0,
- "pitchScale": 0,
- "intonationScale": 0,
- "volumeScale": 0,
- "prePhonemeLength": 0,
- "postPhonemeLength": 0,
- }
- )
- )
-
- def test_add_preset_conflict_id(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset = Preset(
- **{
- "id": 2,
- "name": "test3",
- "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff",
- "style_id": 2,
- "speedScale": 1,
- "pitchScale": 1,
- "intonationScale": 0.5,
- "volumeScale": 1,
- "prePhonemeLength": 0.1,
- "postPhonemeLength": 0.1,
- }
- )
- id = preset_manager.add_preset(preset)
- self.assertEqual(id, 3)
- self.assertEqual(len(preset_manager.presets), 3)
- for _preset in preset_manager.presets:
- if _preset.id == id:
- self.assertEqual(_preset, preset)
- remove(temp_path)
-
- def test_add_preset_conflict_id2(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset = Preset(
- **{
- "id": -1,
- "name": "test3",
- "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff",
- "style_id": 2,
- "speedScale": 1,
- "pitchScale": 1,
- "intonationScale": 0.5,
- "volumeScale": 1,
- "prePhonemeLength": 0.1,
- "postPhonemeLength": 0.1,
- }
- )
- id = preset_manager.add_preset(preset)
- self.assertEqual(id, 3)
- self.assertEqual(len(preset_manager.presets), 3)
- for _preset in preset_manager.presets:
- if _preset.id == id:
- self.assertEqual(_preset, preset)
- remove(temp_path)
-
- def test_add_preset_write_failure(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset = Preset(
- **{
- "id": 10,
- "name": "test10",
- "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff",
- "style_id": 2,
- "speedScale": 1,
- "pitchScale": 1,
- "intonationScale": 0.5,
- "volumeScale": 1,
- "prePhonemeLength": 0.1,
- "postPhonemeLength": 0.1,
- }
- )
- preset_manager.load_presets()
- preset_manager.load_presets = lambda: []
- preset_manager.preset_path = ""
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルに書き込み失敗しました"):
- preset_manager.add_preset(preset)
- self.assertEqual(len(preset_manager.presets), 2)
- remove(temp_path)
-
- def test_update_preset(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset = Preset(
- **{
- "id": 1,
- "name": "test1 new",
- "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff",
- "style_id": 2,
- "speedScale": 1,
- "pitchScale": 1,
- "intonationScale": 0.5,
- "volumeScale": 1,
- "prePhonemeLength": 0.1,
- "postPhonemeLength": 0.1,
- }
- )
- id = preset_manager.update_preset(preset)
- self.assertEqual(id, 1)
- self.assertEqual(len(preset_manager.presets), 2)
- for _preset in preset_manager.presets:
- if _preset.id == id:
- self.assertEqual(_preset, preset)
- remove(temp_path)
-
- def test_update_preset_load_failure(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml"))
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"):
- preset_manager.update_preset(
- Preset(
- **{
- "id": 1,
- "name": "",
- "speaker_uuid": "",
- "style_id": 0,
- "speedScale": 0,
- "pitchScale": 0,
- "intonationScale": 0,
- "volumeScale": 0,
- "prePhonemeLength": 0,
- "postPhonemeLength": 0,
- }
- )
- )
-
- def test_update_preset_not_found(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset = Preset(
- **{
- "id": 10,
- "name": "test1 new",
- "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff",
- "style_id": 2,
- "speedScale": 1,
- "pitchScale": 1,
- "intonationScale": 0.5,
- "volumeScale": 1,
- "prePhonemeLength": 0.1,
- "postPhonemeLength": 0.1,
- }
- )
- with self.assertRaises(PresetError, msg="更新先のプリセットが存在しません"):
- preset_manager.update_preset(preset)
- self.assertEqual(len(preset_manager.presets), 2)
- remove(temp_path)
-
- def test_update_preset_write_failure(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset = Preset(
- **{
- "id": 1,
- "name": "test1 new",
- "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff",
- "style_id": 2,
- "speedScale": 1,
- "pitchScale": 1,
- "intonationScale": 0.5,
- "volumeScale": 1,
- "prePhonemeLength": 0.1,
- "postPhonemeLength": 0.1,
- }
- )
- preset_manager.load_presets()
- preset_manager.load_presets = lambda: []
- preset_manager.preset_path = ""
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルに書き込み失敗しました"):
- preset_manager.update_preset(preset)
- self.assertEqual(len(preset_manager.presets), 2)
- self.assertEqual(preset_manager.presets[0].name, "test")
- remove(temp_path)
-
- def test_delete_preset(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- id = preset_manager.delete_preset(1)
- self.assertEqual(id, 1)
- self.assertEqual(len(preset_manager.presets), 1)
- remove(temp_path)
-
- def test_delete_preset_load_failure(self):
- preset_manager = PresetManager(preset_path=Path("test/presets-test-2.yaml"))
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルにミスがあります"):
- preset_manager.delete_preset(10)
-
- def test_delete_preset_not_found(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- with self.assertRaises(PresetError, msg="削除対象のプリセットが存在しません"):
- preset_manager.delete_preset(10)
- self.assertEqual(len(preset_manager.presets), 2)
- remove(temp_path)
-
- def test_delete_preset_write_failure(self):
- temp_path = self.tmp_dir_path / "presets-test-temp.yaml"
- copyfile(Path("test/presets-test-1.yaml"), temp_path)
- preset_manager = PresetManager(preset_path=temp_path)
- preset_manager.load_presets()
- preset_manager.load_presets = lambda: []
- preset_manager.preset_path = ""
- with self.assertRaises(PresetError, msg="プリセットの設定ファイルに書き込み失敗しました"):
- preset_manager.delete_preset(1)
- self.assertEqual(len(preset_manager.presets), 2)
- remove(temp_path)
diff --git a/spaces/2ndelement/voicevox/voicevox_engine/mora_list.py b/spaces/2ndelement/voicevox/voicevox_engine/mora_list.py
deleted file mode 100644
index 5a49f4a3a434ef4832355fcc66c5192b1a4b3059..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/voicevox_engine/mora_list.py
+++ /dev/null
@@ -1,218 +0,0 @@
-"""
-The following mora correspondence table was taken from the OpenJTalk source code
-and modified so that each katakana spelling corresponds one-to-one with a mora.
-License notice:
------------------------------------------------------------------
- The Japanese TTS System "Open JTalk"
- developed by HTS Working Group
- http://open-jtalk.sourceforge.net/
------------------------------------------------------------------
-
- Copyright (c) 2008-2014 Nagoya Institute of Technology
- Department of Computer Science
-
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or
-without modification, are permitted provided that the following
-conditions are met:
-
-- Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
-- Redistributions in binary form must reproduce the above
- copyright notice, this list of conditions and the following
- disclaimer in the documentation and/or other materials provided
- with the distribution.
-- Neither the name of the HTS working group nor the names of its
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
-CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
-INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
-BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
-EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
-TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
-ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
-OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGE.
-"""
-_mora_list_minimum = [
- ["ヴォ", "v", "o"],
- ["ヴェ", "v", "e"],
- ["ヴィ", "v", "i"],
- ["ヴァ", "v", "a"],
- ["ヴ", "v", "u"],
- ["ン", "", "N"],
- ["ワ", "w", "a"],
- ["ロ", "r", "o"],
- ["レ", "r", "e"],
- ["ル", "r", "u"],
- ["リョ", "ry", "o"],
- ["リュ", "ry", "u"],
- ["リャ", "ry", "a"],
- ["リェ", "ry", "e"],
- ["リ", "r", "i"],
- ["ラ", "r", "a"],
- ["ヨ", "y", "o"],
- ["ユ", "y", "u"],
- ["ヤ", "y", "a"],
- ["モ", "m", "o"],
- ["メ", "m", "e"],
- ["ム", "m", "u"],
- ["ミョ", "my", "o"],
- ["ミュ", "my", "u"],
- ["ミャ", "my", "a"],
- ["ミェ", "my", "e"],
- ["ミ", "m", "i"],
- ["マ", "m", "a"],
- ["ポ", "p", "o"],
- ["ボ", "b", "o"],
- ["ホ", "h", "o"],
- ["ペ", "p", "e"],
- ["ベ", "b", "e"],
- ["ヘ", "h", "e"],
- ["プ", "p", "u"],
- ["ブ", "b", "u"],
- ["フォ", "f", "o"],
- ["フェ", "f", "e"],
- ["フィ", "f", "i"],
- ["ファ", "f", "a"],
- ["フ", "f", "u"],
- ["ピョ", "py", "o"],
- ["ピュ", "py", "u"],
- ["ピャ", "py", "a"],
- ["ピェ", "py", "e"],
- ["ピ", "p", "i"],
- ["ビョ", "by", "o"],
- ["ビュ", "by", "u"],
- ["ビャ", "by", "a"],
- ["ビェ", "by", "e"],
- ["ビ", "b", "i"],
- ["ヒョ", "hy", "o"],
- ["ヒュ", "hy", "u"],
- ["ヒャ", "hy", "a"],
- ["ヒェ", "hy", "e"],
- ["ヒ", "h", "i"],
- ["パ", "p", "a"],
- ["バ", "b", "a"],
- ["ハ", "h", "a"],
- ["ノ", "n", "o"],
- ["ネ", "n", "e"],
- ["ヌ", "n", "u"],
- ["ニョ", "ny", "o"],
- ["ニュ", "ny", "u"],
- ["ニャ", "ny", "a"],
- ["ニェ", "ny", "e"],
- ["ニ", "n", "i"],
- ["ナ", "n", "a"],
- ["ドゥ", "d", "u"],
- ["ド", "d", "o"],
- ["トゥ", "t", "u"],
- ["ト", "t", "o"],
- ["デョ", "dy", "o"],
- ["デュ", "dy", "u"],
- ["デャ", "dy", "a"],
- ["デェ", "dy", "e"],
- ["ディ", "d", "i"],
- ["デ", "d", "e"],
- ["テョ", "ty", "o"],
- ["テュ", "ty", "u"],
- ["テャ", "ty", "a"],
- ["ティ", "t", "i"],
- ["テ", "t", "e"],
- ["ツォ", "ts", "o"],
- ["ツェ", "ts", "e"],
- ["ツィ", "ts", "i"],
- ["ツァ", "ts", "a"],
- ["ツ", "ts", "u"],
- ["ッ", "", "cl"],
- ["チョ", "ch", "o"],
- ["チュ", "ch", "u"],
- ["チャ", "ch", "a"],
- ["チェ", "ch", "e"],
- ["チ", "ch", "i"],
- ["ダ", "d", "a"],
- ["タ", "t", "a"],
- ["ゾ", "z", "o"],
- ["ソ", "s", "o"],
- ["ゼ", "z", "e"],
- ["セ", "s", "e"],
- ["ズィ", "z", "i"],
- ["ズ", "z", "u"],
- ["スィ", "s", "i"],
- ["ス", "s", "u"],
- ["ジョ", "j", "o"],
- ["ジュ", "j", "u"],
- ["ジャ", "j", "a"],
- ["ジェ", "j", "e"],
- ["ジ", "j", "i"],
- ["ショ", "sh", "o"],
- ["シュ", "sh", "u"],
- ["シャ", "sh", "a"],
- ["シェ", "sh", "e"],
- ["シ", "sh", "i"],
- ["ザ", "z", "a"],
- ["サ", "s", "a"],
- ["ゴ", "g", "o"],
- ["コ", "k", "o"],
- ["ゲ", "g", "e"],
- ["ケ", "k", "e"],
- ["グヮ", "gw", "a"],
- ["グ", "g", "u"],
- ["クヮ", "kw", "a"],
- ["ク", "k", "u"],
- ["ギョ", "gy", "o"],
- ["ギュ", "gy", "u"],
- ["ギャ", "gy", "a"],
- ["ギェ", "gy", "e"],
- ["ギ", "g", "i"],
- ["キョ", "ky", "o"],
- ["キュ", "ky", "u"],
- ["キャ", "ky", "a"],
- ["キェ", "ky", "e"],
- ["キ", "k", "i"],
- ["ガ", "g", "a"],
- ["カ", "k", "a"],
- ["オ", "", "o"],
- ["エ", "", "e"],
- ["ウォ", "w", "o"],
- ["ウェ", "w", "e"],
- ["ウィ", "w", "i"],
- ["ウ", "", "u"],
- ["イェ", "y", "e"],
- ["イ", "", "i"],
- ["ア", "", "a"],
-]
-_mora_list_additional = [
- ["ヴョ", "by", "o"],
- ["ヴュ", "by", "u"],
- ["ヴャ", "by", "a"],
- ["ヲ", "", "o"],
- ["ヱ", "", "e"],
- ["ヰ", "", "i"],
- ["ヮ", "w", "a"],
- ["ョ", "y", "o"],
- ["ュ", "y", "u"],
- ["ヅ", "z", "u"],
- ["ヂ", "j", "i"],
- ["ヶ", "k", "e"],
- ["ャ", "y", "a"],
- ["ォ", "", "o"],
- ["ェ", "", "e"],
- ["ゥ", "", "u"],
- ["ィ", "", "i"],
- ["ァ", "", "a"],
-]
-
-openjtalk_mora2text = {
- consonant + vowel: text for [text, consonant, vowel] in _mora_list_minimum
-}
-openjtalk_text2mora = {
- text: (consonant, vowel)
- for [text, consonant, vowel] in _mora_list_minimum + _mora_list_additional
-}
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/version.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/version.py
deleted file mode 100644
index 3ced3581bb601ae91b1e1da4b8f4f520855a065e..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/version.py
+++ /dev/null
@@ -1 +0,0 @@
-__version__ = "0.2.1"
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py
deleted file mode 100644
index aaac6df39ec06c2d52b2f0cabf967ab447f9b04a..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/ddpm_audio.py
+++ /dev/null
@@ -1,1262 +0,0 @@
-"""
-wild mixture of
-https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
-https://github.com/CompVis/taming-transformers
--- merci
-"""
-import os
-import torch
-import torch.nn as nn
-import numpy as np
-import pytorch_lightning as pl
-from torch.optim.lr_scheduler import LambdaLR
-from einops import rearrange, repeat
-from contextlib import contextmanager
-from functools import partial
-from tqdm import tqdm
-from torchvision.utils import make_grid
-from pytorch_lightning.utilities.distributed import rank_zero_only
-
-from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
-from ldm.modules.ema import LitEma
-from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
-from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL
-from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from ldm.models.diffusion.ddim import DDIMSampler
-from ldm.models.diffusion.ddpm import DDPM, disabled_train
-from omegaconf import ListConfig
-
-__conditioning_keys__ = {'concat': 'c_concat',
- 'crossattn': 'c_crossattn',
- 'adm': 'y'}
-
-
-class LatentDiffusion_audio(DDPM):
- """main class"""
- def __init__(self,
- first_stage_config,
- cond_stage_config,
- num_timesteps_cond=None,
- mel_dim=80,
- mel_length=848,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- *args, **kwargs):
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs['timesteps']
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = 'concat' if concat_mode else 'crossattn'
- if cond_stage_config == '__is_unconditional__':
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.mel_dim = mel_dim
- self.mel_length = mel_length
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
- except:
- self.num_downs = 0
- if not scale_by_std:
- self.scale_factor = scale_factor
- else:
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
- self.instantiate_first_stage(first_stage_config)
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
- self.bbox_tokenizer = None
-
- self.restarted_from_ckpt = False
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys)
- self.restarted_from_ckpt = True
-
- def make_cond_schedule(self, ):
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
- self.cond_ids[:self.num_timesteps_cond] = ids
-
- @rank_zero_only
- @torch.no_grad()
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
- # only for very first batch
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
- # set rescale weight to 1./std of encodings
- print("### USING STD-RESCALING ###")
- x = super().get_input(batch, self.first_stage_key)
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- del self.scale_factor
- self.register_buffer('scale_factor', 1. / z.flatten().std())
- print(f"setting self.scale_factor to {self.scale_factor}")
- print("### USING STD-RESCALING ###")
-
- def register_schedule(self,
- given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config):
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != '__is_first_stage__'
- assert config != '__is_unconditional__'
- model = instantiate_from_config(config)
- self.cond_stage_model = model
-
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
- denoise_row = []
- for zd in tqdm(samples, desc=desc):
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
- force_not_quantize=force_no_decoder_quantization))
- n_imgs_per_row = len(denoise_row)
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- def get_first_stage_encoding(self, encoder_posterior):
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
- return self.scale_factor * z
-
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
-
- @torch.no_grad()
- def get_unconditional_conditioning(self, batch_size, null_label=None):
- if null_label is not None:
- xc = null_label
- if isinstance(xc, ListConfig):
- xc = list(xc)
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- if hasattr(xc, "to"):
- xc = xc.to(self.device)
- c = self.get_learned_conditioning(xc)
- else:
- if self.cond_stage_key in ["class_label", "cls"]:
- xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device)
- return self.get_learned_conditioning(xc)
- else:
- raise NotImplementedError("todo")
- if isinstance(c, list): # in case the encoder gives us a list
- for i in range(len(c)):
- c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device)
- else:
- c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device)
- return c
-
- def meshgrid(self, h, w):
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
-
- arr = torch.cat([y, x], dim=-1)
- return arr
-
- def delta_border(self, h, w):
- """
- :param h: height
- :param w: width
- :return: normalized distance to image border,
- with min distance = 0 at border and max dist = 0.5 at image center
- """
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
- arr = self.meshgrid(h, w) / lower_right_corner
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
- return edge_dist
-
- def get_weighting(self, h, w, Ly, Lx, device):
- weighting = self.delta_border(h, w)
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
- self.split_input_params["clip_max_weight"], )
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
-
- if self.split_input_params["tie_braker"]:
- L_weighting = self.delta_border(Ly, Lx)
- L_weighting = torch.clip(L_weighting,
- self.split_input_params["clip_min_tie_weight"],
- self.split_input_params["clip_max_tie_weight"])
-
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
- weighting = weighting * L_weighting
- return weighting
-
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
- """
- :param x: img of size (bs, c, h, w)
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
- """
- bs, nc, h, w = x.shape
-
- # number of crops in image
- Ly = (h - kernel_size[0]) // stride[0] + 1
- Lx = (w - kernel_size[1]) // stride[1] + 1
-
- if uf == 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
-
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
-
- elif uf > 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
- dilation=1, padding=0,
- stride=(stride[0] * uf, stride[1] * uf))
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
-
- elif df > 1 and uf == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
- dilation=1, padding=0,
- stride=(stride[0] // df, stride[1] // df))
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
-
- else:
- raise NotImplementedError
-
- return fold, unfold, normalization, weighting
-
- @torch.no_grad()
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
- cond_key=None, return_original_cond=False, bs=None):
- x = super().get_input(batch, k)
- if bs is not None:
- x = x[:bs]
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
-
- if self.model.conditioning_key is not None:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ['caption', 'coordinates_bbox']:
- xc = batch[cond_key]
- elif cond_key == 'class_label':
- xc = batch
- else:
- xc = super().get_input(batch, cond_key).to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- # import pudb; pudb.set_trace()
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
- if bs is not None:
- c = c[:bs]
- # Testing #
- if cond_key == 'masked_image':
- mask = super().get_input(batch, "mask")
- cc = torch.nn.functional.interpolate(mask, size=c.shape[-2:]) # [B, 1, 10, 106]
- c = torch.cat((c, cc), dim=1) # [B, 5, 10, 106]
- # Testing #
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- ckey = __conditioning_keys__[self.model.conditioning_key]
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {'pos_x': pos_x, 'pos_y': pos_y}
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_original_cond:
- out.append(xc)
- return out
-
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
-
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- uf = self.split_input_params["vqf"]
- bs, nc, h, w = z.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
-
- z = unfold(z) # (bn, nc * prod(**ks), L)
- # 1. Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- # 2. apply model loop over last dim
- if isinstance(self.first_stage_model, VQModelInterface):
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
- force_not_quantize=predict_cids or force_not_quantize)
- for i in range(z.shape[-1])]
- else:
-
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
- o = o * weighting
- # Reverse 1. reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
- return decoded
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- # same as above but without decorator
- def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
-
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- uf = self.split_input_params["vqf"]
- bs, nc, h, w = z.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
-
- z = unfold(z) # (bn, nc * prod(**ks), L)
- # 1. Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- # 2. apply model loop over last dim
- if isinstance(self.first_stage_model, VQModelInterface):
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
- force_not_quantize=predict_cids or force_not_quantize)
- for i in range(z.shape[-1])]
- else:
-
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
- o = o * weighting
- # Reverse 1. reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
- return decoded
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- @torch.no_grad()
- def encode_first_stage(self, x):
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- df = self.split_input_params["vqf"]
- self.split_input_params['original_image_size'] = x.shape[-2:]
- bs, nc, h, w = x.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df)
- z = unfold(x) # (bn, nc * prod(**ks), L)
- # Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- output_list = [self.first_stage_model.encode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1)
- o = o * weighting
-
- # Reverse reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization
- return decoded
-
- else:
- return self.first_stage_model.encode(x)
- else:
- return self.first_stage_model.encode(x)
-
- def shared_step(self, batch, **kwargs):
- x, c = self.get_input(batch, self.first_stage_key)
- loss = self(x, c)
- return loss
-
- def test_step(self,batch,batch_idx):
- cond = batch[self.cond_stage_key] * self.test_repeat
- cond = self.get_learned_conditioning(cond) # c: string -> [B, T, Context_dim]
- batch_size = len(cond)
- enc_emb = self.sample(cond,batch_size,timesteps=self.test_numsteps)# shape = [batch_size,self.channels,self.mel_dim,self.mel_length]
- xrec = self.decode_first_stage(enc_emb)
- reconstructions = (xrec + 1)/2 # to mel scale
- test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path)
- savedir = os.path.join(self.trainer.log_dir,f'output_imgs_{test_ckpt_path}','fake_class')
- if not os.path.exists(savedir):
- os.makedirs(savedir)
-
- file_names = batch['f_name']
- nfiles = len(file_names)
- reconstructions = reconstructions.cpu().numpy().squeeze(1) # squuze channel dim
- for k in range(reconstructions.shape[0]):
- b,repeat = k % nfiles, k // nfiles
- vname_num_split_index = file_names[b].rfind('_')# file_names[b]:video_name+'_'+num
- v_n,num = file_names[b][:vname_num_split_index],file_names[b][vname_num_split_index+1:]
- save_img_path = os.path.join(savedir,f'{v_n}_sample_{num}_{repeat}.npy')# the num_th caption, the repeat_th repitition
- np.save(save_img_path,reconstructions[b])
-
- return None
-
- def forward(self, x, c, *args, **kwargs):
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- if self.model.conditioning_key is not None:
- assert c is not None
- if self.cond_stage_trainable:
- c = self.get_learned_conditioning(c) # c: string -> [B, T, Context_dim]
- if self.shorten_cond_schedule: # TODO: drop this option
- tc = self.cond_ids[t].to(self.device)
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
- return self.p_losses(x, c, t, *args, **kwargs)
-
- def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset
- def rescale_bbox(bbox):
- x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2])
- y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3])
- w = min(bbox[2] / crop_coordinates[2], 1 - x0)
- h = min(bbox[3] / crop_coordinates[3], 1 - y0)
- return x0, y0, w, h
-
- return [rescale_bbox(b) for b in bboxes]
-
- def apply_model(self, x_noisy, t, cond, return_ids=False):
-
- if isinstance(cond, dict):
- # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond]
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
- cond = {key: cond}
-
- if hasattr(self, "split_input_params"):
- assert len(cond) == 1 # todo can only deal with one conditioning atm
- assert not return_ids
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
-
- h, w = x_noisy.shape[-2:]
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride)
-
- z = unfold(x_noisy) # (bn, nc * prod(**ks), L)
- # Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
- z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])]
-
- if self.cond_stage_key in ["image", "LR_image", "segmentation",
- 'bbox_img'] and self.model.conditioning_key: # todo check for completeness
- c_key = next(iter(cond.keys())) # get key
- c = next(iter(cond.values())) # get value
- assert (len(c) == 1) # todo extend to list with more than one elem
- c = c[0] # get element
-
- c = unfold(c)
- c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
-
- elif self.cond_stage_key == 'coordinates_bbox':
- assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size'
-
- # assuming padding of unfold is always 0 and its dilation is always 1
- n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
- full_img_h, full_img_w = self.split_input_params['original_image_size']
- # as we are operating on latents, we need the factor from the original image size to the
- # spatial latent size to properly rescale the crops for regenerating the bbox annotations
- num_downs = self.first_stage_model.encoder.num_resolutions - 1
- rescale_latent = 2 ** (num_downs)
-
- # get top left positions of patches conforming to the bbox tokenizer, therefore we
- # need to rescale the tl patch coordinates to be in between (0,1)
- tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
- rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
- for patch_nr in range(z.shape[-1])]
-
- # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w)
- patch_limits = [(x_tl, y_tl,
- rescale_latent * ks[0] / full_img_w,
- rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates]
- # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates]
-
- # tokenize crop coordinates for the bounding boxes of the respective patches
- patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device)
- for bbox in patch_limits] # list of length l with tensors of shape (1, 2)
- print(patch_limits_tknzd[0].shape)
- # cut tknzd crop position from conditioning
- assert isinstance(cond, dict), 'cond must be dict to be fed into model'
- cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device)
- print(cut_cond.shape)
-
- adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd])
- adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n')
- print(adapted_cond.shape)
- adapted_cond = self.get_learned_conditioning(adapted_cond)
- print(adapted_cond.shape)
- adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1])
- print(adapted_cond.shape)
-
- cond_list = [{'c_crossattn': [e]} for e in adapted_cond]
-
- else:
- cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient
-
- # apply model by loop over crops
- output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
- assert not isinstance(output_list[0],
- tuple) # todo cant deal with multiple model outputs check this never happens
-
- o = torch.stack(output_list, axis=-1)
- o = o * weighting
- # Reverse reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- x_recon = fold(o) / normalization
-
- else:
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
-
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
-
- def _prior_bpd(self, x_start):
- """
- Get the prior KL term for the variational lower-bound, measured in
- bits-per-dim.
- This term can't be optimized, as it only depends on the encoder.
- :param x_start: the [N x C x ...] tensor of inputs.
- :return: a batch of [N] KL values (in bits), one per batch element.
- """
- batch_size = x_start.shape[0]
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
- return mean_flat(kl_prior) / np.log(2.0)
-
- def p_losses(self, x_start, cond, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_output = self.apply_model(x_noisy, t, cond)
-
- loss_dict = {}
- prefix = 'train' if self.training else 'val'
-
- if self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "eps":
- target = noise
- else:
- raise NotImplementedError()
-
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
-
- logvar_t = self.logvar[t].to(self.device)
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
- if self.learn_logvar:
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
- loss_dict.update({'logvar': self.logvar.data.mean()})
-
- loss = self.l_simple_weight * loss.mean()
-
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
- loss += (self.original_elbo_weight * loss_vlb)
- loss_dict.update({f'{prefix}/loss': loss})
-
- return loss, loss_dict
-
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
- return_x0=False, score_corrector=None, corrector_kwargs=None):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
- log_every_t=None):
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size if batch_size is not None else shape[0]
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
- total=timesteps) if verbose else reversed(
- range(0, timesteps))
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised, return_x0=True,
- temperature=temperature[i], noise_dropout=noise_dropout,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(self, cond, shape, return_intermediates=False,
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, start_T=None,
- log_every_t=None):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
- range(0, timesteps))
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised)
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
- verbose=True, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, shape=None,**kwargs):
- if shape is None:
- shape = (batch_size, self.channels, self.mel_dim, self.mel_length)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
- return self.p_sample_loop(cond,
- shape,
- return_intermediates=return_intermediates, x_T=x_T,
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
- mask=mask, x0=x0)
-
- @torch.no_grad()
- def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs):
-
- if ddim:
- ddim_sampler = DDIMSampler(self)
- shape = (self.channels, self.mel_dim, self.mel_length)
- samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size,
- shape,cond,verbose=False,**kwargs)
-
- else:
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
- return_intermediates=True,**kwargs)
-
- return samples, intermediates
-
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, **kwargs):
-
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
- return_first_stage_outputs=True,
- force_c_encode=True,
- return_original_cond=True,
- bs=N)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode") and self.cond_stage_key != "masked_image":
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key == "masked_image":
- log["mask"] = c[:, -1, :, :][:, None, :, :]
- xc = self.cond_stage_model.decode(c[:, :self.cond_stage_model.embed_dim, :, :])
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption"]:
- xc = log_txt_as_img((256, 256), batch["caption"])
- log["conditioning"] = xc
- elif self.cond_stage_key == 'class_label':
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
- samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim,
- ddim_steps=ddim_steps,eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
- self.first_stage_model, IdentityFirstStage):
- # also display when quantizing x0 while sampling
- with self.ema_scope("Plotting Quantized Denoised"):
- samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim,
- ddim_steps=ddim_steps,eta=ddim_eta,
- quantize_denoised=True)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
- # quantize_denoised=True)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_x0_quantized"] = x_samples
-
- if inpaint:
- # make a simple center square
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
- mask = torch.ones(N, h, w).to(self.device)
- # zeros will be filled in
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
- mask = mask[:, None, ...]
- with self.ema_scope("Plotting Inpaint"):
-
- samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_inpainting"] = x_samples
- log["mask_inpainting"] = mask
-
- # outpaint
- mask = 1 - mask
- with self.ema_scope("Plotting Outpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_outpainting"] = x_samples
- log["mask_outpainting"] = mask
-
- if plot_progressive_rows:
- with self.ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.mel_dim, self.mel_length),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.cond_stage_trainable:
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
- params = params + list(self.cond_stage_model.parameters())
- if self.learn_logvar:
- print('Diffusion model optimizing logvar')
- params.append(self.logvar)
- opt = torch.optim.AdamW(params, lr=lr)
- if self.use_scheduler:
- assert 'target' in self.scheduler_config
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [opt], scheduler
- return opt
-
- @torch.no_grad()
- def to_rgb(self, x):
- x = x.float()
- if not hasattr(self, "colorize"):
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
- x = nn.functional.conv2d(x, weight=self.colorize)
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
- return x
-
-
-class LatentFinetuneDiffusion(LatentDiffusion_audio):
- """
-    Basis for different finetunes, such as inpainting or depth2image
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys: tuple,
- finetune_keys=("model.diffusion_model.input_blocks.0.0.weight",
- "model_ema.diffusion_modelinput_blocks00weight"
- ),
- keep_finetune_dims=4,
- # if model was trained without concat mode before and we would like to keep these channels
- c_concat_log_start=None, # to log reconstruction of c_concat codes
- c_concat_log_end=None,
- *args, **kwargs
- ):
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", list())
- super().__init__(*args, **kwargs)
- self.finetune_keys = finetune_keys
- self.concat_keys = concat_keys
- self.keep_dims = keep_finetune_dims
- self.c_concat_log_start = c_concat_log_start
- self.c_concat_log_end = c_concat_log_end
-
- if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint'
- if exists(ckpt_path):
- self.init_from_ckpt(ckpt_path, ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
-
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
-
- # make it explicit, finetune by including extra input channels
- if exists(self.finetune_keys) and k in self.finetune_keys:
- new_entry = None
- for name, param in self.named_parameters():
- if name in self.finetune_keys:
- print(
- f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only")
- new_entry = torch.zeros_like(param) # zero init
- assert exists(new_entry), 'did not find matching parameter to modify'
- new_entry[:, :self.keep_dims, ...] = sd[k]
- sd[k] = new_entry
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True)
- c_cat, c = c["c_concat"][0], c["c_crossattn"][0]
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"])
- log["conditioning"] = xc
- elif self.cond_stage_key == 'class_label':
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if not (self.c_concat_log_start is None and self.c_concat_log_end is None):
- log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end])
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with self.ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- uc_cat = c_cat
- uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}
- with self.ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc_full,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- return log
-
-
-class LatentInpaintDiffusion(LatentFinetuneDiffusion):
- """
- can either run as pure inpainting model (only concat mode) or with mixed conditionings,
- e.g. mask as concat and text via cross-attn.
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys=("mask", "masked_image"),
- masked_image_key="masked_image",
- *args, **kwargs
- ):
- super().__init__(concat_keys, *args, **kwargs)
- self.masked_image_key = masked_image_key
- assert self.masked_image_key in concat_keys
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- c_cat = list()
- for ck in self.concat_keys:
- if len(batch[ck].shape) == 3:
- batch[ck] = batch[ck][..., None]
- cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- bchw = z.shape
- if ck != self.masked_image_key:
- cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])
- else:
- cc = self.get_first_stage_encoding(self.encode_first_stage(cc))
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs)
- log["masked_image"] = rearrange(args[0]["masked_image"],
- 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- return log
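The get_input override above assembles the inpainting conditioning by resizing the mask to the latent resolution and encoding the masked image with the first stage. Below is a minimal, self-contained sketch of that assembly using plain tensors; the shapes, the average-pool stand-in for the encoder, and the helper name build_inpaint_conditioning are illustrative assumptions, not part of the original class.

import torch
import torch.nn.functional as F

def build_inpaint_conditioning(mask, masked_image, encode, latent_hw, cross):
    # mask: (B, 1, H, W), ones for known pixels, zeros where content is missing
    # masked_image: (B, 3, H, W), the image with the masked region blanked out
    # encode: callable standing in for the first-stage encoder
    # latent_hw: spatial size of the latent z that the mask is resized to
    mask_latent = F.interpolate(mask, size=latent_hw)        # nearest-neighbor resize of the mask to latent resolution
    masked_latent = encode(masked_image)                     # encode the masked image into latent space
    c_cat = torch.cat([mask_latent, masked_latent], dim=1)   # channel-wise concat, as in get_input above
    return {"c_concat": [c_cat], "c_crossattn": [cross]}

# toy run with an average-pooling "encoder" and a random cross-attention embedding
encode = lambda img: F.avg_pool2d(img, kernel_size=8)
mask = torch.ones(2, 1, 256, 256)
mask[:, :, 64:192, 64:192] = 0.
cond = build_inpaint_conditioning(mask, torch.randn(2, 3, 256, 256) * mask,
                                  encode, latent_hw=(32, 32), cross=torch.randn(2, 77, 768))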
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/__init__.py
deleted file mode 100644
index aadad97ebc9ec23fdebab974a99e343de90f8afd..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from . import clap
-from . import audio
-from . import utils
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/__init__.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/__init__.py
deleted file mode 100644
index a2318b63198250856809c0cb46210a4147b829bc..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
-# LICENSE is in incl_licenses directory.
-
-from .filter import *
-from .resample import *
-from .act import *
\ No newline at end of file
diff --git a/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/README.md b/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/README.md
deleted file mode 100644
index 339b6a2cacf2f349093d33dc90f04025f4578e49..0000000000000000000000000000000000000000
--- a/spaces/AIZero2Hero4Health/5-QuantumStreamlitAIDashboard-SL/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 5 QuantumStreamlitAIDashboard SL
-emoji: 📚
-colorFrom: blue
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Abdullahw72/bark-voice-cloning/hubert/hubert_manager.py b/spaces/Abdullahw72/bark-voice-cloning/hubert/hubert_manager.py
deleted file mode 100644
index 857f2af29886fca6eb4df506853f446066af7c04..0000000000000000000000000000000000000000
--- a/spaces/Abdullahw72/bark-voice-cloning/hubert/hubert_manager.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import os.path
-import shutil
-import urllib.request
-
-import huggingface_hub
-
-
-class HuBERTManager:
- @staticmethod
- def make_sure_hubert_installed(download_url: str = 'https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt', file_name: str = 'hubert.pt'):
- install_dir = os.path.join('data', 'models', 'hubert')
- if not os.path.isdir(install_dir):
- os.makedirs(install_dir, exist_ok=True)
- install_file = os.path.join(install_dir, file_name)
- if not os.path.isfile(install_file):
- print('Downloading HuBERT base model')
- urllib.request.urlretrieve(download_url, install_file)
- print('Downloaded HuBERT')
- return install_file
-
-
- @staticmethod
- def make_sure_tokenizer_installed(model: str = 'quantifier_hubert_base_ls960_14.pth', repo: str = 'GitMylo/bark-voice-cloning', local_file: str = 'tokenizer.pth'):
- install_dir = os.path.join('data', 'models', 'hubert')
- if not os.path.isdir(install_dir):
- os.makedirs(install_dir, exist_ok=True)
- install_file = os.path.join(install_dir, local_file)
- if not os.path.isfile(install_file):
- print('Downloading HuBERT custom tokenizer')
- huggingface_hub.hf_hub_download(repo, model, local_dir=install_dir, local_dir_use_symlinks=False)
- shutil.move(os.path.join(install_dir, model), install_file)
- print('Downloaded tokenizer')
- return install_file
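A brief usage sketch for the HuBERTManager above; the paths and download URLs are the class defaults, and the final torch.load line is only an assumption about how the downloaded checkpoints would typically be consumed.

import torch

hubert_path = HuBERTManager.make_sure_hubert_installed()        # downloads hubert_base_ls960.pt on first call
tokenizer_path = HuBERTManager.make_sure_tokenizer_installed()  # fetches the quantizer checkpoint from the HF Hub
print(hubert_path, tokenizer_path)                              # data/models/hubert/hubert.pt, .../tokenizer.pth
# state = torch.load(tokenizer_path, map_location='cpu')        # hypothetical follow-up: load the tokenizer weights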
diff --git a/spaces/AchyuthGamer/Free-Accounts-Generator/js/d173ouchebag.js b/spaces/AchyuthGamer/Free-Accounts-Generator/js/d173ouchebag.js
deleted file mode 100644
index d0dccddd32de1e92320e58c6401d9b95ad7cc525..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/Free-Accounts-Generator/js/d173ouchebag.js
+++ /dev/null
@@ -1,126 +0,0 @@
-var NumberOfWords = 70;
-var words = new BuildArray(NumberOfWords);
-
-words[1] = "https://cuty.io/lGr08bYZ";
-words[2] = "https://cuty.io/XDhh2Wc";
-words[3] = "https://paste.fo/86ccdf634678";
-words[4] = "https://cuty.io/hoDXeQ";
-words[5] = "https://cuty.io/E1Fxf";
-words[6] = "https://cuty.io/VWr7ZHlT";
-words[7] = "https://cuty.io/on7fj7A4";
-words[8] = "https://cuty.io/6WW3NVQcO3";
-words[9] = "https://cuty.io/CsDFD";
-words[10] = "https://cuty.io/g2X4gi";
-words[11] = "https://cuty.io/gBT8OQ65izDV";
-words[12] = "https://cuty.io/eTrvUFxu";
-words[13] = "https://cuty.io/ybG3zeDBzR";
-words[14] = "https://cuty.io/abeLh0s";
-words[15] = "https://cuty.io/ulup4Lcf2TK";
-words[16] = "https://cuty.io/FRLEzh5cQ6n";
-words[17] = "https://cuty.io/OVw8vLInZB1";
-words[18] = "https://cuty.io/BMTXGK";
-words[19] = "https://cuty.io/DyJ597nu";
-words[20] = "https://cuty.io/iIjTxEQ";
-words[21] = "https://cuty.io/XcuNNaRzkSlU";
-words[22] = "https://cuty.io/bl3drKcIC";
-words[23] = "https://cuty.io/qEoVSk4mXW";
-words[24] = "https://cuty.io/7r7Uf7";
-words[25] = "https://cuty.io/CDHgWvu9YJQK";
-words[26] = "https://cuty.io/gBT8OQ65izDV";
-words[27] = "https://cuty.io/EZAdA";
-words[28] = "https://cuty.io/0QB7dK6CFZzD";
-words[29] = "https://cuty.io/HFWgHl13";
-words[30] = "https://cuty.io/FgRvVvR39W8";
-words[31] = "https://cuty.io/wrhTqogK";
-words[32] = "https://cuty.io/ja14WYP";
-words[33] = "https://cuty.io/c82NDl7";
-words[34] = "https://cuty.io/Lbc9";
-words[35] = "https://cuty.io/c82NDl7";
-words[36] = "https://cuty.io/GWJWHKNr";
-words[37] = "https://cuty.io/WWFnoKEFK";
-words[38] = "https://cuty.io/AJfqsQ";
-words[39] = "https://cuty.io/6vG5ZrSRj";
-words[40] = "https://cuty.io/9a58b";
-words[41] = "https://cuty.io/2xdqfIV1I";
-words[42] = "https://cuty.io/1wOL4ot";
-words[43] = "https://cuty.io/VqhEJXmt8l";
-words[44] = "https://cuty.io/18olD1";
-words[45] = "https://cuty.io/PZbp9g";
-words[46] = "https://cuty.io/cAzSIvt";
-words[47] = "https://cuty.io/6r9O3wCTrJyj";
-words[48] = "https://cuty.io/8IuhK0AQGnFq";
-words[49] = "https://cuty.io/wX0fxCJ";
-words[50] = "https://cuty.io/bbJB2Ur";
-words[51] = "https://cuty.io/G47WR";
-words[52] = "https://cuty.io/StzRBrb";
-words[53] = "https://cuty.io/63gzehv297E";
-words[54] = "https://cuty.io/HTXo";
-words[55] = "https://cuty.io/pwxPR";
-words[56] = "https://cuty.io/gPNQODT6w";
-words[57] = "https://cuty.io/FgiePQ";
-words[58] = "https://cuty.io/XtTXmu";
-words[59] = "https://cuty.io/QblM1FsmKO";
-words[60] = "https://cuty.io/pszHV";
-words[61] = "https://cuty.io/0sZRO";
-words[62] = "https://cuty.io/FgHPEnnFv";
-words[63] = "https://cuty.io/P59l3Nil3MUS";
-words[64] = "https://cuty.io/O1hK";
-words[65] = "https://cuty.io/4VyT2IvH";
-words[66] = "https://cuty.io/lSaRS19";
-words[67] = "https://cuty.io/z8VTwea";
-words[68] = "https://cuty.io/UapBE";
-words[69] = "https://cuty.io/vDzDerW9";
-words[70] = "https://cuty.io/Mgz9";
-words[71] = "https://cuty.io/kylJsPTjv";
-words[72] = "https://cuty.io/zgJHnFFoS";
-words[73] = "";
-words[74] = "";
-words[75] = "";
-words[76] = "";
-words[77] = "";
-words[78] = "";
-words[79] = "";
-words[80] = "https://cuty.io/8goK49PVX";
-words[81] = "";
-words[82] = "https://cuty.io/q8GEByLks";
-words[83] = "";
-words[84] = "";
-words[85] = "https://cuty.io/d5T06FdVy";
-words[86] = "";
-words[87] = "";
-words[88] = "";
-words[89] = "https://cuty.io/6ra2CHs";
-words[90] = "";
-words[91] = "";
-words[92] = "";
-words[93] = "";
-words[94] = "";
-words[95] = "";
-words[96] = "";
-words[97] = "";
-words[98] = "";
-words[99] = "";
-words[100] = "";
-
-function BuildArray(size) {
- this.length = size;
- for (var i = 1; i <= size; i++) {
- this[i] = null;
- }
- return this;
-}
-
-function PickRandomWord(frm) {
- // Generate a random number between 1 and NumberOfWords
- var rnd = Math.ceil(Math.random() * NumberOfWords);
-
- // Display the word inside the text box
- frm.WordBox.value = words[rnd];
-}
-
-function OpenGeneratedLink() {
- var generatedLink = document.forms["yourFormName"]["WordBox"].value;
- if (generatedLink) {
- window.open(generatedLink, '_blank');
- }
-}
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/PerplexityAi.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/PerplexityAi.py
deleted file mode 100644
index f4f7171219664c50e0c90e214276c9b226c16d17..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/PerplexityAi.py
+++ /dev/null
@@ -1,101 +0,0 @@
-from __future__ import annotations
-
-import json
-import time
-import base64
-from curl_cffi.requests import AsyncSession
-
-from ..base_provider import AsyncProvider, format_prompt, get_cookies
-
-
-class PerplexityAi(AsyncProvider):
- url = "https://www.perplexity.ai"
- working = False
- supports_gpt_35_turbo = True
- _sources = []
-
- @classmethod
- async def create_async(
- cls,
- model: str,
- messages: list[dict[str, str]],
- proxy: str = None,
- **kwargs
- ) -> str:
- url = cls.url + "/socket.io/?EIO=4&transport=polling"
- headers = {
- "Referer": f"{cls.url}/"
- }
- async with AsyncSession(headers=headers, proxies={"https": proxy}, impersonate="chrome107") as session:
- url_session = "https://www.perplexity.ai/api/auth/session"
- response = await session.get(url_session)
- response.raise_for_status()
-
- url_session = "https://www.perplexity.ai/api/auth/session"
- response = await session.get(url_session)
- response.raise_for_status()
-
- response = await session.get(url, params={"t": timestamp()})
- response.raise_for_status()
- sid = json.loads(response.text[1:])["sid"]
-
- response = await session.get(url, params={"t": timestamp(), "sid": sid})
- response.raise_for_status()
-
- data = '40{"jwt":"anonymous-ask-user"}'
- response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data)
- response.raise_for_status()
-
- response = await session.get(url, params={"t": timestamp(), "sid": sid})
- response.raise_for_status()
-
- data = "424" + json.dumps([
- "perplexity_ask",
- format_prompt(messages),
- {
- "version":"2.1",
- "source":"default",
- "language":"en",
- "timezone": time.tzname[0],
- "search_focus":"internet",
- "mode":"concise"
- }
- ])
- response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data)
- response.raise_for_status()
-
- while True:
- response = await session.get(url, params={"t": timestamp(), "sid": sid})
- response.raise_for_status()
- for line in response.text.splitlines():
- if line.startswith("434"):
- result = json.loads(json.loads(line[3:])[0]["text"])
-
- cls._sources = [{
- "title": source["name"],
- "url": source["url"],
- "snippet": source["snippet"]
- } for source in result["web_results"]]
-
- return result["answer"]
-
- @classmethod
- def get_sources(cls):
- return cls._sources
-
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("proxy", "str"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
-
-
-def timestamp() -> str:
- return base64.urlsafe_b64encode(int(time.time()-1407782612).to_bytes(4, 'big')).decode()
\ No newline at end of file
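A hedged usage sketch for the deprecated provider above; create_async takes a list of role/content message dicts, and since the class is flagged working = False the call itself is left commented out.

import asyncio

async def ask_perplexity():
    messages = [{"role": "user", "content": "Hello!"}]
    answer = await PerplexityAi.create_async(model="", messages=messages)
    print(answer)
    print(PerplexityAi.get_sources())   # the web sources collected for the last answer

# asyncio.run(ask_perplexity())  # left commented out: the endpoint is marked as no longer working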
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/encoders/adapter.py b/spaces/Adapter/CoAdapter/ldm/modules/encoders/adapter.py
deleted file mode 100644
index 702d4706649695532dde6a2c9a22a01c9d28ca80..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/encoders/adapter.py
+++ /dev/null
@@ -1,339 +0,0 @@
-import torch
-import torch.nn as nn
-from collections import OrderedDict
-from ldm.modules.extra_condition.api import ExtraCondition
-from ldm.modules.diffusionmodules.util import zero_module
-
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-class Downsample(nn.Module):
- """
- A downsampling layer with an optional convolution.
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- downsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- stride = 2 if dims != 3 else (1, 2, 2)
- if use_conv:
- self.op = conv_nd(
- dims, self.channels, self.out_channels, 3, stride=stride, padding=padding
- )
- else:
- assert self.channels == self.out_channels
- self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- return self.op(x)
-
-
-class ResnetBlock(nn.Module):
- def __init__(self, in_c, out_c, down, ksize=3, sk=False, use_conv=True):
- super().__init__()
- ps = ksize // 2
- if in_c != out_c or sk == False:
- self.in_conv = nn.Conv2d(in_c, out_c, ksize, 1, ps)
- else:
- # print('n_in')
- self.in_conv = None
- self.block1 = nn.Conv2d(out_c, out_c, 3, 1, 1)
- self.act = nn.ReLU()
- self.block2 = nn.Conv2d(out_c, out_c, ksize, 1, ps)
- if sk == False:
- self.skep = nn.Conv2d(in_c, out_c, ksize, 1, ps)
- else:
- self.skep = None
-
- self.down = down
- if self.down == True:
- self.down_opt = Downsample(in_c, use_conv=use_conv)
-
- def forward(self, x):
- if self.down == True:
- x = self.down_opt(x)
- if self.in_conv is not None: # edit
- x = self.in_conv(x)
-
- h = self.block1(x)
- h = self.act(h)
- h = self.block2(h)
- if self.skep is not None:
- return h + self.skep(x)
- else:
- return h + x
-
-
-class Adapter(nn.Module):
- def __init__(self, channels=[320, 640, 1280, 1280], nums_rb=3, cin=64, ksize=3, sk=False, use_conv=True):
- super(Adapter, self).__init__()
- self.unshuffle = nn.PixelUnshuffle(8)
- self.channels = channels
- self.nums_rb = nums_rb
- self.body = []
- for i in range(len(channels)):
- for j in range(nums_rb):
- if (i != 0) and (j == 0):
- self.body.append(
- ResnetBlock(channels[i - 1], channels[i], down=True, ksize=ksize, sk=sk, use_conv=use_conv))
- else:
- self.body.append(
- ResnetBlock(channels[i], channels[i], down=False, ksize=ksize, sk=sk, use_conv=use_conv))
- self.body = nn.ModuleList(self.body)
- self.conv_in = nn.Conv2d(cin, channels[0], 3, 1, 1)
-
- def forward(self, x):
- # unshuffle
- x = self.unshuffle(x)
- # extract features
- features = []
- x = self.conv_in(x)
- for i in range(len(self.channels)):
- for j in range(self.nums_rb):
- idx = i * self.nums_rb + j
- x = self.body[idx](x)
- features.append(x)
-
- return features
-
-
-class LayerNorm(nn.LayerNorm):
- """Subclass torch's LayerNorm to handle fp16."""
-
- def forward(self, x: torch.Tensor):
- orig_type = x.dtype
- ret = super().forward(x.type(torch.float32))
- return ret.type(orig_type)
-
-
-class QuickGELU(nn.Module):
-
- def forward(self, x: torch.Tensor):
- return x * torch.sigmoid(1.702 * x)
-
-
-class ResidualAttentionBlock(nn.Module):
-
- def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
- super().__init__()
-
- self.attn = nn.MultiheadAttention(d_model, n_head)
- self.ln_1 = LayerNorm(d_model)
- self.mlp = nn.Sequential(
- OrderedDict([("c_fc", nn.Linear(d_model, d_model * 4)), ("gelu", QuickGELU()),
- ("c_proj", nn.Linear(d_model * 4, d_model))]))
- self.ln_2 = LayerNorm(d_model)
- self.attn_mask = attn_mask
-
- def attention(self, x: torch.Tensor):
- self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
- return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
-
- def forward(self, x: torch.Tensor):
- x = x + self.attention(self.ln_1(x))
- x = x + self.mlp(self.ln_2(x))
- return x
-
-
-class StyleAdapter(nn.Module):
-
- def __init__(self, width=1024, context_dim=768, num_head=8, n_layes=3, num_token=4):
- super().__init__()
-
- scale = width ** -0.5
- self.transformer_layes = nn.Sequential(*[ResidualAttentionBlock(width, num_head) for _ in range(n_layes)])
- self.num_token = num_token
- self.style_embedding = nn.Parameter(torch.randn(1, num_token, width) * scale)
- self.ln_post = LayerNorm(width)
- self.ln_pre = LayerNorm(width)
- self.proj = nn.Parameter(scale * torch.randn(width, context_dim))
-
- def forward(self, x):
- # x shape [N, HW+1, C]
- style_embedding = self.style_embedding + torch.zeros(
- (x.shape[0], self.num_token, self.style_embedding.shape[-1]), device=x.device)
- x = torch.cat([x, style_embedding], dim=1)
- x = self.ln_pre(x)
- x = x.permute(1, 0, 2) # NLD -> LND
- x = self.transformer_layes(x)
- x = x.permute(1, 0, 2) # LND -> NLD
-
- x = self.ln_post(x[:, -self.num_token:, :])
- x = x @ self.proj
-
- return x
-
-
-class ResnetBlock_light(nn.Module):
- def __init__(self, in_c):
- super().__init__()
- self.block1 = nn.Conv2d(in_c, in_c, 3, 1, 1)
- self.act = nn.ReLU()
- self.block2 = nn.Conv2d(in_c, in_c, 3, 1, 1)
-
- def forward(self, x):
- h = self.block1(x)
- h = self.act(h)
- h = self.block2(h)
-
- return h + x
-
-
-class extractor(nn.Module):
- def __init__(self, in_c, inter_c, out_c, nums_rb, down=False):
- super().__init__()
- self.in_conv = nn.Conv2d(in_c, inter_c, 1, 1, 0)
- self.body = []
- for _ in range(nums_rb):
- self.body.append(ResnetBlock_light(inter_c))
- self.body = nn.Sequential(*self.body)
- self.out_conv = nn.Conv2d(inter_c, out_c, 1, 1, 0)
- self.down = down
- if self.down == True:
- self.down_opt = Downsample(in_c, use_conv=False)
-
- def forward(self, x):
- if self.down == True:
- x = self.down_opt(x)
- x = self.in_conv(x)
- x = self.body(x)
- x = self.out_conv(x)
-
- return x
-
-
-class Adapter_light(nn.Module):
- def __init__(self, channels=[320, 640, 1280, 1280], nums_rb=3, cin=64):
- super(Adapter_light, self).__init__()
- self.unshuffle = nn.PixelUnshuffle(8)
- self.channels = channels
- self.nums_rb = nums_rb
- self.body = []
- for i in range(len(channels)):
- if i == 0:
- self.body.append(extractor(in_c=cin, inter_c=channels[i]//4, out_c=channels[i], nums_rb=nums_rb, down=False))
- else:
- self.body.append(extractor(in_c=channels[i-1], inter_c=channels[i]//4, out_c=channels[i], nums_rb=nums_rb, down=True))
- self.body = nn.ModuleList(self.body)
-
- def forward(self, x):
- # unshuffle
- x = self.unshuffle(x)
- # extract features
- features = []
- for i in range(len(self.channels)):
- x = self.body[i](x)
- features.append(x)
-
- return features
-
-
-class CoAdapterFuser(nn.Module):
- def __init__(self, unet_channels=[320, 640, 1280, 1280], width=768, num_head=8, n_layes=3):
- super(CoAdapterFuser, self).__init__()
- scale = width ** 0.5
- # 16, maybe large enough for the number of adapters?
- self.task_embedding = nn.Parameter(scale * torch.randn(16, width))
- self.positional_embedding = nn.Parameter(scale * torch.randn(len(unet_channels), width))
- self.spatial_feat_mapping = nn.ModuleList()
- for ch in unet_channels:
- self.spatial_feat_mapping.append(nn.Sequential(
- nn.SiLU(),
- nn.Linear(ch, width),
- ))
- self.transformer_layes = nn.Sequential(*[ResidualAttentionBlock(width, num_head) for _ in range(n_layes)])
- self.ln_post = LayerNorm(width)
- self.ln_pre = LayerNorm(width)
- self.spatial_ch_projs = nn.ModuleList()
- for ch in unet_channels:
- self.spatial_ch_projs.append(zero_module(nn.Linear(width, ch)))
- self.seq_proj = nn.Parameter(torch.zeros(width, width))
-
- def forward(self, features):
- if len(features) == 0:
- return None, None
- inputs = []
- for cond_name in features.keys():
- task_idx = getattr(ExtraCondition, cond_name).value
- if not isinstance(features[cond_name], list):
- inputs.append(features[cond_name] + self.task_embedding[task_idx])
- continue
-
- feat_seq = []
- for idx, feature_map in enumerate(features[cond_name]):
- feature_vec = torch.mean(feature_map, dim=(2, 3))
- feature_vec = self.spatial_feat_mapping[idx](feature_vec)
- feat_seq.append(feature_vec)
- feat_seq = torch.stack(feat_seq, dim=1) # Nx4xC
- feat_seq = feat_seq + self.task_embedding[task_idx]
- feat_seq = feat_seq + self.positional_embedding
- inputs.append(feat_seq)
-
- x = torch.cat(inputs, dim=1) # NxLxC
- x = self.ln_pre(x)
- x = x.permute(1, 0, 2) # NLD -> LND
- x = self.transformer_layes(x)
- x = x.permute(1, 0, 2) # LND -> NLD
- x = self.ln_post(x)
-
- ret_feat_map = None
- ret_feat_seq = None
- cur_seq_idx = 0
- for cond_name in features.keys():
- if not isinstance(features[cond_name], list):
- length = features[cond_name].size(1)
- transformed_feature = features[cond_name] * ((x[:, cur_seq_idx:cur_seq_idx+length] @ self.seq_proj) + 1)
- if ret_feat_seq is None:
- ret_feat_seq = transformed_feature
- else:
- ret_feat_seq = torch.cat([ret_feat_seq, transformed_feature], dim=1)
- cur_seq_idx += length
- continue
-
- length = len(features[cond_name])
- transformed_feature_list = []
- for idx in range(length):
- alpha = self.spatial_ch_projs[idx](x[:, cur_seq_idx+idx])
- alpha = alpha.unsqueeze(-1).unsqueeze(-1) + 1
- transformed_feature_list.append(features[cond_name][idx] * alpha)
- if ret_feat_map is None:
- ret_feat_map = transformed_feature_list
- else:
- ret_feat_map = list(map(lambda x, y: x + y, ret_feat_map, transformed_feature_list))
- cur_seq_idx += length
-
- assert cur_seq_idx == x.size(1)
-
- return ret_feat_map, ret_feat_seq
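A minimal sketch of running the Adapter above on a single-channel condition map (for example a sketch or depth map); the input resolution and channel count are assumptions chosen so that one channel times the 8x8 PixelUnshuffle equals the default cin=64.

import torch

adapter = Adapter()                         # default channels [320, 640, 1280, 1280], nums_rb=3, cin=64
cond = torch.randn(1, 1, 512, 512)          # 1 channel * 8 * 8 = 64 channels after the unshuffle
with torch.no_grad():
    features = adapter(cond)                # one feature map per UNet resolution level
for f in features:
    print(f.shape)                          # (1, 320, 64, 64), (1, 640, 32, 32), (1, 1280, 16, 16), (1, 1280, 8, 8)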
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/pokemon.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/pokemon.py
deleted file mode 100644
index d62b32cf6395e077c0e20d9fb60adf230be30e32..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/pokemon.py
+++ /dev/null
@@ -1,222 +0,0 @@
-import asyncio
-import datetime
-import logging
-from typing import Any, Dict, List, Optional, Set
-
-# from agentverse.agents.agent import Agent
-from agentverse.agents.simulation_agent.conversation import BaseAgent
-
-# from agentverse.environments.simulation_env.rules.base import Rule
-from agentverse.environments.simulation_env.rules.base import SimulationRule as Rule
-from agentverse.message import Message
-
-from .. import env_registry as EnvironmentRegistry
-from ..base import BaseEnvironment
-
-
-@EnvironmentRegistry.register("pokemon")
-class PokemonEnvironment(BaseEnvironment):
- """
-    An environment for the Pokémon demo.
-
- Args:
- agents: List of agents
- locations: A dict of locations to agents within them
- rule: Rule for the environment
- max_turns: Maximum number of turns
- cnt_turn: Current turn number
- last_messages: Messages from last turn
- rule_params: Variables set by the rule
- """
-
- agents: List[BaseAgent]
- locations_to_agents: Dict[str, Set[str]]
- # locations_descriptions: Dict[str, str]
- time: datetime.datetime = datetime.datetime(2021, 1, 1, 8, 0, 0)
- rule: Rule
- max_turns: int = 10
- cnt_turn: int = 0
- last_messages: List[Message] = []
- rule_params: Dict = {}
-
- def __init__(self, rule, locations, **kwargs):
- rule_config = rule
- order_config = rule_config.get("order", {"type": "sequential"})
- visibility_config = rule_config.get("visibility", {"type": "all"})
- selector_config = rule_config.get("selector", {"type": "basic"})
- updater_config = rule_config.get("updater", {"type": "basic"})
- describer_config = rule_config.get("describer", {"type": "basic"})
- rule = Rule(
- order_config,
- visibility_config,
- selector_config,
- updater_config,
- describer_config,
- )
- locations_to_agents = {}
- # locations_descriptions = {}
- locations_config = locations
- for loc in locations_config:
- locations_to_agents[loc["name"]] = set(loc["init_agents"])
- # locations_descriptions[loc["name"]] = loc["description"]
- super().__init__(
- rule=rule,
- locations_to_agents=locations_to_agents,
- # locations_descriptions=locations_descriptions,
- **kwargs,
- )
-
- async def step(
- self,
- is_player: bool = False,
- player_content: str = None,
- receiver: str = None,
- receiver_id: Optional[int] = None,
- agent_ids: Optional[List[int]] = None,
- ) -> List[Message]:
- """Run one step of the environment"""
-
- # Get the next agent index
- # time.sleep(8)
- # return [Message(content="Test", sender="May", receiver=["May"])]
- if is_player:
- return await self._respond_to_player(player_content, receiver, receiver_id)
- else:
- return await self._routine_step(agent_ids)
-
- async def _routine_step(self, agent_ids) -> List[Message]:
- self.rule.update_visible_agents(self)
-
- # agent_ids = self.rule.get_next_agent_idx(self)
-
- # Generate current environment description
- env_descriptions = self.rule.get_env_description(self)
-
- # Generate the next message
- messages = await asyncio.gather(
- *[self.agents[i].astep(env_descriptions[i]) for i in agent_ids]
- )
- # messages = self.get_test_messages()
-
- # Some rules will select certain messages from all the messages
- selected_messages = self.rule.select_message(self, messages)
-
- # Update the memory of the agents
- self.last_messages = selected_messages
- self.rule.update_memory(self)
- self.print_messages(selected_messages)
-
- self.cnt_turn += 1
- self.time += datetime.timedelta(minutes=5)
-
- return selected_messages
-
- async def _respond_to_player(
- self,
- player_content: str = None,
- receiver: str = None,
- receiver_id: Optional[int] = None,
- ) -> List[Message]:
- if receiver_id is None:
- for agent in self.agents:
- if agent.name == receiver:
- receiver_id = agent.agent_id
- break
- agent_ids = [receiver_id]
- agent_name = receiver
- player_message = Message(
- sender="Brenden", content=player_content, receiver=[agent_name]
- )
-
- # Update the set of visible agents for each agent
- self.rule.update_visible_agents(self)
-
- # Generate current environment description
- env_descriptions = self.rule.get_env_description(self, player_content)
-
- # Generate the next message
- messages = await asyncio.gather(
- *[self.agents[i].astep(env_descriptions[i]) for i in agent_ids]
- )
-
- # Some rules will select certain messages from all the messages
- # selected_messages = self.rule.select_message(self, messages)
-
- # Update the memory of the agents
- self.last_messages = [player_message, *messages]
- self.rule.update_memory(self)
- self.print_messages(messages)
-
- self.cnt_turn += 1
-
- return messages
-
- def update_state(self, agent_location: Dict[str, str]):
- for agent_name, location in agent_location.items():
- # original_location = self.get_agent_to_location()[agent_name]
- # self.locations_to_agents[original_location].remove(agent_name)
- self.locations_to_agents[location].add(agent_name)
-
- def get_agent_to_location(self) -> Dict[str, str]:
- ret = {}
- for location, agent_names in self.locations_to_agents.items():
- for agent in agent_names:
- ret[agent] = location
- return ret
-
- def print_messages(self, messages: List[Message]) -> None:
- for message in messages:
- if message is not None:
- logging.info(f"{message.sender}: {message.content}")
-
- def reset(self) -> None:
- """Reset the environment"""
- self.cnt_turn = 0
- self.rule.reset()
- for agent in self.agents:
- agent.reset()
-
- def is_done(self) -> bool:
- """Check if the environment is done"""
- return self.cnt_turn >= self.max_turns
-
- def get_test_messages(self) -> List[Message]:
- messages = [
- Message(
- content='{"to": "Birch", "action": "Speak", "text": "Hi!!!"}',
- sender="May",
- receiver={"May", "Birch"},
- tool_response=[],
- ),
- Message(
- content='{"to": "May", "text": "Good morning, May! How is your research going?", "action": "Speak"}',
- sender="Birch",
- receiver={"May", "Birch"},
- tool_response=[],
- ),
- Message(
- content='{"to": "Pokémon Center", "action": "MoveTo"}',
- sender="Steven",
- receiver={"Steven"},
- tool_response=[],
- ),
- Message(
- content='{"to": "Shop", "last_time": "10 minutes", "action": "MoveTo"}',
- sender="Maxie",
- receiver={"Maxie"},
- tool_response=[],
- ),
- Message(
- content='{"to": "Pok\\u00e9mon Center", "action": "MoveTo"}',
- sender="Archie",
- receiver={"Archie"},
- tool_response=[],
- ),
- Message(
- content='{"to": "Shop", "action": "MoveTo"}',
- sender="Joseph",
- receiver={"Joseph"},
- tool_response=[],
- ),
- ]
- return messages
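A hedged sketch of the configuration shape the constructor above expects; the rule and location dictionaries mirror the keys read in __init__, while the agent list and the actual instantiation are omitted because they depend on the surrounding agentverse task setup.

rule_config = {
    "order": {"type": "sequential"},
    "visibility": {"type": "all"},
    "selector": {"type": "basic"},
    "updater": {"type": "basic"},
    "describer": {"type": "basic"},
}
locations_config = [
    {"name": "Pokémon Center", "init_agents": ["May", "Birch"]},
    {"name": "Shop", "init_agents": ["Maxie"]},
]
# env = PokemonEnvironment(rule=rule_config, locations=locations_config, agents=[...])
# env.update_state({"Steven": "Pokémon Center"})
# env.get_agent_to_location()   # e.g. {"May": "Pokémon Center", "Birch": "Pokémon Center", ...}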
diff --git a/spaces/AlexWang/lama/saicinpainting/evaluation/masks/mask.py b/spaces/AlexWang/lama/saicinpainting/evaluation/masks/mask.py
deleted file mode 100644
index 3e34d0675a781fba983cb542f18390255aaf2609..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/evaluation/masks/mask.py
+++ /dev/null
@@ -1,429 +0,0 @@
-import enum
-from copy import deepcopy
-
-import numpy as np
-from skimage import img_as_ubyte
-from skimage.transform import rescale, resize
-try:
- from detectron2 import model_zoo
- from detectron2.config import get_cfg
- from detectron2.engine import DefaultPredictor
- DETECTRON_INSTALLED = True
-except:
- print("Detectron v2 is not installed")
- DETECTRON_INSTALLED = False
-
-from .countless.countless2d import zero_corrected_countless
-
-
-class ObjectMask():
- def __init__(self, mask):
- self.height, self.width = mask.shape
- (self.up, self.down), (self.left, self.right) = self._get_limits(mask)
- self.mask = mask[self.up:self.down, self.left:self.right].copy()
-
- @staticmethod
- def _get_limits(mask):
- def indicator_limits(indicator):
- lower = indicator.argmax()
- upper = len(indicator) - indicator[::-1].argmax()
- return lower, upper
-
- vertical_indicator = mask.any(axis=1)
- vertical_limits = indicator_limits(vertical_indicator)
-
- horizontal_indicator = mask.any(axis=0)
- horizontal_limits = indicator_limits(horizontal_indicator)
-
- return vertical_limits, horizontal_limits
-
- def _clean(self):
- self.up, self.down, self.left, self.right = 0, 0, 0, 0
- self.mask = np.empty((0, 0))
-
- def horizontal_flip(self, inplace=False):
- if not inplace:
- flipped = deepcopy(self)
- return flipped.horizontal_flip(inplace=True)
-
- self.mask = self.mask[:, ::-1]
- return self
-
- def vertical_flip(self, inplace=False):
- if not inplace:
- flipped = deepcopy(self)
- return flipped.vertical_flip(inplace=True)
-
- self.mask = self.mask[::-1, :]
- return self
-
- def image_center(self):
- y_center = self.up + (self.down - self.up) / 2
- x_center = self.left + (self.right - self.left) / 2
- return y_center, x_center
-
- def rescale(self, scaling_factor, inplace=False):
- if not inplace:
- scaled = deepcopy(self)
- return scaled.rescale(scaling_factor, inplace=True)
-
- scaled_mask = rescale(self.mask.astype(float), scaling_factor, order=0) > 0.5
- (up, down), (left, right) = self._get_limits(scaled_mask)
- self.mask = scaled_mask[up:down, left:right]
-
- y_center, x_center = self.image_center()
- mask_height, mask_width = self.mask.shape
- self.up = int(round(y_center - mask_height / 2))
- self.down = self.up + mask_height
- self.left = int(round(x_center - mask_width / 2))
- self.right = self.left + mask_width
- return self
-
- def crop_to_canvas(self, vertical=True, horizontal=True, inplace=False):
- if not inplace:
- cropped = deepcopy(self)
- cropped.crop_to_canvas(vertical=vertical, horizontal=horizontal, inplace=True)
- return cropped
-
- if vertical:
- if self.up >= self.height or self.down <= 0:
- self._clean()
- else:
- cut_up, cut_down = max(-self.up, 0), max(self.down - self.height, 0)
- if cut_up != 0:
- self.mask = self.mask[cut_up:]
- self.up = 0
- if cut_down != 0:
- self.mask = self.mask[:-cut_down]
- self.down = self.height
-
- if horizontal:
- if self.left >= self.width or self.right <= 0:
- self._clean()
- else:
- cut_left, cut_right = max(-self.left, 0), max(self.right - self.width, 0)
- if cut_left != 0:
- self.mask = self.mask[:, cut_left:]
- self.left = 0
- if cut_right != 0:
- self.mask = self.mask[:, :-cut_right]
- self.right = self.width
-
- return self
-
- def restore_full_mask(self, allow_crop=False):
- cropped = self.crop_to_canvas(inplace=allow_crop)
- mask = np.zeros((cropped.height, cropped.width), dtype=bool)
- mask[cropped.up:cropped.down, cropped.left:cropped.right] = cropped.mask
- return mask
-
- def shift(self, vertical=0, horizontal=0, inplace=False):
- if not inplace:
- shifted = deepcopy(self)
- return shifted.shift(vertical=vertical, horizontal=horizontal, inplace=True)
-
- self.up += vertical
- self.down += vertical
- self.left += horizontal
- self.right += horizontal
- return self
-
- def area(self):
- return self.mask.sum()
-
-
-class RigidnessMode(enum.Enum):
- soft = 0
- rigid = 1
-
-
-class SegmentationMask:
- def __init__(self, confidence_threshold=0.5, rigidness_mode=RigidnessMode.rigid,
- max_object_area=0.3, min_mask_area=0.02, downsample_levels=6, num_variants_per_mask=4,
- max_mask_intersection=0.5, max_foreground_coverage=0.5, max_foreground_intersection=0.5,
- max_hidden_area=0.2, max_scale_change=0.25, horizontal_flip=True,
- max_vertical_shift=0.1, position_shuffle=True):
- """
-        :param confidence_threshold: float; confidence threshold of the panoptic segmentation model for an
-        instance to be kept.
- :param rigidness_mode: RigidnessMode object
- when soft, checks intersection only with the object from which the mask_object was produced
- when rigid, checks intersection with any foreground class object
-        :param max_object_area: float; allowed upper bound on the area fraction for an object to be considered as mask_object.
- :param min_mask_area: float; lower bound for mask to be considered valid
- :param downsample_levels: int; defines width of the resized segmentation to obtain shifted masks;
- :param num_variants_per_mask: int; maximal number of the masks for the same object;
- :param max_mask_intersection: float; maximum allowed area fraction of intersection for 2 masks
- produced by horizontal shift of the same mask_object; higher value -> more diversity
- :param max_foreground_coverage: float; maximum allowed area fraction of intersection for foreground object to be
-        covered by mask; lower value -> less of each object is covered
- :param max_foreground_intersection: float; maximum allowed area of intersection for the mask with foreground
- object; lower value -> mask is more on the background than on the objects
-        :param max_hidden_area: float; upper bound on the fraction of the object hidden by shifting it outside the screen area;
- :param max_scale_change: allowed scale change for the mask_object;
- :param horizontal_flip: if horizontal flips are allowed;
- :param max_vertical_shift: amount of vertical movement allowed;
-        :param position_shuffle: bool; whether to shuffle the candidate horizontal shift positions
- """
-
- assert DETECTRON_INSTALLED, 'Cannot use SegmentationMask without detectron2'
- self.cfg = get_cfg()
- self.cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
- self.cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
- self.cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = confidence_threshold
- self.predictor = DefaultPredictor(self.cfg)
-
- self.rigidness_mode = RigidnessMode(rigidness_mode)
- self.max_object_area = max_object_area
- self.min_mask_area = min_mask_area
- self.downsample_levels = downsample_levels
- self.num_variants_per_mask = num_variants_per_mask
- self.max_mask_intersection = max_mask_intersection
- self.max_foreground_coverage = max_foreground_coverage
- self.max_foreground_intersection = max_foreground_intersection
- self.max_hidden_area = max_hidden_area
- self.position_shuffle = position_shuffle
-
- self.max_scale_change = max_scale_change
- self.horizontal_flip = horizontal_flip
- self.max_vertical_shift = max_vertical_shift
-
- def get_segmentation(self, img):
- im = img_as_ubyte(img)
- panoptic_seg, segment_info = self.predictor(im)["panoptic_seg"]
- return panoptic_seg, segment_info
-
- @staticmethod
- def _is_power_of_two(n):
- return (n != 0) and (n & (n-1) == 0)
-
- def identify_candidates(self, panoptic_seg, segments_info):
- potential_mask_ids = []
- for segment in segments_info:
- if not segment["isthing"]:
- continue
- mask = (panoptic_seg == segment["id"]).int().detach().cpu().numpy()
- area = mask.sum().item() / np.prod(panoptic_seg.shape)
- if area >= self.max_object_area:
- continue
- potential_mask_ids.append(segment["id"])
- return potential_mask_ids
-
- def downsample_mask(self, mask):
- height, width = mask.shape
- if not (self._is_power_of_two(height) and self._is_power_of_two(width)):
- raise ValueError("Image sides are not power of 2.")
-
- num_iterations = width.bit_length() - 1 - self.downsample_levels
- if num_iterations < 0:
- raise ValueError(f"Width is lower than 2^{self.downsample_levels}.")
-
- if height.bit_length() - 1 < num_iterations:
- raise ValueError("Height is too low to perform downsampling")
-
- downsampled = mask
- for _ in range(num_iterations):
- downsampled = zero_corrected_countless(downsampled)
-
- return downsampled
-
- def _augmentation_params(self):
- scaling_factor = np.random.uniform(1 - self.max_scale_change, 1 + self.max_scale_change)
- if self.horizontal_flip:
- horizontal_flip = bool(np.random.choice(2))
- else:
- horizontal_flip = False
- vertical_shift = np.random.uniform(-self.max_vertical_shift, self.max_vertical_shift)
-
- return {
- "scaling_factor": scaling_factor,
- "horizontal_flip": horizontal_flip,
- "vertical_shift": vertical_shift
- }
-
- def _get_intersection(self, mask_array, mask_object):
- intersection = mask_array[
- mask_object.up:mask_object.down, mask_object.left:mask_object.right
- ] & mask_object.mask
- return intersection
-
- def _check_masks_intersection(self, aug_mask, total_mask_area, prev_masks):
- for existing_mask in prev_masks:
- intersection_area = self._get_intersection(existing_mask, aug_mask).sum()
- intersection_existing = intersection_area / existing_mask.sum()
- intersection_current = 1 - (aug_mask.area() - intersection_area) / total_mask_area
- if (intersection_existing > self.max_mask_intersection) or \
- (intersection_current > self.max_mask_intersection):
- return False
- return True
-
- def _check_foreground_intersection(self, aug_mask, foreground):
- for existing_mask in foreground:
- intersection_area = self._get_intersection(existing_mask, aug_mask).sum()
- intersection_existing = intersection_area / existing_mask.sum()
- if intersection_existing > self.max_foreground_coverage:
- return False
- intersection_mask = intersection_area / aug_mask.area()
- if intersection_mask > self.max_foreground_intersection:
- return False
- return True
-
- def _move_mask(self, mask, foreground):
- # Obtaining properties of the original mask_object:
- orig_mask = ObjectMask(mask)
-
- chosen_masks = []
- chosen_parameters = []
- # to fix the case when resizing gives mask_object consisting only of False
- scaling_factor_lower_bound = 0.
-
- for var_idx in range(self.num_variants_per_mask):
- # Obtaining augmentation parameters and applying them to the downscaled mask_object
- augmentation_params = self._augmentation_params()
- augmentation_params["scaling_factor"] = min([
- augmentation_params["scaling_factor"],
- 2 * min(orig_mask.up, orig_mask.height - orig_mask.down) / orig_mask.height + 1.,
- 2 * min(orig_mask.left, orig_mask.width - orig_mask.right) / orig_mask.width + 1.
- ])
- augmentation_params["scaling_factor"] = max([
- augmentation_params["scaling_factor"], scaling_factor_lower_bound
- ])
-
- aug_mask = deepcopy(orig_mask)
- aug_mask.rescale(augmentation_params["scaling_factor"], inplace=True)
- if augmentation_params["horizontal_flip"]:
- aug_mask.horizontal_flip(inplace=True)
- total_aug_area = aug_mask.area()
- if total_aug_area == 0:
- scaling_factor_lower_bound = 1.
- continue
-
- # Fix if the element vertical shift is too strong and shown area is too small:
- vertical_area = aug_mask.mask.sum(axis=1) / total_aug_area # share of area taken by rows
- # number of rows which are allowed to be hidden from upper and lower parts of image respectively
- max_hidden_up = np.searchsorted(vertical_area.cumsum(), self.max_hidden_area)
- max_hidden_down = np.searchsorted(vertical_area[::-1].cumsum(), self.max_hidden_area)
- # correcting vertical shift, so not too much area will be hidden
- augmentation_params["vertical_shift"] = np.clip(
- augmentation_params["vertical_shift"],
- -(aug_mask.up + max_hidden_up) / aug_mask.height,
- (aug_mask.height - aug_mask.down + max_hidden_down) / aug_mask.height
- )
- # Applying vertical shift:
- vertical_shift = int(round(aug_mask.height * augmentation_params["vertical_shift"]))
- aug_mask.shift(vertical=vertical_shift, inplace=True)
- aug_mask.crop_to_canvas(vertical=True, horizontal=False, inplace=True)
-
- # Choosing horizontal shift:
- max_hidden_area = self.max_hidden_area - (1 - aug_mask.area() / total_aug_area)
- horizontal_area = aug_mask.mask.sum(axis=0) / total_aug_area
- max_hidden_left = np.searchsorted(horizontal_area.cumsum(), max_hidden_area)
- max_hidden_right = np.searchsorted(horizontal_area[::-1].cumsum(), max_hidden_area)
- allowed_shifts = np.arange(-max_hidden_left, aug_mask.width -
- (aug_mask.right - aug_mask.left) + max_hidden_right + 1)
- allowed_shifts = - (aug_mask.left - allowed_shifts)
-
- if self.position_shuffle:
- np.random.shuffle(allowed_shifts)
-
- mask_is_found = False
- for horizontal_shift in allowed_shifts:
- aug_mask_left = deepcopy(aug_mask)
- aug_mask_left.shift(horizontal=horizontal_shift, inplace=True)
- aug_mask_left.crop_to_canvas(inplace=True)
-
- prev_masks = [mask] + chosen_masks
- is_mask_suitable = self._check_masks_intersection(aug_mask_left, total_aug_area, prev_masks) & \
- self._check_foreground_intersection(aug_mask_left, foreground)
- if is_mask_suitable:
- aug_draw = aug_mask_left.restore_full_mask()
- chosen_masks.append(aug_draw)
- augmentation_params["horizontal_shift"] = horizontal_shift / aug_mask_left.width
- chosen_parameters.append(augmentation_params)
- mask_is_found = True
- break
-
- if not mask_is_found:
- break
-
- return chosen_parameters
-
- def _prepare_mask(self, mask):
- height, width = mask.shape
- target_width = width if self._is_power_of_two(width) else (1 << width.bit_length())
- target_height = height if self._is_power_of_two(height) else (1 << height.bit_length())
-
- return resize(mask.astype('float32'), (target_height, target_width), order=0, mode='edge').round().astype('int32')
-
- def get_masks(self, im, return_panoptic=False):
- panoptic_seg, segments_info = self.get_segmentation(im)
- potential_mask_ids = self.identify_candidates(panoptic_seg, segments_info)
-
- panoptic_seg_scaled = self._prepare_mask(panoptic_seg.detach().cpu().numpy())
- downsampled = self.downsample_mask(panoptic_seg_scaled)
- scene_objects = []
- for segment in segments_info:
- if not segment["isthing"]:
- continue
- mask = downsampled == segment["id"]
- if not np.any(mask):
- continue
- scene_objects.append(mask)
-
- mask_set = []
- for mask_id in potential_mask_ids:
- mask = downsampled == mask_id
- if not np.any(mask):
- continue
-
- if self.rigidness_mode is RigidnessMode.soft:
- foreground = [mask]
- elif self.rigidness_mode is RigidnessMode.rigid:
- foreground = scene_objects
- else:
-                raise ValueError(f'Unexpected rigidness_mode: {self.rigidness_mode}')
-
- masks_params = self._move_mask(mask, foreground)
-
- full_mask = ObjectMask((panoptic_seg == mask_id).detach().cpu().numpy())
-
- for params in masks_params:
- aug_mask = deepcopy(full_mask)
- aug_mask.rescale(params["scaling_factor"], inplace=True)
- if params["horizontal_flip"]:
- aug_mask.horizontal_flip(inplace=True)
-
- vertical_shift = int(round(aug_mask.height * params["vertical_shift"]))
- horizontal_shift = int(round(aug_mask.width * params["horizontal_shift"]))
- aug_mask.shift(vertical=vertical_shift, horizontal=horizontal_shift, inplace=True)
- aug_mask = aug_mask.restore_full_mask().astype('uint8')
- if aug_mask.mean() <= self.min_mask_area:
- continue
- mask_set.append(aug_mask)
-
- if return_panoptic:
- return mask_set, panoptic_seg.detach().cpu().numpy()
- else:
- return mask_set
-
-
-def propose_random_square_crop(mask, min_overlap=0.5):
- height, width = mask.shape
- mask_ys, mask_xs = np.where(mask > 0.5) # mask==0 is known fragment and mask==1 is missing
-
- if height < width:
- crop_size = height
- obj_left, obj_right = mask_xs.min(), mask_xs.max()
- obj_width = obj_right - obj_left
- left_border = max(0, min(width - crop_size - 1, obj_left + obj_width * min_overlap - crop_size))
- right_border = max(left_border + 1, min(width - crop_size, obj_left + obj_width * min_overlap))
- start_x = np.random.randint(left_border, right_border)
- return start_x, 0, start_x + crop_size, height
- else:
- crop_size = width
- obj_top, obj_bottom = mask_ys.min(), mask_ys.max()
- obj_height = obj_bottom - obj_top
- top_border = max(0, min(height - crop_size - 1, obj_top + obj_height * min_overlap - crop_size))
- bottom_border = max(top_border + 1, min(height - crop_size, obj_top + obj_height * min_overlap))
- start_y = np.random.randint(top_border, bottom_border)
- return 0, start_y, width, start_y + crop_size
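A small, self-contained example of propose_random_square_crop above on a synthetic mask; the exact coordinates vary with the random state, so the printed values are only indicative.

import numpy as np

np.random.seed(0)
mask = np.zeros((256, 512), dtype=np.uint8)   # wider than tall, so the square crop side equals the height
mask[100:180, 300:420] = 1                    # the "missing" region the crop should overlap
x0, y0, x1, y1 = propose_random_square_crop(mask, min_overlap=0.5)
crop = mask[y0:y1, x0:x1]                     # a 256x256 window positioned to overlap the masked object
print((x0, y0, x1, y1), crop.shape)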
diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/app.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/app.py
deleted file mode 100644
index c9bfb000af1af5ec0a745290b95431df58ad7a61..0000000000000000000000000000000000000000
--- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/app.py
+++ /dev/null
@@ -1,256 +0,0 @@
-import argparse
-import json
-import os
-import re
-import tempfile
-import logging
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-import librosa
-import numpy as np
-import torch
-from torch import no_grad, LongTensor
-import commons
-import utils
-import gradio as gr
-import gradio.utils as gr_utils
-import gradio.processing_utils as gr_processing_utils
-import ONNXVITS_infer
-import models
-from text import text_to_sequence, _clean_text
-from text.symbols import symbols
-from mel_processing import spectrogram_torch
-import psutil
-from datetime import datetime
-
-language_marks = {
- "Japanese": "",
- "日本語": "[JA]",
- "简体中文": "[ZH]",
- "English": "[EN]",
- "Mix": "",
-}
-
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-
-def create_tts_fn(model, hps, speaker_ids):
- def tts_fn(text, speaker, language, speed, is_symbol):
- if limitation:
- text_len = len(re.sub("\[([A-Z]{2})\]", "", text))
- max_len = 150
- if is_symbol:
- max_len *= 3
- if text_len > max_len:
- return "Error: Text is too long", None
- if language is not None:
- text = language_marks[language] + text + language_marks[language]
- speaker_id = speaker_ids[speaker]
- stn_tst = get_text(text, hps, is_symbol)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = LongTensor([stn_tst.size(0)])
- sid = LongTensor([speaker_id])
- audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8,
- length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy()
- del stn_tst, x_tst, x_tst_lengths, sid
- return "Success", (hps.data.sampling_rate, audio)
-
- return tts_fn
-
-
-def create_vc_fn(model, hps, speaker_ids):
- def vc_fn(original_speaker, target_speaker, input_audio):
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if limitation and duration > 30:
- return "Error: Audio is too long", None
- original_speaker_id = speaker_ids[original_speaker]
- target_speaker_id = speaker_ids[target_speaker]
-
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != hps.data.sampling_rate:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=hps.data.sampling_rate)
- with no_grad():
- y = torch.FloatTensor(audio)
- y = y.unsqueeze(0)
- spec = spectrogram_torch(y, hps.data.filter_length,
- hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length,
- center=False)
- spec_lengths = LongTensor([spec.size(-1)])
- sid_src = LongTensor([original_speaker_id])
- sid_tgt = LongTensor([target_speaker_id])
- audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][
- 0, 0].data.cpu().float().numpy()
- del y, spec, spec_lengths, sid_src, sid_tgt
- return "Success", (hps.data.sampling_rate, audio)
-
- return vc_fn
-
-
-def get_text(text, hps, is_symbol):
- text_norm = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm
-
-
-def create_to_symbol_fn(hps):
- def to_symbol_fn(is_symbol_input, input_text, temp_text):
- return (_clean_text(input_text, hps.data.text_cleaners), input_text) if is_symbol_input \
- else (temp_text, temp_text)
-
- return to_symbol_fn
-
-
-models_tts = []
-models_vc = []
-models_info = [
- {
- "title": "Trilingual",
- "languages": ['日本語', '简体中文', 'English', 'Mix'],
- "description": """
-           This model is trained on a mix of Umamusume, Genshin Impact, Sanoba Witch & VCTK voice data to learn multiple languages.
- All characters can speak English, Chinese & Japanese.\n\n
- To mix multiple languages in a single sentence, wrap the corresponding part with language tokens
- ([JA] for Japanese, [ZH] for Chinese, [EN] for English), as shown in the examples.\n\n
- 这个模型在赛马娘,原神,魔女的夜宴以及VCTK数据集上混合训练以学习多种语言。
- 所有角色均可说中日英三语。\n\n
- 若需要在同一个句子中混合多种语言,使用相应的语言标记包裹句子。
- (日语用[JA], 中文用[ZH], 英文用[EN]),参考Examples中的示例。
- """,
- "model_path": "./pretrained_models/G_trilingual.pth",
- "config_path": "./configs/uma_trilingual.json",
- "examples": [['你好,训练员先生,很高兴见到你。', '草上飞 Grass Wonder (Umamusume Pretty Derby)', '简体中文', 1, False],
- ['To be honest, I have no idea what to say as examples.', '派蒙 Paimon (Genshin Impact)', 'English',
- 1, False],
- ['授業中に出しだら,学校生活終わるですわ。', '綾地 寧々 Ayachi Nene (Sanoba Witch)', '日本語', 1, False],
- ['[JA]こんにちわ。[JA][ZH]你好![ZH][EN]Hello![EN]', '綾地 寧々 Ayachi Nene (Sanoba Witch)', 'Mix', 1, False]],
- "onnx_dir": "./ONNX_net/G_trilingual/"
- },
- {
- "title": "Japanese",
- "languages": ["Japanese"],
- "description": """
- This model contains 87 characters from Umamusume: Pretty Derby, Japanese only.\n\n
- 这个模型包含赛马娘的所有87名角色,只能合成日语。
- """,
- "model_path": "./pretrained_models/G_jp.pth",
- "config_path": "./configs/uma87.json",
- "examples": [['お疲れ様です,トレーナーさん。', '无声铃鹿 Silence Suzuka (Umamusume Pretty Derby)', 'Japanese', 1, False],
- ['張り切っていこう!', '北部玄驹 Kitasan Black (Umamusume Pretty Derby)', 'Japanese', 1, False],
- ['何でこんなに慣れでんのよ,私のほが先に好きだっだのに。', '草上飞 Grass Wonder (Umamusume Pretty Derby)', 'Japanese', 1, False],
- ['授業中に出しだら,学校生活終わるですわ。', '目白麦昆 Mejiro Mcqueen (Umamusume Pretty Derby)', 'Japanese', 1, False],
- ['お帰りなさい,お兄様!', '米浴 Rice Shower (Umamusume Pretty Derby)', 'Japanese', 1, False],
- ['私の処女をもらっでください!', '米浴 Rice Shower (Umamusume Pretty Derby)', 'Japanese', 1, False]],
- "onnx_dir": "./ONNX_net/G_jp/"
- },
-]
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- args = parser.parse_args()
- for info in models_info:
- name = info['title']
- lang = info['languages']
- examples = info['examples']
- config_path = info['config_path']
- model_path = info['model_path']
- description = info['description']
- onnx_dir = info["onnx_dir"]
- hps = utils.get_hparams_from_file(config_path)
- model = ONNXVITS_infer.SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- ONNX_dir=onnx_dir,
- **hps.model)
- utils.load_checkpoint(model_path, model, None)
- model.eval()
- speaker_ids = hps.speakers
- speakers = list(hps.speakers.keys())
- models_tts.append((name, description, speakers, lang, examples,
- hps.symbols, create_tts_fn(model, hps, speaker_ids),
- create_to_symbol_fn(hps)))
- models_vc.append((name, description, speakers, create_vc_fn(model, hps, speaker_ids)))
- app = gr.Blocks()
- with app:
- gr.Markdown("# English & Chinese & Japanese Anime TTS\n\n"
- "\n\n"
- "Including Japanese TTS & Trilingual TTS, speakers are all anime characters. \n\n包含一个纯日语TTS和一个中日英三语TTS模型,主要为二次元角色。\n\n"
- "If you have any suggestions or bug reports, feel free to open discussion in [Community](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer/discussions).\n\n"
- "若有bug反馈或建议,请在[Community](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer/discussions)下开启一个新的Discussion。 \n\n"
- )
- with gr.Tabs():
- with gr.TabItem("TTS"):
- with gr.Tabs():
- for i, (name, description, speakers, lang, example, symbols, tts_fn, to_symbol_fn) in enumerate(
- models_tts):
- with gr.TabItem(name):
- gr.Markdown(description)
- with gr.Row():
- with gr.Column():
- textbox = gr.TextArea(label="Text",
- placeholder="Type your sentence here (Maximum 150 words)",
- value="こんにちわ。", elem_id=f"tts-input")
- with gr.Accordion(label="Phoneme Input", open=False):
- temp_text_var = gr.Variable()
- symbol_input = gr.Checkbox(value=False, label="Symbol input")
- symbol_list = gr.Dataset(label="Symbol list", components=[textbox],
- samples=[[x] for x in symbols],
- elem_id=f"symbol-list")
- symbol_list_json = gr.Json(value=symbols, visible=False)
- symbol_input.change(to_symbol_fn,
- [symbol_input, textbox, temp_text_var],
- [textbox, temp_text_var])
- symbol_list.click(None, [symbol_list, symbol_list_json], textbox,
- _js=f"""
- (i, symbols, text) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#tts-input").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + symbols[i].length;
- text_input.selectionEnd = startPos + symbols[i].length;
- text_input.blur();
- window.scrollTo(x, y);
-
- text = text_input.value;
-
- return text;
- }}""")
- # select character
- char_dropdown = gr.Dropdown(choices=speakers, value=speakers[0], label='character')
- language_dropdown = gr.Dropdown(choices=lang, value=lang[0], label='language')
- duration_slider = gr.Slider(minimum=0.1, maximum=5, value=1, step=0.1,
- label='速度 Speed')
- with gr.Column():
- text_output = gr.Textbox(label="Message")
- audio_output = gr.Audio(label="Output Audio", elem_id="tts-audio")
- btn = gr.Button("Generate!")
- btn.click(tts_fn,
- inputs=[textbox, char_dropdown, language_dropdown, duration_slider,
- symbol_input],
- outputs=[text_output, audio_output])
- gr.Examples(
- examples=example,
- inputs=[textbox, char_dropdown, language_dropdown,
- duration_slider, symbol_input],
- outputs=[text_output, audio_output],
- fn=tts_fn
- )
- app.queue(concurrency_count=3).launch(show_api=False, share=args.share)
\ No newline at end of file
diff --git a/spaces/AmmarHuggingFaces/intro-to-hugging-face/app.py b/spaces/AmmarHuggingFaces/intro-to-hugging-face/app.py
deleted file mode 100644
index 02d8e5e1ff6c81f155e9dcca3353082cc0cf7175..0000000000000000000000000000000000000000
--- a/spaces/AmmarHuggingFaces/intro-to-hugging-face/app.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-sentiment = pipeline("sentiment-analysis")
-def get_sentiment(input_text):
- return sentiment(input_text)
-iface = gr.Interface(fn=get_sentiment, inputs="text", outputs=["text"], title="Sentiment Analysis", description="Get Sentiment Negative / Positive for the given input")
-iface.launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/installation.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/installation.md
deleted file mode 100644
index 50df14be3f776abb2f4e029dad5ee578ea2401bc..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/installation.md
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
-# Installation
-
-Install 🤗 Diffusers for whichever deep learning library you're working with.
-
-🤗 Diffusers is tested on Python 3.7+, PyTorch 1.7.0+ and Flax. Follow the installation instructions below for the deep learning library you are using:
-
-- [PyTorch](https://pytorch.org/get-started/locally/) installation instructions.
-- [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.
-
-## Install with pip
-
-You should install 🤗 Diffusers in a [virtual environment](https://docs.python.org/3/library/venv.html).
-If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
-A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.
-
-Start by creating a virtual environment in your project directory:
-
-```bash
-python -m venv .env
-```
-
-Activate the virtual environment:
-
-```bash
-source .env/bin/activate
-```
-
-🤗 Diffusers also relies on the 🤗 Transformers library, and you can install both with one of the following commands, depending on the framework you use.
-
-For PyTorch:
-
-```bash
-pip install diffusers["torch"] transformers
-```
-
-For Flax:
-
-```bash
-pip install diffusers["flax"] transformers
-```
-
-
-
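Not part of the original page: a quick way to check that the installation succeeded is to import the packages and print their versions.

```python
# Minimal sanity check after `pip install diffusers["torch"] transformers` (or the Flax variant).
import diffusers
import transformers

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
```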
-## Install from source
-
-Before installing 🤗 Diffusers from source, make sure you have `torch` and 🤗 Accelerate installed.
-
-For `torch` installation, refer to the `torch` [installation](https://pytorch.org/get-started/locally/#start-locally) guide.
-
-To install 🤗 Accelerate:
-
-```bash
-pip install accelerate
-```
-
-Install 🤗 Diffusers from source with the following command:
-
-```bash
-pip install git+https://github.com/huggingface/diffusers
-```
-
-This command installs the bleeding-edge `main` version rather than the latest `stable` version.
-The `main` version is useful for staying up-to-date with the latest developments,
-for instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet.
-However, this means the `main` version may not always be stable.
-We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day.
-If you run into a problem, please open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose), so we can fix it even sooner!
-
-## Editable install
-
-You will need an editable install if you'd like to:
-
-* Use the `main` version of the source code.
-* Contribute to 🤗 Diffusers and need to test changes in the code.
-
-Clone the repository and install 🤗 Diffusers with the following commands:
-
-```bash
-git clone https://github.com/huggingface/diffusers.git
-cd diffusers
-```
-
-For PyTorch:
-
-```bash
-pip install -e ".[torch]"
-```
-
-For Flax:
-
-```bash
-pip install -e ".[flax]"
-```
-
-
-
-These commands link the folder where you cloned the repository to your Python library paths.
-Python will now look inside the folder you cloned to, in addition to the normal library paths.
-For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the `~/diffusers/` folder you cloned to.
-
-
-
-You must keep the `diffusers` folder if you want to keep using the library.
-
-
-
-Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command:
-
-```bash
-cd ~/diffusers/
-git pull
-```
-
-Your Python environment will find the `main` version of 🤗 Diffusers on the next run.
-
-## Notice on telemetry logging
-
-Our library gathers telemetry information during `from_pretrained()` requests.
-This data includes the version of Diffusers and PyTorch/Flax, the requested model or pipeline class,
-and the path to a pre-trained checkpoint if it is hosted on the Hub.
-This usage data helps us debug issues and prioritize new features.
-Telemetry is only sent when loading models and pipelines from the HuggingFace Hub,
-and is not collected during local usage.
-
-We understand that not everyone wants to share additional information, and we respect your privacy,
-so you can disable telemetry collection by setting the `DISABLE_TELEMETRY` environment variable from your terminal:
-
-On Linux/macOS:
-```bash
-export DISABLE_TELEMETRY=YES
-```
-
-On Windows:
-```bash
-set DISABLE_TELEMETRY=YES
-```
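The same flag can also be set from Python (a sketch based on the environment variable above, not shown in the original page); set it before `diffusers` is imported, since the variable may be read at import time.

```python
import os

# Assumption: DISABLE_TELEMETRY should be set before diffusers is imported,
# because the flag may be read once at import time.
os.environ["DISABLE_TELEMETRY"] = "YES"

from diffusers import DiffusionPipeline  # telemetry is now disabled for from_pretrained() calls
```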
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_consistency_models.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_consistency_models.py
deleted file mode 100644
index fb296054d65b804af281dc99d940c8f0ba50e01b..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_consistency_models.py
+++ /dev/null
@@ -1,380 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, logging, randn_tensor
-from .scheduling_utils import SchedulerMixin
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-@dataclass
-class CMStochasticIterativeSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- """
-
- prev_sample: torch.FloatTensor
-
-
-class CMStochasticIterativeScheduler(SchedulerMixin, ConfigMixin):
- """
- Multistep and onestep sampling for consistency models from Song et al. 2023 [1]. This implements Algorithm 1 in the
- paper [1].
-
- [1] Song, Yang and Dhariwal, Prafulla and Chen, Mark and Sutskever, Ilya. "Consistency Models"
- https://arxiv.org/pdf/2303.01469 [2] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based
- Generative Models." https://arxiv.org/abs/2206.00364
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- sigma_min (`float`):
- Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the original implementation.
- sigma_max (`float`):
- Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the original implementation.
- sigma_data (`float`):
-            The standard deviation of the data distribution, following the EDM paper [2]. This was set to 0.5 in the
-            original implementation, which is also the value suggested in the EDM paper.
- s_noise (`float`):
- The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000,
- 1.011]. This was set to 1.0 in the original implementation.
- rho (`float`):
-            The rho parameter used for calculating the Karras sigma schedule, introduced in the EDM paper [2]. This was
-            set to 7.0 in the original implementation, which is also the value suggested in the EDM paper.
- clip_denoised (`bool`):
- Whether to clip the denoised outputs to `(-1, 1)`. Defaults to `True`.
- timesteps (`List` or `np.ndarray` or `torch.Tensor`, *optional*):
- Optionally, an explicit timestep schedule can be specified. The timesteps are expected to be in increasing
- order.
- """
-
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 40,
- sigma_min: float = 0.002,
- sigma_max: float = 80.0,
- sigma_data: float = 0.5,
- s_noise: float = 1.0,
- rho: float = 7.0,
- clip_denoised: bool = True,
- ):
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = sigma_max
-
- ramp = np.linspace(0, 1, num_train_timesteps)
- sigmas = self._convert_to_karras(ramp)
- timesteps = self.sigma_to_t(sigmas)
-
- # setable values
- self.num_inference_steps = None
- self.sigmas = torch.from_numpy(sigmas)
- self.timesteps = torch.from_numpy(timesteps)
- self.custom_timesteps = False
- self.is_scale_input_called = False
-
- def index_for_timestep(self, timestep, schedule_timesteps=None):
- if schedule_timesteps is None:
- schedule_timesteps = self.timesteps
-
- indices = (schedule_timesteps == timestep).nonzero()
- return indices.item()
-
- def scale_model_input(
- self, sample: torch.FloatTensor, timestep: Union[float, torch.FloatTensor]
- ) -> torch.FloatTensor:
- """
- Scales the consistency model input by `(sigma**2 + sigma_data**2) ** 0.5`, following the EDM model.
-
- Args:
- sample (`torch.FloatTensor`): input sample
- timestep (`float` or `torch.FloatTensor`): the current timestep in the diffusion chain
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- # Get sigma corresponding to timestep
- if isinstance(timestep, torch.Tensor):
- timestep = timestep.to(self.timesteps.device)
- step_idx = self.index_for_timestep(timestep)
- sigma = self.sigmas[step_idx]
-
- sample = sample / ((sigma**2 + self.config.sigma_data**2) ** 0.5)
-
- self.is_scale_input_called = True
- return sample
-
- def sigma_to_t(self, sigmas: Union[float, np.ndarray]):
- """
- Gets scaled timesteps from the Karras sigmas, for input to the consistency model.
-
- Args:
- sigmas (`float` or `np.ndarray`): single Karras sigma or array of Karras sigmas
- Returns:
- `float` or `np.ndarray`: scaled input timestep or scaled input timestep array
- """
- if not isinstance(sigmas, np.ndarray):
- sigmas = np.array(sigmas, dtype=np.float64)
-
- timesteps = 1000 * 0.25 * np.log(sigmas + 1e-44)
-
- return timesteps
-
- def set_timesteps(
- self,
- num_inference_steps: Optional[int] = None,
- device: Union[str, torch.device] = None,
- timesteps: Optional[List[int]] = None,
- ):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- device (`str` or `torch.device`, optional):
- the device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
- timesteps (`List[int]`, optional):
- custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
- timestep spacing strategy of equal spacing between timesteps is used. If passed, `num_inference_steps`
- must be `None`.
- """
- if num_inference_steps is None and timesteps is None:
- raise ValueError("Exactly one of `num_inference_steps` or `timesteps` must be supplied.")
-
- if num_inference_steps is not None and timesteps is not None:
- raise ValueError("Can only pass one of `num_inference_steps` or `timesteps`.")
-
- # Follow DDPMScheduler custom timesteps logic
- if timesteps is not None:
- for i in range(1, len(timesteps)):
- if timesteps[i] >= timesteps[i - 1]:
- raise ValueError("`timesteps` must be in descending order.")
-
- if timesteps[0] >= self.config.num_train_timesteps:
- raise ValueError(
- f"`timesteps` must start before `self.config.train_timesteps`:"
- f" {self.config.num_train_timesteps}."
- )
-
- timesteps = np.array(timesteps, dtype=np.int64)
- self.custom_timesteps = True
- else:
- if num_inference_steps > self.config.num_train_timesteps:
- raise ValueError(
- f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:"
- f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle"
- f" maximal {self.config.num_train_timesteps} timesteps."
- )
-
- self.num_inference_steps = num_inference_steps
-
- step_ratio = self.config.num_train_timesteps // self.num_inference_steps
- timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64)
- self.custom_timesteps = False
-
- # Map timesteps to Karras sigmas directly for multistep sampling
- # See https://github.com/openai/consistency_models/blob/main/cm/karras_diffusion.py#L675
- num_train_timesteps = self.config.num_train_timesteps
- ramp = timesteps[::-1].copy()
- ramp = ramp / (num_train_timesteps - 1)
- sigmas = self._convert_to_karras(ramp)
- timesteps = self.sigma_to_t(sigmas)
-
- sigmas = np.concatenate([sigmas, [self.sigma_min]]).astype(np.float32)
- self.sigmas = torch.from_numpy(sigmas).to(device=device)
-
- if str(device).startswith("mps"):
- # mps does not support float64
- self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
- else:
- self.timesteps = torch.from_numpy(timesteps).to(device=device)
-
- # Modified _convert_to_karras implementation that takes in ramp as argument
- def _convert_to_karras(self, ramp):
- """Constructs the noise schedule of Karras et al. (2022)."""
-
- sigma_min: float = self.config.sigma_min
- sigma_max: float = self.config.sigma_max
-
- rho = self.config.rho
- min_inv_rho = sigma_min ** (1 / rho)
- max_inv_rho = sigma_max ** (1 / rho)
- sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
- return sigmas
-
- def get_scalings(self, sigma):
- sigma_data = self.config.sigma_data
-
- c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
- c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5
- return c_skip, c_out
-
- def get_scalings_for_boundary_condition(self, sigma):
- """
- Gets the scalings used in the consistency model parameterization, following Appendix C of the original paper.
- This enforces the consistency model boundary condition.
-
- Note that `epsilon` in the equations for c_skip and c_out is set to sigma_min.
-
- Args:
- sigma (`torch.FloatTensor`):
- The current sigma in the Karras sigma schedule.
- Returns:
- `tuple`:
- A two-element tuple where c_skip (which weights the current sample) is the first element and c_out
- (which weights the consistency model output) is the second element.
- """
- sigma_min = self.config.sigma_min
- sigma_data = self.config.sigma_data
-
- c_skip = sigma_data**2 / ((sigma - sigma_min) ** 2 + sigma_data**2)
- c_out = (sigma - sigma_min) * sigma_data / (sigma**2 + sigma_data**2) ** 0.5
- return c_skip, c_out
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: Union[float, torch.FloatTensor],
- sample: torch.FloatTensor,
- generator: Optional[torch.Generator] = None,
- return_dict: bool = True,
- ) -> Union[CMStochasticIterativeSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`float`): current timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- generator (`torch.Generator`, *optional*): Random number generator.
-            return_dict (`bool`): option for returning tuple rather than CMStochasticIterativeSchedulerOutput class
- Returns:
- [`~schedulers.scheduling_utils.CMStochasticIterativeSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.CMStochasticIterativeSchedulerOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is the sample tensor.
- """
-
- if (
- isinstance(timestep, int)
- or isinstance(timestep, torch.IntTensor)
- or isinstance(timestep, torch.LongTensor)
- ):
- raise ValueError(
- (
- "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to"
- f" `{self.__class__}.step()` is not supported. Make sure to pass"
- " one of the `scheduler.timesteps` as a timestep."
- ),
- )
-
- if not self.is_scale_input_called:
- logger.warning(
- "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
- "See `StableDiffusionPipeline` for a usage example."
- )
-
- if isinstance(timestep, torch.Tensor):
- timestep = timestep.to(self.timesteps.device)
-
- sigma_min = self.config.sigma_min
- sigma_max = self.config.sigma_max
-
- step_index = self.index_for_timestep(timestep)
-
- # sigma_next corresponds to next_t in original implementation
- sigma = self.sigmas[step_index]
- if step_index + 1 < self.config.num_train_timesteps:
- sigma_next = self.sigmas[step_index + 1]
- else:
- # Set sigma_next to sigma_min
- sigma_next = self.sigmas[-1]
-
- # Get scalings for boundary conditions
- c_skip, c_out = self.get_scalings_for_boundary_condition(sigma)
-
- # 1. Denoise model output using boundary conditions
- denoised = c_out * model_output + c_skip * sample
- if self.config.clip_denoised:
- denoised = denoised.clamp(-1, 1)
-
- # 2. Sample z ~ N(0, s_noise^2 * I)
- # Noise is not used for onestep sampling.
- if len(self.timesteps) > 1:
- noise = randn_tensor(
- model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator
- )
- else:
- noise = torch.zeros_like(model_output)
- z = noise * self.config.s_noise
-
- sigma_hat = sigma_next.clamp(min=sigma_min, max=sigma_max)
-
- # 3. Return noisy sample
- # tau = sigma_hat, eps = sigma_min
- prev_sample = denoised + z * (sigma_hat**2 - sigma_min**2) ** 0.5
-
- if not return_dict:
- return (prev_sample,)
-
- return CMStochasticIterativeSchedulerOutput(prev_sample=prev_sample)
-
- # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.FloatTensor,
- ) -> torch.FloatTensor:
- # Make sure sigmas and timesteps have the same device and dtype as original_samples
- sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype)
- if original_samples.device.type == "mps" and torch.is_floating_point(timesteps):
- # mps does not support float64
- schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32)
- timesteps = timesteps.to(original_samples.device, dtype=torch.float32)
- else:
- schedule_timesteps = self.timesteps.to(original_samples.device)
- timesteps = timesteps.to(original_samples.device)
-
- step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
-
- sigma = sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
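A minimal sketch of the multistep sampling loop implied by the scheduler's docstrings (not part of the deleted file); `model` is a stand-in for a trained consistency-model network, and `CMStochasticIterativeScheduler` is assumed to be importable from the module above.

```python
import torch

# Hypothetical sampling loop for CMStochasticIterativeScheduler.
# `model(x, t)` is assumed to return a tensor shaped like `x`.
scheduler = CMStochasticIterativeScheduler()
scheduler.set_timesteps(num_inference_steps=2)  # e.g. two-step sampling

sample = torch.randn(1, 3, 64, 64) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    scaled = scheduler.scale_model_input(sample, t)   # scale by 1 / sqrt(sigma^2 + sigma_data^2)
    model_output = model(scaled, t)                   # placeholder for the consistency-model UNet
    sample = scheduler.step(model_output, t, sample).prev_sample
```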
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py
deleted file mode 100644
index df8009dd0e27ec81dfbf4779904d6a6cfc0679f6..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py
+++ /dev/null
@@ -1,1127 +0,0 @@
-# This file is autogenerated by the command `make fix-copies`, do not edit.
-from ..utils import DummyObject, requires_backends
-
-
-class AltDiffusionImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class AltDiffusionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class AudioLDMPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class CycleDiffusionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class IFImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class IFImg2ImgSuperResolutionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class IFInpaintingPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class IFInpaintingSuperResolutionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class IFPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class IFSuperResolutionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class ImageTextPipelineOutput(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyCombinedPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyImg2ImgCombinedPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyInpaintCombinedPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyInpaintPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyPriorPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22CombinedPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22ControlnetImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22ControlnetPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22Img2ImgCombinedPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22Img2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22InpaintCombinedPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22InpaintPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22Pipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22PriorEmb2EmbPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class KandinskyV22PriorPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class LDMTextToImagePipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class PaintByExamplePipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class SemanticStableDiffusionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class ShapEImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class ShapEPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionAdapterPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionAttendAndExcitePipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionControlNetImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionControlNetInpaintPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionControlNetPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionDepth2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionDiffEditPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionImageVariationPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionInpaintPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionInpaintPipelineLegacy(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionInstructPix2PixPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionLatentUpscalePipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionLDM3DPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionModelEditingPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionPanoramaPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionParadigmsPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionPipelineSafe(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionPix2PixZeroPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionSAGPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionUpscalePipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionXLControlNetPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionXLImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionXLInpaintPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionXLInstructPix2PixPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableDiffusionXLPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableUnCLIPImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class StableUnCLIPPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class TextToVideoSDPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class TextToVideoZeroPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class UnCLIPImageVariationPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class UnCLIPPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class UniDiffuserModel(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class UniDiffuserPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class UniDiffuserTextDecoder(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class VersatileDiffusionDualGuidedPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class VersatileDiffusionImageVariationPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class VersatileDiffusionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class VersatileDiffusionTextToImagePipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class VideoToVideoSDPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
-
-class VQDiffusionPipeline(metaclass=DummyObject):
- _backends = ["torch", "transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "transformers"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "transformers"])
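The deleted module above is one of diffusers' auto-generated "dummy objects" files: every public class that needs both `torch` and `transformers` gets a placeholder that raises a clear dependency error only when it is actually used. The sketch below reimplements that pattern with assumed semantics (the real `DummyObject` and `requires_backends` live in `diffusers.utils`), purely to show why each placeholder carries a `_backends` list.

```python
# Minimal sketch of the dummy-object pattern, with assumed semantics
# (not the actual diffusers implementation).
class DummyObject(type):
    """Metaclass that postpones the missing-dependency error until the class is used."""

    def __getattr__(cls, name):
        requires_backends(cls, cls._backends)


def requires_backends(obj, backends):
    # The real helper checks whether each backend is importable; this sketch
    # always raises, purely to illustrate the error message users would see.
    name = obj.__name__ if isinstance(obj, type) else type(obj).__name__
    raise ImportError(f"{name} requires the following backends: {', '.join(backends)}")


class SomePipeline(metaclass=DummyObject):
    _backends = ["torch", "transformers"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch", "transformers"])


# SomePipeline() or SomePipeline.from_pretrained(...) now fails with a message
# naming the missing backends instead of an unrelated AttributeError.
```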
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/stale.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/stale.py
deleted file mode 100644
index 12932f31c243f44566fb65daf80b0b3637cc8a95..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/stale.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# Copyright 2023 The HuggingFace Team, the AllenNLP library authors. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Script to close stale issue. Taken in part from the AllenNLP repository.
-https://github.com/allenai/allennlp.
-"""
-import os
-from datetime import datetime as dt
-
-from github import Github
-
-
-LABELS_TO_EXEMPT = [
- "good first issue",
- "good second issue",
- "good difficult issue",
- "enhancement",
- "new pipeline/model",
- "new scheduler",
- "wip",
-]
-
-
-def main():
- g = Github(os.environ["GITHUB_TOKEN"])
- repo = g.get_repo("huggingface/diffusers")
- open_issues = repo.get_issues(state="open")
-
- for issue in open_issues:
- comments = sorted(issue.get_comments(), key=lambda i: i.created_at, reverse=True)
- last_comment = comments[0] if len(comments) > 0 else None
- if (
- last_comment is not None
- and last_comment.user.login == "github-actions[bot]"
- and (dt.utcnow() - issue.updated_at).days > 7
- and (dt.utcnow() - issue.created_at).days >= 30
- and not any(label.name.lower() in LABELS_TO_EXEMPT for label in issue.get_labels())
- ):
- # Closes the issue after 7 days of inactivity since the Stalebot notification.
- issue.edit(state="closed")
- elif (
- "stale" in issue.get_labels()
- and last_comment is not None
- and last_comment.user.login != "github-actions[bot]"
- ):
- # Opens the issue if someone other than Stalebot commented.
- issue.edit(state="open")
- issue.remove_from_labels("stale")
- elif (
- (dt.utcnow() - issue.updated_at).days > 23
- and (dt.utcnow() - issue.created_at).days >= 30
- and not any(label.name.lower() in LABELS_TO_EXEMPT for label in issue.get_labels())
- ):
- # Post a Stalebot notification after 23 days of inactivity.
- issue.create_comment(
- "This issue has been automatically marked as stale because it has not had "
- "recent activity. If you think this still needs to be addressed "
- "please comment on this thread.\n\nPlease note that issues that do not follow the "
- "[contributing guidelines](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md) "
- "are likely to be ignored."
- )
- issue.add_to_labels("stale")
-
-
-if __name__ == "__main__":
- main()
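The deleted script above is the repository's scheduled housekeeping job: it needs a `GITHUB_TOKEN` with repo access and applies three rules (nudge after 23 days of inactivity, close 7 days after the bot's nudge, never touch issues younger than 30 days or carrying an exempt label). Below is a small, self-contained dry run of those date thresholds with invented timestamps, just to make the conditions easier to follow.

```python
# Dry run of the staleness thresholds used above (timestamps are invented).
from datetime import datetime, timedelta

now = datetime.utcnow()
created_at = now - timedelta(days=45)   # issue opened 45 days ago
updated_at = now - timedelta(days=25)   # last activity 25 days ago

needs_stale_nudge = (now - updated_at).days > 23 and (now - created_at).days >= 30
ready_to_close = (now - updated_at).days > 7   # only if the last comment came from the bot

print(needs_stale_nudge, ready_to_close)  # True True
```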
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco.py
deleted file mode 100644
index 1afeeef1212db831dd1f097d30b0354e459daa97..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './cascade_rcnn_r50_fpn_20e_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
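Configs like the deleted file above are thin overlays on a `_base_` config: only the backbone block changes, swapping ResNet-50 for ResNeXt-101-32x4d. A hedged sketch of how such a file is normally consumed (the path is illustrative and an mmcv version providing `Config` must be installed):

```python
# Loading an mmdetection-style config and overriding an inherited field.
from mmcv import Config

cfg = Config.fromfile("configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco.py")
print(cfg.model.backbone.type, cfg.model.backbone.groups)  # ResNeXt 32

# Everything not redefined here (neck, heads, schedules) comes from the _base_
# file and can still be adjusted programmatically:
cfg.model.backbone.frozen_stages = 2
```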
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py
deleted file mode 100644
index a01df33c94e1f8b5f51a51a780b30a77ce99b2c0..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = '../cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/README.md b/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/README.md
deleted file mode 100644
index c19dee36e441f2f6a8330ab8c6d94e7408ec9fe6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# Mask Scoring R-CNN
-
-## Introduction
-
-[ALGORITHM]
-
-```
-@inproceedings{huang2019msrcnn,
- title={Mask Scoring R-CNN},
- author={Zhaojin Huang and Lichao Huang and Yongchao Gong and Chang Huang and Xinggang Wang},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- year={2019},
-}
-```
-
-## Results and Models
-
-| Backbone | style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
-|:-------------:|:----------:|:-------:|:--------:|:--------------:|:------:|:-------:|:------:|:--------:|
-| R-50-FPN | caffe | 1x | 4.5 | | 38.2 | 36.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco/ms_rcnn_r50_caffe_fpn_1x_coco_20200702_180848-61c9355e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco/ms_rcnn_r50_caffe_fpn_1x_coco_20200702_180848.log.json) |
-| R-50-FPN | caffe | 2x | - | - | 38.8 | 36.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r50_caffe_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_2x_coco/ms_rcnn_r50_caffe_fpn_2x_coco_bbox_mAP-0.388__segm_mAP-0.363_20200506_004738-ee87b137.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_2x_coco/ms_rcnn_r50_caffe_fpn_2x_coco_20200506_004738.log.json) |
-| R-101-FPN | caffe | 1x | 6.5 | | 40.4 | 37.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r101_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_1x_coco/ms_rcnn_r101_caffe_fpn_1x_coco_bbox_mAP-0.404__segm_mAP-0.376_20200506_004755-b9b12a37.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_1x_coco/ms_rcnn_r101_caffe_fpn_1x_coco_20200506_004755.log.json) |
-| R-101-FPN | caffe | 2x | - | - | 41.1 | 38.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_r101_caffe_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_2x_coco/ms_rcnn_r101_caffe_fpn_2x_coco_bbox_mAP-0.411__segm_mAP-0.381_20200506_011134-5f3cc74f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r101_caffe_fpn_2x_coco/ms_rcnn_r101_caffe_fpn_2x_coco_20200506_011134.log.json) |
-| R-X101-32x4d | pytorch | 2x | 7.9 | 11.0 | 41.8 | 38.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco/ms_rcnn_x101_32x4d_fpn_1x_coco_20200206-81fd1740.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco/ms_rcnn_x101_32x4d_fpn_1x_coco_20200206_100113.log.json) |
-| R-X101-64x4d | pytorch | 1x | 11.0 | 8.0 | 43.0 | 39.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco/ms_rcnn_x101_64x4d_fpn_1x_coco_20200206-86ba88d2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco/ms_rcnn_x101_64x4d_fpn_1x_coco_20200206_091744.log.json) |
-| R-X101-64x4d | pytorch | 2x | 11.0 | 8.0 | 42.6 | 39.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco/ms_rcnn_x101_64x4d_fpn_2x_coco_20200308-02a445e2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco/ms_rcnn_x101_64x4d_fpn_2x_coco_20200308_012247.log.json) |
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 990a085eda2f2dc47f1a1289bfbf2726ad8c9c4f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/image_sample.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/image_sample.py
deleted file mode 100644
index 8bcfd463dcbe86ce42e6708892d81e24d549583d..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/image_sample.py
+++ /dev/null
@@ -1,108 +0,0 @@
-"""
-Generate a large batch of image samples from a model and save them as a large
-numpy array. This can be used to produce samples for FID evaluation.
-"""
-
-import argparse
-import os
-
-import numpy as np
-import torch as th
-import torch.distributed as dist
-
-from guided_diffusion import dist_util, logger
-from guided_diffusion.script_util import (
- NUM_CLASSES,
- model_and_diffusion_defaults,
- create_model_and_diffusion,
- add_dict_to_argparser,
- args_to_dict,
-)
-
-
-def main():
- args = create_argparser().parse_args()
-
- dist_util.setup_dist()
- logger.configure()
-
- logger.log("creating model and diffusion...")
- model, diffusion = create_model_and_diffusion(
- **args_to_dict(args, model_and_diffusion_defaults().keys())
- )
- model.load_state_dict(
- dist_util.load_state_dict(args.model_path, map_location="cpu")
- )
- model.to(dist_util.dev())
- if args.use_fp16:
- model.convert_to_fp16()
- model.eval()
-
- logger.log("sampling...")
- all_images = []
- all_labels = []
- while len(all_images) * args.batch_size < args.num_samples:
- model_kwargs = {}
- if args.class_cond:
- classes = th.randint(
- low=0, high=NUM_CLASSES, size=(args.batch_size,), device=dist_util.dev()
- )
- model_kwargs["y"] = classes
- sample_fn = (
- diffusion.p_sample_loop if not args.use_ddim else diffusion.ddim_sample_loop
- )
- sample = sample_fn(
- model,
- (args.batch_size, 3, args.image_size, args.image_size),
- clip_denoised=args.clip_denoised,
- model_kwargs=model_kwargs,
- )
- sample = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8)
- sample = sample.permute(0, 2, 3, 1)
- sample = sample.contiguous()
-
- gathered_samples = [th.zeros_like(sample) for _ in range(dist.get_world_size())]
- dist.all_gather(gathered_samples, sample) # gather not supported with NCCL
- all_images.extend([sample.cpu().numpy() for sample in gathered_samples])
- if args.class_cond:
- gathered_labels = [
- th.zeros_like(classes) for _ in range(dist.get_world_size())
- ]
- dist.all_gather(gathered_labels, classes)
- all_labels.extend([labels.cpu().numpy() for labels in gathered_labels])
- logger.log(f"created {len(all_images) * args.batch_size} samples")
-
- arr = np.concatenate(all_images, axis=0)
- arr = arr[: args.num_samples]
- if args.class_cond:
- label_arr = np.concatenate(all_labels, axis=0)
- label_arr = label_arr[: args.num_samples]
- if dist.get_rank() == 0:
- shape_str = "x".join([str(x) for x in arr.shape])
- out_path = os.path.join(logger.get_dir(), f"samples_{shape_str}.npz")
- logger.log(f"saving to {out_path}")
- if args.class_cond:
- np.savez(out_path, arr, label_arr)
- else:
- np.savez(out_path, arr)
-
- dist.barrier()
- logger.log("sampling complete")
-
-
-def create_argparser():
- defaults = dict(
- clip_denoised=True,
- num_samples=10000,
- batch_size=16,
- use_ddim=False,
- model_path="",
- )
- defaults.update(model_and_diffusion_defaults())
- parser = argparse.ArgumentParser()
- add_dict_to_argparser(parser, defaults)
- return parser
-
-
-if __name__ == "__main__":
- main()
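The deleted sampling script above gathers samples across ranks and, on rank 0, saves them as a single `.npz` (plus a label array for class-conditional models). A small sketch of inspecting that output follows; the file name is illustrative, since the real one encodes the actual sample count and image size.

```python
# Reading the .npz produced by the sampling script (file name is illustrative).
import numpy as np

data = np.load("samples_10000x256x256x3.npz")
samples = data["arr_0"]                                     # uint8, NHWC layout
labels = data["arr_1"] if "arr_1" in data.files else None   # only for class-conditional runs
print(samples.shape, samples.dtype, None if labels is None else labels.shape)
```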
diff --git a/spaces/AntiUser/DeepDanbooru_string/README.md b/spaces/AntiUser/DeepDanbooru_string/README.md
deleted file mode 100644
index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000
--- a/spaces/AntiUser/DeepDanbooru_string/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: DeepDanbooru String
-emoji: 💬
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-duplicated_from: NoCrypt/DeepDanbooru_string
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/ArkanDash/rvc-models-new/config.py b/spaces/ArkanDash/rvc-models-new/config.py
deleted file mode 100644
index b6de7523991c6384178ad96b5fe0c8932c1b5688..0000000000000000000000000000000000000000
--- a/spaces/ArkanDash/rvc-models-new/config.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import argparse
-import sys
-import torch
-from multiprocessing import cpu_count
-
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- (
- self.share,
- self.api,
- self.unsupported
- ) = self.arg_parse()
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def arg_parse() -> tuple:
- parser = argparse.ArgumentParser()
- parser.add_argument("--share", action="store_true", help="Launch with public link")
- parser.add_argument("--api", action="store_true", help="Launch with api")
- parser.add_argument("--unsupported", action="store_true", help="Enable unsupported feature")
- cmd_opts = parser.parse_args()
-
- return (
- cmd_opts.share,
- cmd_opts.api,
- cmd_opts.unsupported
- )
-
- # has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
- # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("INFO: Found GPU", self.gpu_name, ", force to fp32")
- self.is_half = False
- else:
- print("INFO: Found GPU", self.gpu_name)
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- elif self.has_mps():
- print("INFO: No supported Nvidia GPU found, use MPS instead")
- self.device = "mps"
- self.is_half = False
- else:
- print("INFO: No supported Nvidia GPU found, use CPU instead")
- self.device = "cpu"
- self.is_half = False
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # Configuration for 6 GB of GPU memory
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # Configuration for 5 GB of GPU memory
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
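A short usage sketch for the deleted `Config` class above, assuming the file is importable as `config`; the printed values depend on whether a CUDA GPU, Apple MPS, or only a CPU is available on the host.

```python
# Instantiating the Config above; argparse reads --share/--api/--unsupported from sys.argv.
from config import Config

cfg = Config()
print(cfg.device, cfg.is_half, cfg.n_cpu)
print(cfg.x_pad, cfg.x_query, cfg.x_center, cfg.x_max)  # chunking params chosen from GPU memory
```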
diff --git a/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_tiny.py b/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_tiny.py
deleted file mode 100644
index 5220de2f2e6760d5c9a966d5dd397aad721fc60a..0000000000000000000000000000000000000000
--- a/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_tiny.py
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import os
-
-from yolox.exp import Exp as MyExp
-
-
-class Exp(MyExp):
- def __init__(self):
- super(Exp, self).__init__()
- self.depth = 0.33
- self.width = 0.375
- self.input_size = (416, 416)
- self.mosaic_scale = (0.5, 1.5)
- self.random_size = (10, 20)
- self.test_size = (416, 416)
- self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
- self.enable_mixup = False
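YOLOX experiment files like the deleted one above are usually loaded by path rather than imported directly. A hedged sketch (it assumes the `yolox` package is installed and uses the path from this Space):

```python
# Building the YOLOX-tiny model from the experiment file above.
from yolox.exp import get_exp

exp = get_exp("yoloxdetect2/configs/yolox_tiny.py")
print(exp.depth, exp.width, exp.input_size)  # 0.33 0.375 (416, 416)
model = exp.get_model()                      # constructs the nn.Module described by the exp
```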
diff --git a/spaces/Bagus/speaker-verification-demo/app.py b/spaces/Bagus/speaker-verification-demo/app.py
deleted file mode 100644
index 7acb9d26caf1555f045593bdb37c74564c3cd97a..0000000000000000000000000000000000000000
--- a/spaces/Bagus/speaker-verification-demo/app.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import gradio as gr
-import torch
-import torchaudio
-# from torchaudio.sox_effects import apply_effects_file
-from transformers import AutoFeatureExtractor, AutoModelForAudioXVector
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-STYLE = """
-
-"""
-OUTPUT_OK = (
- STYLE
- + """
-
-
The speakers are
-
{:.1f}%
-
similar
-
Welcome, human!
-
(You must get at least 80% to be considered the same person)
-
-"""
-)
-OUTPUT_FAIL = (
- STYLE
- + """
-
-
The speakers are
-
{:.1f}%
-
similar
-
You shall not pass!
-
(You must get at least 80% to be considered the same person)
Agar.io Apk Mod dinero: Cómo descargar y jugar el popular juego en línea
-
¿Alguna vez has querido jugar un juego en línea simple pero adictivo donde puedes competir con millones de jugadores de todo el mundo? Si es así, es posible que haya oído hablar de Agar.io, un juego que se ha descargado más de 100 millones de veces en Google Play Store. Pero lo que si desea obtener dinero ilimitado y desbloquear todas las pieles y características en el juego? Ahí es donde Agar.io Apk Mod Money entra en juego. En este artículo, le diremos qué es Agar.io, qué es Agar.io Apk Mod Money, cómo descargarlo e instalarlo, y cómo jugarlo de forma segura y efectiva.
-
¿Qué es Agar.io?
-
Agar.io es un juego multijugador en línea que fue lanzado en 2015 por Miniclip. El juego está inspirado en un concepto científico llamado agar, que es una sustancia utilizada para cultivar bacterias en las placas de Petri. En el juego, controlas una celda que puede moverse y comer otras células para crecer. El juego tiene dos modos: FFA (Gratis para todos) y Equipos. En el modo FFA, puedes jugar solo o con amigos e intentar convertirte en la celda más grande del mapa. En el modo Equipos, puedes unirte a uno de los tres equipos (rojo, azul o verde) y cooperar con tus compañeros para dominar el mapa.
El modo de juego de Agar.io es simple pero desafiante. Empiezas como una celda pequeña que puede moverse con el ratón o el dedo. Usted puede comer células más pequeñas o pellets que están dispersos alrededor del mapa para crecer más grande. Sin embargo, usted tiene que evitar las células más grandes que pueden comer. También puede dividir su celda en dos pulsando la barra espaciadora o tocando la pantalla. Esto puede ayudarlo a escapar de los depredadores o atrapar presas. Sin embargo, la división también lo hace más vulnerable a ser comido por otras células. También puede expulsar algo de masa de su celda presionando la tecla W o tocando el botón de expulsión. Esto puede ayudarte a alimentar a tus compañeros de equipo o engañar a tus enemigos.
-
Las características de Agar.io
-
-
-
Puede personalizar su celda con diferentes pieles, colores y nombres.
-
Puedes chatear con otros jugadores usando emojis y mensajes de texto.
-
Puede utilizar varios potenciadores y potenciadores para mejorar su juego.
-
Puedes unirte o crear salas privadas para jugar con tus amigos.
-
Puedes participar en misiones y eventos diarios para ganar recompensas.
-
Puedes posicionarte en la clasificación global y competir con otros jugadores.
-
-
¿Qué es Agar.io Apk Mod Money?
-
Agar.io Apk Mod Money es una versión modificada del juego original Agar.io que le da dinero ilimitado y desbloquea todas las pieles y características en el juego. Con este mod, puedes disfrutar jugando Agar.io sin limitaciones ni restricciones. Puede comprar cualquier potenciador o potenciador que desee, personalizar su celda con cualquier piel o color que desee y acceder a todas las habitaciones privadas y eventos en el juego.
-
Los beneficios de Agar.io Apk Mod Money
-
Algunos de los beneficios de usar Agar.io Apk Mod Money son:
-
-
Puede ahorrar tiempo y dinero al no tener que ver anuncios o hacer compras en la aplicación.
-
Usted puede tener más diversión y emoción jugando con recursos y opciones ilimitadas.
-
Puedes tener ventaja sobre otros jugadores usando los mejores potenciadores y potenciadores del juego.
-
Puedes experimentar con diferentes estrategias y tácticas probando diferentes skins y características.
-
-
Los riesgos de Agar.io Apk Mod Money
-
Sin embargo, el uso de Agar.io Apk Mod Money también viene con algunos riesgos que usted debe tener en cuenta. Algunos de estos riesgos son:
-
-
Es posible que te prohíban participar en el juego si los desarrolladores detectan que estás usando una versión modificada.
-
Usted puede obtener virus o malware en su dispositivo si descarga el mod de una fuente no confiable.
-
Puedes perder tu progreso o datos si el mod no es compatible con la última versión del juego.
-
-
-
¿Cómo descargar e instalar Agar.io Apk Mod Money?
-
Si quieres probar Agar.io Apk Mod Money, necesitas descargarlo e instalarlo en tu dispositivo. Estos son los pasos para hacerlo:
-
Los pasos para descargar e instalar Agar.io Apk Mod Money
-
-
Ir a un sitio web confiable que ofrece Agar.io Apk Mod Dinero gratis. Usted puede buscar en Google o utilizar uno de estos enlaces: .
-
Descargue el archivo mod en su dispositivo. Asegúrese de tener suficiente espacio de almacenamiento y una conexión a Internet estable.
-
Habilita la instalación de aplicaciones de fuentes desconocidas en tu dispositivo. Puede hacer esto yendo a Configuración > Seguridad > Fuentes desconocidas y activando.
-
Busque el archivo mod en su dispositivo y toque en él para instalarlo. Siga las instrucciones en la pantalla y espere a que termine la instalación.
-
Inicie el juego y disfrute jugando con dinero y características ilimitadas.
-
-
Los consejos para jugar Agar.io Apk Mod dinero de forma segura y eficaz
-
Para jugar Agar.io Apk Mod dinero sin ningún problema, usted debe seguir estos consejos:
-
-
No utilice el mod en salas públicas o clasificadas, ya que podría ser reportado o prohibido por otros jugadores o moderadores.
-
No abusar del mod mediante el uso de demasiados power-ups o refuerzos, ya que puede ser detectado por el sistema anti-cheat o arruinar el equilibrio del juego.
-
No descargue el mod de ningún sitio web sospechoso o desconocido, ya que podría infectarse con virus o malware que pueden dañar su dispositivo o robar sus datos.
-
No actualice el juego desde la Play Store, ya que podría perder el mod o causar problemas de compatibilidad. En su lugar, espera a que el desarrollador de mods lance una nueva versión del mod que coincida con la última versión del juego.
-
No te olvides de divertirte y disfrutar del juego, ya que ese es el principal propósito de jugar Agar.io.
-
-
Conclusión
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre Agar.io Apk Mod Money:
-
-
-
¿Cuál es la diferencia entre Agar.io Apk Mod Money y Agar.io Hack?
-
Agar.io Apk Mod Money es una versión modificada del juego original que le da dinero ilimitado y desbloquea todas las apariencias y características en el juego. Agar.io Hack es una herramienta o software que le permite manipular o engañar en el juego, como cambiar su tamaño, velocidad, masa o posición.
-
¿Es seguro usar Agar.io Apk Mod Money?
-
Agar.io Apk Mod dinero es seguro de usar si lo descarga desde una fuente de confianza y seguir algunas precauciones. Sin embargo, siempre hay un riesgo de ser prohibido o infectado al usar cualquier aplicación modificada o hackeada, así que úsala a tu discreción.
-
¿Puedo jugar Agar.io Apk Mod Money en línea con otros jugadores?
-
Sí, puedes jugar Agar.io Apk Mod Money en línea con otros jugadores, pero debes evitar jugar en salas públicas o clasificadas, ya que podrías ser reportado o prohibido por otros jugadores o moderadores. Puedes jugar en habitaciones privadas con tus amigos u otros usuarios de mod, pero debes tener cuidado de no abusar del mod ni arruinar la diversión del juego.
-
¿Cómo puedo obtener más pieles y características en Agar.io Apk Mod Money?
-
Usted puede obtener más pieles y características en Agar.io Apk Mod Money mediante el uso del dinero que se obtiene de la mod. Puede comprar cualquier piel o característica que desee en la tienda o en el menú de configuración. También puedes desbloquear algunos skins y características completando misiones o eventos en el juego.
-
¿Cómo puedo actualizar Agar.io Apk Mod Money?
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Belkede Rust.md b/spaces/Benson/text-generation/Examples/Belkede Rust.md
deleted file mode 100644
index 26b2755b9114a2546e2eb91baf0c37800796e900..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Belkede Rust.md
+++ /dev/null
@@ -1,207 +0,0 @@
-
-
Roya Belkede Yukle: Cómo descargar y disfrutar de la canción popular de Roya
-
Si eres un fan de la música pop azerbaiyana, probablemente hayas oído hablar de Roya y su canción Belkede. ¿Pero sabes cómo descargar y disfrutar de esta canción en tu dispositivo? En este artículo, te mostraremos cómo hacerlo en unos pocos pasos fáciles. También te contaremos más sobre Roya y Belkede, y por qué son tan populares entre los amantes de la música. ¡Empecemos!
-
Introducción
-
¿Quién es Roya y qué es Belkede?
-
Roya es una famosa cantante, actriz y modelo azerbaiyana que ha estado activa en la industria de la música desde 1999. Es conocida por su potente voz, su estilo original, sus exitosas actuaciones teatrales y su belleza. A menudo se la llama la Rihanna de Azerbaiyán debido a su parecido y popularidad. Ha publicado varios álbumes y sencillos, tanto en Azerbaiyán como en Turquía, donde actualmente vive y trabaja.
Belkede es una de las canciones más populares de Roya, que fue lanzada en 2014. El título significa "Maybe" en azerbaiyano, y la canción es sobre el anhelo de un amor perdido. La letra está escrita por Leyli Erol Israfilova, y la música está compuesta por Perviz Mahmudov. La canción tiene una melodía pegadiza, un ambiente romántico y una hermosa interpretación vocal de Roya. Ha recibido millones de visitas en YouTube y otras plataformas, y ha sido elogiado por críticos y fans por igual.
-
¿Por qué es tan popular Belkede y cómo se puede descargar?
-
Belkede es popular porque atrae a una amplia gama de oyentes que pueden relacionarse con su tema de amor y nostalgia. También muestra el talento y el carisma de Roya como cantante e intérprete. La canción tiene un atractivo universal que trasciende las barreras del lenguaje y las diferencias culturales. Puede tocar tu corazón y hacerte sentir emocional.
-
-
Cómo descargar belkede desde diferentes plataformas
-
YouTube
-
Pasos para descargar Belkede de YouTube
-
YouTube es una de las plataformas más populares donde puedes ver el video oficial de Belkede y disfrutar de su calidad visual y de audio. Sin embargo, si quieres descargar la canción de YouTube, necesitarás usar una herramienta o aplicación de terceros que pueda convertir videos de YouTube en archivos de audio que puedas guardar en tu dispositivo. Estos son los pasos para descargar Belkede de YouTube usando una herramienta basada en web llamada Y2mate:
-
-
Abra su navegador y vaya al sitio web o aplicación de YouTube.
-
Buscar Belkede por Roya y haga clic en el video que desea descargar.
-
Copiar la URL del vídeo desde la barra de direcciones o el botón de compartir.
-
Abra una nueva pestaña y vaya al sitio web de Y2mate.
-
Pegue la URL del video en el cuadro de búsqueda y haga clic en el botón de inicio.
-
Seleccione el formato y la calidad que desea descargar, como MP3, MP4, M4A, etc.
-
Haga clic en el botón de descarga y espere a que el archivo se convierta y se guarde en su dispositivo.
-
-
Pros y contras de la descarga de YouTube
-
Descargar Belkede de YouTube tiene algunos pros y contras que debes considerar antes de elegir esta opción. Estos son algunos de ellos:
-
-
-
Pros
-
Contras
-
-
-
- Puedes ver el video oficial de Belkede y disfrutar de su calidad visual y de audio.
-
- Es necesario utilizar una herramienta o aplicación de terceros que puede convertir vídeos de YouTube en archivos de audio, que puede no ser seguro o fiable.
-
-
-
- Puedes elegir entre diferentes formatos y calidades que se adapten a tu dispositivo y preferencias.
-
- Puede perder parte de la calidad original y el sonido de la canción al convertirla de vídeo a audio.
-
-
-
- Puedes acceder a una gran variedad de otras canciones y videos de Roya y otros artistas en YouTube.
-
-
-
-
Musixmatch
-
Pasos para descargar Belkede de Musixmatch
-
Musixmatch es otra plataforma popular donde puedes escuchar Belkede by Roya y disfrutar de sus letras y traducciones. Sin embargo, si desea descargar la canción de Musixmatch, tendrá que tener una suscripción premium que le permite descargar canciones sin conexión. Estos son los pasos para descargar Belkede de Musixmatch usando su aplicación:
-
-
Abra su navegador y vaya al sitio web o aplicación Musixmatch.
-
Regístrese para una suscripción premium o inicie sesión con su cuenta existente.
-
Buscar Belkede por Roya y toque en la canción que desea descargar.
-
Toque en el icono de tres puntos en la esquina superior derecha de la pantalla y seleccione Descargar sin conexión.
-
Espere a que la canción se descargue y se guarde en su dispositivo.
-
-
Pros y contras de la descarga de Musixmatch
-
Descargar belkede de Musixmatch tiene algunos pros y contras que debes considerar antes de elegir esta opción. Estos son algunos de ellos:
-
-
-
Pros
-
Contras
-
-
-
- Puedes escuchar Belkede de Roya y disfrutar de sus letras y traducciones en diferentes idiomas.
-
- Necesitas tener una suscripción premium que cueste dinero y puede que no esté disponible en tu región o moneda.
-
-
-
- Puedes descargar canciones sin conexión y escucharlas sin conexión a Internet o anuncios.
-
- Es posible que no pueda descargar canciones en alta calidad o en su formato preferido.
-
-
-
- Puedes acceder a una gran biblioteca de canciones y letras de Roya y otros artistas en Musixmatch.
-
- Es posible que no pueda compartir o transferir canciones descargadas a otros dispositivos o plataformas.
-
-
-
Otras plataformas
-
Algunos ejemplos de otras plataformas que ofrecen descarga Belkede
-
-
-
Disponibilidad y accesibilidad de la plataforma en su región o país.
-
La calidad y cantidad de canciones y artistas que puedes encontrar en la plataforma.
-
El costo y los métodos de pago de la suscripción o servicio de la plataforma.
-
La facilidad y conveniencia de descargar canciones fuera de línea o en línea desde la plataforma.
La compatibilidad y seguridad de la plataforma con su dispositivo y sistema.
-
Las características y funciones de la plataforma que mejoran su experiencia de escucha y descarga.
-
-
Algunos ejemplos de otras plataformas que ofrecen descarga de Belkede son:
-
-
-
Plataforma
-
Características
-
Precio
-
-
-
Spotify
-
- Un servicio líder de streaming de música que ofrece millones de canciones y podcasts.
-
- Gratis con anuncios o $9.99/mes para premium sin anuncios y con descarga offline.
-
-
-
Música de Apple
-
- Un servicio de streaming de música que se integra con iTunes y dispositivos de Apple.
-
- $9.99/mes para el individuo o $14.99/mes para el plan de la familia con descarga fuera de línea.
-
-
-
Música de Amazon
-
- Un servicio de streaming de música que ofrece acceso al catálogo de canciones y álbumes de Amazon.
-
- Gratis con membresía Prime o $9.99/mes para ilimitado sin anuncios y con descarga offline.
-
-
-
Deezer
-
- Un servicio de streaming de música que ofrece recomendaciones y listas de reproducción personalizadas.
-
- Gratis con anuncios o $9.99/mes para premium sin anuncios y con descarga offline.
-
-
-
Fizy
-
- Un servicio de streaming de música que ofrece canciones y videos turcos e internacionales.
-
- Gratis con anuncios o 9.99 TL/mes para premium sin anuncios y con descarga offline.
-
-
-
Muud
-
- Un servicio de streaming de música que ofrece canciones y podcasts turcos e internacionales.
-
-
-
-
Consejos para elegir la mejor plataforma para sus necesidades
-
Para elegir la mejor plataforma para sus necesidades, debe considerar los siguientes consejos:
-
-
-
Hacer algunas investigaciones sobre las plataformas que ofrecen Belkede descargar y comparar sus características, precios, comentarios, calificaciones, etc.
-
Pruebe las versiones gratuitas de las plataformas que le interesan y vea cómo funcionan para usted.
-
Lea los términos y condiciones de las plataformas que desea utilizar y asegúrese de estar de acuerdo con ellos.
-
Compruebe la disponibilidad y calidad de Belkede en las plataformas que desea utilizar y asegúrese de que cumplan con sus expectativas.
-
Elige la plataforma que más se adapte a tu presupuesto, preferencias, necesidades y dispositivo.
-
-
Cómo disfrutar de Belkede después de descargarlo
-
Cómo escuchar Belkede offline
-
Beneficios de escuchar Belkede offline
-
Escuchar Belkede sin conexión tiene muchos beneficios, como:
-
-
Puede escucharlo en cualquier momento y en cualquier lugar sin conexión a Internet o uso de datos.
-
Puede evitar interrupciones de anuncios o problemas de almacenamiento en búfer que pueden afectar su experiencia auditiva en línea.
-
Puede ahorrar batería y espacio de almacenamiento en su dispositivo al no transmitir o descargar canciones repetidamente en línea.
-
Puede tener más control sobre su lista de reproducción y opciones de reproducción al no depender de plataformas en línea.
-
Puede disfrutar de la canción en alta calidad y sonido original por no comprimir o convertir en línea.
-
-
Consejos para mejorar tu experiencia auditiva offline
-
Para mejorar tu experiencia auditiva offline, debes considerar los siguientes consejos:
-
Utilice un dispositivo de buena calidad y auriculares o altavoces para escuchar Belkede sin conexión.
-
Ajuste los ajustes de volumen y sonido a su gusto y nivel de comodidad.
-
Crea una lista de reproducción de tus canciones favoritas y añádele Belkede.
-
-
Descubre nuevos aspectos y significados de la canción escuchándola cuidadosa y atentamente.
-
-
Cómo cantar junto con Belkede
-
Beneficios de cantar junto con Belkede
-
Cantar junto con Belkede tiene muchos beneficios, como:
-
-
Puedes expresar tus emociones y sentimientos a través de la canción y conectar con su mensaje.
-
Puedes mejorar tus habilidades vocales y tu confianza practicando y tocando la canción.
-
Puedes aprender una nueva lengua y cultura cantando en azerí y entendiendo sus letras y traducciones.
-
Puedes divertirte y disfrutar cantando la canción con pasión y entusiasmo.
-
Puedes crear vínculos con otros que aman la canción y comparten tus gustos e intereses musicales.
-
-
Consejos para aprender la letra y pronunciación de Belkede
-
Para aprender la letra y la pronunciación de Belkede, debes considerar los siguientes consejos:
-
-
Escuchar la canción repetidamente y tratar de memorizar sus palabras y melodía.
-
Lee las letras y traducciones de la canción online o offline y trata de entender su significado y contexto.
-
Mira el video de la canción y observa cómo Roya canta y pronuncia las palabras.
-
Utilice una aplicación de karaoke o un sitio web que ofrece letras y música de Belkede, como Musixmatch, Smule, SingSnap, etc.
-
Canta la canción en voz alta o en tu cabeza, con o sin música, solo o con otros, hasta que la domines.
-
-
Cómo compartir Belkede con otros
-
Beneficios de compartir Belkede con otros
-
Compartir Belkede con otros tiene muchos beneficios, como:
-
-
Puedes difundir el amor y el aprecio por Roya y su música a más gente.
-
Puedes apoyar la carrera y el éxito de Roya aumentando su base de fans y popularidad.
-
Usted puede hacer nuevos amigos y conexiones que comparten su pasión por Belkede y música pop de Azerbaiyán.
-
-
Puedes expresarte y expresar tu personalidad compartiendo tu canción favorita con otros.
-
-
Consejos para compartir Belkede en las redes sociales y otras plataformas
-
Para compartir Belkede en las redes sociales y otras plataformas, debe considerar los siguientes consejos:
-
-
Sigue las cuentas oficiales de Roya en las redes sociales, como Instagram, Facebook, Twitter, etc., y como, comentario, compartir, o volver a publicar sus mensajes sobre Belkede u otras canciones.
-
Crea tus propios posts sobre Belkede en tus cuentas de redes sociales, como fotos, videos, historias, carretes, tweets, etc., y etiqueta a Roya o usa hashtags relacionados con ella o la canción.
-
Envía Belkede como un mensaje o un regalo a tus amigos o familiares en las redes sociales u otras plataformas, como WhatsApp, Telegram, Messenger, etc., y diles por qué te gusta la canción o por qué crees que les gustará también.
-
Únete a comunidades en línea o grupos dedicados a la música pop Roya o azerbaiyana en redes sociales u otras plataformas, como Reddit, Quora, Discord, etc., y participa en discusiones o actividades relacionadas con Belkede u otras canciones.
-
Recomendar Belkede a otras personas que buscan nuevas canciones o artistas para escuchar en las redes sociales u otras plataformas, como YouTube, Musixmatch, Spotify, Apple Music, Amazon Music, Deezer, Fizy, Muud, etc., y explicar lo que hace que la canción especial o atractiva.
-
-
Conclusión
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre Belkede y Roya:
-
-
¿Dónde puedo encontrar las letras y traducciones de Belkede?
-
Puedes encontrar las letras y traducciones de Belkede en Musixmatch, LyricsTranslate, Genius u otros sitios web que ofrecen letras y traducciones de canciones.
-
¿Cuál es el significado de la palabra Belkede?
-
Belkede significa "Quizás" en azerí, y es el título de la canción de Roya. La palabra se repite varias veces en el coro de la canción, expresando la incertidumbre y la esperanza de la cantante por su amor perdido.
-
¿Cómo puedo ver las actuaciones en vivo de Roya en Belkede?
-
Puedes ver las presentaciones en vivo de Roya de Belkede en YouTube u otras plataformas que ofrecen videos de conciertos y espectáculos en vivo. También puedes seguir las cuentas de redes sociales de Roya para obtener actualizaciones sobre sus próximos eventos y giras.
-
¿Cuáles son algunas otras canciones de Roya que puedo escuchar?
-
Algunas otras canciones de Roya que puedes escuchar son Ayxan, Gel Danis, Seni Seviyorum, Yandim, Ay Ureyim, etc. Puedes encontrarlas en YouTube, Musixmatch, Spotify, Apple Music, Amazon Music, Deezer, Fizy, Muud, u otras plataformas que ofrecen streaming de música y descarga.
-
¿Cómo puedo contactar a Roya o enviar sus comentarios?
-
Puedes ponerte en contacto con Roya o enviarle comentarios a través de su sitio web oficial o sus cuentas de redes sociales, como Instagram, Facebook, Twitter, etc. También puedes dejar comentarios en sus publicaciones o videos, o enviarle mensajes o correos electrónicos.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/backbone.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/backbone.py
deleted file mode 100644
index 66dee4a6565e6c45ed17d0880fcc37eac8f75c3a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/backbone/backbone.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from abc import ABCMeta, abstractmethod
-import torch.nn as nn
-
-from detectron2.layers import ShapeSpec
-
-__all__ = ["Backbone"]
-
-
-class Backbone(nn.Module, metaclass=ABCMeta):
- """
- Abstract base class for network backbones.
- """
-
- def __init__(self):
- """
- The `__init__` method of any subclass can specify its own set of arguments.
- """
- super().__init__()
-
- @abstractmethod
- def forward(self):
- """
- Subclasses must override this method, but adhere to the same return type.
-
- Returns:
- dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor
- """
- pass
-
- @property
- def size_divisibility(self):
- """
- Some backbones require the input height and width to be divisible by a
- specific integer. This is typically true for encoder / decoder type networks
- with lateral connection (e.g., FPN) for which feature maps need to match
- dimension in the "bottom up" and "top down" paths. Set to 0 if no specific
- input size divisibility is required.
- """
- return 0
-
- def output_shape(self):
- """
- Returns:
- dict[str->ShapeSpec]
- """
- # this is a backward-compatible default
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
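The abstract class above fixes the backbone contract in detectron2: `forward` returns a dict of named feature maps, and `output_shape` describes their channels and strides so downstream components (FPN, ROI heads) can be wired up without running the model. A hypothetical minimal subclass, just to make the contract concrete:

```python
# Hypothetical minimal Backbone implementation illustrating the contract above.
import torch
import torch.nn as nn

from detectron2.modeling import Backbone


class TinyBackbone(Backbone):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, kernel_size=3, stride=4, padding=1)
        self._out_features = ["res2"]
        self._out_feature_channels = {"res2": 64}
        self._out_feature_strides = {"res2": 4}

    def forward(self, x):
        return {"res2": self.stem(x)}


backbone = TinyBackbone()
feats = backbone(torch.zeros(1, 3, 64, 64))
print({k: tuple(v.shape) for k, v in feats.items()})     # {'res2': (1, 64, 16, 16)}
spec = backbone.output_shape()["res2"]
print(spec.channels, spec.stride)                        # 64 4
```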
diff --git a/spaces/CVPR/GFPGAN-example/experiments/pretrained_models/README.md b/spaces/CVPR/GFPGAN-example/experiments/pretrained_models/README.md
deleted file mode 100644
index 3401a5ca9b393e0033f58c5af8905961565826d9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/GFPGAN-example/experiments/pretrained_models/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Pre-trained Models and Other Data
-
-Download pre-trained models and other data. Put them in this folder.
-
-1. [Pretrained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth)
-1. [Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/FFHQ_eye_mouth_landmarks_512.pth)
-1. [A simple ArcFace model: arcface_resnet18.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/arcface_resnet18.pth)
diff --git a/spaces/CVPR/LIVE/__init__.py b/spaces/CVPR/LIVE/__init__.py
deleted file mode 100644
index b871b92efc87bfec551a82ef42a7963f168b2b1b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-__author__ = "Xu Ma"
-__email__ = "ma.xu1@northeastern.edu"
diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/Makefile b/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/Makefile
deleted file mode 100644
index 7165d93320c0d45af4e6aadc7c7f96af22c89d97..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/Makefile
+++ /dev/null
@@ -1,125 +0,0 @@
-#/******************************************************************************
-# * Copyright (c) 2011, Duane Merrill. All rights reserved.
-# * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved.
-# *
-# * Redistribution and use in source and binary forms, with or without
-# * modification, are permitted provided that the following conditions are met:
-# * * Redistributions of source code must retain the above copyright
-# * notice, this list of conditions and the following disclaimer.
-# * * Redistributions in binary form must reproduce the above copyright
-# * notice, this list of conditions and the following disclaimer in the
-# * documentation and/or other materials provided with the distribution.
-# * * Neither the name of the NVIDIA CORPORATION nor the
-# * names of its contributors may be used to endorse or promote products
-# * derived from this software without specific prior written permission.
-# *
-# * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-# * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-# * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
-# * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-# * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-# * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-# * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# *
-#******************************************************************************/
-
-#-------------------------------------------------------------------------------
-#
-# Makefile usage
-#
-# make [sm=<XXX,...>] [cdp=<0|1>] [force32=<0|1>] [abi=<0|1>] [open64=<0|1>] [verbose=<0|1>] [keep=<0|1>] [quicktest=<0|1>]
-#
-#-------------------------------------------------------------------------------
-
-include ../common.mk
-
-#-------------------------------------------------------------------------------
-# Commandline Options
-#-------------------------------------------------------------------------------
-
-# [mkl=<0|1>] compile against Intel MKL
-ifeq ($(mkl), 1)
- DEFINES += -DCUB_MKL
-
-ifeq (WIN_NT, $(findstring WIN_NT, $(OSUPPER)))
- LIBS += mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib
- NVCCFLAGS += -Xcompiler /openmp
-else
- LIBS += -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm
- NVCCFLAGS += -Xcompiler -fopenmp
-
-endif
-
-endif
-
-
-#-------------------------------------------------------------------------------
-# Compiler and compilation platform
-#-------------------------------------------------------------------------------
-
-# Includes
-INC += -I$(CUB_DIR) -I$(CUB_DIR)test
-
-# detect OS
-OSUPPER = $(shell uname -s 2>/dev/null | tr [:lower:] [:upper:])
-
-#-------------------------------------------------------------------------------
-# Dependency Lists
-#-------------------------------------------------------------------------------
-
-exp_rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
-
-EXP_DEPS = $(call rwildcard, ./,*.cuh) \
- $(call rwildcard, ./,*.h)
-
-DEPS = $(CUB_DEPS) \
- $(EXP_DEPS) \
- $(CUB_DIR)test/Makefile \
- $(CUB_DIR)test/test_util.h \
- $(CUB_DIR)test/mersenne.h \
-
-
-
-#-------------------------------------------------------------------------------
-# make default
-#-------------------------------------------------------------------------------
-
-default:
-
-
-#-------------------------------------------------------------------------------
-# make clean
-#-------------------------------------------------------------------------------
-
-clean :
- rm -f bin/*$(CPU_ARCH_SUFFIX)*
- rm -f *.i* *.cubin *.cu.c *.cudafe* *.fatbin.c *.ptx *.hash *.cu.cpp *.o
-
-
-
-#-------------------------------------------------------------------------------
-# make histogram_compare
-#-------------------------------------------------------------------------------
-
-histogram_compare: bin/histogram_compare_$(BIN_SUFFIX)
-
-bin/histogram_compare_$(BIN_SUFFIX) : histogram_compare.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/histogram_compare_$(BIN_SUFFIX) histogram_compare.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -O3
-
-
-
-#-------------------------------------------------------------------------------
-# make spmv_compare
-#-------------------------------------------------------------------------------
-
-spmv_compare: bin/spmv_compare_$(BIN_SUFFIX)
-
-bin/spmv_compare_$(BIN_SUFFIX) : spmv_compare.cu $(DEPS)
- mkdir -p bin
- $(NVCC) $(DEFINES) $(SM_TARGETS) -o bin/spmv_compare_$(BIN_SUFFIX) spmv_compare.cu $(NVCCFLAGS) $(CPU_ARCH) $(INC) $(LIBS) -lcusparse $(MKL_LIBS) -O3
-
-
diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/wrappers.py b/spaces/CVPR/regionclip-demo/detectron2/layers/wrappers.py
deleted file mode 100644
index 5bb4e7c1a1334c5501a6c492ddfa836dadf0beab..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/layers/wrappers.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-Wrappers around on some nn functions, mainly to support empty tensors.
-
-Ideally, add support directly in PyTorch to empty tensors in those functions.
-
-These can be removed once https://github.com/pytorch/pytorch/issues/12013
-is implemented
-"""
-
-from typing import List
-import torch
-from torch.nn import functional as F
-
-
-def cat(tensors: List[torch.Tensor], dim: int = 0):
- """
- Efficient version of torch.cat that avoids a copy if there is only a single element in a list
- """
- assert isinstance(tensors, (list, tuple))
- if len(tensors) == 1:
- return tensors[0]
- return torch.cat(tensors, dim)
-
-
-def cross_entropy(input, target, *, reduction="mean", **kwargs):
- """
- Same as `torch.nn.functional.cross_entropy`, but returns 0 (instead of nan)
- for empty inputs.
- """
- if target.numel() == 0 and reduction == "mean":
- return input.sum() * 0.0 # connect the gradient
- return F.cross_entropy(input, target, **kwargs)
-
-
-class _NewEmptyTensorOp(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x, new_shape):
- ctx.shape = x.shape
- return x.new_empty(new_shape)
-
- @staticmethod
- def backward(ctx, grad):
- shape = ctx.shape
- return _NewEmptyTensorOp.apply(grad, shape), None
-
-
-class Conv2d(torch.nn.Conv2d):
- """
- A wrapper around :class:`torch.nn.Conv2d` to support empty inputs and more features.
- """
-
- def __init__(self, *args, **kwargs):
- """
- Extra keyword arguments supported in addition to those in `torch.nn.Conv2d`:
-
- Args:
- norm (nn.Module, optional): a normalization layer
- activation (callable(Tensor) -> Tensor): a callable activation function
-
- It assumes that norm layer is used before activation.
- """
- norm = kwargs.pop("norm", None)
- activation = kwargs.pop("activation", None)
- super().__init__(*args, **kwargs)
-
- self.norm = norm
- self.activation = activation
-
- def forward(self, x):
- # torchscript does not support SyncBatchNorm yet
- # https://github.com/pytorch/pytorch/issues/40507
- # and we skip these codes in torchscript since:
- # 1. currently we only support torchscript in evaluation mode
- # 2. features needed by exporting module to torchscript are added in PyTorch 1.6 or
- # later version, `Conv2d` in these PyTorch versions has already supported empty inputs.
- if not torch.jit.is_scripting():
- if x.numel() == 0 and self.training:
- # https://github.com/pytorch/pytorch/issues/12013
- assert not isinstance(
- self.norm, torch.nn.SyncBatchNorm
- ), "SyncBatchNorm does not support empty inputs!"
-
- x = F.conv2d(
- x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups
- )
- if self.norm is not None:
- x = self.norm(x)
- if self.activation is not None:
- x = self.activation(x)
- return x
-
-
-ConvTranspose2d = torch.nn.ConvTranspose2d
-BatchNorm2d = torch.nn.BatchNorm2d
-interpolate = F.interpolate
-Linear = torch.nn.Linear
-
-
-def nonzero_tuple(x):
- """
- A 'as_tuple=True' version of torch.nonzero to support torchscript.
- because of https://github.com/pytorch/pytorch/issues/38718
- """
- if torch.jit.is_scripting():
- if x.dim() == 0:
- return x.unsqueeze(0).nonzero().unbind(1)
- return x.nonzero().unbind(1)
- else:
- return x.nonzero(as_tuple=True)
diff --git a/spaces/CofAI/chat.b4/client/css/field.css b/spaces/CofAI/chat.b4/client/css/field.css
deleted file mode 100644
index 914425a75d9e62e6428bdb8f5de2c66c91f10d33..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/client/css/field.css
+++ /dev/null
@@ -1,11 +0,0 @@
-.field {
- display: flex;
- align-items: center;
- padding: 4px;
-}
-
-@media screen and (max-width: 990px) {
- .field {
- flex-wrap: nowrap;
- }
-}
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/detector/detectors.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/detector/detectors.py
deleted file mode 100644
index af2100cac15830cd60be5911aa15d0d7c9309a17..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/detector/detectors.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-from .generalized_rcnn import GeneralizedRCNN
-
-
-_DETECTION_META_ARCHITECTURES = {"GeneralizedRCNN": GeneralizedRCNN}
-
-
-def build_detection_model(cfg):
- meta_arch = _DETECTION_META_ARCHITECTURES[cfg.MODEL.META_ARCHITECTURE]
- return meta_arch(cfg)
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/optimize/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/optimize/__main__.py
deleted file mode 100644
index b0ae9081ca8dac338bcf085c71adad87805e3bad..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/optimize/__main__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import sys
-from fontTools.otlLib.optimize import main
-
-
-if __name__ == "__main__":
- sys.exit(main())
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otBase.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otBase.py
deleted file mode 100644
index 9c80400e9420577f0d9d6f747e15b83e49f68e49..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otBase.py
+++ /dev/null
@@ -1,1458 +0,0 @@
-from fontTools.config import OPTIONS
-from fontTools.misc.textTools import Tag, bytesjoin
-from .DefaultTable import DefaultTable
-from enum import IntEnum
-import sys
-import array
-import struct
-import logging
-from functools import lru_cache
-from typing import Iterator, NamedTuple, Optional, Tuple
-
-log = logging.getLogger(__name__)
-
-have_uharfbuzz = False
-try:
- import uharfbuzz as hb
-
- # repack method added in uharfbuzz >= 0.23; if uharfbuzz *can* be
- # imported but repack method is missing, behave as if uharfbuzz
- # is not available (fallback to the slower Python implementation)
- have_uharfbuzz = callable(getattr(hb, "repack", None))
-except ImportError:
- pass
-
-USE_HARFBUZZ_REPACKER = OPTIONS[f"{__name__}:USE_HARFBUZZ_REPACKER"]
-
-
-class OverflowErrorRecord(object):
- def __init__(self, overflowTuple):
- self.tableType = overflowTuple[0]
- self.LookupListIndex = overflowTuple[1]
- self.SubTableIndex = overflowTuple[2]
- self.itemName = overflowTuple[3]
- self.itemIndex = overflowTuple[4]
-
- def __repr__(self):
- return str(
- (
- self.tableType,
- "LookupIndex:",
- self.LookupListIndex,
- "SubTableIndex:",
- self.SubTableIndex,
- "ItemName:",
- self.itemName,
- "ItemIndex:",
- self.itemIndex,
- )
- )
-
-
-class OTLOffsetOverflowError(Exception):
- def __init__(self, overflowErrorRecord):
- self.value = overflowErrorRecord
-
- def __str__(self):
- return repr(self.value)
-
-
-class RepackerState(IntEnum):
- # Repacking control flow is implemented using a state machine. The state machine table:
- #
- # State | Packing Success | Packing Failed | Exception Raised |
- # ------------+-----------------+----------------+------------------+
- # PURE_FT | Return result | PURE_FT | Return failure |
- # HB_FT | Return result | HB_FT | FT_FALLBACK |
- # FT_FALLBACK | HB_FT | FT_FALLBACK | Return failure |
-
- # Pack only with fontTools, don't allow sharing between extensions.
- PURE_FT = 1
-
- # Attempt to pack with harfbuzz (allowing sharing between extensions)
- # use fontTools to attempt overflow resolution.
- HB_FT = 2
-
- # Fallback if HB/FT packing gets stuck. Pack only with fontTools, don't allow sharing between
- # extensions.
- FT_FALLBACK = 3
-
-
-class BaseTTXConverter(DefaultTable):
-
- """Generic base class for TTX table converters. It functions as an
- adapter between the TTX (ttLib actually) table model and the model
- we use for OpenType tables, which is necessarily subtly different.
- """
-
- def decompile(self, data, font):
- """Create an object from the binary data. Called automatically on access."""
- from . import otTables
-
- reader = OTTableReader(data, tableTag=self.tableTag)
- tableClass = getattr(otTables, self.tableTag)
- self.table = tableClass()
- self.table.decompile(reader, font)
-
- def compile(self, font):
- """Compiles the table into binary. Called automatically on save."""
-
- # General outline:
- # Create a top-level OTTableWriter for the GPOS/GSUB table.
- # Call the compile method for the table
- # for each 'converter' record in the table converter list
- # call converter's write method for each item in the value.
- # - For simple items, the write method adds a string to the
- # writer's self.items list.
- # - For Struct/Table/Subtable items, it first adds a new writer to the
- # writer's self.items, then calls the item's compile method.
- # This creates a tree of writers, rooted at the GSUB/GPOS writer, with
- # each writer representing a table, and the writer.items list containing
- # the child data strings and writers.
- # call the getAllData method
- # call _doneWriting, which removes duplicates
- # call _gatherTables. This traverses the tables, adding unique occurrences to a flat list of tables
- # Traverse the flat list of tables, calling getDataLength on each to update their positions
- # Traverse the flat list of tables again, calling getData on each to get the data in the table, now that
- # positions and offsets are known.
-
- # If a lookup subtable overflows an offset, we have to start all over.
- overflowRecord = None
- # this is a 3-state option: default (None) means automatically use hb.repack or
- # silently fall back if it fails; True means use it and raise an error if it is not possible
- # or it errors out; False means don't use it, even if you can.
- use_hb_repack = font.cfg[USE_HARFBUZZ_REPACKER]
- if self.tableTag in ("GSUB", "GPOS"):
- if use_hb_repack is False:
- log.debug(
- "hb.repack disabled, compiling '%s' with pure-python serializer",
- self.tableTag,
- )
- elif not have_uharfbuzz:
- if use_hb_repack is True:
- raise ImportError("No module named 'uharfbuzz'")
- else:
- assert use_hb_repack is None
- log.debug(
- "uharfbuzz not found, compiling '%s' with pure-python serializer",
- self.tableTag,
- )
-
- if (
- use_hb_repack in (None, True)
- and have_uharfbuzz
- and self.tableTag in ("GSUB", "GPOS")
- ):
- state = RepackerState.HB_FT
- else:
- state = RepackerState.PURE_FT
-
- hb_first_error_logged = False
- lastOverflowRecord = None
- while True:
- try:
- writer = OTTableWriter(tableTag=self.tableTag)
- self.table.compile(writer, font)
- if state == RepackerState.HB_FT:
- return self.tryPackingHarfbuzz(writer, hb_first_error_logged)
- elif state == RepackerState.PURE_FT:
- return self.tryPackingFontTools(writer)
- elif state == RepackerState.FT_FALLBACK:
- # Run packing with FontTools only, but don't return the result as it will
- # not be optimally packed. Once a successful packing has been found, state is
- # changed back to harfbuzz packing to produce the final, optimal packing.
- self.tryPackingFontTools(writer)
- log.debug(
- "Re-enabling sharing between extensions and switching back to "
- "harfbuzz+fontTools packing."
- )
- state = RepackerState.HB_FT
-
- except OTLOffsetOverflowError as e:
- hb_first_error_logged = True
- ok = self.tryResolveOverflow(font, e, lastOverflowRecord)
- lastOverflowRecord = e.value
-
- if ok:
- continue
-
- if state is RepackerState.HB_FT:
- log.debug(
- "Harfbuzz packing out of resolutions, disabling sharing between extensions and "
- "switching to fontTools only packing."
- )
- state = RepackerState.FT_FALLBACK
- else:
- raise
-
- def tryPackingHarfbuzz(self, writer, hb_first_error_logged):
- try:
- log.debug("serializing '%s' with hb.repack", self.tableTag)
- return writer.getAllDataUsingHarfbuzz(self.tableTag)
- except (ValueError, MemoryError, hb.RepackerError) as e:
- # Only log hb repacker errors the first time they occur in
- # the offset-overflow resolution loop; repeated ones are just noise.
- # Maybe we can revisit this if/when uharfbuzz actually gives
- # us more info as to why hb.repack failed...
- if not hb_first_error_logged:
- error_msg = f"{type(e).__name__}"
- if str(e) != "":
- error_msg += f": {e}"
- log.warning(
- "hb.repack failed to serialize '%s', attempting fonttools resolutions "
- "; the error message was: %s",
- self.tableTag,
- error_msg,
- )
- hb_first_error_logged = True
- return writer.getAllData(remove_duplicate=False)
-
- def tryPackingFontTools(self, writer):
- return writer.getAllData()
-
- def tryResolveOverflow(self, font, e, lastOverflowRecord):
- ok = 0
- if lastOverflowRecord == e.value:
- # Oh well...
- return ok
-
- overflowRecord = e.value
- log.info("Attempting to fix OTLOffsetOverflowError %s", e)
-
- if overflowRecord.itemName is None:
- from .otTables import fixLookupOverFlows
-
- ok = fixLookupOverFlows(font, overflowRecord)
- else:
- from .otTables import fixSubTableOverFlows
-
- ok = fixSubTableOverFlows(font, overflowRecord)
-
- if ok:
- return ok
-
- # Try upgrading lookup to Extension and hope
- # that cross-lookup sharing not happening would
- # fix overflow...
- from .otTables import fixLookupOverFlows
-
- return fixLookupOverFlows(font, overflowRecord)
-
- def toXML(self, writer, font):
- self.table.toXML2(writer, font)
-
- def fromXML(self, name, attrs, content, font):
- from . import otTables
-
- if not hasattr(self, "table"):
- tableClass = getattr(otTables, self.tableTag)
- self.table = tableClass()
- self.table.fromXML(name, attrs, content, font)
- self.table.populateDefaults()
-
- def ensureDecompiled(self, recurse=True):
- self.table.ensureDecompiled(recurse=recurse)
-
-
-# https://github.com/fonttools/fonttools/pull/2285#issuecomment-834652928
-assert len(struct.pack("i", 0)) == 4
-assert array.array("i").itemsize == 4, "Oops, file a bug against fonttools."
-
-
-class OTTableReader(object):
-
- """Helper class to retrieve data from an OpenType table."""
-
- __slots__ = ("data", "offset", "pos", "localState", "tableTag")
-
- def __init__(self, data, localState=None, offset=0, tableTag=None):
- self.data = data
- self.offset = offset
- self.pos = offset
- self.localState = localState
- self.tableTag = tableTag
-
- def advance(self, count):
- self.pos += count
-
- def seek(self, pos):
- self.pos = pos
-
- def copy(self):
- other = self.__class__(self.data, self.localState, self.offset, self.tableTag)
- other.pos = self.pos
- return other
-
- def getSubReader(self, offset):
- offset = self.offset + offset
- return self.__class__(self.data, self.localState, offset, self.tableTag)
-
- def readValue(self, typecode, staticSize):
- pos = self.pos
- newpos = pos + staticSize
- (value,) = struct.unpack(f">{typecode}", self.data[pos:newpos])
- self.pos = newpos
- return value
-
- def readArray(self, typecode, staticSize, count):
- pos = self.pos
- newpos = pos + count * staticSize
- value = array.array(typecode, self.data[pos:newpos])
- if sys.byteorder != "big":
- value.byteswap()
- self.pos = newpos
- return value.tolist()
-
- def readInt8(self):
- return self.readValue("b", staticSize=1)
-
- def readInt8Array(self, count):
- return self.readArray("b", staticSize=1, count=count)
-
- def readShort(self):
- return self.readValue("h", staticSize=2)
-
- def readShortArray(self, count):
- return self.readArray("h", staticSize=2, count=count)
-
- def readLong(self):
- return self.readValue("i", staticSize=4)
-
- def readLongArray(self, count):
- return self.readArray("i", staticSize=4, count=count)
-
- def readUInt8(self):
- return self.readValue("B", staticSize=1)
-
- def readUInt8Array(self, count):
- return self.readArray("B", staticSize=1, count=count)
-
- def readUShort(self):
- return self.readValue("H", staticSize=2)
-
- def readUShortArray(self, count):
- return self.readArray("H", staticSize=2, count=count)
-
- def readULong(self):
- return self.readValue("I", staticSize=4)
-
- def readULongArray(self, count):
- return self.readArray("I", staticSize=4, count=count)
-
- def readUInt24(self):
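- # Read a 24-bit unsigned value by prepending a zero byte and unpacking the
- # result as a big-endian 32-bit integer.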
- pos = self.pos
- newpos = pos + 3
- (value,) = struct.unpack(">l", b"\0" + self.data[pos:newpos])
- self.pos = newpos
- return value
-
- def readUInt24Array(self, count):
- return [self.readUInt24() for _ in range(count)]
-
- def readTag(self):
- pos = self.pos
- newpos = pos + 4
- value = Tag(self.data[pos:newpos])
- assert len(value) == 4, value
- self.pos = newpos
- return value
-
- def readData(self, count):
- pos = self.pos
- newpos = pos + count
- value = self.data[pos:newpos]
- self.pos = newpos
- return value
-
- def __setitem__(self, name, value):
- state = self.localState.copy() if self.localState else dict()
- state[name] = value
- self.localState = state
-
- def __getitem__(self, name):
- return self.localState and self.localState[name]
-
- def __contains__(self, name):
- return self.localState and name in self.localState
-
-
-class OTTableWriter(object):
-
- """Helper class to gather and assemble data for OpenType tables."""
-
- def __init__(self, localState=None, tableTag=None, offsetSize=2):
- self.items = []
- self.pos = None
- self.localState = localState
- self.tableTag = tableTag
- self.offsetSize = offsetSize
- self.parent = None
-
- # DEPRECATED: 'longOffset' is kept as a property for backward compat with old code.
- # You should use 'offsetSize' instead (2, 3 or 4 bytes).
- @property
- def longOffset(self):
- return self.offsetSize == 4
-
- @longOffset.setter
- def longOffset(self, value):
- self.offsetSize = 4 if value else 2
-
- def __setitem__(self, name, value):
- state = self.localState.copy() if self.localState else dict()
- state[name] = value
- self.localState = state
-
- def __getitem__(self, name):
- return self.localState[name]
-
- def __delitem__(self, name):
- del self.localState[name]
-
- # assembler interface
-
- def getDataLength(self):
- """Return the length of this table in bytes, without subtables."""
- l = 0
- for item in self.items:
- if hasattr(item, "getCountData"):
- l += item.size
- elif hasattr(item, "getData"):
- l += item.offsetSize
- else:
- l = l + len(item)
- return l
-
- def getData(self):
- """Assemble the data for this writer/table, without subtables."""
- items = list(self.items) # make a shallow copy
- pos = self.pos
- numItems = len(items)
- for i in range(numItems):
- item = items[i]
-
- if hasattr(item, "getData"):
- if item.offsetSize == 4:
- items[i] = packULong(item.pos - pos)
- elif item.offsetSize == 2:
- try:
- items[i] = packUShort(item.pos - pos)
- except struct.error:
- # provide data to fix overflow problem.
- overflowErrorRecord = self.getOverflowErrorRecord(item)
-
- raise OTLOffsetOverflowError(overflowErrorRecord)
- elif item.offsetSize == 3:
- items[i] = packUInt24(item.pos - pos)
- else:
- raise ValueError(item.offsetSize)
-
- return bytesjoin(items)
-
- def getDataForHarfbuzz(self):
- """Assemble the data for this writer/table with all offset field set to 0"""
- items = list(self.items)
- packFuncs = {2: packUShort, 3: packUInt24, 4: packULong}
- for i, item in enumerate(items):
- if hasattr(item, "getData"):
- # The offset value is not needed by the harfbuzz repacker, so set it to 0 here to avoid overflow
- if item.offsetSize in packFuncs:
- items[i] = packFuncs[item.offsetSize](0)
- else:
- raise ValueError(item.offsetSize)
-
- return bytesjoin(items)
-
- def __hash__(self):
- # only works after self._doneWriting() has been called
- return hash(self.items)
-
- def __ne__(self, other):
- result = self.__eq__(other)
- return result if result is NotImplemented else not result
-
- def __eq__(self, other):
- if type(self) != type(other):
- return NotImplemented
- return self.offsetSize == other.offsetSize and self.items == other.items
-
- def _doneWriting(self, internedTables, shareExtension=False):
- # Convert CountData references to data string items
- # collapse duplicate table references to a unique entry
- # "tables" are OTTableWriter objects.
-
- # For Extension Lookup types, we can
- # eliminate duplicates only within the tree under the Extension Lookup,
- # as offsets may exceed 64K even between Extension LookupTable subtables.
- isExtension = hasattr(self, "Extension")
-
- # Certain versions of Uniscribe reject the font if the GSUB/GPOS top-level
- # arrays (ScriptList, FeatureList, LookupList) point to the same, possibly
- # empty, array. So, we don't share those.
- # See: https://github.com/fonttools/fonttools/issues/518
- dontShare = hasattr(self, "DontShare")
-
- if isExtension and not shareExtension:
- internedTables = {}
-
- items = self.items
- for i in range(len(items)):
- item = items[i]
- if hasattr(item, "getCountData"):
- items[i] = item.getCountData()
- elif hasattr(item, "getData"):
- item._doneWriting(internedTables, shareExtension=shareExtension)
- # At this point, all subwriters are hashable based on their items.
- # (See hash and comparison magic methods above.) So the ``setdefault``
- # call here will return the first writer object we've seen with
- # equal content, or store it in the dictionary if it's not been
- # seen yet. We therefore replace the subwriter object with an equivalent
- # object, which deduplicates the tree.
- if not dontShare:
- items[i] = item = internedTables.setdefault(item, item)
- self.items = tuple(items)
-
- def _gatherTables(self, tables, extTables, done):
- # Convert table references in self.items tree to a flat
- # list of tables in depth-first traversal order.
- # "tables" are OTTableWriter objects.
- # We do the traversal in reverse order at each level, in order to
- # resolve duplicate references to be the last reference in the list of tables.
- # For extension lookups, duplicate references can be merged only within the
- # writer tree under the extension lookup.
-
- done[id(self)] = True
-
- numItems = len(self.items)
- iRange = list(range(numItems))
- iRange.reverse()
-
- isExtension = hasattr(self, "Extension")
-
- selfTables = tables
-
- if isExtension:
- assert (
- extTables is not None
- ), "Program or XML editing error. Extension subtables cannot contain extensions subtables"
- tables, extTables, done = extTables, None, {}
-
- # add Coverage table if it is sorted last.
- sortCoverageLast = False
- if hasattr(self, "sortCoverageLast"):
- # Find coverage table
- for i in range(numItems):
- item = self.items[i]
- if getattr(item, "name", None) == "Coverage":
- sortCoverageLast = True
- break
- if id(item) not in done:
- item._gatherTables(tables, extTables, done)
- else:
- # We're a new parent of item
- pass
-
- for i in iRange:
- item = self.items[i]
- if not hasattr(item, "getData"):
- continue
-
- if (
- sortCoverageLast
- and (i == 1)
- and getattr(item, "name", None) == "Coverage"
- ):
- # we've already 'gathered' it above
- continue
-
- if id(item) not in done:
- item._gatherTables(tables, extTables, done)
- else:
- # Item is already written out by other parent
- pass
-
- selfTables.append(self)
-
- def _gatherGraphForHarfbuzz(self, tables, obj_list, done, objidx, virtual_edges):
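- # Build the object graph consumed by hb.repack(): real links record
- # (offset position, offset width, child index) for actual offset fields,
- # while virtual links (0, 0, idx) only express an ordering dependency
- # without emitting any bytes (used for the sortCoverageLast case).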
- real_links = []
- virtual_links = []
- item_idx = objidx
-
- # Merge virtual_links from parent
- for idx in virtual_edges:
- virtual_links.append((0, 0, idx))
-
- sortCoverageLast = False
- coverage_idx = 0
- if hasattr(self, "sortCoverageLast"):
- # Find coverage table
- for i, item in enumerate(self.items):
- if getattr(item, "name", None) == "Coverage":
- sortCoverageLast = True
- if id(item) not in done:
- coverage_idx = item_idx = item._gatherGraphForHarfbuzz(
- tables, obj_list, done, item_idx, virtual_edges
- )
- else:
- coverage_idx = done[id(item)]
- virtual_edges.append(coverage_idx)
- break
-
- child_idx = 0
- offset_pos = 0
- for i, item in enumerate(self.items):
- if hasattr(item, "getData"):
- pos = offset_pos
- elif hasattr(item, "getCountData"):
- offset_pos += item.size
- continue
- else:
- offset_pos = offset_pos + len(item)
- continue
-
- if id(item) not in done:
- child_idx = item_idx = item._gatherGraphForHarfbuzz(
- tables, obj_list, done, item_idx, virtual_edges
- )
- else:
- child_idx = done[id(item)]
-
- real_edge = (pos, item.offsetSize, child_idx)
- real_links.append(real_edge)
- offset_pos += item.offsetSize
-
- tables.append(self)
- obj_list.append((real_links, virtual_links))
- item_idx += 1
- done[id(self)] = item_idx
- if sortCoverageLast:
- virtual_edges.pop()
-
- return item_idx
-
- def getAllDataUsingHarfbuzz(self, tableTag):
- """The Whole table is represented as a Graph.
- Assemble graph data and call Harfbuzz repacker to pack the table.
- Harfbuzz repacker is faster and retain as much sub-table sharing as possible, see also:
- https://github.com/harfbuzz/harfbuzz/blob/main/docs/repacker.md
- The input format for hb.repack() method is explained here:
- https://github.com/harfbuzz/uharfbuzz/blob/main/src/uharfbuzz/_harfbuzz.pyx#L1149
- """
- internedTables = {}
- self._doneWriting(internedTables, shareExtension=True)
- tables = []
- obj_list = []
- done = {}
- objidx = 0
- virtual_edges = []
- self._gatherGraphForHarfbuzz(tables, obj_list, done, objidx, virtual_edges)
- # Gather all data in two passes: the absolute positions of all
- # subtables are needed before the actual data can be assembled.
- pos = 0
- for table in tables:
- table.pos = pos
- pos = pos + table.getDataLength()
-
- data = []
- for table in tables:
- tableData = table.getDataForHarfbuzz()
- data.append(tableData)
-
- if hasattr(hb, "repack_with_tag"):
- return hb.repack_with_tag(str(tableTag), data, obj_list)
- else:
- return hb.repack(data, obj_list)
-
- def getAllData(self, remove_duplicate=True):
- """Assemble all data, including all subtables."""
- if remove_duplicate:
- internedTables = {}
- self._doneWriting(internedTables)
- tables = []
- extTables = []
- done = {}
- self._gatherTables(tables, extTables, done)
- tables.reverse()
- extTables.reverse()
- # Gather all data in two passes: the absolute positions of all
- # subtables are needed before the actual data can be assembled.
- pos = 0
- for table in tables:
- table.pos = pos
- pos = pos + table.getDataLength()
-
- for table in extTables:
- table.pos = pos
- pos = pos + table.getDataLength()
-
- data = []
- for table in tables:
- tableData = table.getData()
- data.append(tableData)
-
- for table in extTables:
- tableData = table.getData()
- data.append(tableData)
-
- return bytesjoin(data)
-
- # interface for gathering data, as used by table.compile()
-
- def getSubWriter(self, offsetSize=2):
- subwriter = self.__class__(
- self.localState, self.tableTag, offsetSize=offsetSize
- )
- subwriter.parent = (
- self # because some subtables have identical values, we discard
- )
- # the duplicates under the getAllData method. Hence some
- # subtable writers can have more than one parent writer.
- # But we just care about the first one right now.
- return subwriter
-
- def writeValue(self, typecode, value):
- self.items.append(struct.pack(f">{typecode}", value))
-
- def writeArray(self, typecode, values):
- a = array.array(typecode, values)
- if sys.byteorder != "big":
- a.byteswap()
- self.items.append(a.tobytes())
-
- def writeInt8(self, value):
- assert -128 <= value < 128, value
- self.items.append(struct.pack(">b", value))
-
- def writeInt8Array(self, values):
- self.writeArray("b", values)
-
- def writeShort(self, value):
- assert -32768 <= value < 32768, value
- self.items.append(struct.pack(">h", value))
-
- def writeShortArray(self, values):
- self.writeArray("h", values)
-
- def writeLong(self, value):
- self.items.append(struct.pack(">i", value))
-
- def writeLongArray(self, values):
- self.writeArray("i", values)
-
- def writeUInt8(self, value):
- assert 0 <= value < 256, value
- self.items.append(struct.pack(">B", value))
-
- def writeUInt8Array(self, values):
- self.writeArray("B", values)
-
- def writeUShort(self, value):
- assert 0 <= value < 0x10000, value
- self.items.append(struct.pack(">H", value))
-
- def writeUShortArray(self, values):
- self.writeArray("H", values)
-
- def writeULong(self, value):
- self.items.append(struct.pack(">I", value))
-
- def writeULongArray(self, values):
- self.writeArray("I", values)
-
- def writeUInt24(self, value):
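- # Pack as a 4-byte big-endian integer and drop the high byte to store 3 bytes.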
- assert 0 <= value < 0x1000000, value
- b = struct.pack(">L", value)
- self.items.append(b[1:])
-
- def writeUInt24Array(self, values):
- for value in values:
- self.writeUInt24(value)
-
- def writeTag(self, tag):
- tag = Tag(tag).tobytes()
- assert len(tag) == 4, tag
- self.items.append(tag)
-
- def writeSubTable(self, subWriter):
- self.items.append(subWriter)
-
- def writeCountReference(self, table, name, size=2, value=None):
- ref = CountReference(table, name, size=size, value=value)
- self.items.append(ref)
- return ref
-
- def writeStruct(self, format, values):
- data = struct.pack(*(format,) + values)
- self.items.append(data)
-
- def writeData(self, data):
- self.items.append(data)
-
- def getOverflowErrorRecord(self, item):
- LookupListIndex = SubTableIndex = itemName = itemIndex = None
- if self.name == "LookupList":
- LookupListIndex = item.repeatIndex
- elif self.name == "Lookup":
- LookupListIndex = self.repeatIndex
- SubTableIndex = item.repeatIndex
- else:
- itemName = getattr(item, "name", "")
- if hasattr(item, "repeatIndex"):
- itemIndex = item.repeatIndex
- if self.name == "SubTable":
- LookupListIndex = self.parent.repeatIndex
- SubTableIndex = self.repeatIndex
- elif self.name == "ExtSubTable":
- LookupListIndex = self.parent.parent.repeatIndex
- SubTableIndex = self.parent.repeatIndex
- else: # who knows how far below the SubTable level we are! Climb back up to the nearest subtable.
- itemName = ".".join([self.name, itemName])
- p1 = self.parent
- while p1 and p1.name not in ["ExtSubTable", "SubTable"]:
- itemName = ".".join([p1.name, itemName])
- p1 = p1.parent
- if p1:
- if p1.name == "ExtSubTable":
- LookupListIndex = p1.parent.parent.repeatIndex
- SubTableIndex = p1.parent.repeatIndex
- else:
- LookupListIndex = p1.parent.repeatIndex
- SubTableIndex = p1.repeatIndex
-
- return OverflowErrorRecord(
- (self.tableTag, LookupListIndex, SubTableIndex, itemName, itemIndex)
- )
-
-
-class CountReference(object):
- """A reference to a Count value, not a count of references."""
-
- def __init__(self, table, name, size=None, value=None):
- self.table = table
- self.name = name
- self.size = size
- if value is not None:
- self.setValue(value)
-
- def setValue(self, value):
- table = self.table
- name = self.name
- if table[name] is None:
- table[name] = value
- else:
- assert table[name] == value, (name, table[name], value)
-
- def getValue(self):
- return self.table[self.name]
-
- def getCountData(self):
- v = self.table[self.name]
- if v is None:
- v = 0
- return {1: packUInt8, 2: packUShort, 4: packULong}[self.size](v)
-
-
-def packUInt8(value):
- return struct.pack(">B", value)
-
-
-def packUShort(value):
- return struct.pack(">H", value)
-
-
-def packULong(value):
- assert 0 <= value < 0x100000000, value
- return struct.pack(">I", value)
-
-
-def packUInt24(value):
- assert 0 <= value < 0x1000000, value
- return struct.pack(">I", value)[1:]
-
-
-class BaseTable(object):
-
- """Generic base class for all OpenType (sub)tables."""
-
- def __getattr__(self, attr):
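- # Lazy decompilation: if a 'reader' is still attached, decompile the table on
- # first attribute access and retry the lookup on the now-populated object.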
- reader = self.__dict__.get("reader")
- if reader:
- del self.reader
- font = self.font
- del self.font
- self.decompile(reader, font)
- return getattr(self, attr)
-
- raise AttributeError(attr)
-
- def ensureDecompiled(self, recurse=False):
- reader = self.__dict__.get("reader")
- if reader:
- del self.reader
- font = self.font
- del self.font
- self.decompile(reader, font)
- if recurse:
- for subtable in self.iterSubTables():
- subtable.value.ensureDecompiled(recurse)
-
- def __getstate__(self):
- # before copying/pickling 'lazy' objects, make a shallow copy of OTTableReader
- # https://github.com/fonttools/fonttools/issues/2965
- if "reader" in self.__dict__:
- state = self.__dict__.copy()
- state["reader"] = self.__dict__["reader"].copy()
- return state
- return self.__dict__
-
- @classmethod
- def getRecordSize(cls, reader):
- totalSize = 0
- for conv in cls.converters:
- size = conv.getRecordSize(reader)
- if size is NotImplemented:
- return NotImplemented
- countValue = 1
- if conv.repeat:
- if conv.repeat in reader:
- countValue = reader[conv.repeat] + conv.aux
- else:
- return NotImplemented
- totalSize += size * countValue
- return totalSize
-
- def getConverters(self):
- return self.converters
-
- def getConverterByName(self, name):
- return self.convertersByName[name]
-
- def populateDefaults(self, propagator=None):
- for conv in self.getConverters():
- if conv.repeat:
- if not hasattr(self, conv.name):
- setattr(self, conv.name, [])
- countValue = len(getattr(self, conv.name)) - conv.aux
- try:
- count_conv = self.getConverterByName(conv.repeat)
- setattr(self, conv.repeat, countValue)
- except KeyError:
- # conv.repeat is a propagated count
- if propagator and conv.repeat in propagator:
- propagator[conv.repeat].setValue(countValue)
- else:
- if conv.aux and not eval(conv.aux, None, self.__dict__):
- continue
- if hasattr(self, conv.name):
- continue # Warn if it should NOT be present?!
- if hasattr(conv, "writeNullOffset"):
- setattr(self, conv.name, None) # Warn?
- # elif not conv.isCount:
- # # Warn?
- # pass
- if hasattr(conv, "DEFAULT"):
- # OptionalValue converters (e.g. VarIndex)
- setattr(self, conv.name, conv.DEFAULT)
-
- def decompile(self, reader, font):
- self.readFormat(reader)
- table = {}
- self.__rawTable = table # for debugging
- for conv in self.getConverters():
- if conv.name == "SubTable":
- conv = conv.getConverter(reader.tableTag, table["LookupType"])
- if conv.name == "ExtSubTable":
- conv = conv.getConverter(reader.tableTag, table["ExtensionLookupType"])
- if conv.name == "FeatureParams":
- conv = conv.getConverter(reader["FeatureTag"])
- if conv.name == "SubStruct":
- conv = conv.getConverter(reader.tableTag, table["MorphType"])
- try:
- if conv.repeat:
- if isinstance(conv.repeat, int):
- countValue = conv.repeat
- elif conv.repeat in table:
- countValue = table[conv.repeat]
- else:
- # conv.repeat is a propagated count
- countValue = reader[conv.repeat]
- countValue += conv.aux
- table[conv.name] = conv.readArray(reader, font, table, countValue)
- else:
- if conv.aux and not eval(conv.aux, None, table):
- continue
- table[conv.name] = conv.read(reader, font, table)
- if conv.isPropagated:
- reader[conv.name] = table[conv.name]
- except Exception as e:
- name = conv.name
- e.args = e.args + (name,)
- raise
-
- if hasattr(self, "postRead"):
- self.postRead(table, font)
- else:
- self.__dict__.update(table)
-
- del self.__rawTable # succeeded, get rid of debugging info
-
- def compile(self, writer, font):
- self.ensureDecompiled()
- # TODO Following hack to be removed by rewriting how FormatSwitching tables
- # are handled.
- # https://github.com/fonttools/fonttools/pull/2238#issuecomment-805192631
- if hasattr(self, "preWrite"):
- deleteFormat = not hasattr(self, "Format")
- table = self.preWrite(font)
- deleteFormat = deleteFormat and hasattr(self, "Format")
- else:
- deleteFormat = False
- table = self.__dict__.copy()
-
- # some count references may have been initialized in a custom preWrite; we set
- # these in the writer's state beforehand (instead of sequentially) so they will
- # be propagated to all nested subtables even if the count appears in the current
- # table only *after* the offset to the subtable that it is counting.
- for conv in self.getConverters():
- if conv.isCount and conv.isPropagated:
- value = table.get(conv.name)
- if isinstance(value, CountReference):
- writer[conv.name] = value
-
- if hasattr(self, "sortCoverageLast"):
- writer.sortCoverageLast = 1
-
- if hasattr(self, "DontShare"):
- writer.DontShare = True
-
- if hasattr(self.__class__, "LookupType"):
- writer["LookupType"].setValue(self.__class__.LookupType)
-
- self.writeFormat(writer)
- for conv in self.getConverters():
- value = table.get(
- conv.name
- ) # TODO Handle defaults instead of defaulting to None!
- if conv.repeat:
- if value is None:
- value = []
- countValue = len(value) - conv.aux
- if isinstance(conv.repeat, int):
- assert len(value) == conv.repeat, "expected %d values, got %d" % (
- conv.repeat,
- len(value),
- )
- elif conv.repeat in table:
- CountReference(table, conv.repeat, value=countValue)
- else:
- # conv.repeat is a propagated count
- writer[conv.repeat].setValue(countValue)
- try:
- conv.writeArray(writer, font, table, value)
- except Exception as e:
- e.args = e.args + (conv.name + "[]",)
- raise
- elif conv.isCount:
- # Special-case Count values.
- # Assumption: a Count field will *always* precede
- # the actual array(s).
- # We need a default value, as it may be set later by a nested
- # table. We will later store it here.
- # We add a reference: by the time the data is assembled
- # the Count value will be filled in.
- # We ignore the current count value since it will be recomputed,
- # unless it's a CountReference that was already initialized in a custom preWrite.
- if isinstance(value, CountReference):
- ref = value
- ref.size = conv.staticSize
- writer.writeData(ref)
- table[conv.name] = ref.getValue()
- else:
- ref = writer.writeCountReference(table, conv.name, conv.staticSize)
- table[conv.name] = None
- if conv.isPropagated:
- writer[conv.name] = ref
- elif conv.isLookupType:
- # We make sure that subtables have the same lookup type,
- # and that the type is the same as the one set on the
- # Lookup object, if any is set.
- if conv.name not in table:
- table[conv.name] = None
- ref = writer.writeCountReference(
- table, conv.name, conv.staticSize, table[conv.name]
- )
- writer["LookupType"] = ref
- else:
- if conv.aux and not eval(conv.aux, None, table):
- continue
- try:
- conv.write(writer, font, table, value)
- except Exception as e:
- name = value.__class__.__name__ if value is not None else conv.name
- e.args = e.args + (name,)
- raise
- if conv.isPropagated:
- writer[conv.name] = value
-
- if deleteFormat:
- del self.Format
-
- def readFormat(self, reader):
- pass
-
- def writeFormat(self, writer):
- pass
-
- def toXML(self, xmlWriter, font, attrs=None, name=None):
- tableName = name if name else self.__class__.__name__
- if attrs is None:
- attrs = []
- if hasattr(self, "Format"):
- attrs = attrs + [("Format", self.Format)]
- xmlWriter.begintag(tableName, attrs)
- xmlWriter.newline()
- self.toXML2(xmlWriter, font)
- xmlWriter.endtag(tableName)
- xmlWriter.newline()
-
- def toXML2(self, xmlWriter, font):
- # Simpler variant of toXML, *only* for the top level tables (like GPOS, GSUB).
- # This is because in TTX our parent writes our main tag, and in otBase.py we
- # do it ourselves. I think I'm getting schizophrenic...
- for conv in self.getConverters():
- if conv.repeat:
- value = getattr(self, conv.name, [])
- for i in range(len(value)):
- item = value[i]
- conv.xmlWrite(xmlWriter, font, item, conv.name, [("index", i)])
- else:
- if conv.aux and not eval(conv.aux, None, vars(self)):
- continue
- value = getattr(
- self, conv.name, None
- ) # TODO Handle defaults instead of defaulting to None!
- conv.xmlWrite(xmlWriter, font, value, conv.name, [])
-
- def fromXML(self, name, attrs, content, font):
- try:
- conv = self.getConverterByName(name)
- except KeyError:
- raise # XXX on KeyError, raise nice error
- value = conv.xmlRead(attrs, content, font)
- if conv.repeat:
- seq = getattr(self, conv.name, None)
- if seq is None:
- seq = []
- setattr(self, conv.name, seq)
- seq.append(value)
- else:
- setattr(self, conv.name, value)
-
- def __ne__(self, other):
- result = self.__eq__(other)
- return result if result is NotImplemented else not result
-
- def __eq__(self, other):
- if type(self) != type(other):
- return NotImplemented
-
- self.ensureDecompiled()
- other.ensureDecompiled()
-
- return self.__dict__ == other.__dict__
-
- class SubTableEntry(NamedTuple):
- """See BaseTable.iterSubTables()"""
-
- name: str
- value: "BaseTable"
- index: Optional[int] = None # index into given array, None for single values
-
- def iterSubTables(self) -> Iterator[SubTableEntry]:
- """Yield (name, value, index) namedtuples for all subtables of current table.
-
- A sub-table is an instance of BaseTable (or subclass thereof) that is a child
- of self, the current parent table.
- The tuples also contain the attribute name (str) on the parent table used to get
- a subtable, and optionally, for lists of subtables (i.e. attributes associated
- with a converter that has a 'repeat'), an index into the list containing the
- given subtable value.
- This method can be useful to traverse trees of otTables.
- """
- for conv in self.getConverters():
- name = conv.name
- value = getattr(self, name, None)
- if value is None:
- continue
- if isinstance(value, BaseTable):
- yield self.SubTableEntry(name, value)
- elif isinstance(value, list):
- yield from (
- self.SubTableEntry(name, v, index=i)
- for i, v in enumerate(value)
- if isinstance(v, BaseTable)
- )
-
- # instance (not @class)method for consistency with FormatSwitchingBaseTable
- def getVariableAttrs(self):
- return getVariableAttrs(self.__class__)
-
-
-class FormatSwitchingBaseTable(BaseTable):
-
- """Minor specialization of BaseTable, for tables that have multiple
- formats, eg. CoverageFormat1 vs. CoverageFormat2."""
-
- @classmethod
- def getRecordSize(cls, reader):
- return NotImplemented
-
- def getConverters(self):
- try:
- fmt = self.Format
- except AttributeError:
- # some FormatSwitchingBaseTables (e.g. Coverage) no longer have a 'Format'
- # attribute once fully decompiled; they only regain one in preWrite before being
- # recompiled. In the decompiled state, these hand-coded classes defined in
- # otTables.py lose their format-specific nature and gain more high-level
- # attributes that are not tied to converters.
- return []
- return self.converters.get(self.Format, [])
-
- def getConverterByName(self, name):
- return self.convertersByName[self.Format][name]
-
- def readFormat(self, reader):
- self.Format = reader.readUShort()
-
- def writeFormat(self, writer):
- writer.writeUShort(self.Format)
-
- def toXML(self, xmlWriter, font, attrs=None, name=None):
- BaseTable.toXML(self, xmlWriter, font, attrs, name)
-
- def getVariableAttrs(self):
- return getVariableAttrs(self.__class__, self.Format)
-
-
-class UInt8FormatSwitchingBaseTable(FormatSwitchingBaseTable):
- def readFormat(self, reader):
- self.Format = reader.readUInt8()
-
- def writeFormat(self, writer):
- writer.writeUInt8(self.Format)
-
-
-formatSwitchingBaseTables = {
- "uint16": FormatSwitchingBaseTable,
- "uint8": UInt8FormatSwitchingBaseTable,
-}
-
-
-def getFormatSwitchingBaseTableClass(formatType):
- try:
- return formatSwitchingBaseTables[formatType]
- except KeyError:
- raise TypeError(f"Unsupported format type: {formatType!r}")
-
-
-# memoize since these are parsed from otData.py, thus stay constant
-@lru_cache()
-def getVariableAttrs(cls: BaseTable, fmt: Optional[int] = None) -> Tuple[str]:
- """Return sequence of variable table field names (can be empty).
-
- Attributes are deemed "variable" when their otData.py's description contain
- 'VarIndexBase + {offset}', e.g. COLRv1 PaintVar* tables.
- """
- if not issubclass(cls, BaseTable):
- raise TypeError(cls)
- if issubclass(cls, FormatSwitchingBaseTable):
- if fmt is None:
- raise TypeError(f"'fmt' is required for format-switching {cls.__name__}")
- converters = cls.convertersByName[fmt]
- else:
- converters = cls.convertersByName
- # assume if no 'VarIndexBase' field is present, table has no variable fields
- if "VarIndexBase" not in converters:
- return ()
- varAttrs = {}
- for name, conv in converters.items():
- offset = conv.getVarIndexOffset()
- if offset is not None:
- varAttrs[name] = offset
- return tuple(sorted(varAttrs, key=varAttrs.__getitem__))
-
-
-#
-# Support for ValueRecords
-#
-# This data type is so different from all other OpenType data types that
-# it requires quite a bit of code for itself. It even has special support
-# in OTTableReader and OTTableWriter...
-#
-
-valueRecordFormat = [
- # Mask Name isDevice signed
- (0x0001, "XPlacement", 0, 1),
- (0x0002, "YPlacement", 0, 1),
- (0x0004, "XAdvance", 0, 1),
- (0x0008, "YAdvance", 0, 1),
- (0x0010, "XPlaDevice", 1, 0),
- (0x0020, "YPlaDevice", 1, 0),
- (0x0040, "XAdvDevice", 1, 0),
- (0x0080, "YAdvDevice", 1, 0),
- # reserved:
- (0x0100, "Reserved1", 0, 0),
- (0x0200, "Reserved2", 0, 0),
- (0x0400, "Reserved3", 0, 0),
- (0x0800, "Reserved4", 0, 0),
- (0x1000, "Reserved5", 0, 0),
- (0x2000, "Reserved6", 0, 0),
- (0x4000, "Reserved7", 0, 0),
- (0x8000, "Reserved8", 0, 0),
-]
-
-
-def _buildDict():
- d = {}
- for mask, name, isDevice, signed in valueRecordFormat:
- d[name] = mask, isDevice, signed
- return d
-
-
-valueRecordFormatDict = _buildDict()
-
-
-class ValueRecordFactory(object):
-
- """Given a format code, this object convert ValueRecords."""
-
- def __init__(self, valueFormat):
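- # Keep only the fields whose bit is set in valueFormat, preserving the
- # order defined by valueRecordFormat.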
- format = []
- for mask, name, isDevice, signed in valueRecordFormat:
- if valueFormat & mask:
- format.append((name, isDevice, signed))
- self.format = format
-
- def __len__(self):
- return len(self.format)
-
- def readValueRecord(self, reader, font):
- format = self.format
- if not format:
- return None
- valueRecord = ValueRecord()
- for name, isDevice, signed in format:
- if signed:
- value = reader.readShort()
- else:
- value = reader.readUShort()
- if isDevice:
- if value:
- from . import otTables
-
- subReader = reader.getSubReader(value)
- value = getattr(otTables, name)()
- value.decompile(subReader, font)
- else:
- value = None
- setattr(valueRecord, name, value)
- return valueRecord
-
- def writeValueRecord(self, writer, font, valueRecord):
- for name, isDevice, signed in self.format:
- value = getattr(valueRecord, name, 0)
- if isDevice:
- if value:
- subWriter = writer.getSubWriter()
- writer.writeSubTable(subWriter)
- value.compile(subWriter, font)
- else:
- writer.writeUShort(0)
- elif signed:
- writer.writeShort(value)
- else:
- writer.writeUShort(value)
-
-
-class ValueRecord(object):
-
- # see ValueRecordFactory
-
- def __init__(self, valueFormat=None, src=None):
- if valueFormat is not None:
- for mask, name, isDevice, signed in valueRecordFormat:
- if valueFormat & mask:
- setattr(self, name, None if isDevice else 0)
- if src is not None:
- for key, val in src.__dict__.items():
- if not hasattr(self, key):
- continue
- setattr(self, key, val)
- elif src is not None:
- self.__dict__ = src.__dict__.copy()
-
- def getFormat(self):
- format = 0
- for name in self.__dict__.keys():
- format = format | valueRecordFormatDict[name][0]
- return format
-
- def getEffectiveFormat(self):
- format = 0
- for name, value in self.__dict__.items():
- if value:
- format = format | valueRecordFormatDict[name][0]
- return format
-
- def toXML(self, xmlWriter, font, valueName, attrs=None):
- if attrs is None:
- simpleItems = []
- else:
- simpleItems = list(attrs)
- for mask, name, isDevice, format in valueRecordFormat[:4]: # "simple" values
- if hasattr(self, name):
- simpleItems.append((name, getattr(self, name)))
- deviceItems = []
- for mask, name, isDevice, format in valueRecordFormat[4:8]: # device records
- if hasattr(self, name):
- device = getattr(self, name)
- if device is not None:
- deviceItems.append((name, device))
- if deviceItems:
- xmlWriter.begintag(valueName, simpleItems)
- xmlWriter.newline()
- for name, deviceRecord in deviceItems:
- if deviceRecord is not None:
- deviceRecord.toXML(xmlWriter, font, name=name)
- xmlWriter.endtag(valueName)
- xmlWriter.newline()
- else:
- xmlWriter.simpletag(valueName, simpleItems)
- xmlWriter.newline()
-
- def fromXML(self, name, attrs, content, font):
- from . import otTables
-
- for k, v in attrs.items():
- setattr(self, k, int(v))
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- value = getattr(otTables, name)()
- for elem2 in content:
- if not isinstance(elem2, tuple):
- continue
- name2, attrs2, content2 = elem2
- value.fromXML(name2, attrs2, content2, font)
- setattr(self, name, value)
-
- def __ne__(self, other):
- result = self.__eq__(other)
- return result if result is NotImplemented else not result
-
- def __eq__(self, other):
- if type(self) != type(other):
- return NotImplemented
- return self.__dict__ == other.__dict__
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/fonts.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/fonts.py
deleted file mode 100644
index d51dbbfdf4990358e9094cc887c47ae6cd8b0440..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/fonts.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from __future__ import annotations
-
-import json
-from typing import Iterable
-
-
-class FontEncoder(json.JSONEncoder):
- def default(self, obj):
- if isinstance(obj, Font):
- return {
- "__gradio_font__": True,
- "name": obj.name,
- "class": "google" if isinstance(obj, GoogleFont) else "font",
- }
- # Let the base class default method raise the TypeError
- return json.JSONEncoder.default(self, obj)
-
-
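-# Inverse of FontEncoder: turns the marker dict back into a Font or GoogleFont,
-# e.g. when passed as an object_hook to json.loads.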
-def as_font(dct):
- if "__gradio_font__" in dct:
- name = dct["name"]
- return GoogleFont(name) if dct["class"] == "google" else Font(name)
- return dct
-
-
-class Font:
- def __init__(self, name: str):
- self.name = name
-
- def __str__(self) -> str:
- return (
- self.name
- if self.name in ["sans-serif", "serif", "monospace", "cursive", "fantasy"]
- else f"'{self.name}'"
- )
-
- def stylesheet(self) -> str:
- return None
-
- def __eq__(self, other: Font) -> bool:
- return self.name == other.name and self.stylesheet() == other.stylesheet()
-
-
-class GoogleFont(Font):
- def __init__(self, name: str, weights: Iterable[int] = (400, 600)):
- self.name = name
- self.weights = weights
-
- def stylesheet(self) -> str:
- return f'https://fonts.googleapis.com/css2?family={self.name.replace(" ", "+")}:wght@{";".join(str(weight) for weight in self.weights)}&display=swap'
diff --git a/spaces/Dao3/ChatGLM-6B/README.md b/spaces/Dao3/ChatGLM-6B/README.md
deleted file mode 100644
index 9dcd06a3a9d809fff427363d5f7b71673b4463d3..0000000000000000000000000000000000000000
--- a/spaces/Dao3/ChatGLM-6B/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGLM 6B
-emoji: 📚
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-duplicated_from: xlon3/ChatGLM-6B
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/models/__init__.py b/spaces/Datasculptor/MusicGen/audiocraft/models/__init__.py
deleted file mode 100644
index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/models/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .musicgen import MusicGen
-from .lm import LMModel
-from .encodec import CompressionModel, EncodecModel
diff --git a/spaces/Dauzy/whisper-webui/src/download.py b/spaces/Dauzy/whisper-webui/src/download.py
deleted file mode 100644
index 473d27a0d279821edc3d398da8a33424da42da2a..0000000000000000000000000000000000000000
--- a/spaces/Dauzy/whisper-webui/src/download.py
+++ /dev/null
@@ -1,118 +0,0 @@
-from tempfile import mkdtemp
-from typing import List
-from yt_dlp import YoutubeDL
-from urllib.request import urlopen, urlretrieve
-import urllib.parse
-import progressbar
-import cgi
-
-import yt_dlp
-from yt_dlp.postprocessor import PostProcessor
-
-class FilenameCollectorPP(PostProcessor):
- def __init__(self):
- super(FilenameCollectorPP, self).__init__(None)
- self.filenames = []
-
- def run(self, information):
- self.filenames.append(information["filepath"])
- return [], information
-
-def download_url(url: str, maxDuration: int = None, destinationDirectory: str = None, playlistItems: str = "1") -> List[str]:
- if "dora.starh.top" in url:
- return _perform_download_with_urllib(url, destinationDirectory=destinationDirectory)
- try:
- return _perform_download(url, maxDuration=maxDuration, outputTemplate=None, destinationDirectory=destinationDirectory, playlistItems=playlistItems)
- except yt_dlp.utils.DownloadError as e:
- # In case of an OS error, try again with a different output template
- if e.msg and e.msg.find("[Errno 36] File name too long") >= 0:
- return _perform_download(url, maxDuration=maxDuration, outputTemplate="%(title).10s %(id)s.%(ext)s")
- # Re-raise any other download error instead of silently returning None
- raise
-
-class MyProgressBar():
- def __init__(self):
- self.pbar = None
-
- def __call__(self, block_num, block_size, total_size):
- if not self.pbar:
- self.pbar=progressbar.ProgressBar(maxval=total_size)
- self.pbar.start()
-
- downloaded = block_num * block_size
- if downloaded < total_size:
- self.pbar.update(downloaded)
- else:
- self.pbar.finish()
-
-def _perform_download_with_urllib(url: str, destinationDirectory: str = None):
- if destinationDirectory is None:
- destinationDirectory = mkdtemp()
- remotefile = urlopen(url)
- contentdisposition = remotefile.info()['Content-Disposition']
- _, params = cgi.parse_header(contentdisposition)
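- # Prefer the filename from the Content-Disposition header (plain 'filename'
- # or RFC 5987 'filename*'), falling back to the last path segment of the URL.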
- filename = url.split('/')[-1]
- if "filename" in params:
- filename = params["filename"]
- elif "filename*" in params:
- filename = params["filename*"].replace("UTF-8''", "")
- filename = urllib.parse.unquote(filename)
- filename = destinationDirectory + "/" + filename
- urlretrieve(url, filename=filename, reporthook=MyProgressBar())
- result = []
- result.append(filename)
- print("Downloaded " + filename)
- return result
-
-def _perform_download(url: str, maxDuration: int = None, outputTemplate: str = None, destinationDirectory: str = None, playlistItems: str = "1"):
- # Create a temporary directory to store the downloaded files
- if destinationDirectory is None:
- destinationDirectory = mkdtemp()
-
- ydl_opts = {
- "format": "bestaudio/best",
- 'paths': {
- 'home': destinationDirectory
- }
- }
- if (playlistItems):
- ydl_opts['playlist_items'] = playlistItems
-
- # Add output template if specified
- if outputTemplate:
- ydl_opts['outtmpl'] = outputTemplate
-
- filename_collector = FilenameCollectorPP()
-
- with YoutubeDL(ydl_opts) as ydl:
- if maxDuration and maxDuration > 0:
- info = ydl.extract_info(url, download=False)
- entries = "entries" in info and info["entries"] or [info]
-
- total_duration = 0
-
- # Compute total duration
- for entry in entries:
- total_duration += float(entry["duration"])
-
- if total_duration >= maxDuration:
- raise ExceededMaximumDuration(videoDuration=total_duration, maxDuration=maxDuration, message="Video is too long")
-
- ydl.add_post_processor(filename_collector)
- ydl.download([url])
-
- if len(filename_collector.filenames) <= 0:
- raise Exception("Cannot download " + url)
-
- result = []
-
- for filename in filename_collector.filenames:
- result.append(filename)
- print("Downloaded " + filename)
-
- return result
-
-class ExceededMaximumDuration(Exception):
- def __init__(self, videoDuration, maxDuration, message):
- self.videoDuration = videoDuration
- self.maxDuration = maxDuration
- super().__init__(message)
\ No newline at end of file
diff --git a/spaces/Dauzy/whisper-webui/src/hooks/whisperProgressHook.py b/spaces/Dauzy/whisper-webui/src/hooks/whisperProgressHook.py
deleted file mode 100644
index aa09958a05e0b3c54736f7209f8a05a94912752e..0000000000000000000000000000000000000000
--- a/spaces/Dauzy/whisper-webui/src/hooks/whisperProgressHook.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import sys
-import threading
-from typing import List, Union
-import tqdm
-
-from src.hooks.progressListener import ProgressListener
-
-class ProgressListenerHandle:
- def __init__(self, listener: ProgressListener):
- self.listener = listener
-
- def __enter__(self):
- register_thread_local_progress_listener(self.listener)
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- unregister_thread_local_progress_listener(self.listener)
-
- if exc_type is None:
- self.listener.on_finished()
-
-class _CustomProgressBar(tqdm.tqdm):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self._current = self.n # Set the initial value
-
- def update(self, n):
- super().update(n)
- # Because the progress bar might be disabled, we need to manually update the progress
- self._current += n
-
- # Inform listeners
- listeners = _get_thread_local_listeners()
-
- for listener in listeners:
- listener.on_progress(self._current, self.total)
-
-_thread_local = threading.local()
-
-def _get_thread_local_listeners():
- if not hasattr(_thread_local, 'listeners'):
- _thread_local.listeners = []
- return _thread_local.listeners
-
-_hooked = False
-
-def init_progress_hook():
- global _hooked
-
- if _hooked:
- return
-
- # Inject into tqdm.tqdm of Whisper, so we can see progress
- import whisper.transcribe
- transcribe_module = sys.modules['whisper.transcribe']
- transcribe_module.tqdm.tqdm = _CustomProgressBar
- _hooked = True
-
-def register_thread_local_progress_listener(progress_listener: ProgressListener):
- # This is a workaround for the fact that the progress bar is not exposed in the API
- init_progress_hook()
-
- listeners = _get_thread_local_listeners()
- listeners.append(progress_listener)
-
-def unregister_thread_local_progress_listener(progress_listener: ProgressListener):
- listeners = _get_thread_local_listeners()
-
- if progress_listener in listeners:
- listeners.remove(progress_listener)
-
-def create_progress_listener_handle(progress_listener: ProgressListener):
- return ProgressListenerHandle(progress_listener)
-
-# Example usage
-if __name__ == '__main__':
- class PrintingProgressListener:
- def on_progress(self, current: Union[int, float], total: Union[int, float]):
- print(f"Progress: {current}/{total}")
-
- def on_finished(self):
- print("Finished")
-
- import whisper
- model = whisper.load_model("medium")
-
- with create_progress_listener_handle(PrintingProgressListener()) as listener:
- # Set verbose to None to disable the progress bar, as we are using our own
- result = model.transcribe("J:\\Dev\\OpenAI\\whisper\\tests\\Noriko\\out.mka", language="Japanese", fp16=False, verbose=None)
- print(result)
-
- print("Done")
\ No newline at end of file
diff --git a/spaces/Deon07/prompthero-openjourney/app.py b/spaces/Deon07/prompthero-openjourney/app.py
deleted file mode 100644
index 2193905172b6fb6d868bff88cc8311f491ec13b3..0000000000000000000000000000000000000000
--- a/spaces/Deon07/prompthero-openjourney/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/prompthero/openjourney").launch()
\ No newline at end of file
diff --git a/spaces/Dimalker/Faceswapper/roop/predicter.py b/spaces/Dimalker/Faceswapper/roop/predicter.py
deleted file mode 100644
index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000
--- a/spaces/Dimalker/Faceswapper/roop/predicter.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import numpy
-import opennsfw2
-from PIL import Image
-
-from roop.typing import Frame
-
-MAX_PROBABILITY = 0.85
-
-
-def predict_frame(target_frame: Frame) -> bool:
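- # Run the OpenNSFW2 classifier on a single frame; the second model output is
- # the NSFW probability, and the frame is flagged once it exceeds MAX_PROBABILITY.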
- image = Image.fromarray(target_frame)
- image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO)
- model = opennsfw2.make_open_nsfw_model()
- views = numpy.expand_dims(image, axis=0)
- _, probability = model.predict(views)[0]
- return probability > MAX_PROBABILITY
-
-
-def predict_image(target_path: str) -> bool:
- return opennsfw2.predict_image(target_path) > MAX_PROBABILITY
-
-
-def predict_video(target_path: str) -> bool:
- _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100)
- return any(probability > MAX_PROBABILITY for probability in probabilities)
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/model.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/model.py
deleted file mode 100644
index a230961c4d1bf0bd2d1efe7972b4baa33c5d7013..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/model.py
+++ /dev/null
@@ -1,456 +0,0 @@
-# Copyright 2020 Erik Härkönen. All rights reserved.
-# This file is licensed to you under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License. You may obtain a copy
-# of the License at http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
-# OF ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from collections import OrderedDict
-from pathlib import Path
-import requests
-import pickle
-import sys
-
-import numpy as np
-
-# Reimplementation of StyleGAN in PyTorch
-# Source: https://github.com/lernapparat/lernapparat/blob/master/style_gan/pytorch_style_gan.ipynb
-
-class MyLinear(nn.Module):
- """Linear layer with equalized learning rate and custom learning rate multiplier."""
- def __init__(self, input_size, output_size, gain=2**(0.5), use_wscale=False, lrmul=1, bias=True):
- super().__init__()
- he_std = gain * input_size**(-0.5) # He init
- # Equalized learning rate and custom learning rate multiplier.
- if use_wscale:
- init_std = 1.0 / lrmul
- self.w_mul = he_std * lrmul
- else:
- init_std = he_std / lrmul
- self.w_mul = lrmul
- self.weight = torch.nn.Parameter(torch.randn(output_size, input_size) * init_std)
- if bias:
- self.bias = torch.nn.Parameter(torch.zeros(output_size))
- self.b_mul = lrmul
- else:
- self.bias = None
-
- def forward(self, x):
- bias = self.bias
- if bias is not None:
- bias = bias * self.b_mul
- return F.linear(x, self.weight * self.w_mul, bias)
-
-class MyConv2d(nn.Module):
- """Conv layer with equalized learning rate and custom learning rate multiplier."""
- def __init__(self, input_channels, output_channels, kernel_size, gain=2**(0.5), use_wscale=False, lrmul=1, bias=True,
- intermediate=None, upscale=False):
- super().__init__()
- if upscale:
- self.upscale = Upscale2d()
- else:
- self.upscale = None
- he_std = gain * (input_channels * kernel_size ** 2) ** (-0.5) # He init
- self.kernel_size = kernel_size
- if use_wscale:
- init_std = 1.0 / lrmul
- self.w_mul = he_std * lrmul
- else:
- init_std = he_std / lrmul
- self.w_mul = lrmul
- self.weight = torch.nn.Parameter(torch.randn(output_channels, input_channels, kernel_size, kernel_size) * init_std)
- if bias:
- self.bias = torch.nn.Parameter(torch.zeros(output_channels))
- self.b_mul = lrmul
- else:
- self.bias = None
- self.intermediate = intermediate
-
- def forward(self, x):
- bias = self.bias
- if bias is not None:
- bias = bias * self.b_mul
-
- have_convolution = False
- if self.upscale is not None and min(x.shape[2:]) * 2 >= 128:
- # this is the fused upscale + conv from StyleGAN, sadly this seems incompatible with the non-fused way
- # this really needs to be cleaned up and go into the conv...
- w = self.weight * self.w_mul
- w = w.permute(1, 0, 2, 3)
- # probably applying a conv on w would be more efficient. also this quadruples the weight (average)?!
- w = F.pad(w, (1,1,1,1))
- w = w[:, :, 1:, 1:]+ w[:, :, :-1, 1:] + w[:, :, 1:, :-1] + w[:, :, :-1, :-1]
- x = F.conv_transpose2d(x, w, stride=2, padding=(w.size(-1)-1)//2)
- have_convolution = True
- elif self.upscale is not None:
- x = self.upscale(x)
-
- if not have_convolution and self.intermediate is None:
- return F.conv2d(x, self.weight * self.w_mul, bias, padding=self.kernel_size//2)
- elif not have_convolution:
- x = F.conv2d(x, self.weight * self.w_mul, None, padding=self.kernel_size//2)
-
- if self.intermediate is not None:
- x = self.intermediate(x)
- if bias is not None:
- x = x + bias.view(1, -1, 1, 1)
- return x
-
-class NoiseLayer(nn.Module):
- """adds noise. noise is per pixel (constant over channels) with per-channel weight"""
- def __init__(self, channels):
- super().__init__()
- self.weight = nn.Parameter(torch.zeros(channels))
- self.noise = None
-
- def forward(self, x, noise=None):
- if noise is None and self.noise is None:
- noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device, dtype=x.dtype)
- elif noise is None:
- # here is a little trick: if you get all the noiselayers and set each
- # modules .noise attribute, you can have pre-defined noise.
- # Very useful for analysis
- noise = self.noise
- x = x + self.weight.view(1, -1, 1, 1) * noise
- return x
-
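-# Hedged sketch (not part of the original file): the comment in NoiseLayer.forward suggests
-# that fixed, pre-defined noise can be injected by setting the .noise attribute on every
-# NoiseLayer module, which is useful for reproducible analysis. Something like:
-#
-#   for module in model.modules():
-#       if isinstance(module, NoiseLayer):
-#           # the buffer must match the spatial size this layer sees at runtime
-#           module.noise = torch.randn(1, 1, height, width)
-#
-# where `height` and `width` are placeholders for the resolution of the corresponding block.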
-class StyleMod(nn.Module):
- def __init__(self, latent_size, channels, use_wscale):
- super(StyleMod, self).__init__()
- self.lin = MyLinear(latent_size,
- channels * 2,
- gain=1.0, use_wscale=use_wscale)
-
- def forward(self, x, latent):
- style = self.lin(latent) # style => [batch_size, n_channels*2]
- shape = [-1, 2, x.size(1)] + (x.dim() - 2) * [1]
- style = style.view(shape) # [batch_size, 2, n_channels, ...]
- x = x * (style[:, 0] + 1.) + style[:, 1]
- return x
-
-class PixelNormLayer(nn.Module):
- def __init__(self, epsilon=1e-8):
- super().__init__()
- self.epsilon = epsilon
- def forward(self, x):
- return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + self.epsilon)
-
-class BlurLayer(nn.Module):
- def __init__(self, kernel=[1, 2, 1], normalize=True, flip=False, stride=1):
- super(BlurLayer, self).__init__()
- kernel=[1, 2, 1]
- kernel = torch.tensor(kernel, dtype=torch.float32)
- kernel = kernel[:, None] * kernel[None, :]
- kernel = kernel[None, None]
- if normalize:
- kernel = kernel / kernel.sum()
- if flip:
- kernel = kernel[:, :, ::-1, ::-1]
- self.register_buffer('kernel', kernel)
- self.stride = stride
-
- def forward(self, x):
- # expand kernel channels
- kernel = self.kernel.expand(x.size(1), -1, -1, -1)
- x = F.conv2d(
- x,
- kernel,
- stride=self.stride,
- padding=int((self.kernel.size(2)-1)/2),
- groups=x.size(1)
- )
- return x
-
-def upscale2d(x, factor=2, gain=1):
- assert x.dim() == 4
- if gain != 1:
- x = x * gain
- if factor != 1:
- shape = x.shape
- x = x.view(shape[0], shape[1], shape[2], 1, shape[3], 1).expand(-1, -1, -1, factor, -1, factor)
- x = x.contiguous().view(shape[0], shape[1], factor * shape[2], factor * shape[3])
- return x
-
-class Upscale2d(nn.Module):
- def __init__(self, factor=2, gain=1):
- super().__init__()
- assert isinstance(factor, int) and factor >= 1
- self.gain = gain
- self.factor = factor
- def forward(self, x):
- return upscale2d(x, factor=self.factor, gain=self.gain)
-
-class G_mapping(nn.Sequential):
- def __init__(self, nonlinearity='lrelu', use_wscale=True):
- act, gain = {'relu': (torch.relu, np.sqrt(2)),
- 'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity]
- layers = [
- ('pixel_norm', PixelNormLayer()),
- ('dense0', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense0_act', act),
- ('dense1', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense1_act', act),
- ('dense2', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense2_act', act),
- ('dense3', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense3_act', act),
- ('dense4', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense4_act', act),
- ('dense5', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense5_act', act),
- ('dense6', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense6_act', act),
- ('dense7', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense7_act', act)
- ]
- super().__init__(OrderedDict(layers))
-
- def forward(self, x):
- return super().forward(x)
-
-class Truncation(nn.Module):
- def __init__(self, avg_latent, max_layer=8, threshold=0.7):
- super().__init__()
- self.max_layer = max_layer
- self.threshold = threshold
- self.register_buffer('avg_latent', avg_latent)
- def forward(self, x):
- assert x.dim() == 3
- interp = torch.lerp(self.avg_latent, x, self.threshold)
- do_trunc = (torch.arange(x.size(1)) < self.max_layer).view(1, -1, 1)
- return torch.where(do_trunc, interp, x)
-
-class LayerEpilogue(nn.Module):
- """Things to do at the end of each layer."""
- def __init__(self, channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer):
- super().__init__()
- layers = []
- if use_noise:
- layers.append(('noise', NoiseLayer(channels)))
- layers.append(('activation', activation_layer))
- if use_pixel_norm:
-            layers.append(('pixel_norm', PixelNormLayer()))
- if use_instance_norm:
- layers.append(('instance_norm', nn.InstanceNorm2d(channels)))
- self.top_epi = nn.Sequential(OrderedDict(layers))
- if use_styles:
- self.style_mod = StyleMod(dlatent_size, channels, use_wscale=use_wscale)
- else:
- self.style_mod = None
- def forward(self, x, dlatents_in_slice=None):
- x = self.top_epi(x)
- if self.style_mod is not None:
- x = self.style_mod(x, dlatents_in_slice)
- else:
- assert dlatents_in_slice is None
- return x
-
-
-class InputBlock(nn.Module):
- def __init__(self, nf, dlatent_size, const_input_layer, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer):
- super().__init__()
- self.const_input_layer = const_input_layer
- self.nf = nf
- if self.const_input_layer:
- # called 'const' in tf
- self.const = nn.Parameter(torch.ones(1, nf, 4, 4))
- self.bias = nn.Parameter(torch.ones(nf))
- else:
-            self.dense = MyLinear(dlatent_size, nf*16, gain=gain/4, use_wscale=use_wscale) # tweak gain to match the official implementation of Progressive GAN
- self.epi1 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
- self.conv = MyConv2d(nf, nf, 3, gain=gain, use_wscale=use_wscale)
- self.epi2 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
-
- def forward(self, dlatents_in_range):
- batch_size = dlatents_in_range.size(0)
- if self.const_input_layer:
- x = self.const.expand(batch_size, -1, -1, -1)
- x = x + self.bias.view(1, -1, 1, 1)
- else:
- x = self.dense(dlatents_in_range[:, 0]).view(batch_size, self.nf, 4, 4)
- x = self.epi1(x, dlatents_in_range[:, 0])
- x = self.conv(x)
- x = self.epi2(x, dlatents_in_range[:, 1])
- return x
-
-
-class GSynthesisBlock(nn.Module):
- def __init__(self, in_channels, out_channels, blur_filter, dlatent_size, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer):
- # 2**res x 2**res # res = 3..resolution_log2
- super().__init__()
- if blur_filter:
- blur = BlurLayer(blur_filter)
- else:
- blur = None
- self.conv0_up = MyConv2d(in_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale,
- intermediate=blur, upscale=True)
- self.epi1 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
- self.conv1 = MyConv2d(out_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale)
- self.epi2 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
-
- def forward(self, x, dlatents_in_range):
- x = self.conv0_up(x)
- x = self.epi1(x, dlatents_in_range[:, 0])
- x = self.conv1(x)
- x = self.epi2(x, dlatents_in_range[:, 1])
- return x
-
-class G_synthesis(nn.Module):
- def __init__(self,
- dlatent_size = 512, # Disentangled latent (W) dimensionality.
- num_channels = 3, # Number of output color channels.
- resolution = 1024, # Output resolution.
- fmap_base = 8192, # Overall multiplier for the number of feature maps.
- fmap_decay = 1.0, # log2 feature map reduction when doubling the resolution.
- fmap_max = 512, # Maximum number of feature maps in any layer.
- use_styles = True, # Enable style inputs?
- const_input_layer = True, # First layer is a learned constant?
- use_noise = True, # Enable noise inputs?
- randomize_noise = True, # True = randomize noise inputs every time (non-deterministic), False = read noise inputs from variables.
- nonlinearity = 'lrelu', # Activation function: 'relu', 'lrelu'
- use_wscale = True, # Enable equalized learning rate?
- use_pixel_norm = False, # Enable pixelwise feature vector normalization?
- use_instance_norm = True, # Enable instance normalization?
- dtype = torch.float32, # Data type to use for activations and outputs.
- blur_filter = [1,2,1], # Low-pass filter to apply when resampling activations. None = no filtering.
- ):
-
- super().__init__()
- def nf(stage):
- return min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max)
- self.dlatent_size = dlatent_size
- resolution_log2 = int(np.log2(resolution))
- assert resolution == 2**resolution_log2 and resolution >= 4
-
- act, gain = {'relu': (torch.relu, np.sqrt(2)),
- 'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity]
- num_layers = resolution_log2 * 2 - 2
- num_styles = num_layers if use_styles else 1
- torgbs = []
- blocks = []
- for res in range(2, resolution_log2 + 1):
- channels = nf(res-1)
- name = '{s}x{s}'.format(s=2**res)
- if res == 2:
- blocks.append((name,
- InputBlock(channels, dlatent_size, const_input_layer, gain, use_wscale,
- use_noise, use_pixel_norm, use_instance_norm, use_styles, act)))
-
- else:
- blocks.append((name,
- GSynthesisBlock(last_channels, channels, blur_filter, dlatent_size, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, act)))
- last_channels = channels
- self.torgb = MyConv2d(channels, num_channels, 1, gain=1, use_wscale=use_wscale)
- self.blocks = nn.ModuleDict(OrderedDict(blocks))
-
- def forward(self, dlatents_in):
- # Input: Disentangled latents (W) [minibatch, num_layers, dlatent_size].
- # lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0), trainable=False), dtype)
- batch_size = dlatents_in.size(0)
- for i, m in enumerate(self.blocks.values()):
- if i == 0:
- x = m(dlatents_in[:, 2*i:2*i+2])
- else:
- x = m(x, dlatents_in[:, 2*i:2*i+2])
- rgb = self.torgb(x)
- return rgb
-
-
-class StyleGAN_G(nn.Sequential):
- def __init__(self, resolution, truncation=1.0):
- self.resolution = resolution
- self.layers = OrderedDict([
- ('g_mapping', G_mapping()),
- #('truncation', Truncation(avg_latent)),
- ('g_synthesis', G_synthesis(resolution=resolution)),
- ])
- super().__init__(self.layers)
-
- def forward(self, x, latent_is_w=False):
- if isinstance(x, list):
- assert len(x) == 18, 'Must provide 1 or 18 latents'
- if not latent_is_w:
- x = [self.layers['g_mapping'].forward(l) for l in x]
- x = torch.stack(x, dim=1)
- else:
- if not latent_is_w:
- x = self.layers['g_mapping'].forward(x)
- x = x.unsqueeze(1).expand(-1, 18, -1)
-
- x = self.layers['g_synthesis'].forward(x)
-
- return x
-
- # From: https://github.com/lernapparat/lernapparat/releases/download/v2019-02-01/
- def load_weights(self, checkpoint):
- self.load_state_dict(torch.load(checkpoint))
-
- def export_from_tf(self, pickle_path):
- module_path = Path(__file__).parent / 'stylegan_tf'
- sys.path.append(str(module_path.resolve()))
-
- import dnnlib, dnnlib.tflib, pickle, torch, collections
- dnnlib.tflib.init_tf()
-
- weights = pickle.load(open(pickle_path,'rb'))
- weights_pt = [collections.OrderedDict([(k, torch.from_numpy(v.value().eval())) for k,v in w.trainables.items()]) for w in weights]
- #torch.save(weights_pt, pytorch_name)
-
- # then on the PyTorch side run
- state_G, state_D, state_Gs = weights_pt #torch.load('./karras2019stylegan-ffhq-1024x1024.pt')
- def key_translate(k):
- k = k.lower().split('/')
- if k[0] == 'g_synthesis':
- if not k[1].startswith('torgb'):
- k.insert(1, 'blocks')
- k = '.'.join(k)
- k = (k.replace('const.const','const').replace('const.bias','bias').replace('const.stylemod','epi1.style_mod.lin')
- .replace('const.noise.weight','epi1.top_epi.noise.weight')
- .replace('conv.noise.weight','epi2.top_epi.noise.weight')
- .replace('conv.stylemod','epi2.style_mod.lin')
- .replace('conv0_up.noise.weight', 'epi1.top_epi.noise.weight')
- .replace('conv0_up.stylemod','epi1.style_mod.lin')
- .replace('conv1.noise.weight', 'epi2.top_epi.noise.weight')
- .replace('conv1.stylemod','epi2.style_mod.lin')
- .replace('torgb_lod0','torgb'))
- else:
- k = '.'.join(k)
- return k
-
- def weight_translate(k, w):
- k = key_translate(k)
- if k.endswith('.weight'):
- if w.dim() == 2:
- w = w.t()
- elif w.dim() == 1:
- pass
- else:
- assert w.dim() == 4
- w = w.permute(3, 2, 0, 1)
- return w
-
- # we delete the useless torgb filters
- param_dict = {key_translate(k) : weight_translate(k, v) for k,v in state_Gs.items() if 'torgb_lod' not in key_translate(k)}
- if 1:
- sd_shapes = {k : v.shape for k,v in self.state_dict().items()}
- param_shapes = {k : v.shape for k,v in param_dict.items() }
-
- for k in list(sd_shapes)+list(param_shapes):
- pds = param_shapes.get(k)
- sds = sd_shapes.get(k)
- if pds is None:
- print ("sd only", k, sds)
- elif sds is None:
- print ("pd only", k, pds)
- elif sds != pds:
- print ("mismatch!", k, pds, sds)
-
- self.load_state_dict(param_dict, strict=False) # needed for the blur kernels
- torch.save(self.state_dict(), Path(pickle_path).with_suffix('.pt'))
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/model.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/model.py
deleted file mode 100644
index 6ca30efb2baa3159f1bc1954fe3b882ae4e48d12..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/model.py
+++ /dev/null
@@ -1,689 +0,0 @@
-import math
-import random
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from .op.fused_act import FusedLeakyReLU, fused_leaky_relu
-from .op.upfirdn2d import upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor,
- down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1,
- down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(
- pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(
- input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size // 2))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(
- in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res // 2]
- self.noises.register_buffer(
- "noise_{}".format(layer_idx), torch.randn(*shape)
- )
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2 // 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(
- 1, 1, 2 ** i, 2 ** i // 2, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- return_features=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation *
- (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- # latent = styles[0].unsqueeze(0)
- # if latent.shape[1] == 1:
- # latent = latent.repeat(1, inject_index, 1)
- # else:
- # latent = latent[:, :inject_index, :]
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(
- 1, self.n_latent - inject_index, 1)
- # latent = styles[0][:, :inject_index, :]
- # latent2 = styles[1][:, inject_index:, :]
- latent = torch.cat([latent, latent2], 1)
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
- elif return_features:
- return image, out
- else:
- return image, None
-
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4 // 2,
- channels[4], activation='fused_lrelu'),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input):
- out = self.convs(input)
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
diff --git a/spaces/DragGan/DragGan/torch_utils/__init__.py b/spaces/DragGan/DragGan/torch_utils/__init__.py
deleted file mode 100644
index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/torch_utils/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/Duskfallcrew/EpicMix_Realism_WebUi/app.py b/spaces/Duskfallcrew/EpicMix_Realism_WebUi/app.py
deleted file mode 100644
index 59f9d6798d3fcb3fa9263d4d372af2d1d72d5386..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/EpicMix_Realism_WebUi/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-
-model_id = 'Duskfallcrew/EpicMix_Realism'
-prefix = ''
-
-scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- pipe_i2i = pipe_i2i.to("cuda")
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False):
-
- generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None
- prompt = f"{prefix} {prompt}" if auto_prefix else prompt
-
- try:
- if img is not None:
- return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
- else:
- return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None
- except Exception as e:
- return None, error_str(e)
-
-def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator):
-
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe_i2i(
- prompt,
- negative_prompt = neg_prompt,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-        Epicmix Realism
-
-        Demo for the Epicmix Realism Stable Diffusion model. Running on free CPU; if there's a queue, duplicate the Space to your own account and, if you have the funds, upgrade to GPU. No prefix tokens. If you like what you see, consider donating here: Ko-Fi Duskfallcrew
-        {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
-        Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space
-title = "DeMaskGAN: Face Restoration Using Swin Transformer"
-example_img_dir = 'img'
-example_img_name = os.listdir(example_img_dir)
-examples=[os.path.join(example_img_dir, image_path) for image_path in example_img_name if image_path.endswith(('.jpg','.jpeg'))]
-gr.Interface(
- inference,
- gr.inputs.Image(type="pil", label="Input", tool="editor"),
- gr.outputs.Image(type="pil", label="Output").style(height=242),
- title=title,
- description=description,
- article=article,
- examples=examples
- ).launch()
diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/text/__init__.py b/spaces/EyanAn/vits-uma-genshin-honkai/text/__init__.py
deleted file mode 100644
index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000
--- a/spaces/EyanAn/vits-uma-genshin-honkai/text/__init__.py
+++ /dev/null
@@ -1,57 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-from text.symbols import symbols
-
-
-# Mappings from symbol to numeric ID and vice versa:
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-_id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-
-def text_to_sequence(text, symbols, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence, clean_text
-
-
-def cleaned_text_to_sequence(cleaned_text):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
- return sequence
-
-
-def sequence_to_text(sequence):
- '''Converts a sequence of IDs back to a string'''
- result = ''
- for symbol_id in sequence:
- s = _id_to_symbol[symbol_id]
- result += s
- return result
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
diff --git a/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/app.py b/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/app.py
deleted file mode 100644
index 602f7c4bb1db3438a00519a61a4a484862c5fa98..0000000000000000000000000000000000000000
--- a/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/app.py
+++ /dev/null
@@ -1,200 +0,0 @@
-# Importing required Libraries
-import streamlit as st
-import pandas as pd
-import numpy as np
-import os, pickle
-from sklearn import preprocessing
-
-# Setting up page configuration and directory path
-st.set_page_config(page_title="Sales Forecasting App", page_icon="🐞", layout="centered")
-DIRPATH = os.path.dirname(os.path.realpath(__file__))
-
-# Setting background image
-import base64
-def add_bg_from_local(image_file):
- with open(image_file, "rb") as image_file:
- encoded_string = base64.b64encode(image_file.read())
- st.markdown(
- f"""
-
- """,
- unsafe_allow_html=True
- )
-add_bg_from_local('background.jpg')
-
-# Setting up logo
-left1, mid, right1 = st.columns(3)
-with mid:
- st.image("logo.jpg", use_column_width=True)
-
-# Setting up Sidebar
-social_acc = ['Data Field Description', 'EDA', 'About App']
-social_acc_nav = st.sidebar.radio('**INFORMATION SECTION**', social_acc)
-
-if social_acc_nav == 'Data Field Description':
-    st.sidebar.markdown("Data Field Description", unsafe_allow_html=True)
- st.sidebar.markdown("**Date:** The date you want to predict sales for")
- st.sidebar.markdown("**Family:** identifies the type of product sold")
- st.sidebar.markdown("**Onpromotion:** gives the total number of items in a product family that are being promoted at a store at a given date")
- st.sidebar.markdown("**Store Number:** identifies the store at which the products are sold")
-    st.sidebar.markdown("**Holiday Locale:** provides information about the locale where the holiday is celebrated")
-
-elif social_acc_nav == 'EDA':
-    st.sidebar.markdown("Exploratory Data Analysis", unsafe_allow_html=True)
- st.sidebar.markdown('''---''')
-    st.sidebar.markdown('''The exploratory data analysis for this project can be found in the Jupyter notebook linked below''')
- st.sidebar.markdown("[Open Notebook](https://github.com/Kyei-frank/Regression-Project-Store-Sales--Time-Series-Forecasting/blob/main/project_workflow.ipynb)")
-
-elif social_acc_nav == 'About App':
-    st.sidebar.markdown("Sales Forecasting App", unsafe_allow_html=True)
- st.sidebar.markdown('''---''')
-    st.sidebar.markdown("This app predicts sales for product families sold at Favorita stores using a regression model.")
- st.sidebar.markdown("")
- st.sidebar.markdown("[ Visit Github Repository for more information](https://github.com/Kyei-frank/Regression-Project-Store-Sales--Time-Series-Forecasting)")
-
-# Loading Machine Learning Objects
-@st.cache()
-def load_saved_objects(file_path = 'ML_items'):
- # Function to load saved objects
-    with open(file_path, 'rb') as file:
- loaded_object = pickle.load(file)
-
- return loaded_object
-
-# Instantiating ML_items
-Loaded_object = load_saved_objects(file_path = 'ML_items')
-pipeline, train_data, stores, holidays_event = Loaded_object['pipeline'], Loaded_object['train_data'], Loaded_object['stores'], Loaded_object['holidays_event']
-
-# Setting Function for extracting Calendar features
-@st.cache()
-def getDateFeatures(df, date):
- df['date'] = pd.to_datetime(df['date'])
- df['month'] = df.date.dt.month
- df['day_of_month'] = df.date.dt.day
- df['day_of_year'] = df.date.dt.dayofyear
- df['week_of_year'] = df.date.dt.isocalendar().week
- df['day_of_week'] = df.date.dt.dayofweek
- df['year'] = df.date.dt.year
- df['is_weekend']= np.where(df['day_of_week'] > 4, 1, 0)
- df['is_month_start']= df.date.dt.is_month_start.astype(int)
- df['is_month_end']= df.date.dt.is_month_end.astype(int)
- df['quarter']= df.date.dt.quarter
- df['is_quarter_start']= df.date.dt.is_quarter_start.astype(int)
- df['is_quarter_end']= df.date.dt.is_quarter_end.astype(int)
- df['is_year_start']= df.date.dt.is_year_start.astype(int)
-
- return df
-
-# Setting up variables for input data
-@st.cache()
-def setup(tmp_df_file):
- "Setup the required elements like files, models, global variables, etc"
- pd.DataFrame(
- dict(
- date=[],
- store_nbr=[],
- family=[],
- onpromotion=[],
- city=[],
- state=[],
- store_type=[],
- cluster=[],
- day_type=[],
- locale=[],
- locale_name=[],
- )
- ).to_csv(tmp_df_file, index=False)
-
-# Setting up a file to save our input data
-tmp_df_file = os.path.join(DIRPATH, "tmp", "data.csv")
-setup(tmp_df_file)
-
-# setting Title for forms
-st.markdown("Sales Prediction", unsafe_allow_html=True)
-st.markdown("Fill in the details below and click the SUBMIT button to make a prediction for a specific date and item", unsafe_allow_html=True)
-
-# Creating columns for input data (forms)
-left_col, mid_col, right_col = st.columns(3)
-
-# Developing forms to collect input data
-with st.form(key="information", clear_on_submit=True):
-
- # Setting up input data for 1st column
- left_col.markdown("**PRODUCT DATA**")
- date = left_col.date_input("Prediction Date:")
- family = left_col.selectbox("Item family:", options= list(train_data["family"].unique()))
- onpromotion = left_col.selectbox("Onpromotion code:", options= set(train_data["onpromotion"].unique()))
- store_nbr = left_col.selectbox("Store Number:", options= set(stores["store_nbr"].unique()))
-
- # Setting up input data for 2nd column
- mid_col.markdown("**STORE DATA**")
- city = mid_col.selectbox("City:", options= set(stores["city"].unique()))
- state = mid_col.selectbox("State:", options= list(stores["state"].unique()))
- cluster = mid_col.selectbox("Store Cluster:", options= list(stores["cluster"].unique()))
- store_type = mid_col.radio("Store Type:", options= set(stores["store_type"].unique()), horizontal = True)
-
- # Setting up input data for 3rd column
- right_col.markdown("**ADDITIONAL DATA**")
- check= right_col.checkbox("Is it a Holiday or weekend?")
- if check:
- right_col.write('Fill the following information on Day Type')
- day_type = right_col.selectbox("Holiday:", options= ('Holiday','Special Day:Transfered/Additional Holiday','No Work/Weekend'))
- locale= right_col.selectbox("Holiday Locale:", options= list(holidays_event["locale"].unique()))
- locale_name= right_col.selectbox("Locale Name:", options= list(holidays_event["locale_name"].unique()))
- else:
- day_type = 'Workday'
- locale = 'National'
- locale_name= 'Ecuador'
-
- submitted = st.form_submit_button(label="Submit")
-
-# Setting up background operations after submitting forms
-if submitted:
- # Saving input data as csv file after submission
- pd.read_csv(tmp_df_file).append(
- dict(
- date = date,
- store_nbr = store_nbr,
- family=family,
- onpromotion= onpromotion,
- city=city,
- state=state,
- store_type=store_type,
- cluster=cluster,
- day_type=day_type,
- locale=locale,
- locale_name=locale_name
- ),
- ignore_index=True,
- ).to_csv(tmp_df_file, index=False)
- st.balloons()
-
- # Converting input data to a dataframe for prediction
- df = pd.read_csv(tmp_df_file)
- df= df.copy()
-
- # Getting date Features
- processed_data= getDateFeatures(df, 'date')
- processed_data= processed_data.drop(columns=['date'])
-
- # Making predictions
- prediction = pipeline.predict(processed_data)
- df['Sales']= prediction
-
- # Displaying prediction results
- st.markdown('''---''')
-    st.markdown("Prediction Results", unsafe_allow_html=True)
- st.success(f"Predicted Sales: {prediction[-1]}")
- st.markdown('''---''')
-
- # Making expander to view all records
- expander = st.expander("See all records")
- with expander:
- df = pd.read_csv(tmp_df_file)
- df['Sales']= prediction
- st.dataframe(df)
diff --git a/spaces/Faridmaruf/RVCV2MODEL/README.md b/spaces/Faridmaruf/RVCV2MODEL/README.md
deleted file mode 100644
index 0f1f1bd01815847d73817285f9cca4b534813f1a..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/RVCV2MODEL/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: RVC V2 Genshin Impact
-emoji: 🎤
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: true
-license: mit
-duplicated_from: ArkanDash/rvc-models-new
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FoxMeo/fire-detector/utils/general.py b/spaces/FoxMeo/fire-detector/utils/general.py
deleted file mode 100644
index decdcc64ecd72927bc6c185683977854e593711d..0000000000000000000000000000000000000000
--- a/spaces/FoxMeo/fire-detector/utils/general.py
+++ /dev/null
@@ -1,892 +0,0 @@
-# YOLOR general utils
-
-import glob
-import logging
-import math
-import os
-import platform
-import random
-import re
-import subprocess
-import time
-from pathlib import Path
-
-import cv2
-import numpy as np
-import pandas as pd
-import torch
-import torchvision
-import yaml
-
-from utils.google_utils import gsutil_getsize
-from utils.metrics import fitness
-from utils.torch_utils import init_torch_seeds
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-pd.options.display.max_columns = 10
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads
-
-
-def set_logging(rank=-1):
- logging.basicConfig(
- format="%(message)s",
- level=logging.INFO if rank in [-1, 0] else logging.WARN)
-
-
-def init_seeds(seed=0):
- # Initialize random number generator (RNG) seeds
- random.seed(seed)
- np.random.seed(seed)
- init_torch_seeds(seed)
-
-
-def get_latest_run(search_dir='.'):
- # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def isdocker():
- # Is environment a Docker container
- return Path('/workspace').exists() # or Path('/.dockerenv').exists()
-
-
-def emojis(str=''):
- # Return platform-dependent emoji-safe version of string
- return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
-
-
-def check_online():
- # Check internet connectivity
- import socket
- try:
-        socket.create_connection(("1.1.1.1", 443), 5)  # check host accessibility
- return True
- except OSError:
- return False
-
-
-def check_git_status():
- # Recommend 'git pull' if code is out of date
- print(colorstr('github: '), end='')
- try:
- assert Path('.git').exists(), 'skipping check (not a git repository)'
- assert not isdocker(), 'skipping check (Docker image)'
- assert check_online(), 'skipping check (offline)'
-
- cmd = 'git fetch && git config --get remote.origin.url'
- url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url
- branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
- n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
- if n > 0:
- s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \
- f"Use 'git pull' to update or 'git clone {url}' to download latest."
- else:
- s = f'up to date with {url} ✅'
- print(emojis(s)) # emoji-safe
- except Exception as e:
- print(e)
-
-
-def check_requirements(requirements='requirements.txt', exclude=()):
- # Check installed dependencies meet requirements (pass *.txt file or list of packages)
- import pkg_resources as pkg
- prefix = colorstr('red', 'bold', 'requirements:')
- if isinstance(requirements, (str, Path)): # requirements.txt file
- file = Path(requirements)
- if not file.exists():
- print(f"{prefix} {file.resolve()} not found, check failed.")
- return
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude]
- else: # list or tuple of packages
- requirements = [x for x in requirements if x not in exclude]
-
- n = 0 # number of packages updates
- for r in requirements:
- try:
- pkg.require(r)
- except Exception as e: # DistributionNotFound or VersionConflict if requirements not met
- n += 1
- print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...")
- print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode())
-
- if n: # if packages updated
- source = file.resolve() if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- print(emojis(s)) # emoji-safe
-
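-# Usage sketch (added for illustration, not in the original file): check_requirements()
-# parses requirements.txt and pip-installs anything that is missing, while the list form,
-# e.g. check_requirements(['numpy', 'torch>=1.7.0'], exclude=('opencv-python',)),
-# checks only the named packages.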
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
- return new_size
-
-
-def check_imshow():
- # Check if environment supports image displays
- try:
- assert not isdocker(), 'cv2.imshow() is disabled in Docker environments'
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}')
- return False
-
-
-def check_file(file):
- # Search for file if not found
- if Path(file).is_file() or file == '':
- return file
- else:
- files = glob.glob('./**/' + file, recursive=True) # find file
- assert len(files), f'File Not Found: {file}' # assert file was found
- assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
- return files[0] # return file
-
-
-def check_dataset(dict):
- # Download dataset if not found locally
- val, s = dict.get('val'), dict.get('download')
- if val and len(val):
- val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
- if not all(x.exists() for x in val):
- print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()])
- if s and len(s): # download script
- print('Downloading %s ...' % s)
- if s.startswith('http') and s.endswith('.zip'): # URL
- f = Path(s).name # filename
- torch.hub.download_url_to_file(s, f)
- r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip
- else: # bash script
- r = os.system(s)
- print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value
- else:
- raise Exception('Dataset not found.')
-
-
-def make_divisible(x, divisor):
- # Returns x evenly divisible by divisor
- return math.ceil(x / divisor) * divisor
-
-
-def clean_str(s):
- # Cleans a string by replacing special characters with underscore _
- return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
-def one_cycle(y1=0.0, y2=1.0, steps=100):
- # lambda function for sinusoidal ramp from y1 to y2
- return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
-
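-# Worked example (added for clarity): one_cycle(0.1, 1.0, steps=100) returns f with
-# f(0) == 0.1, f(50) == 0.55 and f(100) == 1.0, a half-cosine ramp from y1 to y2.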
-
-def colorstr(*input):
- # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
- *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
- colors = {'black': '\033[30m', # basic colors
- 'red': '\033[31m',
- 'green': '\033[32m',
- 'yellow': '\033[33m',
- 'blue': '\033[34m',
- 'magenta': '\033[35m',
- 'cyan': '\033[36m',
- 'white': '\033[37m',
- 'bright_black': '\033[90m', # bright colors
- 'bright_red': '\033[91m',
- 'bright_green': '\033[92m',
- 'bright_yellow': '\033[93m',
- 'bright_blue': '\033[94m',
- 'bright_magenta': '\033[95m',
- 'bright_cyan': '\033[96m',
- 'bright_white': '\033[97m',
- 'end': '\033[0m', # misc
- 'bold': '\033[1m',
- 'underline': '\033[4m'}
- return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
-
-
-def labels_to_class_weights(labels, nc=80):
- # Get class weights (inverse frequency) from training labels
- if labels[0] is None: # no labels loaded
- return torch.Tensor()
-
- labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
- classes = labels[:, 0].astype(np.int32) # labels = [class xywh]
- weights = np.bincount(classes, minlength=nc) # occurrences per class
-
- # Prepend gridpoint count (for uCE training)
- # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
- # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
-
- weights[weights == 0] = 1 # replace empty bins with 1
- weights = 1 / weights # number of targets per class
- weights /= weights.sum() # normalize
- return torch.from_numpy(weights)
-
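-# Worked example (added for clarity): for labels covering classes [0, 0, 1] with nc=2 the
-# class counts are [2, 1], the inverse frequencies are [0.5, 1.0], and the normalized
-# weights are approximately [0.33, 0.67], so the rarer class 1 gets the larger weight.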
-
-def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
- # Produces image weights based on class_weights and image contents
- class_counts = np.array([np.bincount(x[:, 0].astype(np.int32), minlength=nc) for x in labels])
- image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1)
- # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample
- return image_weights
-
-
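
A minimal sketch of the inverse-frequency weighting above, using made-up toy labels in the `[class, x, y, w, h]` convention (the arrays are illustrative, not from any dataset):

```python
import numpy as np

# class 0 appears three times across the images, class 1 once
labels = [
    np.array([[0, 0.5, 0.5, 0.2, 0.2], [0, 0.3, 0.3, 0.1, 0.1]]),
    np.array([[1, 0.6, 0.6, 0.4, 0.4]]),
    np.array([[0, 0.7, 0.2, 0.2, 0.3]]),
]
w = labels_to_class_weights(labels, nc=2)
print(w)  # ~tensor([0.2500, 0.7500]): the rarer class gets the larger weight
```
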
-def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
- # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
- # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
- # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
- x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
- return x
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
-    x, y = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- s = np.concatenate((s, s[0:1, :]), axis=0)
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
- segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
- return segments
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
-def clip_coords(boxes, img_shape):
-    # Clip xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
-
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
-
-
-
-
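For illustration, a hedged usage sketch of `bbox_iou` above: one predicted box against two targets, both given in xywh format, first with plain IoU and then with the CIoU penalty (the coordinates are invented):

```python
import torch

pred = torch.tensor([50., 50., 20., 20.])              # cx, cy, w, h
targets = torch.tensor([[50., 50., 20., 20.],           # identical box
                        [60., 60., 20., 20.]])          # shifted box

iou = bbox_iou(pred, targets, x1y1x2y2=False)            # ~tensor([1.0000, 0.1429])
ciou = bbox_iou(pred, targets, x1y1x2y2=False, CIoU=True)
print(iou, ciou)  # CIoU <= IoU: the identical box stays ~1, the shifted box is penalized further
```
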
-def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9):
-    # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- # change iou into pow(iou+eps)
- # iou = inter / union
- iou = torch.pow(inter/union + eps, alpha)
- # beta = 2 * alpha
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal
- rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2)
- rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2)
- rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha_ciou = v / ((1 + eps) - inter / union + v)
- # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU
- return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- # c_area = cw * ch + eps # convex area
- # return iou - (c_area - union) / c_area # GIoU
- c_area = torch.max(cw * ch + eps, union) # convex area
- return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU
- else:
- return iou # torch.log(iou+eps) or iou
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def wh_iou(wh1, wh2):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def box_giou(box1, box2):
- """
- Return generalized intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- areai = whi[:, :, 0] * whi[:, :, 1]
-
- return iou - (areai - union) / areai
-
-
-def box_ciou(box1, box2, eps: float = 1e-7):
- """
- Return complete intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- w_pred = box1[:, None, 2] - box1[:, None, 0]
- h_pred = box1[:, None, 3] - box1[:, None, 1]
-
- w_gt = box2[:, 2] - box2[:, 0]
- h_gt = box2[:, 3] - box2[:, 1]
-
- v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2)
- with torch.no_grad():
- alpha = v / (1 - iou + v + eps)
- return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v
-
-
-def box_diou(box1, box2, eps: float = 1e-7):
- """
- Return distance intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- # The distance IoU is the IoU penalized by a normalized
- # distance between boxes' centers squared.
- return iou - (centers_distance_squared / diagonal_distance_squared)
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=()):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- if nc == 1:
-            x[:, 5:] = x[:, 4:5]  # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
-                                  # so there is no need to multiply.
- else:
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
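An illustrative sketch of how the NMS output above is typically consumed. The tensor shape `(1, 100, 85)` mimics a raw YOLO head output for 80 classes (`5 + nc` values per box); the random data and variable names are assumptions, not taken from this repo's detect script:

```python
import torch

pred = torch.rand(1, 100, 85)   # stand-in for model(img)[0]: (batch, boxes, 5 + nc)
pred[..., :4] *= 640            # fake xywh coordinates in pixels

out = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)
for det in out:                              # one (n, 6) tensor per image
    for *xyxy, conf, cls in det:             # xyxy box, confidence, class index
        print([round(float(v), 1) for v in xyxy], float(conf), int(cls))
```
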
-def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=(), kpt_label=False, nc=None, nkpt=None):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
- if nc is None:
- nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- if not kpt_label:
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
- else:
- kpts = x[:, 6:]
- conf, j = x[:, 5:6].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres]
-
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
- x = torch.load(f, map_location=torch.device('cpu'))
- if x.get('ema'):
- x['model'] = x['ema'] # replace model with ema
- for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys
- x[k] = None
- x['epoch'] = -1
- x['model'].half() # to FP16
- for p in x['model'].parameters():
- p.requires_grad = False
- torch.save(x, s or f)
- mb = os.path.getsize(s or f) / 1E6 # filesize
- print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB")
-
-
-def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''):
- # Print mutation results to evolve.txt (for use with train.py --evolve)
- a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys
- b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c))
-
- if bucket:
- url = 'gs://%s/evolve.txt' % bucket
- if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0):
- os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local
-
- with open('evolve.txt', 'a') as f: # append result
- f.write(c + b + '\n')
- x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows
- x = x[np.argsort(-fitness(x))] # sort
- np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness
-
- # Save yaml
- for i, k in enumerate(hyp.keys()):
- hyp[k] = float(x[0, i + 7])
- with open(yaml_file, 'w') as f:
- results = tuple(x[0, :7])
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n')
- yaml.dump(hyp, f, sort_keys=False)
-
- if bucket:
- os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload
-
-
-def apply_classifier(x, model, img, im0):
-    # Applies a second-stage classifier to YOLO outputs
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
- for i, d in enumerate(x): # per image
- if d is not None and len(d):
- d = d.clone()
-
- # Reshape and pad cutouts
- b = xyxy2xywh(d[:, :4]) # boxes
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
- d[:, :4] = xywh2xyxy(b).long()
-
- # Rescale boxes from img_size to im0 size
- scale_coords(img.shape[2:], d[:, :4], im0[i].shape)
-
- # Classes
- pred_cls1 = d[:, 5].long()
- ims = []
- for j, a in enumerate(d): # per item
- cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
- im = cv2.resize(cutout, (224, 224)) # BGR
- # cv2.imwrite('test%i.jpg' % j, cutout)
-
-            im = im[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, HWC to CHW (3x224x224)
- im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
- im /= 255.0 # 0 - 255 to 0.0 - 1.0
- ims.append(im)
-
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
- x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
-
- return x
-
-
-def increment_path(path, exist_ok=True, sep=''):
-    # Increment path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
- path = Path(path) # os-agnostic
- if (path.exists() and exist_ok) or (not path.exists()):
- return str(path)
- else:
- dirs = glob.glob(f"{path}{sep}*") # similar paths
- matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
- i = [int(m.groups()[0]) for m in matches if m] # indices
- n = max(i) + 1 if i else 2 # increment number
- return f"{path}{sep}{n}" # update path
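
A minimal sketch of how `increment_path` is typically used to pick a fresh run directory (the `runs/exp` path is just an example):

```python
from pathlib import Path

save_dir = increment_path('runs/exp', exist_ok=False)
# -> 'runs/exp' if it does not exist yet, otherwise 'runs/exp2', 'runs/exp3', ...
Path(save_dir).mkdir(parents=True, exist_ok=True)
```
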
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Frantz103/CaptionQuest/README.md b/spaces/Frantz103/CaptionQuest/README.md
deleted file mode 100644
index 096b0f7a56a2ea04bd7a7372f59bfa87fb94d028..0000000000000000000000000000000000000000
--- a/spaces/Frantz103/CaptionQuest/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CaptionQuest
-emoji: ⚡
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/preset/__init__.py b/spaces/GaenKoki/voicevox/voicevox_engine/preset/__init__.py
deleted file mode 100644
index 8c485e2fbfbcdd660d869ccc36483d6ace6272ec..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/voicevox_engine/preset/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from .Preset import Preset
-from .PresetError import PresetError
-from .PresetManager import PresetManager
-
-__all__ = [
- "Preset",
- "PresetManager",
- "PresetError",
-]
diff --git a/spaces/Galax/schafter_x_billy/app.py b/spaces/Galax/schafter_x_billy/app.py
deleted file mode 100644
index e2ae149fc951bc3e1620139a5bc6864670e4795e..0000000000000000000000000000000000000000
--- a/spaces/Galax/schafter_x_billy/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-from huggingface_hub import login
-import os
-
-api_key = os.getenv("api_key_read")
-model = os.getenv("model_repo")
-login(api_key)
-pipe = pipeline(
- "audio-classification",
- model=model,
- chunk_length_s = 30,
- stride_length_s = 5,
- batch_size = 1,
- api_key = api_key,
-)
-
-examples = []
-for file in os.listdir("examples"):
-    examples.append(f'examples/{file}')
-
-def classify_audio(filepath):
- preds = pipe(filepath)
- outputs = {}
- for p in preds:
- outputs[p["label"]] = p["score"]
- return outputs
-
-demo = gr.Interface(
- fn=classify_audio, inputs=gr.Audio(type="filepath"),examples = examples, outputs=gr.outputs.Label()
-)
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/GeorgeOrville/bingo/src/components/toaster.tsx b/spaces/GeorgeOrville/bingo/src/components/toaster.tsx
deleted file mode 100644
index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/components/toaster.tsx
+++ /dev/null
@@ -1,3 +0,0 @@
-'use client'
-
-export { Toaster } from 'react-hot-toast'
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/README.md
deleted file mode 100644
index 9960dcf9c16038db3d8379ab910d2cfbe85d22de..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/README.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Probabilistic Anchor Assignment with IoU Prediction for Object Detection
-
-[ALGORITHM]
-
-```latex
-@inproceedings{paa-eccv2020,
- title={Probabilistic Anchor Assignment with IoU Prediction for Object Detection},
- author={Kim, Kang and Lee, Hee Seok},
- booktitle = {ECCV},
- year={2020}
-}
-```
-
-## Results and Models
-
-We provide config files to reproduce the object detection results in the
-ECCV 2020 paper for Probabilistic Anchor Assignment with IoU
-Prediction for Object Detection.
-
-| Backbone | Lr schd | Mem (GB) | Score voting | box AP | Config | Download |
-|:-----------:|:-------:|:--------:|:------------:|:------:|:------:|:--------:|
-| R-50-FPN | 12e | 3.7 | True | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1x_coco/paa_r50_fpn_1x_coco_20200821-936edec3.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1x_coco/paa_r50_fpn_1x_coco_20200821-936edec3.log.json) |
-| R-50-FPN | 12e | 3.7 | False | 40.2 | - | - |
-| R-50-FPN | 18e | 3.7 | True | 41.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_1.5x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1.5x_coco/paa_r50_fpn_1.5x_coco_20200823-805d6078.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1.5x_coco/paa_r50_fpn_1.5x_coco_20200823-805d6078.log.json) |
-| R-50-FPN | 18e | 3.7 | False | 41.2 | - | - |
-| R-50-FPN | 24e | 3.7 | True | 41.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_2x_coco/paa_r50_fpn_2x_coco_20200821-c98bfc4e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_2x_coco/paa_r50_fpn_2x_coco_20200821-c98bfc4e.log.json) |
-| R-50-FPN | 36e | 3.7 | True | 43.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r50_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_mstrain_3x_coco/paa_r50_fpn_mstrain_3x_coco_20210121_145722-06a6880b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_mstrain_3x_coco/paa_r50_fpn_mstrain_3x_coco_20210121_145722.log.json) |
-| R-101-FPN | 12e | 6.2 | True | 42.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_1x_coco/paa_r101_fpn_1x_coco_20200821-0a1825a4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_1x_coco/paa_r101_fpn_1x_coco_20200821-0a1825a4.log.json) |
-| R-101-FPN | 12e | 6.2 | False | 42.4 | - | - |
-| R-101-FPN | 24e | 6.2 | True | 43.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r101_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_2x_coco/paa_r101_fpn_2x_coco_20200821-6829f96b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_2x_coco/paa_r101_fpn_2x_coco_20200821-6829f96b.log.json) |
-| R-101-FPN | 36e | 6.2 | True | 45.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/paa/paa_r101_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_mstrain_3x_coco/paa_r101_fpn_mstrain_3x_coco_20210122_084202-83250d22.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_mstrain_3x_coco/paa_r101_fpn_mstrain_3x_coco_20210122_084202.log.json) |
-
-**Note**:
-
-1. We find that the performance is unstable with the 1x setting and may fluctuate by about 0.2 mAP. We report the best results.
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/README.md
deleted file mode 100644
index b89ac6d7b2ed2da1788b7400a121f6509774baf8..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Adaptive Pyramid Context Network for Semantic Segmentation
-
-## Introduction
-
-
-
-```latex
-@InProceedings{He_2019_CVPR,
-author = {He, Junjun and Deng, Zhongying and Zhou, Lei and Wang, Yali and Qiao, Yu},
-title = {Adaptive Pyramid Context Network for Semantic Segmentation},
-booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
-month = {June},
-year = {2019}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | --------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| APCNet | R-50-D8 | 512x1024 | 40000 | 7.7 | 3.57 | 78.02 | 79.26 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_40k_cityscapes/apcnet_r50-d8_512x1024_40k_cityscapes_20201214_115717-5e88fa33.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_40k_cityscapes/apcnet_r50-d8_512x1024_40k_cityscapes-20201214_115717.log.json) |
-| APCNet | R-101-D8 | 512x1024 | 40000 | 11.2 | 2.15 | 79.08 | 80.34 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_40k_cityscapes/apcnet_r101-d8_512x1024_40k_cityscapes_20201214_115716-abc9d111.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_40k_cityscapes/apcnet_r101-d8_512x1024_40k_cityscapes-20201214_115716.log.json) |
-| APCNet | R-50-D8 | 769x769 | 40000 | 8.7 | 1.52 | 77.89 | 79.75 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_40k_cityscapes/apcnet_r50-d8_769x769_40k_cityscapes_20201214_115717-2a2628d7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_40k_cityscapes/apcnet_r50-d8_769x769_40k_cityscapes-20201214_115717.log.json) |
-| APCNet | R-101-D8 | 769x769 | 40000 | 12.7 | 1.03 | 77.96 | 79.24 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_40k_cityscapes/apcnet_r101-d8_769x769_40k_cityscapes_20201214_115718-b650de90.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_40k_cityscapes/apcnet_r101-d8_769x769_40k_cityscapes-20201214_115718.log.json) |
-| APCNet | R-50-D8 | 512x1024 | 80000 | - | - | 78.96 | 79.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes/apcnet_r50-d8_512x1024_80k_cityscapes_20201214_115716-987f51e3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes/apcnet_r50-d8_512x1024_80k_cityscapes-20201214_115716.log.json) |
-| APCNet | R-101-D8 | 512x1024 | 80000 | - | - | 79.64 | 80.61 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_80k_cityscapes/apcnet_r101-d8_512x1024_80k_cityscapes_20201214_115705-b1ff208a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x1024_80k_cityscapes/apcnet_r101-d8_512x1024_80k_cityscapes-20201214_115705.log.json) |
-| APCNet | R-50-D8 | 769x769 | 80000 | - | - | 78.79 | 80.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_80k_cityscapes/apcnet_r50-d8_769x769_80k_cityscapes_20201214_115718-7ea9fa12.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_769x769_80k_cityscapes/apcnet_r50-d8_769x769_80k_cityscapes-20201214_115718.log.json) |
-| APCNet | R-101-D8 | 769x769 | 80000 | - | - | 78.45 | 79.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_80k_cityscapes/apcnet_r101-d8_769x769_80k_cityscapes_20201214_115716-a7fbc2ab.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_769x769_80k_cityscapes/apcnet_r101-d8_769x769_80k_cityscapes-20201214_115716.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| APCNet | R-50-D8 | 512x512 | 80000 | 10.1 | 19.61 | 42.20 | 43.30 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_80k_ade20k/apcnet_r50-d8_512x512_80k_ade20k_20201214_115705-a8626293.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_80k_ade20k/apcnet_r50-d8_512x512_80k_ade20k-20201214_115705.log.json) |
-| APCNet | R-101-D8 | 512x512 | 80000 | 13.6 | 13.10 | 45.54 | 46.65 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_80k_ade20k/apcnet_r101-d8_512x512_80k_ade20k_20201214_115704-c656c3fb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_80k_ade20k/apcnet_r101-d8_512x512_80k_ade20k-20201214_115704.log.json) |
-| APCNet | R-50-D8 | 512x512 | 160000 | - | - | 43.40 | 43.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_160k_ade20k/apcnet_r50-d8_512x512_160k_ade20k_20201214_115706-25fb92c2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r50-d8_512x512_160k_ade20k/apcnet_r50-d8_512x512_160k_ade20k-20201214_115706.log.json) |
-| APCNet | R-101-D8 | 512x512 | 160000 | - | - | 45.41 | 46.63 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/apcnet/apcnet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_160k_ade20k/apcnet_r101-d8_512x512_160k_ade20k_20201214_115705-73f9a8d7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/apcnet/apcnet_r101-d8_512x512_160k_ade20k/apcnet_r101-d8_512x512_160k_ade20k-20201214_115705.log.json) |
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 1ad94d8988bb822c1571816255464126d9d5b95d..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/ccnet_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/pascal_context.py
deleted file mode 100644
index dc49ab7ad8fd359c458ec4b6190ed61851426031..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/convert_datasets/pascal_context.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import argparse
-import os.path as osp
-from functools import partial
-
-import mmcv
-import numpy as np
-from detail import Detail
-from PIL import Image
-
-_mapping = np.sort(
- np.array([
- 0, 2, 259, 260, 415, 324, 9, 258, 144, 18, 19, 22, 23, 397, 25, 284,
- 158, 159, 416, 33, 162, 420, 454, 295, 296, 427, 44, 45, 46, 308, 59,
- 440, 445, 31, 232, 65, 354, 424, 68, 326, 72, 458, 34, 207, 80, 355,
- 85, 347, 220, 349, 360, 98, 187, 104, 105, 366, 189, 368, 113, 115
- ]))
-_key = np.array(range(len(_mapping))).astype('uint8')
-
-
-def generate_labels(img_id, detail, out_dir):
-
- def _class_to_index(mask, _mapping, _key):
-        # assert that every mask value appears in _mapping
- values = np.unique(mask)
- for i in range(len(values)):
- assert (values[i] in _mapping)
- index = np.digitize(mask.ravel(), _mapping, right=True)
- return _key[index].reshape(mask.shape)
-
- mask = Image.fromarray(
- _class_to_index(detail.getMask(img_id), _mapping=_mapping, _key=_key))
- filename = img_id['file_name']
- mask.save(osp.join(out_dir, filename.replace('jpg', 'png')))
- return osp.splitext(osp.basename(filename))[0]
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Convert PASCAL VOC annotations to mmsegmentation format')
- parser.add_argument('devkit_path', help='pascal voc devkit path')
-    parser.add_argument('json_path', help='annotation json filepath')
- parser.add_argument('-o', '--out_dir', help='output path')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- devkit_path = args.devkit_path
- if args.out_dir is None:
- out_dir = osp.join(devkit_path, 'VOC2010', 'SegmentationClassContext')
- else:
- out_dir = args.out_dir
- json_path = args.json_path
- mmcv.mkdir_or_exist(out_dir)
- img_dir = osp.join(devkit_path, 'VOC2010', 'JPEGImages')
-
- train_detail = Detail(json_path, img_dir, 'train')
- train_ids = train_detail.getImgs()
-
- val_detail = Detail(json_path, img_dir, 'val')
- val_ids = val_detail.getImgs()
-
- mmcv.mkdir_or_exist(
- osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext'))
-
- train_list = mmcv.track_progress(
- partial(generate_labels, detail=train_detail, out_dir=out_dir),
- train_ids)
- with open(
- osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext',
- 'train.txt'), 'w') as f:
- f.writelines(line + '\n' for line in sorted(train_list))
-
- val_list = mmcv.track_progress(
- partial(generate_labels, detail=val_detail, out_dir=out_dir), val_ids)
- with open(
- osp.join(devkit_path, 'VOC2010/ImageSets/SegmentationContext',
- 'val.txt'), 'w') as f:
- f.writelines(line + '\n' for line in sorted(val_list))
-
- print('Done!')
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/utils/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/utils/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/Hallucinate/demo/taming/data/image_transforms.py b/spaces/Hallucinate/demo/taming/data/image_transforms.py
deleted file mode 100644
index 657ac332174e0ac72f68315271ffbd757b771a0f..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/taming/data/image_transforms.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import random
-import warnings
-from typing import Union
-
-import torch
-from torch import Tensor
-from torchvision.transforms import RandomCrop, functional as F, CenterCrop, RandomHorizontalFlip, PILToTensor
-from torchvision.transforms.functional import _get_image_size as get_image_size
-
-from taming.data.helper_types import BoundingBox, Image
-
-pil_to_tensor = PILToTensor()
-
-
-def convert_pil_to_tensor(image: Image) -> Tensor:
- with warnings.catch_warnings():
- # to filter PyTorch UserWarning as described here: https://github.com/pytorch/vision/issues/2194
- warnings.simplefilter("ignore")
- return pil_to_tensor(image)
-
-
-class RandomCrop1dReturnCoordinates(RandomCrop):
- def forward(self, img: Image) -> (BoundingBox, Image):
- """
-        In addition to cropping, returns the relative coordinates of the crop bounding box.
- Args:
- img (PIL Image or Tensor): Image to be cropped.
-
- Returns:
- Bounding box: x0, y0, w, h
- PIL Image or Tensor: Cropped image.
-
- Based on:
- torchvision.transforms.RandomCrop, torchvision 1.7.0
- """
- if self.padding is not None:
- img = F.pad(img, self.padding, self.fill, self.padding_mode)
-
- width, height = get_image_size(img)
- # pad the width if needed
- if self.pad_if_needed and width < self.size[1]:
- padding = [self.size[1] - width, 0]
- img = F.pad(img, padding, self.fill, self.padding_mode)
- # pad the height if needed
- if self.pad_if_needed and height < self.size[0]:
- padding = [0, self.size[0] - height]
- img = F.pad(img, padding, self.fill, self.padding_mode)
-
- i, j, h, w = self.get_params(img, self.size)
- bbox = (j / width, i / height, w / width, h / height) # x0, y0, w, h
- return bbox, F.crop(img, i, j, h, w)
-
-
-class Random2dCropReturnCoordinates(torch.nn.Module):
- """
-    In addition to cropping, returns the relative coordinates of the crop bounding box.
- Args:
- img (PIL Image or Tensor): Image to be cropped.
-
- Returns:
- Bounding box: x0, y0, w, h
- PIL Image or Tensor: Cropped image.
-
- Based on:
- torchvision.transforms.RandomCrop, torchvision 1.7.0
- """
-
- def __init__(self, min_size: int):
- super().__init__()
- self.min_size = min_size
-
- def forward(self, img: Image) -> (BoundingBox, Image):
- width, height = get_image_size(img)
- max_size = min(width, height)
- if max_size <= self.min_size:
- size = max_size
- else:
- size = random.randint(self.min_size, max_size)
- top = random.randint(0, height - size)
- left = random.randint(0, width - size)
- bbox = left / width, top / height, size / width, size / height
- return bbox, F.crop(img, top, left, size, size)
-
-
-class CenterCropReturnCoordinates(CenterCrop):
- @staticmethod
- def get_bbox_of_center_crop(width: int, height: int) -> BoundingBox:
- if width > height:
- w = height / width
- h = 1.0
- x0 = 0.5 - w / 2
- y0 = 0.
- else:
- w = 1.0
- h = width / height
- x0 = 0.
- y0 = 0.5 - h / 2
- return x0, y0, w, h
-
- def forward(self, img: Union[Image, Tensor]) -> (BoundingBox, Union[Image, Tensor]):
- """
-        In addition to cropping, returns the relative coordinates of the crop bounding box.
- Args:
- img (PIL Image or Tensor): Image to be cropped.
-
- Returns:
- Bounding box: x0, y0, w, h
- PIL Image or Tensor: Cropped image.
- Based on:
- torchvision.transforms.RandomHorizontalFlip (version 1.7.0)
- """
- width, height = get_image_size(img)
- return self.get_bbox_of_center_crop(width, height), F.center_crop(img, self.size)
-
-
-class RandomHorizontalFlipReturn(RandomHorizontalFlip):
- def forward(self, img: Image) -> (bool, Image):
- """
-        In addition to flipping, returns a boolean indicating whether the image was flipped.
- Args:
- img (PIL Image or Tensor): Image to be flipped.
-
- Returns:
- flipped: whether the image was flipped or not
- PIL Image or Tensor: Randomly flipped image.
-
- Based on:
- torchvision.transforms.RandomHorizontalFlip (version 1.7.0)
- """
- if torch.rand(1) < self.p:
- return True, F.hflip(img)
- return False, img
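
A small usage sketch of the transforms above on a synthetic PIL image (the sizes are arbitrary): the center crop returns both the crop and its bounding box in relative coordinates.

```python
from PIL import Image as PILImage

img = PILImage.new('RGB', (512, 256))      # (width, height)
crop = CenterCropReturnCoordinates(256)
bbox, cropped = crop(img)
print(bbox)           # (0.25, 0.0, 0.5, 1.0): x0, y0, w, h relative to the input
print(cropped.size)   # (256, 256)
```
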
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/utils.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/utils.py
deleted file mode 100644
index 71e9b2c99e053e2d4239074a67d64b834898c348..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/hifi/utils.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-
-matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + "????????")
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
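
A hypothetical round trip through the checkpoint helpers above (directory and prefix names are made up): save two numbered generator checkpoints, then resume from the most recent one.

```python
import os
import torch

os.makedirs('ckpt', exist_ok=True)
save_checkpoint('ckpt/g_00000100', {'generator': {}})
save_checkpoint('ckpt/g_00000200', {'generator': {}})

latest = scan_checkpoint('ckpt', 'g_')                  # matches 'g_' + 8 characters
state = load_checkpoint(latest, torch.device('cpu'))    # -> loads 'ckpt/g_00000200'
```
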
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/data/resample.sh b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/data/resample.sh
deleted file mode 100644
index 8489b0a0056d46a93d24db8dba173ad7a4b8a44a..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/data/resample.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-input_wav_path='/home/harveen/en/iitm_data/english/wav/'
-output_wav_path='/home/harveen/en/iitm_data/english/wav_22k/'
-output_sample_rate=22050
-
-#######################
-
-dir=$PWD
-parentdir="$(dirname "$dir")"
-parentdir="$(dirname "$parentdir")"
-
-mkdir -p $output_wav_path
-python $parentdir/utils/data/resample.py -i $input_wav_path -o $output_wav_path -s $output_sample_rate
-
-python $parentdir/utils/data/duration.py $output_wav_path
diff --git a/spaces/Hina4867/bingo/src/lib/bots/bing/tts.ts b/spaces/Hina4867/bingo/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
- if (!synth || text?.trim()?.length < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
- private async doSpeek() {
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
- await Promise.all([sleep(1000), this.doSpeek()])
- }
- this.speaking = false
- }
-}
diff --git a/spaces/Hoolbo/bing/README.md b/spaces/Hoolbo/bing/README.md
deleted file mode 100644
index aff5a96b89652a3d743dbbc827ae76a1daffd206..0000000000000000000000000000000000000000
--- a/spaces/Hoolbo/bing/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Bing稳定版
-emoji: 🦀
-colorFrom: gray
-colorTo: indigo
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-
-Stable version, not necessarily the latest one
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_iitb.sh b/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_iitb.sh
deleted file mode 100644
index a884e20839e2a41a57405cb6af362e37bd16ab6f..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_iitb.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-if [ -z "$WORKDIR_ROOT" ] ;
-then
-        echo "Please specify your working directory root in the environment variable WORKDIR_ROOT. Exiting..."
-        exit 1
-fi
-
-IITB=$WORKDIR_ROOT/IITB
-mkdir -p $IITB
-pushd $IITB
-
-wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/parallel.tgz
-tar -xvzf parallel.tgz
-
-wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/dev_test.tgz
-tar -xvzf dev_test.tgz
-
-DESTDIR=${WORKDIR_ROOT}/ML50/raw/
-
-cp parallel/IITB.en-hi.en $DESTDIR/train.hi_IN-en_XX.en_XX
-cp parallel/IITB.en-hi.hi $DESTDIR/train.hi_IN-en_XX.hi_IN
-
-cp dev_test/dev.en $DESTDIR/valid.hi_IN-en_XX.en_XX
-cp dev_test/dev.hi $DESTDIR/valid.hi_IN-en_XX.hi_IN
-
-cp dev_test/test.en $DESTDIR/test.hi_IN-en_XX.en_XX
-cp dev_test/test.hi $DESTDIR/test.hi_IN-en_XX.hi_IN
-popd
\ No newline at end of file
diff --git a/spaces/ICML2022/distilgpt2-finetuned-wikitext103/app.py b/spaces/ICML2022/distilgpt2-finetuned-wikitext103/app.py
deleted file mode 100644
index f5eebfde48af70b4a56cd16329c34f92a030b62d..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/distilgpt2-finetuned-wikitext103/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/neulab/distilgpt2-finetuned-wikitext103").launch()
\ No newline at end of file
diff --git a/spaces/ICML2022/resefa/utils/__init__.py b/spaces/ICML2022/resefa/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ICML2023/ICML2023_papers/style.css b/spaces/ICML2023/ICML2023_papers/style.css
deleted file mode 100644
index e2b871457d13980ddfbbc35bf5da02a75ece292e..0000000000000000000000000000000000000000
--- a/spaces/ICML2023/ICML2023_papers/style.css
+++ /dev/null
@@ -1,22 +0,0 @@
-h1 {
- text-align: center;
-}
-table a {
- background-color: transparent;
- color: #58a6ff;
- text-decoration: none;
-}
-a:active,
-a:hover {
- outline-width: 0;
-}
-a:hover {
- text-decoration: underline;
-}
-table, th, td {
- border: 1px solid;
-}
-img#visitor-badge {
- display: block;
- margin: auto;
-}
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/arch_util.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/arch_util.py
deleted file mode 100644
index 4f2af24b73c37d3da0664d33a313651be6e33e8f..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/arch_util.py
+++ /dev/null
@@ -1,352 +0,0 @@
-import collections.abc
-import math
-import torch
-import torchvision
-import warnings
-from distutils.version import LooseVersion
-from itertools import repeat
-from torch import nn as nn
-from torch.nn import functional as F
-from torch.nn import init as init
-from torch.nn.modules.batchnorm import _BatchNorm
-
-from basicsr.ops.dcn import ModulatedDeformConvPack, modulated_deform_conv
-from basicsr.utils import get_root_logger
-
-
-@torch.no_grad()
-def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs):
- """Initialize network weights.
-
- Args:
- module_list (list[nn.Module] | nn.Module): Modules to be initialized.
- scale (float): Scale initialized weights, especially for residual
- blocks. Default: 1.
- bias_fill (float): The value to fill bias. Default: 0
- kwargs (dict): Other arguments for initialization function.
- """
- if not isinstance(module_list, list):
- module_list = [module_list]
- for module in module_list:
- for m in module.modules():
- if isinstance(m, nn.Conv2d):
- init.kaiming_normal_(m.weight, **kwargs)
- m.weight.data *= scale
- if m.bias is not None:
- m.bias.data.fill_(bias_fill)
- elif isinstance(m, nn.Linear):
- init.kaiming_normal_(m.weight, **kwargs)
- m.weight.data *= scale
- if m.bias is not None:
- m.bias.data.fill_(bias_fill)
- elif isinstance(m, _BatchNorm):
- init.constant_(m.weight, 1)
- if m.bias is not None:
- m.bias.data.fill_(bias_fill)
-
-
-def make_layer(basic_block, num_basic_block, **kwarg):
- """Make layers by stacking the same blocks.
-
- Args:
- basic_block (nn.module): nn.module class for basic block.
- num_basic_block (int): number of blocks.
-
- Returns:
- nn.Sequential: Stacked blocks in nn.Sequential.
- """
- layers = []
- for _ in range(num_basic_block):
- layers.append(basic_block(**kwarg))
- return nn.Sequential(*layers)
-
-class PixelShufflePack(nn.Module):
- """Pixel Shuffle upsample layer.
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- scale_factor (int): Upsample ratio.
- upsample_kernel (int): Kernel size of Conv layer to expand channels.
- Returns:
- Upsampled feature map.
- """
-
- def __init__(self, in_channels, out_channels, scale_factor,
- upsample_kernel):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.scale_factor = scale_factor
- self.upsample_kernel = upsample_kernel
- self.upsample_conv = nn.Conv2d(
- self.in_channels,
- self.out_channels * scale_factor * scale_factor,
- self.upsample_kernel,
- padding=(self.upsample_kernel - 1) // 2)
- self.init_weights()
-
- def init_weights(self):
- """Initialize weights for PixelShufflePack."""
- default_init_weights(self, 1)
-
- def forward(self, x):
- """Forward function for PixelShufflePack.
- Args:
- x (Tensor): Input tensor with shape (n, c, h, w).
- Returns:
- Tensor: Forward results.
- """
- x = self.upsample_conv(x)
- x = F.pixel_shuffle(x, self.scale_factor)
- return x
-
-class ResidualBlockNoBN(nn.Module):
- """Residual block without BN.
-
- Args:
- num_feat (int): Channel number of intermediate features.
- Default: 64.
- res_scale (float): Residual scale. Default: 1.
- pytorch_init (bool): If set to True, use pytorch default init,
- otherwise, use default_init_weights. Default: False.
- """
-
- def __init__(self, num_feat=64, res_scale=1, pytorch_init=False):
- super(ResidualBlockNoBN, self).__init__()
- self.res_scale = res_scale
- self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True)
- self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True)
- self.relu = nn.ReLU(inplace=True)
-
- if not pytorch_init:
- default_init_weights([self.conv1, self.conv2], 0.1)
-
- def forward(self, x):
- identity = x
- out = self.conv2(self.relu(self.conv1(x)))
- return identity + out * self.res_scale
-
-
-class Upsample(nn.Sequential):
- """Upsample module.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
- """
-
- def __init__(self, scale, num_feat):
- m = []
- if (scale & (scale - 1)) == 0: # scale = 2^n
- for _ in range(int(math.log(scale, 2))):
- m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(2))
- elif scale == 3:
- m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(3))
- else:
- raise ValueError(f'scale {scale} is not supported. Supported scales: 2^n and 3.')
- super(Upsample, self).__init__(*m)
-
-
-def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros', align_corners=True):
- """Warp an image or feature map with optical flow.
-
- Args:
- x (Tensor): Tensor with size (n, c, h, w).
- flow (Tensor): Tensor with size (n, h, w, 2), normal value.
- interp_mode (str): 'nearest' or 'bilinear'. Default: 'bilinear'.
- padding_mode (str): 'zeros' or 'border' or 'reflection'.
- Default: 'zeros'.
- align_corners (bool): Before pytorch 1.3, the default value is
- align_corners=True. After pytorch 1.3, the default value is
- align_corners=False. Here, we use True as the default.
-
- Returns:
- Tensor: Warped image or feature map.
- """
- assert x.size()[-2:] == flow.size()[1:3]
- _, _, h, w = x.size()
- # create mesh grid
- grid_y, grid_x = torch.meshgrid(torch.arange(0, h).type_as(x), torch.arange(0, w).type_as(x))
- grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2
- grid.requires_grad = False
-
- vgrid = grid + flow
- # scale grid to [-1,1]
- vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(w - 1, 1) - 1.0
- vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(h - 1, 1) - 1.0
- vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3)
- output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=align_corners)
-
- # TODO, what if align_corners=False
- return output
-
-
-def resize_flow(flow, size_type, sizes, interp_mode='bilinear', align_corners=False):
- """Resize a flow according to ratio or shape.
-
- Args:
- flow (Tensor): Precomputed flow. shape [N, 2, H, W].
- size_type (str): 'ratio' or 'shape'.
- sizes (list[int | float]): the ratio for resizing or the final output
- shape.
- 1) The order of ratio should be [ratio_h, ratio_w]. For
- downsampling, the ratio should be smaller than 1.0 (i.e., ratio
- < 1.0). For upsampling, the ratio should be larger than 1.0 (i.e.,
- ratio > 1.0).
- 2) The order of output_size should be [out_h, out_w].
- interp_mode (str): The mode of interpolation for resizing.
- Default: 'bilinear'.
- align_corners (bool): Whether align corners. Default: False.
-
- Returns:
- Tensor: Resized flow.
- """
- _, _, flow_h, flow_w = flow.size()
- if size_type == 'ratio':
- output_h, output_w = int(flow_h * sizes[0]), int(flow_w * sizes[1])
- elif size_type == 'shape':
- output_h, output_w = sizes[0], sizes[1]
- else:
- raise ValueError(f'Size type should be ratio or shape, but got type {size_type}.')
-
- input_flow = flow.clone()
- ratio_h = output_h / flow_h
- ratio_w = output_w / flow_w
- input_flow[:, 0, :, :] *= ratio_w
- input_flow[:, 1, :, :] *= ratio_h
- resized_flow = F.interpolate(
- input=input_flow, size=(output_h, output_w), mode=interp_mode, align_corners=align_corners)
- return resized_flow
-
-
-# TODO: may write a cpp file
-def pixel_unshuffle(x, scale):
- """ Pixel unshuffle.
-
- Args:
- x (Tensor): Input feature with shape (b, c, hh, hw).
- scale (int): Downsample ratio.
-
- Returns:
- Tensor: the pixel unshuffled feature.
- """
- b, c, hh, hw = x.size()
- out_channel = c * (scale**2)
- assert hh % scale == 0 and hw % scale == 0
- h = hh // scale
- w = hw // scale
- x_view = x.view(b, c, h, scale, w, scale)
- return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)
-
-
-class DCNv2Pack(ModulatedDeformConvPack):
- """Modulated deformable conv for deformable alignment.
-
- Different from the official DCNv2Pack, which generates offsets and masks
- from the preceding features, this DCNv2Pack takes another different
- features to generate offsets and masks.
-
- ``Paper: Delving Deep into Deformable Alignment in Video Super-Resolution``
- """
-
- def forward(self, x, feat):
- out = self.conv_offset(feat)
- o1, o2, mask = torch.chunk(out, 3, dim=1)
- offset = torch.cat((o1, o2), dim=1)
- mask = torch.sigmoid(mask)
-
- offset_absmean = torch.mean(torch.abs(offset))
- if offset_absmean > 50:
- logger = get_root_logger()
- logger.warning(f'Offset abs mean is {offset_absmean}, larger than 50.')
-
- if LooseVersion(torchvision.__version__) >= LooseVersion('0.9.0'):
- return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding,
- self.dilation, mask)
- else:
- return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding,
- self.dilation, self.groups, self.deformable_groups)
-
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
- # From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py
- # Cut & paste from PyTorch official master until it's in a few official releases - RW
- # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
- def norm_cdf(x):
- # Computes standard normal cumulative distribution function
- return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
- if (mean < a - 2 * std) or (mean > b + 2 * std):
- warnings.warn(
- 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. '
- 'The distribution of values may be incorrect.',
- stacklevel=2)
-
- with torch.no_grad():
- # Values are generated by using a truncated uniform distribution and
- # then using the inverse CDF for the normal distribution.
- # Get upper and lower cdf values
- low = norm_cdf((a - mean) / std)
- up = norm_cdf((b - mean) / std)
-
- # Uniformly fill tensor with values from [low, up], then translate to
- # [2l-1, 2u-1].
- tensor.uniform_(2 * low - 1, 2 * up - 1)
-
- # Use inverse cdf transform for normal distribution to get truncated
- # standard normal
- tensor.erfinv_()
-
- # Transform to proper mean, std
- tensor.mul_(std * math.sqrt(2.))
- tensor.add_(mean)
-
- # Clamp to ensure it's in the proper range
- tensor.clamp_(min=a, max=b)
- return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
- r"""Fills the input Tensor with values drawn from a truncated
- normal distribution.
-
- From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py
-
- The values are effectively drawn from the
- normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
- with values outside :math:`[a, b]` redrawn until they are within
- the bounds. The method used for generating the random values works
- best when :math:`a \leq \text{mean} \leq b`.
-
- Args:
- tensor: an n-dimensional `torch.Tensor`
- mean: the mean of the normal distribution
- std: the standard deviation of the normal distribution
- a: the minimum cutoff value
- b: the maximum cutoff value
-
- Examples:
- >>> w = torch.empty(3, 5)
- >>> nn.init.trunc_normal_(w)
- """
- return _no_grad_trunc_normal_(tensor, mean, std, a, b)
-
-
-# From PyTorch
-def _ntuple(n):
-
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
-
- return parse
-
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = _ntuple
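
The helpers above compose naturally. A short usage sketch follows, assuming `basicsr` is installed so that `basicsr.archs.arch_util` (and the DCN ops it imports) resolve; the shapes in the comments follow the docstrings above.

```python
import torch
from basicsr.archs.arch_util import (
    ResidualBlockNoBN, Upsample, flow_warp, make_layer, pixel_unshuffle)

# Stack three residual blocks into a single nn.Sequential trunk.
trunk = make_layer(ResidualBlockNoBN, 3, num_feat=16)
feat = trunk(torch.randn(1, 16, 32, 32))      # shape preserved: (1, 16, 32, 32)

# 4x upsampling head: two rounds of Conv + PixelShuffle(2) for scale = 2^n.
up = Upsample(scale=4, num_feat=16)
print(up(feat).shape)                          # torch.Size([1, 16, 128, 128])

# Warping with an all-zero flow returns the input up to interpolation error.
flow = torch.zeros(1, 32, 32, 2)               # (n, h, w, 2)
warped = flow_warp(feat, flow)

# pixel_unshuffle trades spatial resolution for channels.
small = pixel_unshuffle(torch.randn(1, 3, 64, 64), scale=2)
print(small.shape)                             # torch.Size([1, 12, 32, 32])
```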
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/dependency_versions_check.py b/spaces/Jackflack09/diffuse-custom/diffusers/dependency_versions_check.py
deleted file mode 100644
index bbf863222a52fd60a15a95be0fbd6391acd3ba6d..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/dependency_versions_check.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import sys
-
-from .dependency_versions_table import deps
-from .utils.versions import require_version, require_version_core
-
-
-# define which module versions we always want to check at run time
-# (usually the ones defined in `install_requires` in setup.py)
-#
-# order specific notes:
-# - tqdm must be checked before tokenizers
-
-pkgs_to_check_at_runtime = "python tqdm regex requests packaging filelock numpy tokenizers".split()
-if sys.version_info < (3, 7):
- pkgs_to_check_at_runtime.append("dataclasses")
-if sys.version_info < (3, 8):
- pkgs_to_check_at_runtime.append("importlib_metadata")
-
-for pkg in pkgs_to_check_at_runtime:
- if pkg in deps:
- if pkg == "tokenizers":
- # must be loaded here, or else tqdm check may fail
- from .utils import is_tokenizers_available
-
- if not is_tokenizers_available():
- continue # not required, check version only if installed
-
- require_version_core(deps[pkg])
- else:
- raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py")
-
-
-def dep_version_check(pkg, hint=None):
- require_version(deps[pkg], hint)
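
The module above delegates the actual comparison to `require_version` / `require_version_core`. For readers who want the gist without the diffusers helpers, the check reduces to matching an installed version against a PEP 440 specifier; here is a self-contained sketch (not the diffusers implementation itself).

```python
from importlib.metadata import PackageNotFoundError, version

from packaging.requirements import Requirement


def check_requirement(spec: str) -> None:
    """Raise ImportError if the installed package does not satisfy e.g. 'numpy>=1.17'."""
    req = Requirement(spec)
    try:
        installed = version(req.name)
    except PackageNotFoundError:
        raise ImportError(f"{req.name} is required but not installed") from None
    if installed not in req.specifier:
        raise ImportError(f"{req.name}{req.specifier} is required, but {installed} is installed")


check_requirement("numpy>=1.17")
```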
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipeline_utils.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipeline_utils.py
deleted file mode 100644
index e65d55e20cd9faa5396ed116efcc28656079e972..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/pipeline_utils.py
+++ /dev/null
@@ -1,841 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import importlib
-import inspect
-import os
-from dataclasses import dataclass
-from pathlib import Path
-from typing import Any, Dict, List, Optional, Union
-
-import numpy as np
-import torch
-
-import diffusers
-import PIL
-from huggingface_hub import model_info, snapshot_download
-from packaging import version
-from PIL import Image
-from tqdm.auto import tqdm
-
-from .configuration_utils import ConfigMixin
-from .dynamic_modules_utils import get_class_from_dynamic_module
-from .hub_utils import http_user_agent
-from .modeling_utils import _LOW_CPU_MEM_USAGE_DEFAULT
-from .schedulers.scheduling_utils import SCHEDULER_CONFIG_NAME
-from .utils import (
- CONFIG_NAME,
- DIFFUSERS_CACHE,
- ONNX_WEIGHTS_NAME,
- WEIGHTS_NAME,
- BaseOutput,
- deprecate,
- is_accelerate_available,
- is_safetensors_available,
- is_torch_version,
- is_transformers_available,
- logging,
-)
-
-
-if is_transformers_available():
- import transformers
- from transformers import PreTrainedModel
-
-
-INDEX_FILE = "diffusion_pytorch_model.bin"
-CUSTOM_PIPELINE_FILE_NAME = "pipeline.py"
-DUMMY_MODULES_FOLDER = "diffusers.utils"
-TRANSFORMERS_DUMMY_MODULES_FOLDER = "transformers.utils"
-
-
-logger = logging.get_logger(__name__)
-
-
-LOADABLE_CLASSES = {
- "diffusers": {
- "ModelMixin": ["save_pretrained", "from_pretrained"],
- "SchedulerMixin": ["save_pretrained", "from_pretrained"],
- "DiffusionPipeline": ["save_pretrained", "from_pretrained"],
- "OnnxRuntimeModel": ["save_pretrained", "from_pretrained"],
- },
- "transformers": {
- "PreTrainedTokenizer": ["save_pretrained", "from_pretrained"],
- "PreTrainedTokenizerFast": ["save_pretrained", "from_pretrained"],
- "PreTrainedModel": ["save_pretrained", "from_pretrained"],
- "FeatureExtractionMixin": ["save_pretrained", "from_pretrained"],
- "ProcessorMixin": ["save_pretrained", "from_pretrained"],
- "ImageProcessingMixin": ["save_pretrained", "from_pretrained"],
- },
- "onnxruntime.training": {
- "ORTModule": ["save_pretrained", "from_pretrained"],
- },
-}
-
-ALL_IMPORTABLE_CLASSES = {}
-for library in LOADABLE_CLASSES:
- ALL_IMPORTABLE_CLASSES.update(LOADABLE_CLASSES[library])
-
-
-@dataclass
-class ImagePipelineOutput(BaseOutput):
- """
- Output class for image pipelines.
-
- Args:
- images (`List[PIL.Image.Image]` or `np.ndarray`)
- List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
- num_channels)`. The PIL images or NumPy array represent the denoised images of the diffusion pipeline.
- """
-
- images: Union[List[PIL.Image.Image], np.ndarray]
-
-
-@dataclass
-class AudioPipelineOutput(BaseOutput):
- """
- Output class for audio pipelines.
-
- Args:
- audios (`np.ndarray`)
- List of denoised samples of shape `(batch_size, num_channels, sample_rate)`. The NumPy array represents the
- denoised audio samples of the diffusion pipeline.
- """
-
- audios: np.ndarray
-
-
-def is_safetensors_compatible(info) -> bool:
- filenames = set(sibling.rfilename for sibling in info.siblings)
- pt_filenames = set(filename for filename in filenames if filename.endswith(".bin"))
- is_safetensors_compatible = any(file.endswith(".safetensors") for file in filenames)
- for pt_filename in pt_filenames:
- prefix, raw = os.path.split(pt_filename)
- if raw == "pytorch_model.bin":
- # transformers specific
- sf_filename = os.path.join(prefix, "model.safetensors")
- else:
- sf_filename = pt_filename[: -len(".bin")] + ".safetensors"
- if is_safetensors_compatible and sf_filename not in filenames:
- logger.warning(f"{sf_filename} not found")
- is_safetensors_compatible = False
- return is_safetensors_compatible
-
-
-class DiffusionPipeline(ConfigMixin):
- r"""
- Base class for all diffusion pipelines.
-
- [`DiffusionPipeline`] takes care of storing all components (models, schedulers, processors) for diffusion pipelines
- and handles methods for loading, downloading and saving models as well as a few methods common to all pipelines to:
-
- - move all PyTorch modules to the device of your choice
- - enabling/disabling the progress bar for the denoising iteration
-
- Class attributes:
-
- - **config_name** (`str`) -- name of the config file that will store the class and module names of all
- components of the diffusion pipeline.
- - **_optional_components** (List[`str`]) -- list of all components that are optional so they don't have to be
- passed for the pipeline to function (should be overridden by subclasses).
- """
- config_name = "model_index.json"
- _optional_components = []
-
- def register_modules(self, **kwargs):
- # import it here to avoid circular import
- from diffusers import pipelines
-
- for name, module in kwargs.items():
- # retrieve library
- if module is None:
- register_dict = {name: (None, None)}
- else:
- library = module.__module__.split(".")[0]
-
- # check if the module is a pipeline module
- pipeline_dir = module.__module__.split(".")[-2] if len(module.__module__.split(".")) > 2 else None
- path = module.__module__.split(".")
- is_pipeline_module = pipeline_dir in path and hasattr(pipelines, pipeline_dir)
-
- # if library is not in LOADABLE_CLASSES, then it is a custom module.
- # Or if it's a pipeline module, then the module is inside the pipeline
- # folder so we set the library to module name.
- if library not in LOADABLE_CLASSES or is_pipeline_module:
- library = pipeline_dir
-
- # retrieve class_name
- class_name = module.__class__.__name__
-
- register_dict = {name: (library, class_name)}
-
- # save model index config
- self.register_to_config(**register_dict)
-
- # set models
- setattr(self, name, module)
-
- def save_pretrained(
- self,
- save_directory: Union[str, os.PathLike],
- safe_serialization: bool = False,
- ):
- """
- Save all variables of the pipeline that can be saved and loaded as well as the pipelines configuration file to
- a directory. A pipeline variable can be saved and loaded if its class implements both a save and loading
- method. The pipeline can easily be re-loaded using the [`~DiffusionPipeline.from_pretrained`] class method.
-
- Arguments:
- save_directory (`str` or `os.PathLike`):
- Directory to which to save. Will be created if it doesn't exist.
- safe_serialization (`bool`, *optional*, defaults to `False`):
- Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
- """
- self.save_config(save_directory)
-
- model_index_dict = dict(self.config)
- model_index_dict.pop("_class_name")
- model_index_dict.pop("_diffusers_version")
- model_index_dict.pop("_module", None)
-
- expected_modules, optional_kwargs = self._get_signature_keys(self)
-
- def is_saveable_module(name, value):
- if name not in expected_modules:
- return False
- if name in self._optional_components and value[0] is None:
- return False
- return True
-
- model_index_dict = {k: v for k, v in model_index_dict.items() if is_saveable_module(k, v)}
-
- for pipeline_component_name in model_index_dict.keys():
- sub_model = getattr(self, pipeline_component_name)
- model_cls = sub_model.__class__
-
- save_method_name = None
- # search for the model's base class in LOADABLE_CLASSES
- for library_name, library_classes in LOADABLE_CLASSES.items():
- library = importlib.import_module(library_name)
- for base_class, save_load_methods in library_classes.items():
- class_candidate = getattr(library, base_class, None)
- if class_candidate is not None and issubclass(model_cls, class_candidate):
- # if we found a suitable base class in LOADABLE_CLASSES then grab its save method
- save_method_name = save_load_methods[0]
- break
- if save_method_name is not None:
- break
-
- save_method = getattr(sub_model, save_method_name)
-
- # Call the save method with the argument safe_serialization only if it's supported
- save_method_signature = inspect.signature(save_method)
- save_method_accept_safe = "safe_serialization" in save_method_signature.parameters
- if save_method_accept_safe:
- save_method(
- os.path.join(save_directory, pipeline_component_name), safe_serialization=safe_serialization
- )
- else:
- save_method(os.path.join(save_directory, pipeline_component_name))
-
- def to(self, torch_device: Optional[Union[str, torch.device]] = None):
- if torch_device is None:
- return self
-
- module_names, _, _ = self.extract_init_dict(dict(self.config))
- for name in module_names.keys():
- module = getattr(self, name)
- if isinstance(module, torch.nn.Module):
- if module.dtype == torch.float16 and str(torch_device) in ["cpu"]:
- logger.warning(
- "Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` device. It"
- " is not recommended to move them to `cpu` as running them will fail. Please make"
- " sure to use an accelerator to run the pipeline in inference, due to the lack of"
- " support for `float16` operations on this device in PyTorch. Please remove the"
- " `torch_dtype=torch.float16` argument, or use another device for inference."
- )
- module.to(torch_device)
- return self
-
- @property
- def device(self) -> torch.device:
- r"""
- Returns:
- `torch.device`: The torch device on which the pipeline is located.
- """
- module_names, _, _ = self.extract_init_dict(dict(self.config))
- for name in module_names.keys():
- module = getattr(self, name)
- if isinstance(module, torch.nn.Module):
- return module.device
- return torch.device("cpu")
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
- r"""
- Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights.
-
- The pipeline is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated).
-
- The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come
- pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning
- task.
-
- The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those
- weights are discarded.
-
- Parameters:
- pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*):
- Can be either:
-
- - A string, the *repo id* of a pretrained pipeline hosted inside a model repo on
- https://huggingface.co/ Valid repo ids have to be located under a user or organization name, like
- `CompVis/ldm-text2im-large-256`.
- - A path to a *directory* containing pipeline weights saved using
- [`~DiffusionPipeline.save_pretrained`], e.g., `./my_pipeline_directory/`.
- torch_dtype (`str` or `torch.dtype`, *optional*):
- Override the default `torch.dtype` and load the model under this dtype. If `"auto"` is passed the dtype
- will be automatically derived from the model's weights.
- custom_pipeline (`str`, *optional*):
-
-
-
- This is an experimental feature and is likely to change in the future.
-
-
-
- Can be either:
-
- - A string, the *repo id* of a custom pipeline hosted inside a model repo on
- https://huggingface.co/. Valid repo ids have to be located under a user or organization name,
- like `hf-internal-testing/diffusers-dummy-pipeline`.
-
-
-
- It is required that the model repo has a file, called `pipeline.py` that defines the custom
- pipeline.
-
-
-
- - A string, the *file name* of a community pipeline hosted on GitHub under
- https://github.com/huggingface/diffusers/tree/main/examples/community. Valid file names have to
- match exactly the file name without `.py` located under the above link, *e.g.*
- `clip_guided_stable_diffusion`.
-
-
-
- Community pipelines are always loaded from the current `main` branch of GitHub.
-
-
-
- - A path to a *directory* containing a custom pipeline, e.g., `./my_pipeline_directory/`.
-
-
-
- It is required that the directory has a file, called `pipeline.py` that defines the custom
- pipeline.
-
-
-
- For more information on how to load and create custom pipelines, please have a look at [Loading and
- Adding Custom
- Pipelines](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview)
-
- torch_dtype (`str` or `torch.dtype`, *optional*):
- force_download (`bool`, *optional*, defaults to `False`):
- Whether or not to force the (re-)download of the model weights and configuration files, overriding the
- cached versions if they exist.
- resume_download (`bool`, *optional*, defaults to `False`):
- Whether or not to delete incompletely received files. Will attempt to resume the download if such a
- file exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info(`bool`, *optional*, defaults to `False`):
- Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only(`bool`, *optional*, defaults to `False`):
- Whether or not to only look at local files (i.e., do not try to download the model).
- use_auth_token (`str` or *bool*, *optional*):
- The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
- revision (`str`, *optional*, defaults to `"main"`):
- The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
- git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
- identifier allowed by git.
- mirror (`str`, *optional*):
- Mirror source to accelerate downloads in China. If you are from China and have an accessibility
- problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety.
- Please refer to the mirror site for more information and specify the folder name here.
- device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):
- A map that specifies where each submodule should go. It doesn't need to be refined to each
- parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the
- same device.
-
- To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For
- more information about each option see [designing a device
- map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):
- Speed up model loading by not initializing the weights and only loading the pre-trained weights. This
- also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the
- model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,
- setting this argument to `True` will raise an error.
- return_cached_folder (`bool`, *optional*, defaults to `False`):
- If set to `True`, path to downloaded cached folder will be returned in addition to loaded pipeline.
- kwargs (remaining dictionary of keyword arguments, *optional*):
- Can be used to overwrite load - and saveable variables - *i.e.* the pipeline components - of the
- specific pipeline class. The overwritten components are then directly passed to the pipelines
- `__init__` method. See example below for more information.
-
-
-
- It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated
- models](https://huggingface.co/docs/hub/models-gated#gated-models), *e.g.* `"runwayml/stable-diffusion-v1-5"`
-
-
-
-
-
- Activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use
- this method in a firewalled environment.
-
-
-
- Examples:
-
- ```py
- >>> from diffusers import DiffusionPipeline
-
- >>> # Download pipeline from huggingface.co and cache.
- >>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
-
- >>> # Download pipeline that requires an authorization token
- >>> # For more information on access tokens, please refer to this section of
- >>> # the documentation: https://huggingface.co/docs/hub/security-tokens
- >>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-
- >>> # Use a different scheduler
- >>> from diffusers import LMSDiscreteScheduler
-
- >>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
- >>> pipeline.scheduler = scheduler
- ```
- """
- cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
- resume_download = kwargs.pop("resume_download", False)
- force_download = kwargs.pop("force_download", False)
- proxies = kwargs.pop("proxies", None)
- local_files_only = kwargs.pop("local_files_only", False)
- use_auth_token = kwargs.pop("use_auth_token", None)
- revision = kwargs.pop("revision", None)
- torch_dtype = kwargs.pop("torch_dtype", None)
- custom_pipeline = kwargs.pop("custom_pipeline", None)
- provider = kwargs.pop("provider", None)
- sess_options = kwargs.pop("sess_options", None)
- device_map = kwargs.pop("device_map", None)
- low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT)
- return_cached_folder = kwargs.pop("return_cached_folder", False)
-
- # 1. Download the checkpoints and configs
- # use snapshot download here to get it working from from_pretrained
- if not os.path.isdir(pretrained_model_name_or_path):
- config_dict = cls.load_config(
- pretrained_model_name_or_path,
- cache_dir=cache_dir,
- resume_download=resume_download,
- force_download=force_download,
- proxies=proxies,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- revision=revision,
- )
- # make sure we only download sub-folders and `diffusers` filenames
- folder_names = [k for k in config_dict.keys() if not k.startswith("_")]
- allow_patterns = [os.path.join(k, "*") for k in folder_names]
- allow_patterns += [WEIGHTS_NAME, SCHEDULER_CONFIG_NAME, CONFIG_NAME, ONNX_WEIGHTS_NAME, cls.config_name]
-
- # make sure we don't download flax weights
- ignore_patterns = ["*.msgpack"]
-
- if custom_pipeline is not None:
- allow_patterns += [CUSTOM_PIPELINE_FILE_NAME]
-
- if cls != DiffusionPipeline:
- requested_pipeline_class = cls.__name__
- else:
- requested_pipeline_class = config_dict.get("_class_name", cls.__name__)
- user_agent = {"pipeline_class": requested_pipeline_class}
- if custom_pipeline is not None:
- user_agent["custom_pipeline"] = custom_pipeline
- user_agent = http_user_agent(user_agent)
-
- if is_safetensors_available():
- info = model_info(
- pretrained_model_name_or_path,
- use_auth_token=use_auth_token,
- revision=revision,
- )
- if is_safetensors_compatible(info):
- ignore_patterns.append("*.bin")
-
- # download all allow_patterns
- cached_folder = snapshot_download(
- pretrained_model_name_or_path,
- cache_dir=cache_dir,
- resume_download=resume_download,
- proxies=proxies,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- revision=revision,
- allow_patterns=allow_patterns,
- ignore_patterns=ignore_patterns,
- user_agent=user_agent,
- )
- else:
- cached_folder = pretrained_model_name_or_path
-
- config_dict = cls.load_config(cached_folder)
-
- # 2. Load the pipeline class, if using custom module then load it from the hub
- # if we load from explicit class, let's use it
- if custom_pipeline is not None:
- if custom_pipeline.endswith(".py"):
- path = Path(custom_pipeline)
- # decompose into folder & file
- file_name = path.name
- custom_pipeline = path.parent.absolute()
- else:
- file_name = CUSTOM_PIPELINE_FILE_NAME
-
- pipeline_class = get_class_from_dynamic_module(
- custom_pipeline, module_file=file_name, cache_dir=custom_pipeline
- )
- elif cls != DiffusionPipeline:
- pipeline_class = cls
- else:
- diffusers_module = importlib.import_module(cls.__module__.split(".")[0])
- pipeline_class = getattr(diffusers_module, config_dict["_class_name"])
-
- # To be removed in 1.0.0
- if pipeline_class.__name__ == "StableDiffusionInpaintPipeline" and version.parse(
- version.parse(config_dict["_diffusers_version"]).base_version
- ) <= version.parse("0.5.1"):
- from diffusers import StableDiffusionInpaintPipeline, StableDiffusionInpaintPipelineLegacy
-
- pipeline_class = StableDiffusionInpaintPipelineLegacy
-
- deprecation_message = (
- "You are using a legacy checkpoint for inpainting with Stable Diffusion, therefore we are loading the"
- f" {StableDiffusionInpaintPipelineLegacy} class instead of {StableDiffusionInpaintPipeline}. For"
- " better inpainting results, we strongly suggest using Stable Diffusion's official inpainting"
- " checkpoint: https://huggingface.co/runwayml/stable-diffusion-inpainting instead or adapting your"
- f" checkpoint {pretrained_model_name_or_path} to the format of"
- " https://huggingface.co/runwayml/stable-diffusion-inpainting. Note that we do not actively maintain"
- f" the {StableDiffusionInpaintPipelineLegacy} class and will likely remove it in version 1.0.0."
- )
- deprecate("StableDiffusionInpaintPipelineLegacy", "1.0.0", deprecation_message, standard_warn=False)
-
- # some modules can be passed directly to the init
- # in this case they are already instantiated in `kwargs`
- # extract them here
- expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class)
- passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs}
- passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs}
-
- init_dict, unused_kwargs, _ = pipeline_class.extract_init_dict(config_dict, **kwargs)
-
- # define init kwargs
- init_kwargs = {k: init_dict.pop(k) for k in optional_kwargs if k in init_dict}
- init_kwargs = {**init_kwargs, **passed_pipe_kwargs}
-
- # remove `null` components
- def load_module(name, value):
- if value[0] is None:
- return False
- if name in passed_class_obj and passed_class_obj[name] is None:
- return False
- return True
-
- init_dict = {k: v for k, v in init_dict.items() if load_module(k, v)}
-
- if len(unused_kwargs) > 0:
- logger.warning(
- f"Keyword arguments {unused_kwargs} are not expected by {pipeline_class.__name__} and will be ignored."
- )
-
- if low_cpu_mem_usage and not is_accelerate_available():
- low_cpu_mem_usage = False
- logger.warning(
- "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the"
- " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install"
- " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip"
- " install accelerate\n```\n."
- )
-
- if device_map is not None and not is_torch_version(">=", "1.9.0"):
- raise NotImplementedError(
- "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set"
- " `device_map=None`."
- )
-
- if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"):
- raise NotImplementedError(
- "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set"
- " `low_cpu_mem_usage=False`."
- )
-
- if low_cpu_mem_usage is False and device_map is not None:
- raise ValueError(
- f"You cannot set `low_cpu_mem_usage` to False while using device_map={device_map} for loading and"
- " dispatching. Please make sure to set `low_cpu_mem_usage=True`."
- )
-
- # import it here to avoid circular import
- from diffusers import pipelines
-
- # 3. Load each module in the pipeline
- for name, (library_name, class_name) in init_dict.items():
- # 3.1 - now that JAX/Flax is an official framework of the library, we might load from Flax names
- if class_name.startswith("Flax"):
- class_name = class_name[4:]
-
- is_pipeline_module = hasattr(pipelines, library_name)
- loaded_sub_model = None
-
- # if the model is in a pipeline module, then we load it from the pipeline
- if name in passed_class_obj:
- # 1. check that passed_class_obj has correct parent class
- if not is_pipeline_module:
- library = importlib.import_module(library_name)
- class_obj = getattr(library, class_name)
- importable_classes = LOADABLE_CLASSES[library_name]
- class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()}
-
- expected_class_obj = None
- for class_name, class_candidate in class_candidates.items():
- if class_candidate is not None and issubclass(class_obj, class_candidate):
- expected_class_obj = class_candidate
-
- if not issubclass(passed_class_obj[name].__class__, expected_class_obj):
- raise ValueError(
- f"{passed_class_obj[name]} is of type: {type(passed_class_obj[name])}, but should be"
- f" {expected_class_obj}"
- )
- else:
- logger.warning(
- f"You have passed a non-standard module {passed_class_obj[name]}. We cannot verify whether it"
- " has the correct type"
- )
-
- # set passed class object
- loaded_sub_model = passed_class_obj[name]
- elif is_pipeline_module:
- pipeline_module = getattr(pipelines, library_name)
- class_obj = getattr(pipeline_module, class_name)
- importable_classes = ALL_IMPORTABLE_CLASSES
- class_candidates = {c: class_obj for c in importable_classes.keys()}
- else:
- # else we just import it from the library.
- library = importlib.import_module(library_name)
-
- class_obj = getattr(library, class_name)
- importable_classes = LOADABLE_CLASSES[library_name]
- class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()}
-
- if loaded_sub_model is None:
- load_method_name = None
- for class_name, class_candidate in class_candidates.items():
- if class_candidate is not None and issubclass(class_obj, class_candidate):
- load_method_name = importable_classes[class_name][1]
-
- if load_method_name is None:
- none_module = class_obj.__module__
- is_dummy_path = none_module.startswith(DUMMY_MODULES_FOLDER) or none_module.startswith(
- TRANSFORMERS_DUMMY_MODULES_FOLDER
- )
- if is_dummy_path and "dummy" in none_module:
- # call class_obj for nice error message of missing requirements
- class_obj()
-
- raise ValueError(
- f"The component {class_obj} of {pipeline_class} cannot be loaded as it does not seem to have"
- f" any of the loading methods defined in {ALL_IMPORTABLE_CLASSES}."
- )
-
- load_method = getattr(class_obj, load_method_name)
- loading_kwargs = {}
-
- if issubclass(class_obj, torch.nn.Module):
- loading_kwargs["torch_dtype"] = torch_dtype
- if issubclass(class_obj, diffusers.OnnxRuntimeModel):
- loading_kwargs["provider"] = provider
- loading_kwargs["sess_options"] = sess_options
-
- is_diffusers_model = issubclass(class_obj, diffusers.ModelMixin)
- is_transformers_model = (
- is_transformers_available()
- and issubclass(class_obj, PreTrainedModel)
- and version.parse(version.parse(transformers.__version__).base_version) >= version.parse("4.20.0")
- )
-
- # When loading a transformers model, if the device_map is None, the weights will be initialized as opposed to diffusers.
- # To make default loading faster we set the `low_cpu_mem_usage=low_cpu_mem_usage` flag which is `True` by default.
- # This makes sure that the weights won't be initialized which significantly speeds up loading.
- if is_diffusers_model or is_transformers_model:
- loading_kwargs["device_map"] = device_map
- loading_kwargs["low_cpu_mem_usage"] = low_cpu_mem_usage
-
- # check if the module is in a subdirectory
- if os.path.isdir(os.path.join(cached_folder, name)):
- loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
- else:
- # else load from the root directory
- loaded_sub_model = load_method(cached_folder, **loading_kwargs)
-
- init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...)
-
- # 4. Potentially add passed objects if expected
- missing_modules = set(expected_modules) - set(init_kwargs.keys())
- passed_modules = list(passed_class_obj.keys())
- optional_modules = pipeline_class._optional_components
- if len(missing_modules) > 0 and missing_modules <= set(passed_modules + optional_modules):
- for module in missing_modules:
- init_kwargs[module] = passed_class_obj.get(module, None)
- elif len(missing_modules) > 0:
- passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs
- raise ValueError(
- f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed."
- )
-
- # 5. Instantiate the pipeline
- model = pipeline_class(**init_kwargs)
-
- if return_cached_folder:
- return model, cached_folder
- return model
-
- @staticmethod
- def _get_signature_keys(obj):
- parameters = inspect.signature(obj.__init__).parameters
- required_parameters = {k: v for k, v in parameters.items() if v.default == inspect._empty}
- optional_parameters = set({k for k, v in parameters.items() if v.default != inspect._empty})
- expected_modules = set(required_parameters.keys()) - set(["self"])
- return expected_modules, optional_parameters
-
- @property
- def components(self) -> Dict[str, Any]:
- r"""
-
- The `self.components` property can be useful to run different pipelines with the same weights and
- configurations to not have to re-allocate memory.
-
- Examples:
-
- ```py
- >>> from diffusers import (
- ... StableDiffusionPipeline,
- ... StableDiffusionImg2ImgPipeline,
- ... StableDiffusionInpaintPipeline,
- ... )
-
- >>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
- >>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
- >>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)
- ```
-
- Returns:
- A dictionary containing all the modules needed to initialize the pipeline.
- """
- expected_modules, optional_parameters = self._get_signature_keys(self)
- components = {
- k: getattr(self, k) for k in self.config.keys() if not k.startswith("_") and k not in optional_parameters
- }
-
- if set(components.keys()) != expected_modules:
- raise ValueError(
- f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected"
- f" {expected_modules} to be defined, but {components} are defined."
- )
-
- return components
-
- @staticmethod
- def numpy_to_pil(images):
- """
- Convert a numpy image or a batch of images to a PIL image.
- """
- if images.ndim == 3:
- images = images[None, ...]
- images = (images * 255).round().astype("uint8")
- if images.shape[-1] == 1:
- # special case for grayscale (single channel) images
- pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images]
- else:
- pil_images = [Image.fromarray(image) for image in images]
-
- return pil_images
-
- def progress_bar(self, iterable=None, total=None):
- if not hasattr(self, "_progress_bar_config"):
- self._progress_bar_config = {}
- elif not isinstance(self._progress_bar_config, dict):
- raise ValueError(
- f"`self._progress_bar_config` should be of type `dict`, but is {type(self._progress_bar_config)}."
- )
-
- if iterable is not None:
- return tqdm(iterable, **self._progress_bar_config)
- elif total is not None:
- return tqdm(total=total, **self._progress_bar_config)
- else:
- raise ValueError("Either `total` or `iterable` has to be defined.")
-
- def set_progress_bar_config(self, **kwargs):
- self._progress_bar_config = kwargs
-
- def enable_xformers_memory_efficient_attention(self):
- r"""
- Enable memory efficient attention as implemented in xformers.
-
- When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference
- time. Speed up at training time is not guaranteed.
-
- Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention
- is used.
- """
- self.set_use_memory_efficient_attention_xformers(True)
-
- def disable_xformers_memory_efficient_attention(self):
- r"""
- Disable memory efficient attention as implemented in xformers.
- """
- self.set_use_memory_efficient_attention_xformers(False)
-
- def set_use_memory_efficient_attention_xformers(self, valid: bool) -> None:
- # Recursively walk through all the children.
- # Any children which exposes the set_use_memory_efficient_attention_xformers method
- # gets the message
- def fn_recursive_set_mem_eff(module: torch.nn.Module):
- if hasattr(module, "set_use_memory_efficient_attention_xformers"):
- module.set_use_memory_efficient_attention_xformers(valid)
-
- for child in module.children():
- fn_recursive_set_mem_eff(child)
-
- module_names, _, _ = self.extract_init_dict(dict(self.config))
- for module_name in module_names:
- module = getattr(self, module_name)
- if isinstance(module, torch.nn.Module):
- fn_recursive_set_mem_eff(module)
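
A short usage sketch of the loader defined above; the model id is illustrative, while `return_cached_folder`, `components`, and `save_pretrained` are the arguments and methods implemented in this file.

```python
import torch
from diffusers import DiffusionPipeline, StableDiffusionImg2ImgPipeline

# Download (or reuse the cache) and build the pipeline; also return the cache path.
pipe, cached_folder = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    return_cached_folder=True,
)
pipe = pipe.to("cuda")

# Reuse the already-loaded sub-models for a second pipeline without re-downloading.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)

# Persist the config plus every saveable sub-model to a local directory.
pipe.save_pretrained("./my_pipeline_directory")
```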
diff --git a/spaces/JeffJing/ZookChatBot/steamship/plugin/outputs/__init__.py b/spaces/JeffJing/ZookChatBot/steamship/plugin/outputs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/index_func.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/index_func.py
deleted file mode 100644
index ac128668c2920b6b4b945e0de3dcd745fe141200..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/index_func.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import os
-import logging
-
-import hashlib
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_documents(file_src):
- from langchain.schema import Document
- from langchain.text_splitter import TokenTextSplitter
- text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30)
-
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filename)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
- except Exception:
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- texts = [Document(page_content=pdftext,
- metadata={"source": filepath})]
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- from langchain.document_loaders import UnstructuredWordDocumentLoader
- loader = UnstructuredWordDocumentLoader(filepath)
- texts = loader.load()
- elif file_type == ".pptx":
- logging.debug("Loading PowerPoint...")
- from langchain.document_loaders import UnstructuredPowerPointLoader
- loader = UnstructuredPowerPointLoader(filepath)
- texts = loader.load()
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- from langchain.document_loaders import UnstructuredEPubLoader
- loader = UnstructuredEPubLoader(filepath)
- texts = loader.load()
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- texts = []
- for elem in text_list:
- texts.append(Document(page_content=elem,
- metadata={"source": filepath}))
- else:
- logging.debug("Loading text file...")
- from langchain.document_loaders import TextLoader
- loader = TextLoader(filepath, "utf8")
- texts = loader.load()
- except Exception as e:
- import traceback
- logging.error(f"Error loading file: {filename}")
- traceback.print_exc()
- # Skip this file so the splitter below does not run on stale or undefined `texts`.
- continue
-
- texts = text_splitter.split_documents(texts)
- documents.extend(texts)
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.vectorstores import FAISS
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
- # Because of an ill-conceived design in one of the dependencies, an API KEY must be present here
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- index_name = get_file_hash(file_src)
- index_path = f"./index/{index_name}"
- if local_embedding:
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- embeddings = HuggingFaceEmbeddings(
- model_name="sentence-transformers/distiluse-base-multilingual-cased-v2")
- else:
- from langchain.embeddings import OpenAIEmbeddings
- if os.environ.get("OPENAI_API_TYPE", "openai") == "openai":
- embeddings = OpenAIEmbeddings(openai_api_base=os.environ.get(
- "OPENAI_API_BASE", None), openai_api_key=os.environ.get("OPENAI_EMBEDDING_API_KEY", api_key))
- else:
- embeddings = OpenAIEmbeddings(deployment=os.environ["AZURE_EMBEDDING_DEPLOYMENT_NAME"], openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
- model=os.environ["AZURE_EMBEDDING_MODEL_NAME"], openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"], openai_api_type="azure")
- if os.path.exists(index_path):
- logging.info("Found a cached index file, loading...")
- return FAISS.load_local(index_path, embeddings)
- else:
- try:
- documents = get_documents(file_src)
- logging.info("Building index...")
- with retrieve_proxy():
- index = FAISS.from_documents(documents, embeddings)
- logging.debug("Index built successfully!")
- os.makedirs("./index", exist_ok=True)
- logging.debug("Index saved locally!")
- logging.debug("索引已保存至本地!")
- return index
-
- except Exception as e:
- import traceback
- logging.error("Failed to build index! %s", e)
- traceback.print_exc()
- return None
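
A hedged usage sketch of `construct_index` follows, assuming the surrounding ChuanhuChatGPT `modules` package is importable and either `local_embedding` or an OpenAI key is configured. The function expects Gradio-style file objects that expose a `.name` attribute pointing at a path on disk, and returns a langchain FAISS store (or `None` on failure).

```python
import os
from types import SimpleNamespace

from modules.index_func import construct_index

# Gradio upload objects carry the temp path in `.name`; SimpleNamespace mimics that here.
files = [SimpleNamespace(name="./docs/report.pdf"), SimpleNamespace(name="./docs/notes.txt")]

index = construct_index(api_key=os.environ.get("OPENAI_API_KEY", ""), file_src=files)
if index is not None:
    # The returned FAISS store supports similarity search over the chunked documents.
    for doc in index.similarity_search("What are the key findings?", k=3):
        print(doc.metadata["source"], doc.page_content[:80])
```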
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py
deleted file mode 100644
index bdb64e0c78cc3520f92d79db3124c85fc3cfb9b4..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/dpm_solver.py
+++ /dev/null
@@ -1,1184 +0,0 @@
-import torch
-import torch.nn.functional as F
-import math
-
-
-class NoiseScheduleVP:
- def __init__(
- self,
- schedule='discrete',
- betas=None,
- alphas_cumprod=None,
- continuous_beta_0=0.1,
- continuous_beta_1=20.,
- ):
- """Create a wrapper class for the forward SDE (VP type).
-
- ***
- Update: We support discrete-time diffusion models by implementing a piecewise linear interpolation for log_alpha_t.
- We recommend using schedule='discrete' for discrete-time diffusion models, especially for high-resolution images.
- ***
-
- The forward SDE ensures that the conditional distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ).
- We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper).
- Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have:
-
- log_alpha_t = self.marginal_log_mean_coeff(t)
- sigma_t = self.marginal_std(t)
- lambda_t = self.marginal_lambda(t)
-
- Moreover, as lambda(t) is an invertible function, we also support its inverse function:
-
- t = self.inverse_lambda(lambda_t)
-
- ===============================================================
-
- We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).
-
- 1. For discrete-time DPMs:
-
- For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:
- t_i = (i + 1) / N
- e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.
- We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.
-
- Args:
- betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)
- alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)
-
- Note that we always have alphas_cumprod = cumprod(1 - betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.
-
- **Important**: Please pay special attention for the args for `alphas_cumprod`:
- The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that
- q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ).
- Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have
- alpha_{t_n} = \sqrt{\hat{alpha_n}},
- and
- log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}).
-
-
- 2. For continuous-time DPMs:
-
- We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise
- schedule are the default settings in DDPM and improved-DDPM:
-
- Args:
- beta_min: A `float` number. The smallest beta for the linear schedule.
- beta_max: A `float` number. The largest beta for the linear schedule.
- cosine_s: A `float` number. The hyperparameter in the cosine schedule.
- cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule.
- T: A `float` number. The ending time of the forward process.
-
- ===============================================================
-
- Args:
- schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,
- 'linear' or 'cosine' for continuous-time DPMs.
- Returns:
- A wrapper object of the forward SDE (VP type).
-
- ===============================================================
-
- Example:
-
- # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):
- >>> ns = NoiseScheduleVP('discrete', betas=betas)
-
- # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1):
- >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
-
- # For continuous-time DPMs (VPSDE), linear schedule:
- >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
-
- """
-
- if schedule not in ['discrete', 'linear', 'cosine']:
- raise ValueError("Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format(schedule))
-
- self.schedule = schedule
- if schedule == 'discrete':
- if betas is not None:
- log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0)
- else:
- assert alphas_cumprod is not None
- log_alphas = 0.5 * torch.log(alphas_cumprod)
- self.total_N = len(log_alphas)
- self.T = 1.
- self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1))
- self.log_alpha_array = log_alphas.reshape((1, -1,))
- else:
- self.total_N = 1000
- self.beta_0 = continuous_beta_0
- self.beta_1 = continuous_beta_1
- self.cosine_s = 0.008
- self.cosine_beta_max = 999.
- self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * (1. + self.cosine_s) / math.pi - self.cosine_s
- self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.))
- self.schedule = schedule
- if schedule == 'cosine':
- # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T.
- # Note that T = 0.9946 may be not the optimal setting. However, we find it works well.
- self.T = 0.9946
- else:
- self.T = 1.
-
- def marginal_log_mean_coeff(self, t):
- """
- Compute log(alpha_t) of a given continuous-time label t in [0, T].
- """
- if self.schedule == 'discrete':
- return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), self.log_alpha_array.to(t.device)).reshape((-1))
- elif self.schedule == 'linear':
- return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0
- elif self.schedule == 'cosine':
- log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.))
- log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0
- return log_alpha_t
-
- def marginal_alpha(self, t):
- """
- Compute alpha_t of a given continuous-time label t in [0, T].
- """
- return torch.exp(self.marginal_log_mean_coeff(t))
-
- def marginal_std(self, t):
- """
- Compute sigma_t of a given continuous-time label t in [0, T].
- """
- return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t)))
-
- def marginal_lambda(self, t):
- """
- Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T].
- """
- log_mean_coeff = self.marginal_log_mean_coeff(t)
- log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff))
- return log_mean_coeff - log_std
-
- def inverse_lambda(self, lamb):
- """
- Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t.
- """
- if self.schedule == 'linear':
- tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
- Delta = self.beta_0**2 + tmp
- return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0)
- elif self.schedule == 'discrete':
- log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb)
- t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), torch.flip(self.t_array.to(lamb.device), [1]))
- return t.reshape((-1,))
- else:
- log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
- t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * (1. + self.cosine_s) / math.pi - self.cosine_s
- t = t_fn(log_alpha)
- return t
-
-
-def model_wrapper(
- model,
- noise_schedule,
- model_type="noise",
- model_kwargs={},
- guidance_type="uncond",
- condition=None,
- unconditional_condition=None,
- guidance_scale=1.,
- classifier_fn=None,
- classifier_kwargs={},
-):
- """Create a wrapper function for the noise prediction model.
-
- DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to
- first wrap the model function into a noise prediction model that accepts the continuous time as the input.
-
- We support four types of the diffusion model by setting `model_type`:
-
- 1. "noise": noise prediction model. (Trained by predicting noise).
-
- 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0).
-
- 3. "v": velocity prediction model. (Trained by predicting the velocity).
- The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2].
-
- [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models."
- arXiv preprint arXiv:2202.00512 (2022).
- [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models."
- arXiv preprint arXiv:2210.02303 (2022).
-
- 4. "score": marginal score function. (Trained by denoising score matching).
- Note that the score function and the noise prediction model follow a simple relationship:
- ```
- noise(x_t, t) = -sigma_t * score(x_t, t)
- ```
-
- We support three types of guided sampling by DPMs by setting `guidance_type`:
- 1. "uncond": unconditional sampling by DPMs.
- The input `model` has the following format:
- ``
- model(x, t_input, **model_kwargs) -> noise | x_start | v | score
- ``
-
- 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier.
- The input `model` has the following format:
- ``
- model(x, t_input, **model_kwargs) -> noise | x_start | v | score
- ``
-
- The input `classifier_fn` has the following format:
- ``
- classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond)
- ``
-
- [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis,"
- in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794.
-
- 3. "classifier-free": classifier-free guidance sampling by conditional DPMs.
- The input `model` has the following format:
- ``
- model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score
- ``
- And if cond == `unconditional_condition`, the model output is the unconditional DPM output.
-
- [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance."
- arXiv preprint arXiv:2207.12598 (2022).
-
-
- The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999)
- or continuous-time labels (i.e. epsilon to T).
-
- We wrap the model function to accept only `x` and `t_continuous` as inputs, and output the predicted noise:
- ``
- def model_fn(x, t_continuous) -> noise:
- t_input = get_model_input_time(t_continuous)
- return noise_pred(model, x, t_input, **model_kwargs)
- ``
- where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver.
-
- ===============================================================
-
- Args:
- model: A diffusion model with the corresponding format described above.
- noise_schedule: A noise schedule object, such as NoiseScheduleVP.
- model_type: A `str`. The parameterization type of the diffusion model.
- "noise" or "x_start" or "v" or "score".
- model_kwargs: A `dict`. A dict for the other inputs of the model function.
- guidance_type: A `str`. The type of the guidance for sampling.
- "uncond" or "classifier" or "classifier-free".
- condition: A pytorch tensor. The condition for the guided sampling.
- Only used for "classifier" or "classifier-free" guidance type.
- unconditional_condition: A pytorch tensor. The condition for the unconditional sampling.
- Only used for "classifier-free" guidance type.
- guidance_scale: A `float`. The scale for the guided sampling.
- classifier_fn: A classifier function. Only used for the classifier guidance.
- classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function.
- Returns:
- A noise prediction model that accepts the noised data and the continuous time as the inputs.
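-
- Example (a minimal sketch for classifier-free guidance; `unet`, `betas`, `cond_emb` and `uncond_emb` are
- placeholders from your own codebase):
-
- >>> ns = NoiseScheduleVP('discrete', betas=betas)
- >>> model_fn = model_wrapper(unet, ns, model_type="noise", guidance_type="classifier-free",
- >>> condition=cond_emb, unconditional_condition=uncond_emb, guidance_scale=7.5)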
- """
-
- def get_model_input_time(t_continuous):
- """
- Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.
- For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].
- For continuous-time DPMs, we just use `t_continuous`.
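- For example, with total_N = 1000, t_continuous = 1. maps to t_input = 999., and t_continuous = 1 / 1000 maps to t_input = 0.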
- """
- if noise_schedule.schedule == 'discrete':
- return (t_continuous - 1. / noise_schedule.total_N) * 1000.
- else:
- return t_continuous
-
- def noise_pred_fn(x, t_continuous, cond=None):
- if t_continuous.reshape((-1,)).shape[0] == 1:
- t_continuous = t_continuous.expand((x.shape[0]))
- t_input = get_model_input_time(t_continuous)
- if cond is None:
- output = model(x, t_input, **model_kwargs)
- else:
- output = model(x, t_input, cond, **model_kwargs)
- if model_type == "noise":
- return output
- elif model_type == "x_start":
- alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims)
- elif model_type == "v":
- alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x
- elif model_type == "score":
- sigma_t = noise_schedule.marginal_std(t_continuous)
- dims = x.dim()
- return -expand_dims(sigma_t, dims) * output
-
- def cond_grad_fn(x, t_input):
- """
- Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).
- """
- with torch.enable_grad():
- x_in = x.detach().requires_grad_(True)
- log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs)
- return torch.autograd.grad(log_prob.sum(), x_in)[0]
-
- def model_fn(x, t_continuous):
- """
- The noise prediction model function that is used for DPM-Solver.
- """
- if t_continuous.reshape((-1,)).shape[0] == 1:
- t_continuous = t_continuous.expand((x.shape[0]))
- if guidance_type == "uncond":
- return noise_pred_fn(x, t_continuous)
- elif guidance_type == "classifier":
- assert classifier_fn is not None
- t_input = get_model_input_time(t_continuous)
- cond_grad = cond_grad_fn(x, t_input)
- sigma_t = noise_schedule.marginal_std(t_continuous)
- noise = noise_pred_fn(x, t_continuous)
- return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad
- elif guidance_type == "classifier-free":
- if guidance_scale == 1. or unconditional_condition is None:
- return noise_pred_fn(x, t_continuous, cond=condition)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t_continuous] * 2)
- c_in = torch.cat([unconditional_condition, condition])
- noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2)
- return noise_uncond + guidance_scale * (noise - noise_uncond)
-
- assert model_type in ["noise", "x_start", "v", "score"]
- assert guidance_type in ["uncond", "classifier", "classifier-free"]
- return model_fn
-
-
-class DPM_Solver:
- def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.):
- """Construct a DPM-Solver.
-
- We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0").
- If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver).
- If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++).
- In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True.
- The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales.
-
- Args:
- model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]):
- ``
- def model_fn(x, t_continuous):
- return noise
- ``
- noise_schedule: A noise schedule object, such as NoiseScheduleVP.
- predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model.
- thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1].
- max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding.
-
- [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b.
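-
- Example (a minimal sketch; `model_fn` is the wrapper returned by `model_wrapper` above):
-
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True, thresholding=True, max_val=1.)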
- """
- self.model = model_fn
- self.noise_schedule = noise_schedule
- self.predict_x0 = predict_x0
- self.thresholding = thresholding
- self.max_val = max_val
-
- def noise_prediction_fn(self, x, t):
- """
- Return the noise prediction of the model at (x, t).
- """
- return self.model(x, t)
-
- def data_prediction_fn(self, x, t):
- """
- Return the data prediction of the model at (x, t), with optional dynamic thresholding.
- """
- noise = self.noise_prediction_fn(x, t)
- dims = x.dim()
- alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t)
- x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims)
- if self.thresholding:
- p = 0.995 # A hyperparameter in the paper of "Imagen" [1].
- s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1)
- s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims)
- x0 = torch.clamp(x0, -s, s) / s
- return x0
-
- def model_fn(self, x, t):
- """
- Convert the model to the noise prediction model or the data prediction model.
- """
- if self.predict_x0:
- return self.data_prediction_fn(x, t)
- else:
- return self.noise_prediction_fn(x, t)
-
- def get_time_steps(self, skip_type, t_T, t_0, N, device):
- """Compute the intermediate time steps for sampling.
-
- Args:
- skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- - 'logSNR': uniform logSNR for the time steps.
- - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolution data**.)
- - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolution data.)
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- N: A `int`. The total number of the spacing of the time steps.
- device: A torch device.
- Returns:
- A pytorch tensor of the time steps, with the shape (N + 1,).
- """
- if skip_type == 'logSNR':
- lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device))
- lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device))
- logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device)
- return self.noise_schedule.inverse_lambda(logSNR_steps)
- elif skip_type == 'time_uniform':
- return torch.linspace(t_T, t_0, N + 1).to(device)
- elif skip_type == 'time_quadratic':
- t_order = 2
- t = torch.linspace(t_T**(1. / t_order), t_0**(1. / t_order), N + 1).pow(t_order).to(device)
- return t
- else:
- raise ValueError("Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type))
-
- def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device):
- """
- Get the order of each step for sampling by the singlestep DPM-Solver.
-
- We combine DPM-Solver-1, 2 and 3 to use up all the function evaluations, which is named "DPM-Solver-fast".
- Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is:
- - If order == 1:
- We take `steps` of DPM-Solver-1 (i.e. DDIM).
- - If order == 2:
- - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling.
- - If steps % 2 == 0, we use K steps of DPM-Solver-2.
- - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If order == 3:
- - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.
- - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1.
- - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2.
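-
- For example, with steps = 20 and order = 3, we have K = 7 and the orders are [3, 3, 3, 3, 3, 3, 2] (i.e. 6 steps of DPM-Solver-3 followed by 1 step of DPM-Solver-2).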
-
- ============================================
- Args:
- order: A `int`. The max order for the solver (2 or 3).
- steps: A `int`. The total number of function evaluations (NFE).
- skip_type: A `str`. The type for the spacing of the time steps. We support three types:
- - 'logSNR': uniform logSNR for the time steps.
- - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolution data**.)
- - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolution data.)
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- device: A torch device.
- Returns:
- orders: A list of the solver order of each step.
- """
- if order == 3:
- K = steps // 3 + 1
- if steps % 3 == 0:
- orders = [3,] * (K - 2) + [2, 1]
- elif steps % 3 == 1:
- orders = [3,] * (K - 1) + [1]
- else:
- orders = [3,] * (K - 1) + [2]
- elif order == 2:
- if steps % 2 == 0:
- K = steps // 2
- orders = [2,] * K
- else:
- K = steps // 2 + 1
- orders = [2,] * (K - 1) + [1]
- elif order == 1:
- K = 1
- orders = [1,] * steps
- else:
- raise ValueError("'order' must be '1' or '2' or '3'.")
- if skip_type == 'logSNR':
- # To reproduce the results in the DPM-Solver paper
- timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device)
- else:
- timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[torch.cumsum(torch.tensor([0,] + orders), dim=0).to(device)]
- return timesteps_outer, orders
-
- def denoise_to_zero_fn(self, x, s):
- """
- Denoise at the final step, which is equivalent to solving the ODE from lambda_s to infinity by a first-order discretization.
- """
- return self.data_prediction_fn(x, s)
-
- def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False):
- """
- DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s`.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_1 = torch.expm1(-h)
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- )
- if return_intermediate:
- return x_t, {'model_s': model_s}
- else:
- return x_t
- else:
- phi_1 = torch.expm1(h)
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- )
- if return_intermediate:
- return x_t, {'model_s': model_s}
- else:
- return x_t
-
- def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False, solver_type='dpm_solver'):
- """
- Singlestep solver DPM-Solver-2 from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- r1: A `float`. The hyperparameter of the second-order solver.
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend using the 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- if r1 is None:
- r1 = 0.5
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- lambda_s1 = lambda_s + r1 * h
- s1 = ns.inverse_lambda(lambda_s1)
- log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t)
- alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_11 = torch.expm1(-r1 * h)
- phi_1 = torch.expm1(-h)
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_s1 = (
- expand_dims(sigma_s1 / sigma_s, dims) * x
- - expand_dims(alpha_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s)
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * (model_s1 - model_s)
- )
- else:
- phi_11 = torch.expm1(r1 * h)
- phi_1 = torch.expm1(h)
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- x_s1 = (
- expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x
- - expand_dims(sigma_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s)
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s)
- )
- if return_intermediate:
- return x_t, {'model_s': model_s, 'model_s1': model_s1}
- else:
- return x_t
-
- def singlestep_dpm_solver_third_update(self, x, s, t, r1=1./3., r2=2./3., model_s=None, model_s1=None, return_intermediate=False, solver_type='dpm_solver'):
- """
- Singlestep solver DPM-Solver-3 from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- r1: A `float`. The hyperparameter of the third-order solver.
- r2: A `float`. The hyperparameter of the third-order solver.
- model_s: A pytorch tensor. The model function evaluated at time `s`.
- If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it.
- model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`).
- If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it.
- return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend using the 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- if r1 is None:
- r1 = 1. / 3.
- if r2 is None:
- r2 = 2. / 3.
- ns = self.noise_schedule
- dims = x.dim()
- lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t)
- h = lambda_t - lambda_s
- lambda_s1 = lambda_s + r1 * h
- lambda_s2 = lambda_s + r2 * h
- s1 = ns.inverse_lambda(lambda_s1)
- s2 = ns.inverse_lambda(lambda_s2)
- log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t)
- sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(s2), ns.marginal_std(t)
- alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t)
-
- if self.predict_x0:
- phi_11 = torch.expm1(-r1 * h)
- phi_12 = torch.expm1(-r2 * h)
- phi_1 = torch.expm1(-h)
- phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1.
- phi_2 = phi_1 / h + 1.
- phi_3 = phi_2 / h - 0.5
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- if model_s1 is None:
- x_s1 = (
- expand_dims(sigma_s1 / sigma_s, dims) * x
- - expand_dims(alpha_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- x_s2 = (
- expand_dims(sigma_s2 / sigma_s, dims) * x
- - expand_dims(alpha_s2 * phi_12, dims) * model_s
- + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s)
- )
- model_s2 = self.model_fn(x_s2, s2)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s)
- )
- elif solver_type == 'taylor':
- D1_0 = (1. / r1) * (model_s1 - model_s)
- D1_1 = (1. / r2) * (model_s2 - model_s)
- D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)
- D2 = 2. * (D1_1 - D1_0) / (r2 - r1)
- x_t = (
- expand_dims(sigma_t / sigma_s, dims) * x
- - expand_dims(alpha_t * phi_1, dims) * model_s
- + expand_dims(alpha_t * phi_2, dims) * D1
- - expand_dims(alpha_t * phi_3, dims) * D2
- )
- else:
- phi_11 = torch.expm1(r1 * h)
- phi_12 = torch.expm1(r2 * h)
- phi_1 = torch.expm1(h)
- phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1.
- phi_2 = phi_1 / h - 1.
- phi_3 = phi_2 / h - 0.5
-
- if model_s is None:
- model_s = self.model_fn(x, s)
- if model_s1 is None:
- x_s1 = (
- expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x
- - expand_dims(sigma_s1 * phi_11, dims) * model_s
- )
- model_s1 = self.model_fn(x_s1, s1)
- x_s2 = (
- expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x
- - expand_dims(sigma_s2 * phi_12, dims) * model_s
- - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s)
- )
- model_s2 = self.model_fn(x_s2, s2)
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s)
- )
- elif solver_type == 'taylor':
- D1_0 = (1. / r1) * (model_s1 - model_s)
- D1_1 = (1. / r2) * (model_s2 - model_s)
- D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1)
- D2 = 2. * (D1_1 - D1_0) / (r2 - r1)
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x
- - expand_dims(sigma_t * phi_1, dims) * model_s
- - expand_dims(sigma_t * phi_2, dims) * D1
- - expand_dims(sigma_t * phi_3, dims) * D2
- )
-
- if return_intermediate:
- return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2}
- else:
- return x_t
-
- def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"):
- """
- Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `t_prev_list[-1]`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend using the 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if solver_type not in ['dpm_solver', 'taylor']:
- raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type))
- ns = self.noise_schedule
- dims = x.dim()
- model_prev_1, model_prev_0 = model_prev_list
- t_prev_1, t_prev_0 = t_prev_list
- lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t)
- log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
- sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- h_0 = lambda_prev_0 - lambda_prev_1
- h = lambda_t - lambda_prev_0
- r0 = h_0 / h
- D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)
- if self.predict_x0:
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0
- )
- else:
- if solver_type == 'dpm_solver':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0
- )
- elif solver_type == 'taylor':
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0
- )
- return x_t
-
- def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'):
- """
- Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `t_prev_list[-1]`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend using the 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- ns = self.noise_schedule
- dims = x.dim()
- model_prev_2, model_prev_1, model_prev_0 = model_prev_list
- t_prev_2, t_prev_1, t_prev_0 = t_prev_list
- lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda(t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t)
- log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
- sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
- alpha_t = torch.exp(log_alpha_t)
-
- h_1 = lambda_prev_1 - lambda_prev_2
- h_0 = lambda_prev_0 - lambda_prev_1
- h = lambda_t - lambda_prev_0
- r0, r1 = h_0 / h, h_1 / h
- D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1)
- D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2)
- D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1)
- D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1)
- if self.predict_x0:
- x_t = (
- expand_dims(sigma_t / sigma_prev_0, dims) * x
- - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0
- + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1
- - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h**2 - 0.5), dims) * D2
- )
- else:
- x_t = (
- expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
- - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0
- - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1
- - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h**2 - 0.5), dims) * D2
- )
- return x_t
-
- def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None, r2=None):
- """
- Singlestep DPM-Solver with the order `order` from time `s` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `s`.
- s: A pytorch tensor. The starting time, with the shape (x.shape[0],).
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.
- return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times).
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend using the 'dpm_solver' type.
- r1: A `float`. The hyperparameter of the second-order or third-order solver.
- r2: A `float`. The hyperparameter of the third-order solver.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if order == 1:
- return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate)
- elif order == 2:
- return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate, solver_type=solver_type, r1=r1)
- elif order == 3:
- return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate, solver_type=solver_type, r1=r1, r2=r2)
- else:
- raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order))
-
- def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'):
- """
- Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`.
-
- Args:
- x: A pytorch tensor. The initial value at time `t_prev_list[-1]`.
- model_prev_list: A list of pytorch tensor. The previous computed model values.
- t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],)
- t: A pytorch tensor. The ending time, with the shape (x.shape[0],).
- order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3.
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend using the 'dpm_solver' type.
- Returns:
- x_t: A pytorch tensor. The approximated solution at time `t`.
- """
- if order == 1:
- return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1])
- elif order == 2:
- return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)
- elif order == 3:
- return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type)
- else:
- raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order))
-
- def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5, solver_type='dpm_solver'):
- """
- The adaptive step size solver based on singlestep DPM-Solver.
-
- Args:
- x: A pytorch tensor. The initial value at time `t_T`.
- order: A `int`. The (higher) order of the solver. We only support order == 2 or 3.
- t_T: A `float`. The starting time of the sampling (default is T).
- t_0: A `float`. The ending time of the sampling (default is epsilon).
- h_init: A `float`. The initial step size (for logSNR).
- atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, following [1].
- rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05.
- theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, following [1].
- t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the
- current time and `t_0` is less than `t_err`. The default setting is 1e-5.
- solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers.
- The type slightly impacts the performance. We recommend using the 'dpm_solver' type.
- Returns:
- x_0: A pytorch tensor. The approximated solution at time `t_0`.
-
- [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021.
- """
- ns = self.noise_schedule
- s = t_T * torch.ones((x.shape[0],)).to(x)
- lambda_s = ns.marginal_lambda(s)
- lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x))
- h = h_init * torch.ones_like(s).to(x)
- x_prev = x
- nfe = 0
- if order == 2:
- r1 = 0.5
- lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True)
- higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, solver_type=solver_type, **kwargs)
- elif order == 3:
- r1, r2 = 1. / 3., 2. / 3.
- lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, return_intermediate=True, solver_type=solver_type)
- higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2, solver_type=solver_type, **kwargs)
- else:
- raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order))
- while torch.abs((s - t_0)).mean() > t_err:
- t = ns.inverse_lambda(lambda_s + h)
- x_lower, lower_noise_kwargs = lower_update(x, s, t)
- x_higher = higher_update(x, s, t, **lower_noise_kwargs)
- delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev)))
- norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True))
- E = norm_fn((x_higher - x_lower) / delta).max()
- if torch.all(E <= 1.):
- x = x_higher
- s = t
- x_prev = x_lower
- lambda_s = ns.marginal_lambda(s)
- h = torch.min(theta * h * torch.float_power(E, -1. / order).float(), lambda_0 - lambda_s)
- nfe += order
- print('adaptive solver nfe', nfe)
- return x
-
- def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform',
- method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver',
- atol=0.0078, rtol=0.05,
- ):
- """
- Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`.
-
- =====================================================
-
- We support the following algorithms for both noise prediction model and data prediction model:
- - 'singlestep':
- Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver.
- We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps).
- The total number of function evaluations (NFE) == `steps`.
- Given a fixed NFE == `steps`, the sampling procedure is:
- - If `order` == 1:
- - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM).
- - If `order` == 2:
- - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling.
- - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2.
- - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If `order` == 3:
- - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling.
- - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1.
- - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1.
- - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2.
- - 'multistep':
- Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`.
- We initialize the first `order` values by lower order multistep solvers.
- Given a fixed NFE == `steps`, the sampling procedure is:
- Denote K = steps.
- - If `order` == 1:
- - We use K steps of DPM-Solver-1 (i.e. DDIM).
- - If `order` == 2:
- - We first use 1 step of DPM-Solver-1, then (K - 1) steps of multistep DPM-Solver-2.
- - If `order` == 3:
- - We first use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) steps of multistep DPM-Solver-3.
- - 'singlestep_fixed':
- Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3).
- We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE.
- - 'adaptive':
- Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper).
- We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`.
- You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computation costs
- (NFE) and the sample quality.
- - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2.
- - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3.
-
- =====================================================
-
- Some advice on choosing the algorithm:
- - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs:
- Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`.
- e.g.
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False)
- >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3,
- skip_type='time_uniform', method='singlestep')
- - For **guided sampling with large guidance scale** by DPMs:
- Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`.
- e.g.
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True)
- >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2,
- skip_type='time_uniform', method='multistep')
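- - To use the adaptive step size solver instead (a sketch; `steps` is ignored and the NFE is controlled by `atol`/`rtol`):
- e.g.
- >>> x_sample = dpm_solver.sample(x, t_start=t_start, t_end=t_end, order=3,
- method='adaptive', atol=0.0078, rtol=0.05)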
-
- We support three types of `skip_type`:
- - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolution images**.
- - 'time_uniform': uniform time for the time steps. **Recommended for high-resolution images**.
- - 'time_quadratic': quadratic time for the time steps.
-
- =====================================================
- Args:
- x: A pytorch tensor. The initial value at time `t_start`
- e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution.
- steps: A `int`. The total number of function evaluations (NFE).
- t_start: A `float`. The starting time of the sampling.
- If `t_start` is None, we use self.noise_schedule.T (default is 1.0).
- t_end: A `float`. The ending time of the sampling.
- If `t_end` is None, we use 1. / self.noise_schedule.total_N.
- e.g. if total_N == 1000, we have `t_end` == 1e-3.
- For discrete-time DPMs:
- - We recommend `t_end` == 1. / self.noise_schedule.total_N.
- For continuous-time DPMs:
- - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15.
- order: A `int`. The order of DPM-Solver.
- skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'.
- method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'.
- denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step.
- Default is `False`. If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1).
-
- This trick was first proposed by DDPM (https://arxiv.org/abs/2006.11239) and
- score_sde (https://arxiv.org/abs/2011.13456). It can improve the FID
- for diffusion models sampled with diffusion SDEs on low-resolution images
- (such as CIFAR-10). However, we observed that this trick does not matter for
- high-resolution images. As it needs an additional NFE, we do not recommend
- it for high-resolution images.
- lower_order_final: A `bool`. Whether to use lower order solvers at the final steps.
- Only valid for `method=multistep` and `steps < 15`. We empirically find that
- this trick is a key to stabilizing the sampling by DPM-Solver with very few steps
- (especially for steps <= 10). So we recommend setting it to `True`.
- solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`.
- atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.
- rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'.
- Returns:
- x_end: A pytorch tensor. The approximated solution at time `t_end`.
-
- """
- t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end
- t_T = self.noise_schedule.T if t_start is None else t_start
- device = x.device
- if method == 'adaptive':
- with torch.no_grad():
- x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol, solver_type=solver_type)
- elif method == 'multistep':
- assert steps >= order
- timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device)
- assert timesteps.shape[0] - 1 == steps
- with torch.no_grad():
- vec_t = timesteps[0].expand((x.shape[0]))
- model_prev_list = [self.model_fn(x, vec_t)]
- t_prev_list = [vec_t]
- # Init the first `order` values by lower order multistep DPM-Solver.
- for init_order in range(1, order):
- vec_t = timesteps[init_order].expand(x.shape[0])
- x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order, solver_type=solver_type)
- model_prev_list.append(self.model_fn(x, vec_t))
- t_prev_list.append(vec_t)
- # Compute the remaining values by `order`-th order multistep DPM-Solver.
- for step in range(order, steps + 1):
- vec_t = timesteps[step].expand(x.shape[0])
- if lower_order_final and steps < 15:
- step_order = min(order, steps + 1 - step)
- else:
- step_order = order
- x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order, solver_type=solver_type)
- for i in range(order - 1):
- t_prev_list[i] = t_prev_list[i + 1]
- model_prev_list[i] = model_prev_list[i + 1]
- t_prev_list[-1] = vec_t
- # We do not need to evaluate the final model value.
- if step < steps:
- model_prev_list[-1] = self.model_fn(x, vec_t)
- elif method in ['singlestep', 'singlestep_fixed']:
- if method == 'singlestep':
- timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order, skip_type=skip_type, t_T=t_T, t_0=t_0, device=device)
- elif method == 'singlestep_fixed':
- K = steps // order
- orders = [order,] * K
- timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device)
- for i, order in enumerate(orders):
- t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1]
- timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(), N=order, device=device)
- lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner)
- vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0])
- h = lambda_inner[-1] - lambda_inner[0]
- r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h
- r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h
- x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2)
- if denoise_to_zero:
- x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0)
- return x
-
-
-
-#############################################################
-# other utility functions
-#############################################################
-
-def interpolate_fn(x, xp, yp):
- """
- A piecewise linear function y = f(x), using xp and yp as keypoints.
- We implement f(x) in a differentiable way (i.e. applicable for autograd).
- The function f(x) is well-defined for all x. (For x beyond the bounds of xp, we use the outermost points of xp to define the linear function.)
-
- Args:
- x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver).
- xp: PyTorch tensor with shape [C, K], where K is the number of keypoints.
- yp: PyTorch tensor with shape [C, K].
- Returns:
- The function values f(x), with shape [N, C].
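-
- Example (a minimal sketch):
- >>> xp = torch.linspace(0., 1., 5).reshape((1, 5))
- >>> yp = xp ** 2
- >>> interpolate_fn(torch.tensor([[0.25]]), xp, yp) # piecewise-linear interpolation; here exactly 0.0625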
- """
- N, K = x.shape[0], xp.shape[1]
- all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2)
- sorted_all_x, x_indices = torch.sort(all_x, dim=2)
- x_idx = torch.argmin(x_indices, dim=2)
- cand_start_idx = x_idx - 1
- start_idx = torch.where(
- torch.eq(x_idx, 0),
- torch.tensor(1, device=x.device),
- torch.where(
- torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
- ),
- )
- end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1)
- start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2)
- end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2)
- start_idx2 = torch.where(
- torch.eq(x_idx, 0),
- torch.tensor(0, device=x.device),
- torch.where(
- torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
- ),
- )
- y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1)
- start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2)
- end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2)
- cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x)
- return cand
-
-
-def expand_dims(v, dims):
- """
- Expand the tensor `v` to have `dims` dimensions.
-
- Args:
- `v`: a PyTorch tensor with shape [N].
- `dims`: an `int`. The target number of dimensions.
- Returns:
- a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`.
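-
- For example, expand_dims(torch.ones(4), 4) returns a tensor with shape [4, 1, 1, 1].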
- """
- return v[(...,) + (None,)*(dims - 1)]
\ No newline at end of file
diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/image_degradation/utils_image.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/image_degradation/utils_image.py
deleted file mode 100644
index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/image_degradation/utils_image.py
+++ /dev/null
@@ -1,916 +0,0 @@
-import os
-import math
-import random
-import numpy as np
-import torch
-import cv2
-from torchvision.utils import make_grid
-from datetime import datetime
-#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py
-
-
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
-
-
-'''
-# --------------------------------------------
-# Kai Zhang (github: https://github.com/cszn)
-# 03/Mar/2019
-# --------------------------------------------
-# https://github.com/twhui/SRGAN-pyTorch
-# https://github.com/xinntao/BasicSR
-# --------------------------------------------
-'''
-
-
-IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']
-
-
-def is_image_file(filename):
- return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-
-def get_timestamp():
- return datetime.now().strftime('%y%m%d-%H%M%S')
-
-
-def imshow(x, title=None, cbar=False, figsize=None):
- plt.figure(figsize=figsize)
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
- if title:
- plt.title(title)
- if cbar:
- plt.colorbar()
- plt.show()
-
-
-def surf(Z, cmap='rainbow', figsize=None):
- plt.figure(figsize=figsize)
- ax3 = plt.axes(projection='3d')
-
- w, h = Z.shape[:2]
- xx = np.arange(0,w,1)
- yy = np.arange(0,h,1)
- X, Y = np.meshgrid(xx, yy)
- ax3.plot_surface(X,Y,Z,cmap=cmap)
- #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap)
- plt.show()
-
-
-'''
-# --------------------------------------------
-# get image paths
-# --------------------------------------------
-'''
-
-
-def get_image_paths(dataroot):
- paths = None # return None if dataroot is None
- if dataroot is not None:
- paths = sorted(_get_paths_from_images(dataroot))
- return paths
-
-
-def _get_paths_from_images(path):
- assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
- images = []
- for dirpath, _, fnames in sorted(os.walk(path)):
- for fname in sorted(fnames):
- if is_image_file(fname):
- img_path = os.path.join(dirpath, fname)
- images.append(img_path)
- assert images, '{:s} has no valid image file'.format(path)
- return images
-
-
-'''
-# --------------------------------------------
-# split large images into small images
-# --------------------------------------------
-'''
-
-
-def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
- w, h = img.shape[:2]
- patches = []
- if w > p_max and h > p_max:
- w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=int))
- h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=int))
- w1.append(w-p_size)
- h1.append(h-p_size)
-# print(w1)
-# print(h1)
- for i in w1:
- for j in h1:
- patches.append(img[i:i+p_size, j:j+p_size,:])
- else:
- patches.append(img)
-
- return patches
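-
-
-# e.g. patches_from_image(img, p_size=512, p_overlap=64, p_max=800) returns a list of overlapping
-# 512x512xC crops when both sides of `img` exceed p_max, and simply [img] otherwise.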
-
-
-def imssave(imgs, img_path):
- """
- imgs: list, N images of size WxHxC
- """
- img_name, ext = os.path.splitext(os.path.basename(img_path))
-
- for i, img in enumerate(imgs):
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')
- cv2.imwrite(new_path, img)
-
-
-def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
- """
- Split the large images from original_dataroot into small overlapping images of size (p_size)x(p_size),
- and save them into taget_dataroot; only images larger than (p_max)x(p_max)
- will be split.
- Args:
- original_dataroot:
- taget_dataroot:
- p_size: size of small images
- p_overlap: overlap between adjacent patches; the patch size used in training is a good choice
- p_max: images smaller than (p_max)x(p_max) are kept unchanged.
- """
- paths = get_image_paths(original_dataroot)
- for img_path in paths:
- # img_name, ext = os.path.splitext(os.path.basename(img_path))
- img = imread_uint(img_path, n_channels=n_channels)
- patches = patches_from_image(img, p_size, p_overlap, p_max)
- imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path)))
- #if original_dataroot == taget_dataroot:
- #del img_path
-
-'''
-# --------------------------------------------
-# makedir
-# --------------------------------------------
-'''
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-
-def mkdirs(paths):
- if isinstance(paths, str):
- mkdir(paths)
- else:
- for path in paths:
- mkdir(path)
-
-
-def mkdir_and_rename(path):
- if os.path.exists(path):
- new_name = path + '_archived_' + get_timestamp()
- print('Path already exists. Renaming it to [{:s}]'.format(new_name))
- os.rename(path, new_name)
- os.makedirs(path)
-
-
-'''
-# --------------------------------------------
-# read image from path
-# opencv is fast, but reads BGR numpy images
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# get uint8 image of size HxWxn_channels (RGB)
-# --------------------------------------------
-def imread_uint(path, n_channels=3):
- # input: path
- # output: HxWx3(RGB or GGG), or HxWx1 (G)
- if n_channels == 1:
- img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE
- img = np.expand_dims(img, axis=2) # HxWx1
- elif n_channels == 3:
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG
- else:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB
- return img
-
-
-# --------------------------------------------
-# matlab's imwrite
-# --------------------------------------------
-def imsave(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-def imwrite(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-
-
-# --------------------------------------------
-# get single image of size HxWxn_channels (BGR)
-# --------------------------------------------
-def read_img(path):
- # read image by cv2
- # return: Numpy float32, HWC, BGR, [0,1]
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE
- img = img.astype(np.float32) / 255.
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- # some images have 4 channels
- if img.shape[2] > 3:
- img = img[:, :, :3]
- return img
-
-
-'''
-# --------------------------------------------
-# image format conversion
-# --------------------------------------------
-# numpy(single) <---> numpy(uint)
-# numpy(single) <---> tensor
-# numpy(uint) <---> tensor
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# numpy(single) [0, 1] <---> numpy(uint)
-# --------------------------------------------
-
-
-def uint2single(img):
-
- return np.float32(img/255.)
-
-
-def single2uint(img):
-
- return np.uint8((img.clip(0, 1)*255.).round())
-
-
-def uint162single(img):
-
- return np.float32(img/65535.)
-
-
-def single2uint16(img):
-
- return np.uint16((img.clip(0, 1)*65535.).round())
-
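-# Illustrative usage sketch (added for clarity; not part of the original file):
-# a uint8 image is mapped to float32 in [0, 1] for processing and back to
-# uint8 for saving; the round trip preserves the original pixel values.
-#   img_u8 = imread_uint('example.png', n_channels=3)   # hypothetical file name
-#   img_f32 = uint2single(img_u8)                        # float32 in [0, 1]
-#   img_back = single2uint(img_f32)                      # uint8 again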
-
-# --------------------------------------------
-# numpy(uint) (HxWxC or HxW) <---> tensor
-# --------------------------------------------
-
-
-# convert uint to 4-dimensional torch tensor
-def uint2tensor4(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)
-
-
-# convert uint to 3-dimensional torch tensor
-def uint2tensor3(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)
-
-
-# convert 2/3/4-dimensional torch tensor to uint
-def tensor2uint(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- return np.uint8((img*255.0).round())
-
-
-# --------------------------------------------
-# numpy(single) (HxWxC) <---> tensor
-# --------------------------------------------
-
-
-# convert single (HxWxC) to 3-dimensional torch tensor
-def single2tensor3(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
-
-
-# convert single (HxWxC) to 4-dimensional torch tensor
-def single2tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)
-
-
-# convert torch tensor to single
-def tensor2single(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
-
- return img
-
-# convert torch tensor to single
-def tensor2single3(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- elif img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return img
-
-
-def single2tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)
-
-
-def single32tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)
-
-
-def single42tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()
-
-
-# from skimage.io import imread, imsave
-def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
- '''
- Converts a torch Tensor into an image Numpy array of BGR channel order
- Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
- Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
- '''
- tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp
- tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1]
- n_dim = tensor.dim()
- if n_dim == 4:
- n_img = len(tensor)
- img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 3:
- img_np = tensor.numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 2:
- img_np = tensor.numpy()
- else:
- raise TypeError(
- 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
- if out_type == np.uint8:
- img_np = (img_np * 255.0).round()
-        # Important. Unlike MATLAB, numpy.uint8() will NOT round by default.
- return img_np.astype(out_type)
-
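-# Illustrative usage sketch (added for clarity; not part of the original file):
-# converting a 1x3xHxW network output in [0, 1] to a uint8 BGR array that
-# cv2.imwrite can save directly.
-#   out = torch.rand(1, 3, 64, 64)         # stand-in for a model output
-#   img_bgr = tensor2img(out)              # HxWx3, uint8, BGR channel order
-#   cv2.imwrite('result.png', img_bgr)     # hypothetical output path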
-
-'''
-# --------------------------------------------
-# Augmentation, flip and/or rotate
-# --------------------------------------------
-# The following two are enough.
-# (1) augment_img: numpy image of WxHxC or WxH
-# (2) augment_img_tensor4: tensor image 1xCxWxH
-# --------------------------------------------
-'''
-
-
-def augment_img(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return np.flipud(np.rot90(img))
- elif mode == 2:
- return np.flipud(img)
- elif mode == 3:
- return np.rot90(img, k=3)
- elif mode == 4:
- return np.flipud(np.rot90(img, k=2))
- elif mode == 5:
- return np.rot90(img)
- elif mode == 6:
- return np.rot90(img, k=2)
- elif mode == 7:
- return np.flipud(np.rot90(img, k=3))
-
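-# Note (added for clarity; not part of the original file): modes 0-7 enumerate
-# the 8 elements of the dihedral group D4 (identity, 90/180/270-degree
-# rotations, and their flipped versions), so applying all modes to one patch
-# yields 8 geometrically augmented copies, e.g.:
-#   augmented = [augment_img(patch, mode=m) for m in range(8)]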
-
-def augment_img_tensor4(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return img.rot90(1, [2, 3]).flip([2])
- elif mode == 2:
- return img.flip([2])
- elif mode == 3:
- return img.rot90(3, [2, 3])
- elif mode == 4:
- return img.rot90(2, [2, 3]).flip([2])
- elif mode == 5:
- return img.rot90(1, [2, 3])
- elif mode == 6:
- return img.rot90(2, [2, 3])
- elif mode == 7:
- return img.rot90(3, [2, 3]).flip([2])
-
-
-def augment_img_tensor(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- img_size = img.size()
- img_np = img.data.cpu().numpy()
- if len(img_size) == 3:
- img_np = np.transpose(img_np, (1, 2, 0))
- elif len(img_size) == 4:
- img_np = np.transpose(img_np, (2, 3, 1, 0))
- img_np = augment_img(img_np, mode=mode)
- img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
- if len(img_size) == 3:
- img_tensor = img_tensor.permute(2, 0, 1)
- elif len(img_size) == 4:
- img_tensor = img_tensor.permute(3, 2, 0, 1)
-
- return img_tensor.type_as(img)
-
-
-def augment_img_np3(img, mode=0):
- if mode == 0:
- return img
- elif mode == 1:
- return img.transpose(1, 0, 2)
- elif mode == 2:
- return img[::-1, :, :]
- elif mode == 3:
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 4:
- return img[:, ::-1, :]
- elif mode == 5:
- img = img[:, ::-1, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 6:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- return img
- elif mode == 7:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
-
-
-def augment_imgs(img_list, hflip=True, rot=True):
- # horizontal flip OR rotate
- hflip = hflip and random.random() < 0.5
- vflip = rot and random.random() < 0.5
- rot90 = rot and random.random() < 0.5
-
- def _augment(img):
- if hflip:
- img = img[:, ::-1, :]
- if vflip:
- img = img[::-1, :, :]
- if rot90:
- img = img.transpose(1, 0, 2)
- return img
-
- return [_augment(img) for img in img_list]
-
-
-'''
-# --------------------------------------------
-# modcrop and shave
-# --------------------------------------------
-'''
-
-
-def modcrop(img_in, scale):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- if img.ndim == 2:
- H, W = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r]
- elif img.ndim == 3:
- H, W, C = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r, :]
- else:
- raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
- return img
-
-
-def shave(img_in, border=0):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- h, w = img.shape[:2]
- img = img[border:h-border, border:w-border]
- return img
-
-
-'''
-# --------------------------------------------
-# image processing process on numpy image
-# channel_convert(in_c, tar_type, img_list):
-# rgb2ycbcr(img, only_y=True):
-# bgr2ycbcr(img, only_y=True):
-# ycbcr2rgb(img):
-# --------------------------------------------
-'''
-
-
-def rgb2ycbcr(img, only_y=True):
- '''same as matlab rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
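-# Note (added for clarity; not part of the original file): the coefficients
-# above are the ITU-R BT.601 "studio range" conversion used by MATLAB, i.e.
-# for uint8 input in [0, 255]:
-#   Y = 16 + (65.481*R + 128.553*G + 24.966*B) / 255
-# which maps Y into [16, 235].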
-
-def ycbcr2rgb(img):
- '''same as matlab ycbcr2rgb
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
- [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def bgr2ycbcr(img, only_y=True):
- '''bgr version of rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
- [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def channel_convert(in_c, tar_type, img_list):
- # conversion among BGR, gray and y
- if in_c == 3 and tar_type == 'gray': # BGR to gray
- gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in gray_list]
- elif in_c == 3 and tar_type == 'y': # BGR to y
- y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in y_list]
- elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR
- return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
- else:
- return img_list
-
-
-'''
-# --------------------------------------------
-# metric, PSNR and SSIM
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# PSNR
-# --------------------------------------------
-def calculate_psnr(img1, img2, border=0):
- # img1 and img2 have range [0, 255]
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- mse = np.mean((img1 - img2)**2)
- if mse == 0:
- return float('inf')
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
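-# Illustrative usage sketch (added for clarity; not part of the original file):
-# PSNR between a restored image and its ground truth, both uint8 in [0, 255]
-# (img_restored and img_gt are hypothetical arrays); identical inputs return
-# float('inf').
-#   psnr = calculate_psnr(single2uint(img_restored), img_gt, border=4)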
-
-# --------------------------------------------
-# SSIM
-# --------------------------------------------
-def calculate_ssim(img1, img2, border=0):
- '''calculate SSIM
- the same outputs as MATLAB's
- img1, img2: [0, 255]
- '''
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- if img1.ndim == 2:
- return ssim(img1, img2)
- elif img1.ndim == 3:
- if img1.shape[2] == 3:
- ssims = []
- for i in range(3):
- ssims.append(ssim(img1[:,:,i], img2[:,:,i]))
- return np.array(ssims).mean()
- elif img1.shape[2] == 1:
- return ssim(np.squeeze(img1), np.squeeze(img2))
- else:
- raise ValueError('Wrong input image dimensions.')
-
-
-def ssim(img1, img2):
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
- (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-'''
-# --------------------------------------------
-# matlab's bicubic imresize (numpy and torch) [0, 1]
-# --------------------------------------------
-'''
-
-
-# matlab 'imresize' function, now only support 'bicubic'
-def cubic(x):
- absx = torch.abs(x)
- absx2 = absx**2
- absx3 = absx**3
- return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
- (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))
-
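-# Note (added for clarity; not part of the original file): this is the Keys
-# cubic convolution kernel with a = -0.5, the same kernel MATLAB's imresize
-# uses for 'bicubic':
-#   W(x) =  1.5|x|^3 - 2.5|x|^2 + 1            for |x| <= 1
-#   W(x) = -0.5|x|^3 + 2.5|x|^2 - 4|x| + 2     for 1 < |x| <= 2
-#   W(x) =  0                                  otherwise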
-
-def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
- if (scale < 1) and (antialiasing):
- # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width
- kernel_width = kernel_width / scale
-
- # Output-space coordinates
- x = torch.linspace(1, out_length, out_length)
-
- # Input-space coordinates. Calculate the inverse mapping such that 0.5
- # in output space maps to 0.5 in input space, and 0.5+scale in output
- # space maps to 1.5 in input space.
- u = x / scale + 0.5 * (1 - 1 / scale)
-
- # What is the left-most pixel that can be involved in the computation?
- left = torch.floor(u - kernel_width / 2)
-
- # What is the maximum number of pixels that can be involved in the
- # computation? Note: it's OK to use an extra pixel here; if the
- # corresponding weights are all zero, it will be eliminated at the end
- # of this function.
- P = math.ceil(kernel_width) + 2
-
- # The indices of the input pixels involved in computing the k-th output
- # pixel are in row k of the indices matrix.
- indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
- 1, P).expand(out_length, P)
-
- # The weights used to compute the k-th output pixel are in row k of the
- # weights matrix.
- distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
- # apply cubic kernel
- if (scale < 1) and (antialiasing):
- weights = scale * cubic(distance_to_center * scale)
- else:
- weights = cubic(distance_to_center)
- # Normalize the weights matrix so that each row sums to 1.
- weights_sum = torch.sum(weights, 1).view(out_length, 1)
- weights = weights / weights_sum.expand(out_length, P)
-
- # If a column in weights is all zero, get rid of it. only consider the first and last column.
- weights_zero_tmp = torch.sum((weights == 0), 0)
- if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 1, P - 2)
- weights = weights.narrow(1, 1, P - 2)
- if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 0, P - 2)
- weights = weights.narrow(1, 0, P - 2)
- weights = weights.contiguous()
- indices = indices.contiguous()
- sym_len_s = -indices.min() + 1
- sym_len_e = indices.max() - in_length
- indices = indices + sym_len_s - 1
- return weights, indices, int(sym_len_s), int(sym_len_e)
-
-
-# --------------------------------------------
-# imresize for tensor image [0, 1]
-# --------------------------------------------
-def imresize(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: pytorch tensor, CHW or HW [0,1]
- # output: CHW or HW [0,1] w/o round
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(0)
- in_C, in_H, in_W = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
-    # That ordering is not implemented here: the H dimension is always resized first, then W.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
- img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:, :sym_len_Hs, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[:, -sym_len_He:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(in_C, out_H, in_W)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
- out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :, :sym_len_Ws]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, :, -sym_len_We:]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(in_C, out_H, out_W)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
- return out_2
-
-
-# --------------------------------------------
-# imresize for numpy image [0, 1]
-# --------------------------------------------
-def imresize_np(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: Numpy, HWC or HW [0,1]
- # output: HWC or HW [0,1] w/o round
- img = torch.from_numpy(img)
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(2)
-
- in_H, in_W, in_C = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
-    # That ordering is not implemented here: the H dimension is always resized first, then W.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
- img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:sym_len_Hs, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[-sym_len_He:, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(out_H, in_W, in_C)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
- out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :sym_len_Ws, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, -sym_len_We:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(out_H, out_W, in_C)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
-
- return out_2.numpy()
-
-
-if __name__ == '__main__':
- print('---')
-# img = imread_uint('test.bmp', 3)
-# img = uint2single(img)
-# img_bicubic = imresize_np(img, 1/4)
\ No newline at end of file
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/display.py b/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/display.py
deleted file mode 100644
index 956880722a3f05613ebd06f5686b3d8a59642e92..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/display.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import matplotlib.pyplot as plt
-import time
-import numpy as np
-import sys
-
-
-def progbar(i, n, size=16):
- done = (i * size) // n
- bar = ''
-    for j in range(size):
-        bar += '█' if j <= done else '░'
- return bar
-
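-# Illustrative usage sketch (added for clarity; not part of the original file):
-# render a bar for step i of n and stream() it so the line is rewritten in place.
-#   for i in range(n):
-#       stream(f'step {i + 1}/{n} {progbar(i + 1, n)}')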
-
-def stream(message) :
-    try:
-        sys.stdout.write("\r{%s}" % message)
-    except UnicodeEncodeError:
-        # remove non-ASCII characters from the message before retrying
-        message = ''.join(i for i in message if ord(i) < 128)
-        sys.stdout.write("\r{%s}" % message)
-
-
-def simple_table(item_tuples) :
-
- border_pattern = '+---------------------------------------'
- whitespace = ' '
-
-    headings, cells = [], []
-
- for item in item_tuples :
-
- heading, cell = str(item[0]), str(item[1])
-
- pad_head = True if len(heading) < len(cell) else False
-
- pad = abs(len(heading) - len(cell))
- pad = whitespace[:pad]
-
- pad_left = pad[:len(pad)//2]
- pad_right = pad[len(pad)//2:]
-
- if pad_head :
- heading = pad_left + heading + pad_right
- else :
- cell = pad_left + cell + pad_right
-
- headings += [heading]
- cells += [cell]
-
- border, head, body = '', '', ''
-
- for i in range(len(item_tuples)) :
-
- temp_head = f'| {headings[i]} '
- temp_body = f'| {cells[i]} '
-
- border += border_pattern[:len(temp_head)]
- head += temp_head
- body += temp_body
-
- if i == len(item_tuples) - 1 :
- head += '|'
- body += '|'
- border += '+'
-
- print(border)
- print(head)
- print(border)
- print(body)
- print(border)
- print(' ')
-
-
-def time_since(started) :
- elapsed = time.time() - started
- m = int(elapsed // 60)
- s = int(elapsed % 60)
- if m >= 60 :
- h = int(m // 60)
- m = m % 60
- return f'{h}h {m}m {s}s'
- else :
- return f'{m}m {s}s'
-
-
-def save_attention(attn, path) :
- fig = plt.figure(figsize=(12, 6))
- plt.imshow(attn.T, interpolation='nearest', aspect='auto')
- fig.savefig(f'{path}.png', bbox_inches='tight')
- plt.close(fig)
-
-
-def save_spectrogram(M, path, length=None) :
- M = np.flip(M, axis=0)
- if length : M = M[:, :length]
- fig = plt.figure(figsize=(12, 6))
- plt.imshow(M, interpolation='nearest', aspect='auto')
- fig.savefig(f'{path}.png', bbox_inches='tight')
- plt.close(fig)
-
-
-def plot(array) :
- fig = plt.figure(figsize=(30, 5))
- ax = fig.add_subplot(111)
- ax.xaxis.label.set_color('grey')
- ax.yaxis.label.set_color('grey')
- ax.xaxis.label.set_fontsize(23)
- ax.yaxis.label.set_fontsize(23)
- ax.tick_params(axis='x', colors='grey', labelsize=23)
- ax.tick_params(axis='y', colors='grey', labelsize=23)
- plt.plot(array)
-
-
-def plot_spec(M) :
- M = np.flip(M, axis=0)
- plt.figure(figsize=(18,4))
- plt.imshow(M, interpolation='nearest', aspect='auto')
- plt.show()
-
diff --git a/spaces/Kreaols/ChuanhuChatGPT/assets/external-scripts.js b/spaces/Kreaols/ChuanhuChatGPT/assets/external-scripts.js
deleted file mode 100644
index 8d0352669045537af5698b1824dbc1dba21df478..0000000000000000000000000000000000000000
--- a/spaces/Kreaols/ChuanhuChatGPT/assets/external-scripts.js
+++ /dev/null
@@ -1,2 +0,0 @@
-
-// external javascript here
diff --git a/spaces/Kuachi/ai-voice/README.md b/spaces/Kuachi/ai-voice/README.md
deleted file mode 100644
index 62dee36e0b30f5e99a6eea4122deb42189651e4e..0000000000000000000000000000000000000000
--- a/spaces/Kuachi/ai-voice/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Anime Voice Ai
-emoji: 🗿
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: kuachi/voice
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py
deleted file mode 100644
index d20beb2975a563f03e7b6b2afcef287cb41af05a..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/fused_semantic_head.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-from typing import Tuple
-
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmengine.config import ConfigDict
-from mmengine.model import BaseModule
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.utils import MultiConfig, OptConfigType
-
-
-@MODELS.register_module()
-class FusedSemanticHead(BaseModule):
- r"""Multi-level fused semantic segmentation head.
-
- .. code-block:: none
-
- in_1 -> 1x1 conv ---
- |
- in_2 -> 1x1 conv -- |
- ||
- in_3 -> 1x1 conv - ||
- ||| /-> 1x1 conv (mask prediction)
- in_4 -> 1x1 conv -----> 3x3 convs (*4)
- | \-> 1x1 conv (feature)
- in_5 -> 1x1 conv ---
- """ # noqa: W605
-
- def __init__(
- self,
- num_ins: int,
- fusion_level: int,
- seg_scale_factor=1 / 8,
- num_convs: int = 4,
- in_channels: int = 256,
- conv_out_channels: int = 256,
- num_classes: int = 183,
- conv_cfg: OptConfigType = None,
- norm_cfg: OptConfigType = None,
- ignore_label: int = None,
- loss_weight: float = None,
- loss_seg: ConfigDict = dict(
- type='CrossEntropyLoss', ignore_index=255, loss_weight=0.2),
- init_cfg: MultiConfig = dict(
- type='Kaiming', override=dict(name='conv_logits'))
- ) -> None:
- super().__init__(init_cfg=init_cfg)
- self.num_ins = num_ins
- self.fusion_level = fusion_level
- self.seg_scale_factor = seg_scale_factor
- self.num_convs = num_convs
- self.in_channels = in_channels
- self.conv_out_channels = conv_out_channels
- self.num_classes = num_classes
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.fp16_enabled = False
-
- self.lateral_convs = nn.ModuleList()
- for i in range(self.num_ins):
- self.lateral_convs.append(
- ConvModule(
- self.in_channels,
- self.in_channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- inplace=False))
-
- self.convs = nn.ModuleList()
- for i in range(self.num_convs):
- in_channels = self.in_channels if i == 0 else conv_out_channels
- self.convs.append(
- ConvModule(
- in_channels,
- conv_out_channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.conv_embedding = ConvModule(
- conv_out_channels,
- conv_out_channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
- self.conv_logits = nn.Conv2d(conv_out_channels, self.num_classes, 1)
- if ignore_label:
- loss_seg['ignore_index'] = ignore_label
- if loss_weight:
- loss_seg['loss_weight'] = loss_weight
- if ignore_label or loss_weight:
-            warnings.warn('``ignore_label`` and ``loss_weight`` will be '
-                          'deprecated soon. Please set ``ignore_index`` and '
-                          '``loss_weight`` in ``loss_seg`` instead.')
- self.criterion = MODELS.build(loss_seg)
-
- def forward(self, feats: Tuple[Tensor]) -> Tuple[Tensor]:
- """Forward function.
-
- Args:
- feats (tuple[Tensor]): Multi scale feature maps.
-
- Returns:
- tuple[Tensor]:
-
- - mask_preds (Tensor): Predicted mask logits.
- - x (Tensor): Fused feature.
- """
- x = self.lateral_convs[self.fusion_level](feats[self.fusion_level])
- fused_size = tuple(x.shape[-2:])
- for i, feat in enumerate(feats):
- if i != self.fusion_level:
- feat = F.interpolate(
- feat, size=fused_size, mode='bilinear', align_corners=True)
- # fix runtime error of "+=" inplace operation in PyTorch 1.10
- x = x + self.lateral_convs[i](feat)
-
- for i in range(self.num_convs):
- x = self.convs[i](x)
-
- mask_preds = self.conv_logits(x)
- x = self.conv_embedding(x)
- return mask_preds, x
-
- def loss(self, mask_preds: Tensor, labels: Tensor) -> Tensor:
- """Loss function.
-
- Args:
- mask_preds (Tensor): Predicted mask logits.
- labels (Tensor): Ground truth.
-
- Returns:
- Tensor: Semantic segmentation loss.
- """
- labels = F.interpolate(
- labels.float(), scale_factor=self.seg_scale_factor, mode='nearest')
- labels = labels.squeeze(1).long()
- loss_semantic_seg = self.criterion(mask_preds, labels)
- return loss_semantic_seg
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py
deleted file mode 100644
index fbaacc19b19f6f8284eb65c7d2d2aa95e8051427..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py
+++ /dev/null
@@ -1,35 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_step_600e.py',
- '../../_base_/det_models/psenet_r50_fpnf.py',
- '../../_base_/det_datasets/icdar2015.py',
- '../../_base_/det_pipelines/psenet_pipeline.py'
-]
-
-model = {{_base_.model_quad}}
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}}
-
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=2,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/MA9149210776/CrucibleAI-ControlNetMediaPipeFace/app.py b/spaces/MA9149210776/CrucibleAI-ControlNetMediaPipeFace/app.py
deleted file mode 100644
index 52058acccc89fabb676263590dd45e3c16ea72cc..0000000000000000000000000000000000000000
--- a/spaces/MA9149210776/CrucibleAI-ControlNetMediaPipeFace/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/CrucibleAI/ControlNetMediaPipeFace").launch()
\ No newline at end of file
diff --git a/spaces/Mahiruoshi/vits-chatbot/modules.py b/spaces/Mahiruoshi/vits-chatbot/modules.py
deleted file mode 100644
index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/vits-chatbot/modules.py
+++ /dev/null
@@ -1,387 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
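-# Note (added for clarity; not part of the original file): ResidualCouplingLayer
-# is an affine coupling layer in the VITS-style flow: half of the channels (x0)
-# pass through unchanged and condition an affine transform of the other half,
-# x1 -> m + x1 * exp(logs), so the log-determinant of the Jacobian is simply
-# the sum of logs over the transformed elements (mean_only=True fixes logs = 0).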
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/ui/separator.tsx b/spaces/Makiing/coolb-in-gtest/src/components/ui/separator.tsx
deleted file mode 100644
index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/ui/separator.tsx
+++ /dev/null
@@ -1,31 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as SeparatorPrimitive from '@radix-ui/react-separator'
-
-import { cn } from '@/lib/utils'
-
-const Separator = React.forwardRef<
-  React.ElementRef<typeof SeparatorPrimitive.Root>,
-  React.ComponentPropsWithoutRef<typeof SeparatorPrimitive.Root>
->(
-  (
-    { className, orientation = 'horizontal', decorative = true, ...props },
-    ref
-  ) => (
-    <SeparatorPrimitive.Root
-      ref={ref}
-      decorative={decorative}
-      orientation={orientation}
-      className={cn(
-        'shrink-0 bg-border',
-        orientation === 'horizontal' ? 'h-[1px] w-full' : 'h-full w-[1px]',
-        className
-      )}
-      {...props}
-    />
-  )
-)
-Separator.displayName = SeparatorPrimitive.Root.displayName
-
-export { Separator }
diff --git a/spaces/Marshalls/testmtd/analysis/remove_bad_beginnings.py b/spaces/Marshalls/testmtd/analysis/remove_bad_beginnings.py
deleted file mode 100644
index dcd505596253e4401b999df4bad2ed4bca525106..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/analysis/remove_bad_beginnings.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from analysis.pymo.parsers import BVHParser
-from analysis.pymo.data import Joint, MocapData
-from analysis.pymo.preprocessing import *
-from analysis.pymo.viz_tools import *
-from analysis.pymo.writers import *
-from sklearn.pipeline import Pipeline
-from pathlib import Path
-import sys
-path = sys.argv[1]
-
-from feature_extraction.utils import distribute_tasks
-from mpi4py import MPI
-comm = MPI.COMM_WORLD
-rank = comm.Get_rank()
-size = comm.Get_size()
-
-path = Path(path)
-candidate_audio_files = sorted(path.glob('**/*.bvh'), key=lambda path: path.parent.__str__())
-tasks = distribute_tasks(candidate_audio_files,rank,size)
-
-p = BVHParser()
-datas = []
-filenames = []
-for i in tasks:
- f = candidate_audio_files[i]
- print(f)
- filenames.append(f)
- datas.append(p.parse(f))
-
-with open("to_check"+str(rank),"w") as f:
- for i,data in enumerate(datas):
- bad_ones = data.values[(data.values["Hips_Xposition"] > 100000) | (data.values["Hips_Xposition"] < -100000)]
- if len(bad_ones) > 0:
- last_index = bad_ones.index[-1]
- data.values = data.values.loc[last_index:].iloc[1:]
- writer = BVHWriter()
-
- with open(filenames[i],'w') as out_f:
- writer.write(data, out_f)
diff --git a/spaces/Marshalls/testmtd/feature_extraction/audio_feature_extraction_test.sh b/spaces/Marshalls/testmtd/feature_extraction/audio_feature_extraction_test.sh
deleted file mode 100644
index b2f93c1d3ecec3b131e868b790753e1e317b7938..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/feature_extraction/audio_feature_extraction_test.sh
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-folder=$1
-
-py=python3
-n=$(nproc) # get the number of available processors on the machine
-#n=6
-
-#find $1 -type f -iname "*.mp3" -exec basename \{\} .mp3 \; > $1/base_filenames.txt
-
-fps=20
-format=wav #the format of the audio
-
-###SEQUENCE TO PROCESS DATA WHEN NEEDING TO COMPUTE NORMALIZATION TRANSFORMS
-#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names multi_mel --mel_feature_size 80 --fps 100 # fps=100 coz thats what ddc expects
-#mpirun -n $n $py ./feature_extraction/generate_ddc_features.py $@ --audio_format $format --experiment_name block_placement_ddc2 --checkpoint 130000 --checkpoints_dir feature_extraction --fps $fps
-#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names mel,envelope,madmombeats --mel_feature_size 80 --fps $fps --combined_feature_name audio_feats
-#mpirun -n 1 $py ./feature_extraction/extract_transform.py $1 --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transforms pca_transform
-#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transform_name pca_transform --pca_dims 2 --new_feature_name ddcpca
-#./feature_extraction/script_to_list_filenames $1 $format
-#mpirun -n $n $py ./feature_extraction/combine_feats.py $@ $1/base_filenames.txt --feature_names ${format}_audio_feats,ddcpca --new_feature_name feats_ddcpca
-#mpirun -n 1 $py ./feature_extraction/extract_transform2.py $1 --feature_name feats_ddcpca --transforms scaler
-#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name feats_ddcpca --transform_name scaler --new_feature_name audio_feats_scaled_${fps}
-
-###SEQUENCE WHEN USING EXISTING TRANSFORMS (SO NO NEED TO RECOMPUTE THEM)
-#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names multi_mel --mel_feature_size 80 --fps 100 # fps=100 coz thats what ddc expects
-#mpirun -n $n $py ./feature_extraction/generate_ddc_features.py $@ --audio_format $format --experiment_name block_placement_ddc2 --checkpoint 130000 --checkpoints_dir feature_extraction --fps $fps
-#mpirun -n $n $py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names mel,envelope,madmombeats --mel_feature_size 80 --fps $fps --combined_feature_name audio_feats
-#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transform_name pca_transform --pca_dims 2 --new_feature_name ddcpca
-#./feature_extraction/script_to_list_filenames $1 $format
-#mpirun -n $n $py ./feature_extraction/combine_feats.py $@ $1/base_filenames.txt --feature_names ${format}_audio_feats,ddcpca --new_feature_name feats_ddcpca
-#mpirun -n $n $py ./feature_extraction/apply_transforms.py $@ --feature_name feats_ddcpca --transform_name scaler --new_feature_name audio_feats_scaled_${fps}
-
-###NOMPI
-chmod +x ./feature_extraction/process_audio.py
-chmod +x ./feature_extraction/generate_ddc_features.py
-chmod +x ./feature_extraction/process_audio.py
-chmod +x ./feature_extraction/apply_transforms.py
-chmod +x ./feature_extraction/combine_feats.py
-chmod +x ./feature_extraction/apply_transforms.py
-$py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names multi_mel --mel_feature_size 80 --fps 100 # fps=100 because that's what DDC expects
-$py ./feature_extraction/generate_ddc_features.py $@ --audio_format $format --experiment_name block_placement_ddc2 --checkpoint 130000 --checkpoints_dir feature_extraction --fps $fps
-$py ./feature_extraction/process_audio.py $@ --audio_format $format --feature_names mel,envelope,madmombeats --mel_feature_size 80 --fps $fps --combined_feature_name audio_feats
-$py ./feature_extraction/apply_transforms.py $@ --feature_name ${format}_multi_mel_80.npy_ddc_hidden --transform_name pca_transform --pca_dims 2 --new_feature_name ddcpca
-./feature_extraction/script_to_list_filenames $1 $format
-$py ./feature_extraction/combine_feats.py $@ $1/base_filenames.txt --feature_names ${format}_audio_feats,ddcpca --new_feature_name feats_ddcpca
-$py ./feature_extraction/apply_transforms.py $@ --feature_name feats_ddcpca --transform_name scaler --new_feature_name audio_feats_scaled_${fps}
\ No newline at end of file
diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/token_counter.py b/spaces/MetaWabbit/Auto-GPT/autogpt/token_counter.py
deleted file mode 100644
index 338fe6be4d47a679f2bf0815685edeb3dce66936..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/autogpt/token_counter.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""Functions for counting the number of tokens in a message or string."""
-from __future__ import annotations
-
-import tiktoken
-
-from autogpt.logs import logger
-
-
-def count_message_tokens(
- messages: list[dict[str, str]], model: str = "gpt-3.5-turbo-0301"
-) -> int:
- """
- Returns the number of tokens used by a list of messages.
-
- Args:
- messages (list): A list of messages, each of which is a dictionary
- containing the role and content of the message.
- model (str): The name of the model to use for tokenization.
- Defaults to "gpt-3.5-turbo-0301".
-
- Returns:
- int: The number of tokens used by the list of messages.
- """
- try:
- encoding = tiktoken.encoding_for_model(model)
- except KeyError:
- logger.warn("Warning: model not found. Using cl100k_base encoding.")
- encoding = tiktoken.get_encoding("cl100k_base")
- if model == "gpt-3.5-turbo":
- # !Note: gpt-3.5-turbo may change over time.
-        # Returning num tokens assuming gpt-3.5-turbo-0301.
- return count_message_tokens(messages, model="gpt-3.5-turbo-0301")
- elif model == "gpt-4":
-        # !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.
- return count_message_tokens(messages, model="gpt-4-0314")
- elif model == "gpt-3.5-turbo-0301":
- tokens_per_message = (
- 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
- )
- tokens_per_name = -1 # if there's a name, the role is omitted
- elif model == "gpt-4-0314":
- tokens_per_message = 3
- tokens_per_name = 1
- else:
- raise NotImplementedError(
- f"num_tokens_from_messages() is not implemented for model {model}.\n"
- " See https://github.com/openai/openai-python/blob/main/chatml.md for"
- " information on how messages are converted to tokens."
- )
- num_tokens = 0
- for message in messages:
- num_tokens += tokens_per_message
- for key, value in message.items():
- num_tokens += len(encoding.encode(value))
- if key == "name":
- num_tokens += tokens_per_name
- num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
- return num_tokens
-
-
-def count_string_tokens(string: str, model_name: str) -> int:
- """
- Returns the number of tokens in a text string.
-
- Args:
- string (str): The text string.
- model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo")
-
- Returns:
- int: The number of tokens in the text string.
- """
- encoding = tiktoken.encoding_for_model(model_name)
- return len(encoding.encode(string))
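-
-
-# Illustrative usage sketch (added for clarity; not part of the original file):
-# counting tokens for a short chat history before sending it to the API.
-#   messages = [{"role": "user", "content": "Hello!"}]
-#   n_chat = count_message_tokens(messages, model="gpt-3.5-turbo-0301")
-#   n_str = count_string_tokens("Hello!", model_name="gpt-3.5-turbo")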
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/losses/dice_loss.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/losses/dice_loss.py
deleted file mode 100644
index 37d2d3d1926263e85c4fd4b98c8f98087405686e..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/losses/dice_loss.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional
-
-import torch
-import torch.nn as nn
-
-from mmocr.registry import MODELS
-
-
-@MODELS.register_module()
-class MaskedDiceLoss(nn.Module):
- """Masked dice loss.
-
- Args:
-        eps (float, optional): Eps to avoid zero-division error. Defaults to
- 1e-6.
- """
-
- def __init__(self, eps: float = 1e-6) -> None:
- super().__init__()
- assert isinstance(eps, float)
- self.eps = eps
-
- def forward(self,
- pred: torch.Tensor,
- gt: torch.Tensor,
- mask: Optional[torch.Tensor] = None) -> torch.Tensor:
- """Forward function.
-
- Args:
- pred (torch.Tensor): The prediction in any shape.
- gt (torch.Tensor): The learning target of the prediction in the
- same shape as pred.
- mask (torch.Tensor, optional): Binary mask in the same shape of
- pred, indicating positive regions to calculate the loss. Whole
- region will be taken into account if not provided. Defaults to
- None.
-
- Returns:
- torch.Tensor: The loss value.
- """
-
- assert pred.size() == gt.size() and gt.numel() > 0
- if mask is None:
- mask = torch.ones_like(gt)
- assert mask.size() == gt.size()
-
- pred = pred.contiguous().view(pred.size(0), -1)
- gt = gt.contiguous().view(gt.size(0), -1)
-
- mask = mask.contiguous().view(mask.size(0), -1)
- pred = pred * mask
- gt = gt * mask
-
- dice_coeff = (2 * (pred * gt).sum()) / (
- pred.sum() + gt.sum() + self.eps)
-
- return 1 - dice_coeff
-
-
-@MODELS.register_module()
-class MaskedSquareDiceLoss(nn.Module):
- """Masked square dice loss.
-
- Args:
-        eps (float, optional): Eps to avoid zero-division error. Defaults to
- 1e-3.
- """
-
- def __init__(self, eps: float = 1e-3) -> None:
- super().__init__()
- assert isinstance(eps, float)
- self.eps = eps
-
- def forward(self,
- pred: torch.Tensor,
- gt: torch.Tensor,
- mask: Optional[torch.Tensor] = None) -> torch.Tensor:
- """Forward function.
-
- Args:
- pred (torch.Tensor): The prediction in any shape.
- gt (torch.Tensor): The learning target of the prediction in the
- same shape as pred.
- mask (torch.Tensor, optional): Binary mask in the same shape of
- pred, indicating positive regions to calculate the loss. Whole
- region will be taken into account if not provided. Defaults to
- None.
-
- Returns:
- torch.Tensor: The loss value.
- """
- assert pred.size() == gt.size() and gt.numel() > 0
- if mask is None:
- mask = torch.ones_like(gt)
- assert mask.size() == gt.size()
- batch_size = pred.size(0)
- pred = pred.contiguous().view(batch_size, -1)
- gt = gt.contiguous().view(batch_size, -1).float()
- mask = mask.contiguous().view(batch_size, -1).float()
-
- pred = pred * mask
- gt = gt * mask
-
- a = torch.sum(pred * gt, dim=1)
- b = torch.sum(pred * pred, dim=1) + self.eps
- c = torch.sum(gt * gt, dim=1) + self.eps
- d = (2 * a) / (b + c)
- loss = 1 - d
-
- loss = torch.mean(loss)
- return loss
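-
-
-# Illustrative usage sketch (added for clarity; not part of the original file):
-# both losses take predictions and targets of the same shape plus an optional
-# binary mask restricting the region over which the loss is computed.
-#   loss_fn = MaskedDiceLoss()
-#   pred = torch.rand(2, 1, 32, 32)                    # e.g. sigmoid outputs
-#   gt = (torch.rand(2, 1, 32, 32) > 0.5).float()      # binary ground truth
-#   loss = loss_fn(pred, gt)                           # scalar tensor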
diff --git a/spaces/MrBodean/VoiceClone/encoder/model.py b/spaces/MrBodean/VoiceClone/encoder/model.py
deleted file mode 100644
index e050d3204d8f1becdf0f8b3133470708e5420cea..0000000000000000000000000000000000000000
--- a/spaces/MrBodean/VoiceClone/encoder/model.py
+++ /dev/null
@@ -1,135 +0,0 @@
-from encoder.params_model import *
-from encoder.params_data import *
-from scipy.interpolate import interp1d
-from sklearn.metrics import roc_curve
-from torch.nn.utils import clip_grad_norm_
-from scipy.optimize import brentq
-from torch import nn
-import numpy as np
-import torch
-
-
-class SpeakerEncoder(nn.Module):
- def __init__(self, device, loss_device):
- super().__init__()
- self.loss_device = loss_device
-
-        # Network definition
- self.lstm = nn.LSTM(input_size=mel_n_channels,
- hidden_size=model_hidden_size,
- num_layers=model_num_layers,
- batch_first=True).to(device)
- self.linear = nn.Linear(in_features=model_hidden_size,
- out_features=model_embedding_size).to(device)
- self.relu = torch.nn.ReLU().to(device)
-
- # Cosine similarity scaling (with fixed initial parameter values)
- self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device)
- self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device)
-
- # Loss
- self.loss_fn = nn.CrossEntropyLoss().to(loss_device)
-
- def do_gradient_ops(self):
- # Gradient scale
- self.similarity_weight.grad *= 0.01
- self.similarity_bias.grad *= 0.01
-
- # Gradient clipping
- clip_grad_norm_(self.parameters(), 3, norm_type=2)
-
- def forward(self, utterances, hidden_init=None):
- """
- Computes the embeddings of a batch of utterance spectrograms.
-
- :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape
- (batch_size, n_frames, n_channels)
- :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers,
- batch_size, hidden_size). Will default to a tensor of zeros if None.
- :return: the embeddings as a tensor of shape (batch_size, embedding_size)
- """
- # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state
- # and the final cell state.
- out, (hidden, cell) = self.lstm(utterances, hidden_init)
-
- # We take only the hidden state of the last layer
- embeds_raw = self.relu(self.linear(hidden[-1]))
-
- # L2-normalize it
- embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5)
-
- return embeds
-
- def similarity_matrix(self, embeds):
- """
- Computes the similarity matrix according the section 2.1 of GE2E.
-
- :param embeds: the embeddings as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, embedding_size)
- :return: the similarity matrix as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, speakers_per_batch)
- """
- speakers_per_batch, utterances_per_speaker = embeds.shape[:2]
-
- # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation
- centroids_incl = torch.mean(embeds, dim=1, keepdim=True)
- centroids_incl = centroids_incl.clone() / (torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5)
-
- # Exclusive centroids (1 per utterance)
- centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds)
- centroids_excl /= (utterances_per_speaker - 1)
- centroids_excl = centroids_excl.clone() / (torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5)
-
- # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot
- # product of these vectors (which is just an element-wise multiplication reduced by a sum).
- # We vectorize the computation for efficiency.
- sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker,
- speakers_per_batch).to(self.loss_device)
-        mask_matrix = 1 - np.eye(speakers_per_batch, dtype=int)
- for j in range(speakers_per_batch):
- mask = np.where(mask_matrix[j])[0]
- sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2)
- sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1)
-
- ## Even more vectorized version (slower maybe because of transpose)
- # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker
- # ).to(self.loss_device)
- # eye = np.eye(speakers_per_batch, dtype=np.int)
- # mask = np.where(1 - eye)
- # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2)
- # mask = np.where(eye)
- # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2)
- # sim_matrix2 = sim_matrix2.transpose(1, 2)
-
- sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias
- return sim_matrix
-
- def loss(self, embeds):
- """
- Computes the softmax loss according the section 2.1 of GE2E.
-
- :param embeds: the embeddings as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, embedding_size)
- :return: the loss and the EER for this batch of embeddings.
- """
- speakers_per_batch, utterances_per_speaker = embeds.shape[:2]
-
- # Loss
- sim_matrix = self.similarity_matrix(embeds)
- sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker,
- speakers_per_batch))
- ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker)
- target = torch.from_numpy(ground_truth).long().to(self.loss_device)
- loss = self.loss_fn(sim_matrix, target)
-
- # EER (not backpropagated)
- with torch.no_grad():
-            inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=int)[0]
- labels = np.array([inv_argmax(i) for i in ground_truth])
- preds = sim_matrix.detach().cpu().numpy()
-
- # Snippet from https://yangcha.github.io/EER-ROC/
- fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten())
- eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.)
-
- return loss, eer
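The centroid construction in similarity_matrix() can be sketched standalone; the speaker/utterance/dimension sizes below are arbitrary and the random embeddings stand in for real encoder outputs.

import torch

speakers, utterances, dim = 4, 5, 64                      # illustrative sizes
embeds = torch.randn(speakers, utterances, dim)
embeds = embeds / (embeds.norm(dim=2, keepdim=True) + 1e-5)

# Inclusive centroid: mean over all utterances of each speaker (one per speaker).
centroids_incl = embeds.mean(dim=1, keepdim=True)
centroids_incl = centroids_incl / (centroids_incl.norm(dim=2, keepdim=True) + 1e-5)

# Exclusive centroid: mean over the *other* utterances of the same speaker (one per utterance).
centroids_excl = (embeds.sum(dim=1, keepdim=True) - embeds) / (utterances - 1)
centroids_excl = centroids_excl / (centroids_excl.norm(dim=2, keepdim=True) + 1e-5)

# Cosine similarity of every utterance against every speaker's inclusive centroid; in the
# full GE2E loss the diagonal (own speaker) uses the exclusive centroid instead.
sim = torch.einsum('sud,td->sut', embeds, centroids_incl.squeeze(1))
print(sim.shape)                                          # (speakers, utterances, speakers)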
diff --git a/spaces/MrZak/LearnUp-4.1/README.md b/spaces/MrZak/LearnUp-4.1/README.md
deleted file mode 100644
index ef315dda009ea71dd6e5630fadc7729d14b5ad2b..0000000000000000000000000000000000000000
--- a/spaces/MrZak/LearnUp-4.1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: LearnUp 4.1
-emoji: 🚀
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.37.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MrlolDev/Explore_llamav2_with_TGI/app.py b/spaces/MrlolDev/Explore_llamav2_with_TGI/app.py
deleted file mode 100644
index f837be1401301d80ce2fde42441f97006a36a658..0000000000000000000000000000000000000000
--- a/spaces/MrlolDev/Explore_llamav2_with_TGI/app.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import json
-import gradio as gr
-import os
-import requests
-
-hf_token = os.getenv('HF_TOKEN')
-api_url = os.getenv('API_URL')
-headers = {
- 'Content-Type': 'application/json',
-}
-
-system_message = "\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
-title = "Llama2 70B Chatbot"
-description = """This Space demonstrates model [Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) by Meta, running on Inference Endpoints using text-generation-inference. To have your own dedicated endpoint, you can [deploy it on Inference Endpoints](https://ui.endpoints.huggingface.co/). """
-
-
-def predict(message, chatbot):
-
- print(f"Logging: message is - {message}")
- print(f"Logging: chatbot is - {chatbot}")
-
-    input_prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n "
- for interaction in chatbot:
- input_prompt = input_prompt + interaction[0] + " [/INST] " + interaction[1] + " [INST] "
-
- input_prompt = input_prompt + message + " [/INST] "
-
- print(f"Logging: input_prompt is - {input_prompt}")
- data = {
- "inputs": input_prompt,
- "parameters": {"max_new_tokens":256}
- }
-
- response = requests.post(api_url, headers=headers, data=json.dumps(data), auth=('hf', hf_token))
-
- print(f'Logging: API response is - {response.text}')
- response_json_object = json.loads(response.text)
- return response_json_object[0]['generated_text']
-
-
-gr.ChatInterface(predict, title=title, description=description).queue().launch(debug=True)
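The prompt built in predict() follows the Llama 2 chat convention: the system message is wrapped in <<SYS>> tags and user/assistant turns alternate between [INST] and [/INST]. A standalone sketch of that string assembly with a made-up one-turn history (no HTTP request is made here):

system_message = "You are a helpful assistant."                      # illustrative
history = [("Hi there", "Hello! How can I help?")]                   # (user, assistant) pairs
message = "What is the capital of France?"

prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n "
for user_turn, assistant_turn in history:
    prompt += user_turn + " [/INST] " + assistant_turn + " [INST] "
prompt += message + " [/INST] "
print(prompt)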
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/resnet_utils.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/resnet_utils.py
deleted file mode 100644
index e1df171ab75700352333f6af5d59f751819b57f6..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/resnet_utils.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-class myResnet(nn.Module):
- def __init__(self, resnet):
- super(myResnet, self).__init__()
- self.resnet = resnet
-
- def forward(self, img, att_size=14):
- x = img.unsqueeze(0)
-
- x = self.resnet.conv1(x)
- x = self.resnet.bn1(x)
- x = self.resnet.relu(x)
- x = self.resnet.maxpool(x)
-
- x = self.resnet.layer1(x)
- x = self.resnet.layer2(x)
- x = self.resnet.layer3(x)
- x = self.resnet.layer4(x)
-
- fc = x.mean(3).mean(2).squeeze()
- att = F.adaptive_avg_pool2d(x,[att_size,att_size]).squeeze().permute(1, 2, 0)
-
- return fc, att
-
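A possible usage sketch for the feature extractor above, assuming the deleted module is importable as captioning.utils.resnet_utils and torchvision is installed; the ResNet-101 backbone and 224x224 input are illustrative choices.

import torch
import torchvision.models as models
from captioning.utils.resnet_utils import myResnet  # path as laid out in this repo

resnet = models.resnet101()                 # any torchvision ResNet with the standard layout
net = myResnet(resnet).eval()

img = torch.rand(3, 224, 224)               # a single image; forward() adds the batch dim itself
with torch.no_grad():
    fc, att = net(img, att_size=14)
print(fc.shape, att.shape)                  # global feature vector and a 14x14 spatial grid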
diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_performance.py b/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_performance.py
deleted file mode 100644
index cc5840f95e1ea26697951d1b78fe847526d5859b..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_performance.py
+++ /dev/null
@@ -1,289 +0,0 @@
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Register flags for optimizing performance."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import multiprocessing
-
-from absl import flags # pylint: disable=g-bad-import-order
-import tensorflow as tf # pylint: disable=g-bad-import-order
-
-from official.utils.flags._conventions import help_wrap
-
-
-# Map string to TensorFlow dtype
-DTYPE_MAP = {
- "fp16": tf.float16,
- "bf16": tf.bfloat16,
- "fp32": tf.float32,
-}
-
-
-def get_tf_dtype(flags_obj):
- if getattr(flags_obj, "fp16_implementation", None) == "graph_rewrite":
- # If the graph_rewrite is used, we build the graph with fp32, and let the
- # graph rewrite change ops to fp16.
- return tf.float32
- return DTYPE_MAP[flags_obj.dtype]
-
-
-def get_loss_scale(flags_obj, default_for_fp16):
- dtype = get_tf_dtype(flags_obj)
- if flags_obj.loss_scale == "dynamic":
- return flags_obj.loss_scale
- elif flags_obj.loss_scale is not None:
- return float(flags_obj.loss_scale)
- elif dtype == tf.float32 or dtype == tf.bfloat16:
- return 1 # No loss scaling is needed for fp32
- else:
- assert dtype == tf.float16
- return default_for_fp16
-
-
-def define_performance(num_parallel_calls=False, inter_op=False, intra_op=False,
- synthetic_data=False, max_train_steps=False, dtype=False,
- all_reduce_alg=False, num_packs=False,
- tf_gpu_thread_mode=False,
- datasets_num_private_threads=False,
- datasets_num_parallel_batches=False,
- dynamic_loss_scale=False, fp16_implementation=False,
- loss_scale=False,
- tf_data_experimental_slack=False, enable_xla=False,
- training_dataset_cache=False):
- """Register flags for specifying performance tuning arguments.
-
- Args:
- num_parallel_calls: Create a flag to specify parallelism of data loading.
- inter_op: Create a flag to allow specification of inter op threads.
- intra_op: Create a flag to allow specification of intra op threads.
- synthetic_data: Create a flag to allow the use of synthetic data.
-    max_train_steps: Create a flag to allow specification of the maximum number
-      of training steps.
- dtype: Create flags for specifying dtype.
- all_reduce_alg: If set forces a specific algorithm for multi-gpu.
- num_packs: If set provides number of packs for MirroredStrategy's cross
- device ops.
-    tf_gpu_thread_mode: gpu_private triggers use of a private thread pool.
- datasets_num_private_threads: Number of private threads for datasets.
- datasets_num_parallel_batches: Determines how many batches to process in
- parallel when using map and batch from tf.data.
- dynamic_loss_scale: Allow the "loss_scale" flag to take on the value
- "dynamic". Only valid if `dtype` is True.
- fp16_implementation: Create fp16_implementation flag.
- loss_scale: Controls the loss scaling, normally for mixed-precision
- training. Can only be turned on if dtype is also True.
- tf_data_experimental_slack: Determines whether to enable tf.data's
- `experimental_slack` option.
- enable_xla: Determines if XLA (auto clustering) is turned on.
- training_dataset_cache: Whether to cache the training dataset on workers.
- Typically used to improve training performance when training data is in
- remote storage and can fit into worker memory.
-
- Returns:
- A list of flags for core.py to marks as key flags.
- """
-
- key_flags = []
- if num_parallel_calls:
- flags.DEFINE_integer(
- name="num_parallel_calls", short_name="npc",
- default=multiprocessing.cpu_count(),
- help=help_wrap("The number of records that are processed in parallel "
- "during input processing. This can be optimized per "
- "data set but for generally homogeneous data sets, "
- "should be approximately the number of available CPU "
- "cores. (default behavior)"))
-
- if inter_op:
- flags.DEFINE_integer(
- name="inter_op_parallelism_threads", short_name="inter", default=0,
- help=help_wrap("Number of inter_op_parallelism_threads to use for CPU. "
- "See TensorFlow config.proto for details.")
- )
-
- if intra_op:
- flags.DEFINE_integer(
- name="intra_op_parallelism_threads", short_name="intra", default=0,
- help=help_wrap("Number of intra_op_parallelism_threads to use for CPU. "
- "See TensorFlow config.proto for details."))
-
- if synthetic_data:
- flags.DEFINE_bool(
- name="use_synthetic_data", short_name="synth", default=False,
- help=help_wrap(
- "If set, use fake data (zeroes) instead of a real dataset. "
- "This mode is useful for performance debugging, as it removes "
- "input processing steps, but will not learn anything."))
-
- if max_train_steps:
- flags.DEFINE_integer(
- name="max_train_steps", short_name="mts", default=None, help=help_wrap(
- "The model will stop training if the global_step reaches this "
- "value. If not set, training will run until the specified number "
- "of epochs have run as usual. It is generally recommended to set "
- "--train_epochs=1 when using this flag."
- ))
-
- if dtype:
- flags.DEFINE_enum(
- name="dtype", short_name="dt", default="fp32",
- enum_values=DTYPE_MAP.keys(),
- help=help_wrap("The TensorFlow datatype used for calculations. "
- "Variables may be cast to a higher precision on a "
- "case-by-case basis for numerical stability."))
-
- loss_scale_help_text = (
- "The amount to scale the loss by when the model is run. {}. Before "
- "gradients are computed, the loss is multiplied by the loss scale, "
- "making all gradients loss_scale times larger. To adjust for this, "
- "gradients are divided by the loss scale before being applied to "
- "variables. This is mathematically equivalent to training without "
- "a loss scale, but the loss scale helps avoid some intermediate "
- "gradients from underflowing to zero. If not provided the default "
- "for fp16 is 128 and 1 for all other dtypes.{}"
- )
- if dynamic_loss_scale:
- loss_scale_help_text = loss_scale_help_text.format(
- "This can be an int/float or the string 'dynamic'",
- " The string 'dynamic' can be used to dynamically determine the "
- "optimal loss scale during training, but currently this "
- "significantly slows down performance")
- loss_scale_validation_msg = ("loss_scale should be a positive int/float "
- "or the string 'dynamic'.")
- else:
- loss_scale_help_text = loss_scale_help_text.format(
- "This must be an int/float", "")
- loss_scale_validation_msg = "loss_scale should be a positive int/float."
- if loss_scale:
- flags.DEFINE_string(
- name="loss_scale", short_name="ls", default=None,
- help=help_wrap(loss_scale_help_text))
-
- @flags.validator(flag_name="loss_scale",
- message=loss_scale_validation_msg)
- def _check_loss_scale(loss_scale): # pylint: disable=unused-variable
- """Validator to check the loss scale flag is valid."""
- if loss_scale is None:
- return True # null case is handled in get_loss_scale()
-
- if loss_scale == "dynamic" and dynamic_loss_scale:
- return True
-
- try:
- loss_scale = float(loss_scale)
- except ValueError:
- return False
-
- return loss_scale > 0
-
- if fp16_implementation:
- flags.DEFINE_enum(
- name="fp16_implementation", default="keras",
-        enum_values=("keras", "graph_rewrite"),
- help=help_wrap(
- "When --dtype=fp16, how fp16 should be implemented. This has no "
- "impact on correctness. 'keras' uses the "
- "tf.keras.mixed_precision API. 'graph_rewrite' uses the "
- "tf.train.experimental.enable_mixed_precision_graph_rewrite "
- "API."))
-
- @flags.multi_flags_validator(["fp16_implementation", "dtype",
- "loss_scale"])
- def _check_fp16_implementation(flags_dict):
- """Validator to check fp16_implementation flag is valid."""
- if (flags_dict["fp16_implementation"] == "graph_rewrite" and
- flags_dict["dtype"] != "fp16"):
- raise flags.ValidationError("--fp16_implementation should not be "
- "specified unless --dtype=fp16")
- return True
-
- if all_reduce_alg:
- flags.DEFINE_string(
- name="all_reduce_alg", short_name="ara", default=None,
-        help=help_wrap("Defines the algorithm to use for performing all-reduce. "
- "When specified with MirroredStrategy for single "
- "worker, this controls "
- "tf.contrib.distribute.AllReduceCrossTowerOps. When "
- "specified with MultiWorkerMirroredStrategy, this "
- "controls "
- "tf.distribute.experimental.CollectiveCommunication; "
- "valid options are `ring` and `nccl`."))
-
- if num_packs:
- flags.DEFINE_integer(
- name="num_packs", default=1,
- help=help_wrap("Sets `num_packs` in the cross device ops used in "
- "MirroredStrategy. For details, see "
- "tf.distribute.NcclAllReduce."))
-
- if tf_gpu_thread_mode:
- flags.DEFINE_string(
- name="tf_gpu_thread_mode", short_name="gt_mode", default=None,
- help=help_wrap(
- "Whether and how the GPU device uses its own threadpool.")
- )
-
- flags.DEFINE_integer(
- name="per_gpu_thread_count", short_name="pgtc", default=0,
- help=help_wrap(
- "The number of threads to use for GPU. Only valid when "
- "tf_gpu_thread_mode is not global.")
- )
-
- if datasets_num_private_threads:
- flags.DEFINE_integer(
- name="datasets_num_private_threads",
- default=None,
- help=help_wrap(
- "Number of threads for a private threadpool created for all"
- "datasets computation..")
- )
-
- if datasets_num_parallel_batches:
- flags.DEFINE_integer(
- name="datasets_num_parallel_batches",
- default=None,
- help=help_wrap(
- "Determines how many batches to process in parallel when using "
- "map and batch from tf.data.")
- )
-
- if training_dataset_cache:
- flags.DEFINE_boolean(
- name="training_dataset_cache",
- default=False,
- help=help_wrap(
- "Determines whether to cache the training dataset on workers. "
- "Typically used to improve training performance when training "
- "data is in remote storage and can fit into worker memory.")
- )
-
- if tf_data_experimental_slack:
- flags.DEFINE_boolean(
- name="tf_data_experimental_slack",
- default=False,
- help=help_wrap(
- "Whether to enable tf.data's `experimental_slack` option.")
- )
-
- if enable_xla:
- flags.DEFINE_boolean(
- name="enable_xla", default=False,
- help="Whether to enable XLA auto jit compilation")
-
- return key_flags
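A possible usage sketch, assuming TensorFlow and absl are installed and the official package from this tree is importable; the sample command line is illustrative.

from absl import flags
from official.utils.flags import _performance  # module path as in this tree

# Register the dtype / loss-scale flags defined above, then parse a sample command line.
_performance.define_performance(dtype=True, loss_scale=True, dynamic_loss_scale=True,
                                fp16_implementation=True)
flags.FLAGS(['prog', '--dtype=fp16', '--loss_scale=dynamic', '--fp16_implementation=keras'])

print(_performance.get_tf_dtype(flags.FLAGS))                           # tf.float16
print(_performance.get_loss_scale(flags.FLAGS, default_for_fp16=128))   # 'dynamic'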
diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/misc/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/utils/misc/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/mel_features.py b/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/mel_features.py
deleted file mode 100644
index ac58fb5427f772fcced9cbd3cec3373ffbe5908c..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/mel_features.py
+++ /dev/null
@@ -1,223 +0,0 @@
-# Copyright 2017 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Defines routines to compute mel spectrogram features from audio waveform."""
-
-import numpy as np
-
-
-def frame(data, window_length, hop_length):
- """Convert array into a sequence of successive possibly overlapping frames.
-
- An n-dimensional array of shape (num_samples, ...) is converted into an
- (n+1)-D array of shape (num_frames, window_length, ...), where each frame
- starts hop_length points after the preceding one.
-
- This is accomplished using stride_tricks, so the original data is not
- copied. However, there is no zero-padding, so any incomplete frames at the
- end are not included.
-
- Args:
- data: np.array of dimension N >= 1.
- window_length: Number of samples in each frame.
- hop_length: Advance (in samples) between each window.
-
- Returns:
- (N+1)-D np.array with as many rows as there are complete frames that can be
- extracted.
- """
- num_samples = data.shape[0]
- num_frames = 1 + int(np.floor((num_samples - window_length) / hop_length))
- shape = (num_frames, window_length) + data.shape[1:]
- strides = (data.strides[0] * hop_length,) + data.strides
- return np.lib.stride_tricks.as_strided(data, shape=shape, strides=strides)
-
-
-def periodic_hann(window_length):
- """Calculate a "periodic" Hann window.
-
- The classic Hann window is defined as a raised cosine that starts and
- ends on zero, and where every value appears twice, except the middle
- point for an odd-length window. Matlab calls this a "symmetric" window
- and np.hanning() returns it. However, for Fourier analysis, this
- actually represents just over one cycle of a period N-1 cosine, and
- thus is not compactly expressed on a length-N Fourier basis. Instead,
- it's better to use a raised cosine that ends just before the final
- zero value - i.e. a complete cycle of a period-N cosine. Matlab
- calls this a "periodic" window. This routine calculates it.
-
- Args:
- window_length: The number of points in the returned window.
-
- Returns:
- A 1D np.array containing the periodic hann window.
- """
- return 0.5 - (0.5 * np.cos(2 * np.pi / window_length *
- np.arange(window_length)))
-
-
-def stft_magnitude(signal, fft_length,
- hop_length=None,
- window_length=None):
- """Calculate the short-time Fourier transform magnitude.
-
- Args:
- signal: 1D np.array of the input time-domain signal.
- fft_length: Size of the FFT to apply.
- hop_length: Advance (in samples) between each frame passed to FFT.
- window_length: Length of each block of samples to pass to FFT.
-
- Returns:
- 2D np.array where each row contains the magnitudes of the fft_length/2+1
- unique values of the FFT for the corresponding frame of input samples.
- """
- frames = frame(signal, window_length, hop_length)
- # Apply frame window to each frame. We use a periodic Hann (cosine of period
- # window_length) instead of the symmetric Hann of np.hanning (period
- # window_length-1).
- window = periodic_hann(window_length)
- windowed_frames = frames * window
- return np.abs(np.fft.rfft(windowed_frames, int(fft_length)))
-
-
-# Mel spectrum constants and functions.
-_MEL_BREAK_FREQUENCY_HERTZ = 700.0
-_MEL_HIGH_FREQUENCY_Q = 1127.0
-
-
-def hertz_to_mel(frequencies_hertz):
- """Convert frequencies to mel scale using HTK formula.
-
- Args:
- frequencies_hertz: Scalar or np.array of frequencies in hertz.
-
- Returns:
- Object of same size as frequencies_hertz containing corresponding values
- on the mel scale.
- """
- return _MEL_HIGH_FREQUENCY_Q * np.log(
- 1.0 + (frequencies_hertz / _MEL_BREAK_FREQUENCY_HERTZ))
-
-
-def spectrogram_to_mel_matrix(num_mel_bins=20,
- num_spectrogram_bins=129,
- audio_sample_rate=8000,
- lower_edge_hertz=125.0,
- upper_edge_hertz=3800.0):
- """Return a matrix that can post-multiply spectrogram rows to make mel.
-
- Returns a np.array matrix A that can be used to post-multiply a matrix S of
- spectrogram values (STFT magnitudes) arranged as frames x bins to generate a
- "mel spectrogram" M of frames x num_mel_bins. M = S A.
-
- The classic HTK algorithm exploits the complementarity of adjacent mel bands
- to multiply each FFT bin by only one mel weight, then add it, with positive
- and negative signs, to the two adjacent mel bands to which that bin
- contributes. Here, by expressing this operation as a matrix multiply, we go
- from num_fft multiplies per frame (plus around 2*num_fft adds) to around
- num_fft^2 multiplies and adds. However, because these are all presumably
- accomplished in a single call to np.dot(), it's not clear which approach is
- faster in Python. The matrix multiplication has the attraction of being more
- general and flexible, and much easier to read.
-
- Args:
- num_mel_bins: How many bands in the resulting mel spectrum. This is
- the number of columns in the output matrix.
- num_spectrogram_bins: How many bins there are in the source spectrogram
- data, which is understood to be fft_size/2 + 1, i.e. the spectrogram
- only contains the nonredundant FFT bins.
- audio_sample_rate: Samples per second of the audio at the input to the
- spectrogram. We need this to figure out the actual frequencies for
- each spectrogram bin, which dictates how they are mapped into mel.
- lower_edge_hertz: Lower bound on the frequencies to be included in the mel
- spectrum. This corresponds to the lower edge of the lowest triangular
- band.
- upper_edge_hertz: The desired top edge of the highest frequency band.
-
- Returns:
- An np.array with shape (num_spectrogram_bins, num_mel_bins).
-
- Raises:
- ValueError: if frequency edges are incorrectly ordered or out of range.
- """
- nyquist_hertz = audio_sample_rate / 2.
- if lower_edge_hertz < 0.0:
- raise ValueError("lower_edge_hertz %.1f must be >= 0" % lower_edge_hertz)
- if lower_edge_hertz >= upper_edge_hertz:
- raise ValueError("lower_edge_hertz %.1f >= upper_edge_hertz %.1f" %
- (lower_edge_hertz, upper_edge_hertz))
- if upper_edge_hertz > nyquist_hertz:
- raise ValueError("upper_edge_hertz %.1f is greater than Nyquist %.1f" %
- (upper_edge_hertz, nyquist_hertz))
- spectrogram_bins_hertz = np.linspace(0.0, nyquist_hertz, num_spectrogram_bins)
- spectrogram_bins_mel = hertz_to_mel(spectrogram_bins_hertz)
- # The i'th mel band (starting from i=1) has center frequency
- # band_edges_mel[i], lower edge band_edges_mel[i-1], and higher edge
- # band_edges_mel[i+1]. Thus, we need num_mel_bins + 2 values in
- # the band_edges_mel arrays.
- band_edges_mel = np.linspace(hertz_to_mel(lower_edge_hertz),
- hertz_to_mel(upper_edge_hertz), num_mel_bins + 2)
- # Matrix to post-multiply feature arrays whose rows are num_spectrogram_bins
- # of spectrogram values.
- mel_weights_matrix = np.empty((num_spectrogram_bins, num_mel_bins))
- for i in range(num_mel_bins):
- lower_edge_mel, center_mel, upper_edge_mel = band_edges_mel[i:i + 3]
- # Calculate lower and upper slopes for every spectrogram bin.
- # Line segments are linear in the *mel* domain, not hertz.
- lower_slope = ((spectrogram_bins_mel - lower_edge_mel) /
- (center_mel - lower_edge_mel))
- upper_slope = ((upper_edge_mel - spectrogram_bins_mel) /
- (upper_edge_mel - center_mel))
- # .. then intersect them with each other and zero.
- mel_weights_matrix[:, i] = np.maximum(0.0, np.minimum(lower_slope,
- upper_slope))
- # HTK excludes the spectrogram DC bin; make sure it always gets a zero
- # coefficient.
- mel_weights_matrix[0, :] = 0.0
- return mel_weights_matrix
-
-
-def log_mel_spectrogram(data,
- audio_sample_rate=8000,
- log_offset=0.0,
- window_length_secs=0.025,
- hop_length_secs=0.010,
- **kwargs):
- """Convert waveform to a log magnitude mel-frequency spectrogram.
-
- Args:
- data: 1D np.array of waveform data.
- audio_sample_rate: The sampling rate of data.
- log_offset: Add this to values when taking log to avoid -Infs.
- window_length_secs: Duration of each window to analyze.
- hop_length_secs: Advance between successive analysis windows.
- **kwargs: Additional arguments to pass to spectrogram_to_mel_matrix.
-
- Returns:
- 2D np.array of (num_frames, num_mel_bins) consisting of log mel filterbank
- magnitudes for successive frames.
- """
- window_length_samples = int(round(audio_sample_rate * window_length_secs))
- hop_length_samples = int(round(audio_sample_rate * hop_length_secs))
- fft_length = 2 ** int(np.ceil(np.log(window_length_samples) / np.log(2.0)))
- spectrogram = stft_magnitude(
- data,
- fft_length=fft_length,
- hop_length=hop_length_samples,
- window_length=window_length_samples)
- mel_spectrogram = np.dot(spectrogram, spectrogram_to_mel_matrix(
- num_spectrogram_bins=spectrogram.shape[1],
- audio_sample_rate=audio_sample_rate, **kwargs))
- return np.log(mel_spectrogram + log_offset)
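A possible end-to-end call on synthetic audio, assuming this module is importable as mel_features; the tone, sample rate and mel parameters below are illustrative.

import numpy as np
import mel_features  # the module above

sr = 16000
t = np.arange(0, 1.0, 1.0 / sr)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)     # one second of a 440 Hz sine

log_mel = mel_features.log_mel_spectrogram(
    tone, audio_sample_rate=sr, log_offset=0.01,
    window_length_secs=0.025, hop_length_secs=0.010,
    num_mel_bins=64, lower_edge_hertz=125.0, upper_edge_hertz=7500.0)
print(log_mel.shape)                          # (num_frames, 64)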
diff --git a/spaces/Nunchakuka/FrenchAnonymizer/README.md b/spaces/Nunchakuka/FrenchAnonymizer/README.md
deleted file mode 100644
index 42de351be9279a3acd70d30fcbefdfbde8757dec..0000000000000000000000000000000000000000
--- a/spaces/Nunchakuka/FrenchAnonymizer/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: French Anonymizer
-emoji: ⚡
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: OlaWod/FreeVC
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/OAOA/DifFace/basicsr/archs/tof_arch.py b/spaces/OAOA/DifFace/basicsr/archs/tof_arch.py
deleted file mode 100644
index a90a64d89386e19f92c987bbe2133472991d764a..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/archs/tof_arch.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-
-from basicsr.utils.registry import ARCH_REGISTRY
-from .arch_util import flow_warp
-
-
-class BasicModule(nn.Module):
- """Basic module of SPyNet.
-
- Note that unlike the architecture in spynet_arch.py, the basic module
- here contains batch normalization.
- """
-
- def __init__(self):
- super(BasicModule, self).__init__()
- self.basic_module = nn.Sequential(
- nn.Conv2d(in_channels=8, out_channels=32, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(32), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=32, out_channels=64, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(64), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=64, out_channels=32, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(32), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=32, out_channels=16, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(16), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=16, out_channels=2, kernel_size=7, stride=1, padding=3))
-
- def forward(self, tensor_input):
- """
- Args:
- tensor_input (Tensor): Input tensor with shape (b, 8, h, w).
- 8 channels contain:
- [reference image (3), neighbor image (3), initial flow (2)].
-
- Returns:
- Tensor: Estimated flow with shape (b, 2, h, w)
- """
- return self.basic_module(tensor_input)
-
-
-class SPyNetTOF(nn.Module):
- """SPyNet architecture for TOF.
-
- Note that this implementation is specifically for TOFlow. Please use :file:`spynet_arch.py` for general use.
- They differ in the following aspects:
-
- 1. The basic modules here contain BatchNorm.
- 2. Normalization and denormalization are not done here, as they are done in TOFlow.
-
- ``Paper: Optical Flow Estimation using a Spatial Pyramid Network``
-
- Reference: https://github.com/Coldog2333/pytoflow
-
- Args:
- load_path (str): Path for pretrained SPyNet. Default: None.
- """
-
- def __init__(self, load_path=None):
- super(SPyNetTOF, self).__init__()
-
- self.basic_module = nn.ModuleList([BasicModule() for _ in range(4)])
- if load_path:
- self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params'])
-
- def forward(self, ref, supp):
- """
- Args:
- ref (Tensor): Reference image with shape of (b, 3, h, w).
- supp: The supporting image to be warped: (b, 3, h, w).
-
- Returns:
- Tensor: Estimated optical flow: (b, 2, h, w).
- """
- num_batches, _, h, w = ref.size()
- ref = [ref]
- supp = [supp]
-
- # generate downsampled frames
- for _ in range(3):
- ref.insert(0, F.avg_pool2d(input=ref[0], kernel_size=2, stride=2, count_include_pad=False))
- supp.insert(0, F.avg_pool2d(input=supp[0], kernel_size=2, stride=2, count_include_pad=False))
-
- # flow computation
- flow = ref[0].new_zeros(num_batches, 2, h // 16, w // 16)
- for i in range(4):
- flow_up = F.interpolate(input=flow, scale_factor=2, mode='bilinear', align_corners=True) * 2.0
- flow = flow_up + self.basic_module[i](
- torch.cat([ref[i], flow_warp(supp[i], flow_up.permute(0, 2, 3, 1)), flow_up], 1))
- return flow
-
-
-@ARCH_REGISTRY.register()
-class TOFlow(nn.Module):
- """PyTorch implementation of TOFlow.
-
- In TOFlow, the LR frames are pre-upsampled and have the same size with the GT frames.
-
- ``Paper: Video Enhancement with Task-Oriented Flow``
-
- Reference: https://github.com/anchen1011/toflow
-
- Reference: https://github.com/Coldog2333/pytoflow
-
- Args:
- adapt_official_weights (bool): Whether to adapt the weights translated
- from the official implementation. Set to false if you want to
- train from scratch. Default: False
- """
-
- def __init__(self, adapt_official_weights=False):
- super(TOFlow, self).__init__()
- self.adapt_official_weights = adapt_official_weights
- self.ref_idx = 0 if adapt_official_weights else 3
-
- self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
- self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))
-
- # flow estimation module
- self.spynet = SPyNetTOF()
-
- # reconstruction module
- self.conv_1 = nn.Conv2d(3 * 7, 64, 9, 1, 4)
- self.conv_2 = nn.Conv2d(64, 64, 9, 1, 4)
- self.conv_3 = nn.Conv2d(64, 64, 1)
- self.conv_4 = nn.Conv2d(64, 3, 1)
-
- # activation function
- self.relu = nn.ReLU(inplace=True)
-
- def normalize(self, img):
- return (img - self.mean) / self.std
-
- def denormalize(self, img):
- return img * self.std + self.mean
-
- def forward(self, lrs):
- """
- Args:
- lrs: Input lr frames: (b, 7, 3, h, w).
-
- Returns:
- Tensor: SR frame: (b, 3, h, w).
- """
- # In the official implementation, the 0-th frame is the reference frame
- if self.adapt_official_weights:
- lrs = lrs[:, [3, 0, 1, 2, 4, 5, 6], :, :, :]
-
- num_batches, num_lrs, _, h, w = lrs.size()
-
- lrs = self.normalize(lrs.view(-1, 3, h, w))
- lrs = lrs.view(num_batches, num_lrs, 3, h, w)
-
- lr_ref = lrs[:, self.ref_idx, :, :, :]
- lr_aligned = []
- for i in range(7): # 7 frames
- if i == self.ref_idx:
- lr_aligned.append(lr_ref)
- else:
- lr_supp = lrs[:, i, :, :, :]
- flow = self.spynet(lr_ref, lr_supp)
- lr_aligned.append(flow_warp(lr_supp, flow.permute(0, 2, 3, 1)))
-
- # reconstruction
- hr = torch.stack(lr_aligned, dim=1)
- hr = hr.view(num_batches, -1, h, w)
- hr = self.relu(self.conv_1(hr))
- hr = self.relu(self.conv_2(hr))
- hr = self.relu(self.conv_3(hr))
- hr = self.conv_4(hr) + lr_ref
-
- return self.denormalize(hr)
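A possible forward-pass sketch, assuming the basicsr package this file is vendored from is installed so the flow_warp import resolves; the random 64x64 frames are illustrative, and the spatial size should be a multiple of 16 because SPyNetTOF starts from a 1/16-resolution flow.

import torch
from basicsr.archs.tof_arch import TOFlow  # module path as in this tree

model = TOFlow(adapt_official_weights=False).eval()

lrs = torch.rand(1, 7, 3, 64, 64)   # 7 pre-upsampled LR frames per sample
with torch.no_grad():
    sr_frame = model(lrs)
print(sr_frame.shape)               # torch.Size([1, 3, 64, 64])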
diff --git a/spaces/OAOA/DifFace/basicsr/utils/diffjpeg.py b/spaces/OAOA/DifFace/basicsr/utils/diffjpeg.py
deleted file mode 100644
index 65f96b44f9e7f3f8a589668f0003adf328cc5742..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/utils/diffjpeg.py
+++ /dev/null
@@ -1,515 +0,0 @@
-"""
-Modified from https://github.com/mlomnitz/DiffJPEG
-
-For images not divisible by 8
-https://dsp.stackexchange.com/questions/35339/jpeg-dct-padding/35343#35343
-"""
-import itertools
-import numpy as np
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-# ------------------------ utils ------------------------#
-y_table = np.array(
- [[16, 11, 10, 16, 24, 40, 51, 61], [12, 12, 14, 19, 26, 58, 60, 55], [14, 13, 16, 24, 40, 57, 69, 56],
- [14, 17, 22, 29, 51, 87, 80, 62], [18, 22, 37, 56, 68, 109, 103, 77], [24, 35, 55, 64, 81, 104, 113, 92],
- [49, 64, 78, 87, 103, 121, 120, 101], [72, 92, 95, 98, 112, 100, 103, 99]],
- dtype=np.float32).T
-y_table = nn.Parameter(torch.from_numpy(y_table))
-c_table = np.empty((8, 8), dtype=np.float32)
-c_table.fill(99)
-c_table[:4, :4] = np.array([[17, 18, 24, 47], [18, 21, 26, 66], [24, 26, 56, 99], [47, 66, 99, 99]]).T
-c_table = nn.Parameter(torch.from_numpy(c_table))
-
-
-def diff_round(x):
- """ Differentiable rounding function
- """
- return torch.round(x) + (x - torch.round(x))**3
-
-
-def quality_to_factor(quality):
- """ Calculate factor corresponding to quality
-
- Args:
- quality(float): Quality for jpeg compression.
-
- Returns:
- float: Compression factor.
- """
- if quality < 50:
- quality = 5000. / quality
- else:
- quality = 200. - quality * 2
- return quality / 100.
-
-
-# ------------------------ compression ------------------------#
-class RGB2YCbCrJpeg(nn.Module):
- """ Converts RGB image to YCbCr
- """
-
- def __init__(self):
- super(RGB2YCbCrJpeg, self).__init__()
- matrix = np.array([[0.299, 0.587, 0.114], [-0.168736, -0.331264, 0.5], [0.5, -0.418688, -0.081312]],
- dtype=np.float32).T
- self.shift = nn.Parameter(torch.tensor([0., 128., 128.]))
- self.matrix = nn.Parameter(torch.from_numpy(matrix))
-
- def forward(self, image):
- """
- Args:
- image(Tensor): batch x 3 x height x width
-
- Returns:
- Tensor: batch x height x width x 3
- """
- image = image.permute(0, 2, 3, 1)
- result = torch.tensordot(image, self.matrix, dims=1) + self.shift
- return result.view(image.shape)
-
-
-class ChromaSubsampling(nn.Module):
- """ Chroma subsampling on CbCr channels
- """
-
- def __init__(self):
- super(ChromaSubsampling, self).__init__()
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width x 3
-
- Returns:
- y(tensor): batch x height x width
- cb(tensor): batch x height/2 x width/2
- cr(tensor): batch x height/2 x width/2
- """
- image_2 = image.permute(0, 3, 1, 2).clone()
- cb = F.avg_pool2d(image_2[:, 1, :, :].unsqueeze(1), kernel_size=2, stride=(2, 2), count_include_pad=False)
- cr = F.avg_pool2d(image_2[:, 2, :, :].unsqueeze(1), kernel_size=2, stride=(2, 2), count_include_pad=False)
- cb = cb.permute(0, 2, 3, 1)
- cr = cr.permute(0, 2, 3, 1)
- return image[:, :, :, 0], cb.squeeze(3), cr.squeeze(3)
-
-
-class BlockSplitting(nn.Module):
- """ Splitting image into patches
- """
-
- def __init__(self):
- super(BlockSplitting, self).__init__()
- self.k = 8
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
-            Tensor: batch x h*w/64 x 8 x 8
- """
- height, _ = image.shape[1:3]
- batch_size = image.shape[0]
- image_reshaped = image.view(batch_size, height // self.k, self.k, -1, self.k)
- image_transposed = image_reshaped.permute(0, 1, 3, 2, 4)
- return image_transposed.contiguous().view(batch_size, -1, self.k, self.k)
-
-
-class DCT8x8(nn.Module):
- """ Discrete Cosine Transformation
- """
-
- def __init__(self):
- super(DCT8x8, self).__init__()
- tensor = np.zeros((8, 8, 8, 8), dtype=np.float32)
- for x, y, u, v in itertools.product(range(8), repeat=4):
- tensor[x, y, u, v] = np.cos((2 * x + 1) * u * np.pi / 16) * np.cos((2 * y + 1) * v * np.pi / 16)
- alpha = np.array([1. / np.sqrt(2)] + [1] * 7)
- self.tensor = nn.Parameter(torch.from_numpy(tensor).float())
- self.scale = nn.Parameter(torch.from_numpy(np.outer(alpha, alpha) * 0.25).float())
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- image = image - 128
- result = self.scale * torch.tensordot(image, self.tensor, dims=2)
- result.view(image.shape)
- return result
-
-
-class YQuantize(nn.Module):
- """ JPEG Quantization for Y channel
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding):
- super(YQuantize, self).__init__()
- self.rounding = rounding
- self.y_table = y_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- image = image.float() / (self.y_table * factor)
- else:
- b = factor.size(0)
- table = self.y_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- image = image.float() / table
- image = self.rounding(image)
- return image
-
-
-class CQuantize(nn.Module):
- """ JPEG Quantization for CbCr channels
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding):
- super(CQuantize, self).__init__()
- self.rounding = rounding
- self.c_table = c_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- image = image.float() / (self.c_table * factor)
- else:
- b = factor.size(0)
- table = self.c_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- image = image.float() / table
- image = self.rounding(image)
- return image
-
-
-class CompressJpeg(nn.Module):
- """Full JPEG compression algorithm
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding=torch.round):
- super(CompressJpeg, self).__init__()
- self.l1 = nn.Sequential(RGB2YCbCrJpeg(), ChromaSubsampling())
- self.l2 = nn.Sequential(BlockSplitting(), DCT8x8())
- self.c_quantize = CQuantize(rounding=rounding)
- self.y_quantize = YQuantize(rounding=rounding)
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x 3 x height x width
-
- Returns:
-            tuple(tensor): Quantized y, cb, cr coefficients, each batch x num_blocks x 8 x 8.
- """
- y, cb, cr = self.l1(image * 255)
- components = {'y': y, 'cb': cb, 'cr': cr}
- for k in components.keys():
- comp = self.l2(components[k])
- if k in ('cb', 'cr'):
- comp = self.c_quantize(comp, factor=factor)
- else:
- comp = self.y_quantize(comp, factor=factor)
-
- components[k] = comp
-
- return components['y'], components['cb'], components['cr']
-
-
-# ------------------------ decompression ------------------------#
-
-
-class YDequantize(nn.Module):
- """Dequantize Y channel
- """
-
- def __init__(self):
- super(YDequantize, self).__init__()
- self.y_table = y_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- out = image * (self.y_table * factor)
- else:
- b = factor.size(0)
- table = self.y_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- out = image * table
- return out
-
-
-class CDequantize(nn.Module):
- """Dequantize CbCr channel
- """
-
- def __init__(self):
- super(CDequantize, self).__init__()
- self.c_table = c_table
-
- def forward(self, image, factor=1):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- if isinstance(factor, (int, float)):
- out = image * (self.c_table * factor)
- else:
- b = factor.size(0)
- table = self.c_table.expand(b, 1, 8, 8) * factor.view(b, 1, 1, 1)
- out = image * table
- return out
-
-
-class iDCT8x8(nn.Module):
- """Inverse discrete Cosine Transformation
- """
-
- def __init__(self):
- super(iDCT8x8, self).__init__()
- alpha = np.array([1. / np.sqrt(2)] + [1] * 7)
- self.alpha = nn.Parameter(torch.from_numpy(np.outer(alpha, alpha)).float())
- tensor = np.zeros((8, 8, 8, 8), dtype=np.float32)
- for x, y, u, v in itertools.product(range(8), repeat=4):
- tensor[x, y, u, v] = np.cos((2 * u + 1) * x * np.pi / 16) * np.cos((2 * v + 1) * y * np.pi / 16)
- self.tensor = nn.Parameter(torch.from_numpy(tensor).float())
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width
-
- Returns:
- Tensor: batch x height x width
- """
- image = image * self.alpha
- result = 0.25 * torch.tensordot(image, self.tensor, dims=2) + 128
- result.view(image.shape)
- return result
-
-
-class BlockMerging(nn.Module):
- """Merge patches into image
- """
-
- def __init__(self):
- super(BlockMerging, self).__init__()
-
- def forward(self, patches, height, width):
- """
- Args:
-            patches(tensor): batch x height*width/64 x 8 x 8
- height(int)
- width(int)
-
- Returns:
- Tensor: batch x height x width
- """
- k = 8
- batch_size = patches.shape[0]
- image_reshaped = patches.view(batch_size, height // k, width // k, k, k)
- image_transposed = image_reshaped.permute(0, 1, 3, 2, 4)
- return image_transposed.contiguous().view(batch_size, height, width)
-
-
-class ChromaUpsampling(nn.Module):
- """Upsample chroma layers
- """
-
- def __init__(self):
- super(ChromaUpsampling, self).__init__()
-
- def forward(self, y, cb, cr):
- """
- Args:
- y(tensor): y channel image
- cb(tensor): cb channel
- cr(tensor): cr channel
-
- Returns:
- Tensor: batch x height x width x 3
- """
-
- def repeat(x, k=2):
- height, width = x.shape[1:3]
- x = x.unsqueeze(-1)
- x = x.repeat(1, 1, k, k)
- x = x.view(-1, height * k, width * k)
- return x
-
- cb = repeat(cb)
- cr = repeat(cr)
- return torch.cat([y.unsqueeze(3), cb.unsqueeze(3), cr.unsqueeze(3)], dim=3)
-
-
-class YCbCr2RGBJpeg(nn.Module):
- """Converts YCbCr image to RGB JPEG
- """
-
- def __init__(self):
- super(YCbCr2RGBJpeg, self).__init__()
-
- matrix = np.array([[1., 0., 1.402], [1, -0.344136, -0.714136], [1, 1.772, 0]], dtype=np.float32).T
- self.shift = nn.Parameter(torch.tensor([0, -128., -128.]))
- self.matrix = nn.Parameter(torch.from_numpy(matrix))
-
- def forward(self, image):
- """
- Args:
- image(tensor): batch x height x width x 3
-
- Returns:
- Tensor: batch x 3 x height x width
- """
- result = torch.tensordot(image + self.shift, self.matrix, dims=1)
- return result.view(image.shape).permute(0, 3, 1, 2)
-
-
-class DeCompressJpeg(nn.Module):
- """Full JPEG decompression algorithm
-
- Args:
- rounding(function): rounding function to use
- """
-
- def __init__(self, rounding=torch.round):
- super(DeCompressJpeg, self).__init__()
- self.c_dequantize = CDequantize()
- self.y_dequantize = YDequantize()
- self.idct = iDCT8x8()
- self.merging = BlockMerging()
- self.chroma = ChromaUpsampling()
- self.colors = YCbCr2RGBJpeg()
-
- def forward(self, y, cb, cr, imgh, imgw, factor=1):
- """
- Args:
-            y, cb, cr (tensor): quantized DCT coefficients, batch x num_blocks x 8 x 8
- imgh(int)
- imgw(int)
- factor(float)
-
- Returns:
- Tensor: batch x 3 x height x width
- """
- components = {'y': y, 'cb': cb, 'cr': cr}
- for k in components.keys():
- if k in ('cb', 'cr'):
- comp = self.c_dequantize(components[k], factor=factor)
- height, width = int(imgh / 2), int(imgw / 2)
- else:
- comp = self.y_dequantize(components[k], factor=factor)
- height, width = imgh, imgw
- comp = self.idct(comp)
- components[k] = self.merging(comp, height, width)
- #
- image = self.chroma(components['y'], components['cb'], components['cr'])
- image = self.colors(image)
-
- image = torch.min(255 * torch.ones_like(image), torch.max(torch.zeros_like(image), image))
- return image / 255
-
-
-# ------------------------ main DiffJPEG ------------------------ #
-
-
-class DiffJPEG(nn.Module):
- """This JPEG algorithm result is slightly different from cv2.
- DiffJPEG supports batch processing.
-
- Args:
- differentiable(bool): If True, uses custom differentiable rounding function, if False, uses standard torch.round
- """
-
- def __init__(self, differentiable=True):
- super(DiffJPEG, self).__init__()
- if differentiable:
- rounding = diff_round
- else:
- rounding = torch.round
-
- self.compress = CompressJpeg(rounding=rounding)
- self.decompress = DeCompressJpeg(rounding=rounding)
-
- def forward(self, x, quality):
- """
- Args:
- x (Tensor): Input image, bchw, rgb, [0, 1]
- quality(float): Quality factor for jpeg compression scheme.
- """
- factor = quality
- if isinstance(factor, (int, float)):
- factor = quality_to_factor(factor)
- else:
- for i in range(factor.size(0)):
- factor[i] = quality_to_factor(factor[i])
- h, w = x.size()[-2:]
- h_pad, w_pad = 0, 0
-        # pad to a multiple of 16: chroma is subsampled by 2 and DCT blocks are 8x8
- if h % 16 != 0:
- h_pad = 16 - h % 16
- if w % 16 != 0:
- w_pad = 16 - w % 16
- x = F.pad(x, (0, w_pad, 0, h_pad), mode='constant', value=0)
-
- y, cb, cr = self.compress(x, factor=factor)
- recovered = self.decompress(y, cb, cr, (h + h_pad), (w + w_pad), factor=factor)
- recovered = recovered[:, :, 0:h, 0:w]
- return recovered
-
-
-if __name__ == '__main__':
- import cv2
-
- from basicsr.utils import img2tensor, tensor2img
-
- img_gt = cv2.imread('test.png') / 255.
-
- # -------------- cv2 -------------- #
- encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 20]
- _, encimg = cv2.imencode('.jpg', img_gt * 255., encode_param)
- img_lq = np.float32(cv2.imdecode(encimg, 1))
- cv2.imwrite('cv2_JPEG_20.png', img_lq)
-
- # -------------- DiffJPEG -------------- #
- jpeger = DiffJPEG(differentiable=False).cuda()
- img_gt = img2tensor(img_gt)
- img_gt = torch.stack([img_gt, img_gt]).cuda()
- quality = img_gt.new_tensor([20, 40])
- out = jpeger(img_gt, quality=quality)
-
- cv2.imwrite('pt_JPEG_20.png', tensor2img(out[0]))
- cv2.imwrite('pt_JPEG_40.png', tensor2img(out[1]))
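For a quick sense of the quality-to-factor mapping used throughout the module, the values below follow directly from quality_to_factor(); the import path assumes the basicsr layout of this tree.

from basicsr.utils.diffjpeg import quality_to_factor

for q in (10, 50, 90):
    # 10 -> 5000/10/100 = 5.0, 50 -> (200-100)/100 = 1.0, 90 -> (200-180)/100 = 0.2
    print(q, quality_to_factor(q))  # lower JPEG quality -> larger factor -> coarser quantization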
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/CODE_OF_CONDUCT.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/CODE_OF_CONDUCT.md
deleted file mode 100644
index a0cbeaab7650bf08267fbdbc9bb54e845c88f392..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Code of Conduct
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to make participation in our project and
-our community a harassment-free experience for everyone, regardless of age, body
-size, disability, ethnicity, sex characteristics, gender identity and expression,
-level of experience, education, socio-economic status, nationality, personal
-appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment
-include:
-
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery and unwelcome sexual attention or
- advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
- address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
- professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable
-behavior and are expected to take appropriate and fair corrective action in
-response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies within all project spaces, and it also applies when
-an individual is representing the project or its community in public spaces.
-Examples of representing a project or community include using an official
-project e-mail address, posting via an official social media account, or acting
-as an appointed representative at an online or offline event. Representation of
-a project may be further defined and clarified by project maintainers.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at . All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good
-faith may face temporary or permanent repercussions as determined by other
-members of the project's leadership.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-[homepage]: https://www.contributor-covenant.org
-
-For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
-
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/data_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/data_utils.py
deleted file mode 100644
index f43a4a90046fb9ee4944dc06ba377c1faade141d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/data_utils.py
+++ /dev/null
@@ -1,320 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-from pathlib import Path
-from typing import Optional, List, Dict
-import zipfile
-import tempfile
-from dataclasses import dataclass
-from itertools import groupby
-
-import torch
-import torch.nn.functional as F
-import numpy as np
-from tqdm import tqdm
-
-from examples.speech_to_text.data_utils import load_tsv_to_dicts
-from fairseq.data.audio.audio_utils import TTSSpectrogram, TTSMelScale
-
-
-def trim_or_pad_to_target_length(
- data_1d_or_2d: np.ndarray, target_length: int
-) -> np.ndarray:
- assert len(data_1d_or_2d.shape) in {1, 2}
- delta = data_1d_or_2d.shape[0] - target_length
- if delta >= 0: # trim if being longer
- data_1d_or_2d = data_1d_or_2d[: target_length]
- else: # pad if being shorter
- if len(data_1d_or_2d.shape) == 1:
- data_1d_or_2d = np.concatenate(
- [data_1d_or_2d, np.zeros(-delta)], axis=0
- )
- else:
- data_1d_or_2d = np.concatenate(
- [data_1d_or_2d, np.zeros((-delta, data_1d_or_2d.shape[1]))],
- axis=0
- )
- return data_1d_or_2d
-
-
-def extract_logmel_spectrogram(
- waveform: torch.Tensor, sample_rate: int,
- output_path: Optional[Path] = None, win_length: int = 1024,
- hop_length: int = 256, n_fft: int = 1024,
- win_fn: callable = torch.hann_window, n_mels: int = 80,
- f_min: float = 0., f_max: float = 8000, eps: float = 1e-5,
- overwrite: bool = False, target_length: Optional[int] = None
-):
- if output_path is not None and output_path.is_file() and not overwrite:
- return
-
- spectrogram_transform = TTSSpectrogram(
- n_fft=n_fft, win_length=win_length, hop_length=hop_length,
- window_fn=win_fn
- )
- mel_scale_transform = TTSMelScale(
- n_mels=n_mels, sample_rate=sample_rate, f_min=f_min, f_max=f_max,
- n_stft=n_fft // 2 + 1
- )
- spectrogram = spectrogram_transform(waveform)
- mel_spec = mel_scale_transform(spectrogram)
- logmel_spec = torch.clamp(mel_spec, min=eps).log()
- assert len(logmel_spec.shape) == 3 and logmel_spec.shape[0] == 1
- logmel_spec = logmel_spec.squeeze().t() # D x T -> T x D
- if target_length is not None:
- trim_or_pad_to_target_length(logmel_spec, target_length)
-
- if output_path is not None:
- np.save(output_path.as_posix(), logmel_spec)
- else:
- return logmel_spec
-
-
-def extract_pitch(
- waveform: torch.Tensor, sample_rate: int,
- output_path: Optional[Path] = None, hop_length: int = 256,
- log_scale: bool = True, phoneme_durations: Optional[List[int]] = None
-):
- if output_path is not None and output_path.is_file():
- return
-
- try:
- import pyworld
- except ImportError:
- raise ImportError("Please install PyWORLD: pip install pyworld")
-
- _waveform = waveform.squeeze(0).double().numpy()
- pitch, t = pyworld.dio(
- _waveform, sample_rate, frame_period=hop_length / sample_rate * 1000
- )
- pitch = pyworld.stonemask(_waveform, pitch, t, sample_rate)
-
- if phoneme_durations is not None:
- pitch = trim_or_pad_to_target_length(pitch, sum(phoneme_durations))
- try:
- from scipy.interpolate import interp1d
- except ImportError:
- raise ImportError("Please install SciPy: pip install scipy")
- nonzero_ids = np.where(pitch != 0)[0]
- interp_fn = interp1d(
- nonzero_ids,
- pitch[nonzero_ids],
- fill_value=(pitch[nonzero_ids[0]], pitch[nonzero_ids[-1]]),
- bounds_error=False,
- )
- pitch = interp_fn(np.arange(0, len(pitch)))
- d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations]))
- pitch = np.array(
- [
- np.mean(pitch[d_cumsum[i-1]: d_cumsum[i]])
- for i in range(1, len(d_cumsum))
- ]
- )
- assert len(pitch) == len(phoneme_durations)
-
- if log_scale:
- pitch = np.log(pitch + 1)
-
- if output_path is not None:
- np.save(output_path.as_posix(), pitch)
- else:
- return pitch
-
-
-def extract_energy(
- waveform: torch.Tensor, output_path: Optional[Path] = None,
- hop_length: int = 256, n_fft: int = 1024, log_scale: bool = True,
- phoneme_durations: Optional[List[int]] = None
-):
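-    # Frame-level energy: the STFT is computed as a conv1d with an explicit Fourier
-    # basis, and the energy is the L2 norm of the magnitude over frequency bins;
-    # optionally averaged per phoneme and log-scaled.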
- if output_path is not None and output_path.is_file():
- return
-
- assert len(waveform.shape) == 2 and waveform.shape[0] == 1
- waveform = waveform.view(1, 1, waveform.shape[1])
- waveform = F.pad(
- waveform.unsqueeze(1), [n_fft // 2, n_fft // 2, 0, 0],
- mode="reflect"
- )
- waveform = waveform.squeeze(1)
-
- fourier_basis = np.fft.fft(np.eye(n_fft))
- cutoff = int((n_fft / 2 + 1))
- fourier_basis = np.vstack(
- [np.real(fourier_basis[:cutoff, :]),
- np.imag(fourier_basis[:cutoff, :])]
- )
-
- forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
- forward_transform = F.conv1d(
- waveform, forward_basis, stride=hop_length, padding=0
- )
-
- real_part = forward_transform[:, :cutoff, :]
- imag_part = forward_transform[:, cutoff:, :]
- magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2)
- energy = torch.norm(magnitude, dim=1).squeeze(0).numpy()
-
- if phoneme_durations is not None:
- energy = trim_or_pad_to_target_length(energy, sum(phoneme_durations))
- d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations]))
- energy = np.array(
- [
- np.mean(energy[d_cumsum[i - 1]: d_cumsum[i]])
- for i in range(1, len(d_cumsum))
- ]
- )
- assert len(energy) == len(phoneme_durations)
-
- if log_scale:
- energy = np.log(energy + 1)
-
- if output_path is not None:
- np.save(output_path.as_posix(), energy)
- else:
- return energy
-
-
-def get_global_cmvn(feature_root: Path, output_path: Optional[Path] = None):
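-    # Accumulate the sum and sum of squares over all .npy feature files to obtain
-    # global mean/std statistics for cepstral mean-variance normalization (CMVN).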
- mean_x, mean_x2, n_frames = None, None, 0
- feature_paths = feature_root.glob("*.npy")
- for p in tqdm(feature_paths):
- with open(p, 'rb') as f:
- frames = np.load(f).squeeze()
-
- n_frames += frames.shape[0]
-
- cur_mean_x = frames.sum(axis=0)
- if mean_x is None:
- mean_x = cur_mean_x
- else:
- mean_x += cur_mean_x
-
- cur_mean_x2 = (frames ** 2).sum(axis=0)
- if mean_x2 is None:
- mean_x2 = cur_mean_x2
- else:
- mean_x2 += cur_mean_x2
-
- mean_x /= n_frames
- mean_x2 /= n_frames
- var_x = mean_x2 - mean_x ** 2
- std_x = np.sqrt(np.maximum(var_x, 1e-10))
-
- if output_path is not None:
- with open(output_path, 'wb') as f:
- np.savez(f, mean=mean_x, std=std_x)
- else:
- return {"mean": mean_x, "std": std_x}
-
-
-def ipa_phonemize(text, lang="en-us", use_g2p=False):
- if use_g2p:
- assert lang == "en-us", "g2pE phonemizer only works for en-us"
- try:
- from g2p_en import G2p
- g2p = G2p()
- return " ".join("|" if p == " " else p for p in g2p(text))
- except ImportError:
- raise ImportError(
-                "Please install g2p_en: pip install g2p_en"
- )
- else:
- try:
- from phonemizer import phonemize
- from phonemizer.separator import Separator
- return phonemize(
- text, backend='espeak', language=lang,
- separator=Separator(word="| ", phone=" ")
- )
- except ImportError:
- raise ImportError(
- "Please install phonemizer: pip install phonemizer"
- )
-
-
-@dataclass
-class ForceAlignmentInfo(object):
- tokens: List[str]
- frame_durations: List[int]
- start_sec: Optional[float]
- end_sec: Optional[float]
-
-
-def get_mfa_alignment_by_sample_id(
- textgrid_zip_path: str, sample_id: str, sample_rate: int,
- hop_length: int, silence_phones: List[str] = ("sil", "sp", "spn")
-) -> ForceAlignmentInfo:
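-    # Extract the sample's TextGrid from the MFA output zip, drop leading/trailing
-    # silence phones, and convert each phone interval into a duration counted in
-    # spectrogram frames (sample_rate / hop_length frames per second).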
- try:
- import tgt
- except ImportError:
- raise ImportError("Please install TextGridTools: pip install tgt")
-
- filename = f"{sample_id}.TextGrid"
- out_root = Path(tempfile.gettempdir())
- tgt_path = out_root / filename
- with zipfile.ZipFile(textgrid_zip_path) as f_zip:
- f_zip.extract(filename, path=out_root)
- textgrid = tgt.io.read_textgrid(tgt_path.as_posix())
- os.remove(tgt_path)
-
- phones, frame_durations = [], []
- start_sec, end_sec, end_idx = 0, 0, 0
- for t in textgrid.get_tier_by_name("phones")._objects:
- s, e, p = t.start_time, t.end_time, t.text
- # Trim leading silences
- if len(phones) == 0:
- if p in silence_phones:
- continue
- else:
- start_sec = s
- phones.append(p)
- if p not in silence_phones:
- end_sec = e
- end_idx = len(phones)
- r = sample_rate / hop_length
- frame_durations.append(int(np.round(e * r) - np.round(s * r)))
-    # Trim trailing silences
- phones = phones[:end_idx]
- frame_durations = frame_durations[:end_idx]
-
- return ForceAlignmentInfo(
- tokens=phones, frame_durations=frame_durations, start_sec=start_sec,
- end_sec=end_sec
- )
-
-
-def get_mfa_alignment(
- textgrid_zip_path: str, sample_ids: List[str], sample_rate: int,
- hop_length: int
-) -> Dict[str, ForceAlignmentInfo]:
- return {
- i: get_mfa_alignment_by_sample_id(
- textgrid_zip_path, i, sample_rate, hop_length
- ) for i in tqdm(sample_ids)
- }
-
-
-def get_unit_alignment(
- id_to_unit_tsv_path: str, sample_ids: List[str]
-) -> Dict[str, ForceAlignmentInfo]:
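-    # Collapse consecutive repeated units into single tokens and use each run length
-    # as the corresponding frame duration; no start/end times are available here.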
- id_to_units = {
- e["id"]: e["units"] for e in load_tsv_to_dicts(id_to_unit_tsv_path)
- }
- id_to_units = {i: id_to_units[i].split() for i in sample_ids}
- id_to_units_collapsed = {
- i: [uu for uu, _ in groupby(u)] for i, u in id_to_units.items()
- }
- id_to_durations = {
- i: [len(list(g)) for _, g in groupby(u)] for i, u in id_to_units.items()
- }
-
- return {
- i: ForceAlignmentInfo(
- tokens=id_to_units_collapsed[i], frame_durations=id_to_durations[i],
- start_sec=None, end_sec=None
- )
- for i in sample_ids
- }
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/masked_lm_dictionary.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/masked_lm_dictionary.py
deleted file mode 100644
index dee88f7a3ed72ea465ea4e8ffe7b1c01ff6f57f1..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/legacy/masked_lm_dictionary.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.data import Dictionary
-
-
-class MaskedLMDictionary(Dictionary):
- """
- Dictionary for Masked Language Modelling tasks. This extends Dictionary by
- adding the mask symbol.
- """
-
- def __init__(
- self,
- pad="",
- eos="",
- unk="",
- mask="",
- ):
- super().__init__(pad=pad, eos=eos, unk=unk)
- self.mask_word = mask
- self.mask_index = self.add_symbol(mask)
- self.nspecial = len(self.symbols)
-
- def mask(self):
- """Helper to get index of mask symbol"""
- return self.mask_index
-
-
-class BertDictionary(MaskedLMDictionary):
- """
- Dictionary for BERT task. This extends MaskedLMDictionary by adding support
- for cls and sep symbols.
- """
-
- def __init__(
- self,
- pad="",
- eos="",
- unk="",
- mask="",
- cls="",
- sep="",
- ):
- super().__init__(pad=pad, eos=eos, unk=unk, mask=mask)
- self.cls_word = cls
- self.sep_word = sep
- self.cls_index = self.add_symbol(cls)
- self.sep_index = self.add_symbol(sep)
- self.nspecial = len(self.symbols)
-
- def cls(self):
- """Helper to get index of cls symbol"""
- return self.cls_index
-
- def sep(self):
- """Helper to get index of sep symbol"""
- return self.sep_index
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/fairseq_dropout.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/fairseq_dropout.py
deleted file mode 100644
index 3cddca77186f5ddd5cfb9c0ed6def9bafdf3bf1e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/fairseq_dropout.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import List, Optional
-
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-logger = logging.getLogger(__name__)
-
-
-class FairseqDropout(nn.Module):
- def __init__(self, p, module_name=None):
- super().__init__()
- self.p = p
- self.module_name = module_name
- self.apply_during_inference = False
-
- def forward(self, x, inplace: bool = False):
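-        # Apply dropout while training, or at inference time when explicitly enabled
-        # via make_generation_fast_ (retain_dropout).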
- if self.p > 0 and (self.training or self.apply_during_inference):
- return F.dropout(x, p=self.p, training=True, inplace=inplace)
- else:
- return x
-
- def make_generation_fast_(
- self,
- name: str,
- retain_dropout: bool = False,
- retain_dropout_modules: Optional[List[str]] = None,
- **kwargs
- ):
- if retain_dropout:
- if retain_dropout_modules is not None and self.module_name is None:
- logger.warning(
- "Cannot enable dropout during inference for module {} "
- "because module_name was not set".format(name)
- )
- elif (
- retain_dropout_modules is None # if None, apply to all modules
- or self.module_name in retain_dropout_modules
- ):
- logger.info(
- "Enabling dropout during inference for module: {}".format(name)
- )
- self.apply_during_inference = True
- else:
- logger.info("Disabling dropout for module: {}".format(name))
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/composite.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/composite.py
deleted file mode 100644
index a5366d62434a4400ba9cc524f4286f99f733d121..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/composite.py
+++ /dev/null
@@ -1,188 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from collections import defaultdict
-from dataclasses import dataclass, field
-from typing import Dict, Any, List, Optional
-
-import torch.optim
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim import FairseqOptimizer, register_optimizer, _build_optimizer
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, build_lr_scheduler
-from omegaconf import II, open_dict
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class OptimizerAndSchedulerConfig(FairseqDataclass):
- optimizer: Any = None
- lr_scheduler: Optional[Any] = None
- lr: List = II("optimization.lr")
- lr_float: Optional[float] = None # this makes it easier to sweep on learning rate with auto sweepers
-
-
-@dataclass
-class CompositeOptimizerConfig(FairseqDataclass):
- groups: Dict[str, Any] = field(
- default_factory=lambda: {},
- metadata={
- "help": "optimizer name -> optimizer OptimizerAndSchedulerConfig. "
- "Configures a different optimizer and (optionally) lr scheduler for each parameter group"
- },
- )
-
-
-@register_optimizer("composite", dataclass=CompositeOptimizerConfig)
-class FairseqCompositeOptimizer(FairseqOptimizer):
-
- optimizers: Dict[str, FairseqOptimizer] = {}
- lr_schedulers: Dict[str, FairseqLRScheduler] = {}
- lr_scheduler: FairseqLRScheduler = None
- _optimizer: torch.optim.Optimizer
-
- def __init__(self, cfg: CompositeOptimizerConfig, params):
- super().__init__(cfg)
-
- assert (
- len(params) > 1
- ), "Composite optimizer only works when there are multiple parameter groups (try fp16_no_flatten_grads: true)"
-
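-        # Group parameters by their "param_group" attribute (set on the model); each
-        # group gets its own optimizer and, optionally, its own LR scheduler.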
- groupped_params = defaultdict(list)
- for p in params:
- group = getattr(p, "param_group", "default")
- groupped_params[group].append(p)
-
- assert groupped_params.keys() == cfg.groups.keys(), (
- f"Parameter groups {groupped_params.keys()} and optimizer groups {cfg.groups.keys()} are not the same! "
- "Try setting 'param_group' on your parameters in the model."
- )
-
- for group, group_params in groupped_params.items():
- group_cfg = cfg.groups[group]
- with open_dict(group_cfg):
- if group_cfg.lr_float is not None:
- group_cfg.optimizer.lr = [group_cfg.lr_float]
- group_cfg.lr_scheduler.lr = [group_cfg.lr_float]
- else:
- group_cfg.optimizer.lr = group_cfg.lr
- group_cfg.lr_scheduler.lr = group_cfg.lr
- self.optimizers[group] = _build_optimizer(group_cfg.optimizer, group_params)
- if group_cfg.lr_scheduler is not None:
- self.lr_schedulers[group] = build_lr_scheduler(
- group_cfg.lr_scheduler, self.optimizers[group]
- )
-
- if len(self.lr_schedulers) > 0:
- assert len(self.lr_schedulers) == len(self.optimizers), (
- f"Please provide an lr scheduler for each optimizer to use pass_through scheduler. "
- f"Optimizers: {self.optimizers}; Lr scheds: {self.lr_schedulers}"
- )
- self.lr_scheduler = CompositeLRScheduler(self.lr_schedulers)
-
- self._optimizer = CompositeOptimizer(self.optimizers)
-
- @property
- def supports_groups(self):
- return True
-
- @property
- def param_groups(self):
- for opt in self.optimizers.values():
- for group in opt.param_groups:
- yield group
-
- def get_lr(self):
- """Return the current learning rate."""
- k = (
- "default"
- if "default" in self.optimizers
- else next(iter(self.optimizers.keys()))
- )
- return self.optimizers[k].param_groups[0]["lr"]
-
- def state_dict(self):
- """Return the LR scheduler state dict."""
- return {k: s.state_dict() for k, s in self.optimizers.items()}
-
- def load_state_dict(self, state_dict, optimizer_overrides=None):
- """Load an LR scheduler state dict."""
- for k, state in state_dict.items():
- if k not in self.optimizers:
- # skip extra keys like "loss_scale" added by fp16 optimizer
- continue
-
- overrides = (
- optimizer_overrides[k]
- if isinstance(optimizer_overrides, dict) and k in optimizer_overrides
- else None
- )
- self.optimizers[k].load_state_dict(state, optimizer_overrides=overrides)
-
-
-class CompositeOptimizer(torch.optim.Optimizer):
- def __init__(self, optimizers: Dict[str, FairseqOptimizer]):
- self.optimizers = optimizers
-
- @property
- def supports_memory_efficient_fp16(self):
- return all(o.supports_memory_efficient_fp16 for o in self.optimizers.values())
-
- @property
- def supports_flat_params(self):
- return all(o.supports_flat_params for o in self.optimizers.values())
-
- def step(self, closure=None, groups=None):
- """Performs a single optimization step.
-
- Args:
- closure (callable, optional): A closure that reevaluates the model
- and returns the loss.
- """
- loss = None
- if closure is not None:
- loss = closure()
-
- for k, opt in self.optimizers.items():
- if groups is None or k in groups:
- opt.step()
-
- return loss
-
- def zero_grad(self):
- for opt in self.optimizers.values():
- opt.zero_grad()
-
-
-class CompositeLRScheduler(FairseqLRScheduler):
- def __init__(self, lr_schedulers):
- super().__init__(None, None)
-
- self.lr_schedulers = lr_schedulers
-
- def state_dict(self):
- """Return the LR scheduler state dict."""
- return {k: s.state_dict() for k, s in self.lr_schedulers.items()}
-
- def load_state_dict(self, state_dict):
- """Load an LR scheduler state dict."""
- for k, state in state_dict.items():
- self.lr_schedulers[k].load_state_dict(state)
-
- def step_begin_epoch(self, epoch):
- """Update the learning rate at the beginning of the given epoch."""
- for s in self.lr_schedulers.values():
- s.step_begin_epoch(epoch)
-
- def step(self, epoch, val_loss=None):
- """Update the learning rate at the end of the given epoch."""
- for s in self.lr_schedulers.values():
- s.step(epoch)
-
- def step_update(self, num_updates):
- """Update the learning rate after each update."""
- return {k: s.step_update(num_updates) for k, s in self.lr_schedulers.items()}
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_sequence_scorer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_sequence_scorer.py
deleted file mode 100644
index 42f9447b599bcd7a9913aec37d94ea5078ff43a3..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_sequence_scorer.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import unittest
-
-import tests.utils as test_utils
-import torch
-from fairseq.sequence_scorer import SequenceScorer
-
-
-class TestSequenceScorer(unittest.TestCase):
- def test_sequence_scorer(self):
- # construct dummy dictionary
- d = test_utils.dummy_dictionary(vocab_size=2)
- self.assertEqual(d.pad(), 1)
- self.assertEqual(d.eos(), 2)
- self.assertEqual(d.unk(), 3)
- eos = d.eos()
- w1 = 4
- w2 = 5
-
- # construct dataloader
- data = [
- {
- "source": torch.LongTensor([w1, w2, eos]),
- "target": torch.LongTensor([w1, w2, w1, eos]),
- },
- {
- "source": torch.LongTensor([w2, eos]),
- "target": torch.LongTensor([w2, w1, eos]),
- },
- {
- "source": torch.LongTensor([w2, eos]),
- "target": torch.LongTensor([w2, eos]),
- },
- ]
- data_itr = test_utils.dummy_dataloader(data)
-
- # specify expected output probabilities
- args = argparse.Namespace()
- unk = 0.0
- args.beam_probs = [
- # step 0:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.0, unk, 0.6, 0.4], # sentence 1
- [0.0, unk, 0.4, 0.6], # sentence 2
- [0.0, unk, 0.7, 0.3], # sentence 3
- ]
- ),
- # step 1:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.0, unk, 0.2, 0.7], # sentence 1
- [0.0, unk, 0.8, 0.2], # sentence 2
- [0.7, unk, 0.1, 0.2], # sentence 3
- ]
- ),
- # step 2:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.10, unk, 0.50, 0.4], # sentence 1
- [0.15, unk, 0.15, 0.7], # sentence 2
- [0.00, unk, 0.00, 0.0], # sentence 3
- ]
- ),
- # step 3:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.9, unk, 0.05, 0.05], # sentence 1
- [0.0, unk, 0.00, 0.0], # sentence 2
- [0.0, unk, 0.00, 0.0], # sentence 3
- ]
- ),
- ]
- expected_scores = [
- [0.6, 0.7, 0.5, 0.9], # sentence 1
- [0.6, 0.8, 0.15], # sentence 2
- [0.3, 0.7], # sentence 3
- ]
-
- task = test_utils.TestTranslationTask.setup_task(args, d, d)
- model = task.build_model(args)
- scorer = SequenceScorer(task.target_dictionary)
- for sample in data_itr:
- hypos = task.inference_step(scorer, [model], sample)
- for id, hypos_id in zip(sample["id"].tolist(), hypos):
- self.assertHypoTokens(hypos_id[0], data[id]["target"])
- self.assertHypoScore(hypos_id[0], expected_scores[id])
-
- def assertHypoTokens(self, hypo, tokens):
- self.assertTensorEqual(hypo["tokens"], torch.LongTensor(tokens))
-
- def assertHypoScore(self, hypo, pos_probs, normalized=True, lenpen=1.0):
- pos_scores = torch.FloatTensor(pos_probs).log()
- self.assertAlmostEqual(hypo["positional_scores"], pos_scores)
- self.assertEqual(pos_scores.numel(), hypo["tokens"].numel())
- score = pos_scores.sum()
- if normalized:
- score /= pos_scores.numel() ** lenpen
- self.assertLess(abs(score - hypo["score"]), 1e-6)
-
- def assertAlmostEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertLess((t1 - t2).abs().max(), 1e-4)
-
- def assertTensorEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertEqual(t1.ne(t2).long().sum(), 0)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/multi_modality_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/multi_modality_dataset.py
deleted file mode 100644
index 69d23d31c1eb66803fa5062b5991a7c34ab07dc7..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/multi_modality_dataset.py
+++ /dev/null
@@ -1,263 +0,0 @@
-# Copyright (c) 2021-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-import logging
-import math
-from typing import Any, List, Optional, NamedTuple
-
-import numpy as np
-import torch
-from fairseq.data import (
- ConcatDataset,
- LanguagePairDataset,
- FileAudioDataset,
- data_utils,
-)
-from fairseq.data import FairseqDataset
-
-logger = logging.getLogger(__name__)
-
-
-class ModalityDatasetItem(NamedTuple):
- datasetname: str
-    dataset: Any
- max_positions: List[int]
- max_tokens: Optional[int] = None
- max_sentences: Optional[int] = None
-
-# MultiModalityDataset concatenates multiple datasets with different modalities.
-# Compared with ConcatDataset, it can 1) sample data according to per-dataset ratios and
-# 2) add a "mode" field indicating which type of dataset each sample comes from.
-# It is used together with GroupedEpochBatchIterator to generate mini-batches whose
-# samples all come from the same type of dataset.
-# If only one dataset is used, it behaves like that dataset, with the mode field added.
-class MultiModalityDataset(ConcatDataset):
- def __init__(self, datasets: List[ModalityDatasetItem]):
- id_to_mode = []
- dsets = []
- max_tokens = []
- max_sentences = []
- max_positions = []
- for dset in datasets:
- id_to_mode.append(dset.datasetname)
- dsets.append(dset.dataset)
- max_tokens.append(dset.max_tokens)
- max_positions.append(dset.max_positions)
- max_sentences.append(dset.max_sentences)
- weights = [1.0 for s in dsets]
- super().__init__(dsets, weights)
- self.max_tokens = max_tokens
- self.max_positions = max_positions
- self.max_sentences = max_sentences
- self.id_to_mode = id_to_mode
- self.raw_sub_batch_samplers = []
- self._cur_epoch = 0
-
- def set_epoch(self, epoch):
- super().set_epoch(epoch)
- self._cur_epoch = epoch
-
- def __getitem__(self, idx):
- dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx)
- sample = self.datasets[dataset_idx][sample_idx]
- return (dataset_idx, sample)
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- dataset_idx = samples[0][0]
- # make sure all samples in samples are from same dataset
- assert sum([0 if dataset_idx == s[0] else 1 for s in samples]) == 0
- samples = self.datasets[dataset_idx].collater([x[1] for x in samples])
- # add mode
- samples["net_input"]["mode"] = self.id_to_mode[dataset_idx]
-
- return samples
-
- def size(self, index: int):
- if len(self.datasets) == 1:
- return self.datasets[0].size(index)
- return super().size(index)
-
- @property
- def sizes(self):
- if len(self.datasets) == 1:
- return self.datasets[0].sizes
-        return super().sizes
-
- def ordered_indices(self):
- """
-        Returns indices sorted by length, so that less padding is needed.
- """
- if len(self.datasets) == 1:
- return self.datasets[0].ordered_indices()
- indices_group = []
- for d_idx, ds in enumerate(self.datasets):
- sample_num = self.cumulative_sizes[d_idx]
- if d_idx > 0:
- sample_num = sample_num - self.cumulative_sizes[d_idx - 1]
- assert sample_num == len(ds)
- indices_group.append(ds.ordered_indices())
- return indices_group
-
- def get_raw_batch_samplers(self, required_batch_size_multiple, seed):
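-        # Build one batch sampler per sub-dataset (filter by max_positions, then batch
-        # by max_tokens/max_sentences) under a fixed numpy seed, and cache the result.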
- if len(self.raw_sub_batch_samplers) > 0:
- logger.info(" raw_sub_batch_samplers exists. No action is taken")
- return
- with data_utils.numpy_seed(seed):
- indices = self.ordered_indices()
- for i, ds in enumerate(self.datasets):
- indices[i] = ds.filter_indices_by_size(
- indices[i],
- self.max_positions[i],
- )[0]
- sub_batch_sampler = ds.batch_by_size(
- indices[i],
- max_tokens=self.max_tokens[i],
- max_sentences=self.max_sentences[i],
- required_batch_size_multiple=required_batch_size_multiple,
- )
- self.raw_sub_batch_samplers.append(sub_batch_sampler)
-
- def get_batch_samplers(self, mult_ratios, required_batch_size_multiple, seed):
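-        # Offset each sub-dataset's batch indices by the cumulative dataset size, then
-        # up/down-sample the number of batches of dataset i according to mult_ratios[i].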
- self.get_raw_batch_samplers(required_batch_size_multiple, seed)
- batch_samplers = []
- for i, _ in enumerate(self.datasets):
- if i > 0:
- sub_batch_sampler = [
- [y + self.cumulative_sizes[i - 1] for y in x]
- for x in self.raw_sub_batch_samplers[i]
- ]
- else:
- sub_batch_sampler = list(self.raw_sub_batch_samplers[i])
- smp_r = mult_ratios[i]
- if smp_r != 1:
- is_increase = "increased" if smp_r > 1 else "decreased"
- logger.info(
-                    "number of batches for dataset {} is {} from {} to {}".format(
- self.id_to_mode[i],
- is_increase,
- len(sub_batch_sampler),
- int(len(sub_batch_sampler) * smp_r),
- )
- )
- mul_samplers = []
- for _ in range(math.floor(smp_r)):
- mul_samplers = mul_samplers + sub_batch_sampler
- if math.floor(smp_r) != smp_r:
- with data_utils.numpy_seed(seed + self._cur_epoch):
- np.random.shuffle(sub_batch_sampler)
- smp_num = int(
- (smp_r - math.floor(smp_r)) * len(sub_batch_sampler)
- )
- mul_samplers = mul_samplers + sub_batch_sampler[:smp_num]
- sub_batch_sampler = mul_samplers
- else:
- logger.info(
-                    "dataset {} has {} batches".format(
- self.id_to_mode[i], len(sub_batch_sampler)
- )
- )
- batch_samplers.append(sub_batch_sampler)
-
- return batch_samplers
-
-
-class LangPairMaskDataset(FairseqDataset):
- def __init__(
- self,
- dataset: LanguagePairDataset,
- src_eos: int,
- src_bos: Optional[int] = None,
- noise_id: Optional[int] = -1,
- mask_ratio: Optional[float] = 0,
- mask_type: Optional[str] = "random",
- ):
- self.dataset = dataset
- self.src_eos = src_eos
- self.src_bos = src_bos
- self.noise_id = noise_id
- self.mask_ratio = mask_ratio
- self.mask_type = mask_type
- assert mask_type in ("random", "tail")
-
- @property
- def src_sizes(self):
- return self.dataset.src_sizes
-
- @property
- def tgt_sizes(self):
- return self.dataset.tgt_sizes
-
- @property
- def sizes(self):
- # dataset.sizes can be a dynamically computed sizes:
- return self.dataset.sizes
-
- def get_batch_shapes(self):
- return self.dataset.buckets
-
- def num_tokens_vec(self, indices):
- return self.dataset.num_tokens_vec(indices)
-
- def __len__(self):
- return len(self.dataset)
-
- def num_tokens(self, index):
- return self.dataset.num_tokens(index)
-
- def size(self, index):
- return self.dataset.size(index)
-
- def ordered_indices(self):
- return self.dataset.ordered_indices()
-
- @property
- def supports_prefetch(self):
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- return self.dataset.prefetch(indices)
-
- def mask_src_tokens(self, sample):
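-        # "random": mask each source token independently with probability mask_ratio;
-        # "tail": mask the last mask_ratio fraction of the sentence. BOS/EOS are never
-        # masked; masked positions are replaced with noise_id.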
- src_item = sample["source"]
- mask = None
- if self.mask_type == "random":
- mask = torch.rand(len(src_item)).le(self.mask_ratio)
- else:
- mask = torch.ones(len(src_item))
- mask[: int(len(src_item) * (1 - self.mask_ratio))] = 0
- mask = mask.eq(1)
- if src_item[0] == self.src_bos:
- mask[0] = False
- if src_item[-1] == self.src_eos:
- mask[-1] = False
- mask_src_item = src_item.masked_fill(mask, self.noise_id)
- smp = {"id": sample["id"], "source": mask_src_item, "target": sample["target"]}
- return smp
-
- def __getitem__(self, index):
- sample = self.dataset[index]
- if self.mask_ratio > 0:
- sample = self.mask_src_tokens(sample)
- return sample
-
- def collater(self, samples, pad_to_length=None):
- return self.dataset.collater(samples, pad_to_length)
-
-
-class FileAudioDatasetWrapper(FileAudioDataset):
- def collater(self, samples):
- samples = super().collater(samples)
- if len(samples) == 0:
- return {}
- samples["net_input"]["src_tokens"] = samples["net_input"]["source"]
- samples["net_input"]["prev_output_tokens"] = None
- del samples["net_input"]["source"]
- samples["net_input"]["src_lengths"] = None
- samples["net_input"]["alignment"] = None
- return samples
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/transforms/custom_augmentation_impl.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/transforms/custom_augmentation_impl.py
deleted file mode 100644
index 6b9637f3ad41e3ba513636219e49371296d9ab9f..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/transforms/custom_augmentation_impl.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Part of the code is from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/transforms.py
-# Modified by Xingyi Zhou
-# The original code is under Apache-2.0 License
-import numpy as np
-from PIL import Image
-
-from detectron2.data.transforms.augmentation import Augmentation
-from .custom_transform import EfficientDetResizeCropTransform
-
-__all__ = [
- "EfficientDetResizeCrop",
-]
-
-
-class EfficientDetResizeCrop(Augmentation):
- """
-    Resize the image so that it fits within a randomly rescaled target size (the scale
-    factor is sampled from `scale`), and pick a random crop offset whenever the scaled
-    image is larger than the target size.
- """
-
- def __init__(
- self, size, scale, interp=Image.BILINEAR
- ):
- """
- """
- super().__init__()
- self.target_size = (size, size)
- self.scale = scale
- self.interp = interp
-
- def get_transform(self, img):
- # Select a random scale factor.
- scale_factor = np.random.uniform(*self.scale)
- scaled_target_height = scale_factor * self.target_size[0]
- scaled_target_width = scale_factor * self.target_size[1]
- # Recompute the accurate scale_factor using rounded scaled image size.
- width, height = img.shape[1], img.shape[0]
- img_scale_y = scaled_target_height / height
- img_scale_x = scaled_target_width / width
- img_scale = min(img_scale_y, img_scale_x)
-
- # Select non-zero random offset (x, y) if scaled image is larger than target size
- scaled_h = int(height * img_scale)
- scaled_w = int(width * img_scale)
- offset_y = scaled_h - self.target_size[0]
- offset_x = scaled_w - self.target_size[1]
- offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1))
- offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1))
- return EfficientDetResizeCropTransform(
- scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp)
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py
deleted file mode 100644
index 41395bdd53b67b7a7111f06564c3a2d2b63a7cdc..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/objects365.py
+++ /dev/null
@@ -1,394 +0,0 @@
-from detectron2.data.datasets.register_coco import register_coco_instances
-import os
-
-categories_v1 = [
-{'id': 164, 'name': 'cutting/chopping board'} ,
-{'id': 49, 'name': 'tie'} ,
-{'id': 306, 'name': 'crosswalk sign'} ,
-{'id': 145, 'name': 'gun'} ,
-{'id': 14, 'name': 'street lights'} ,
-{'id': 223, 'name': 'bar soap'} ,
-{'id': 74, 'name': 'wild bird'} ,
-{'id': 219, 'name': 'ice cream'} ,
-{'id': 37, 'name': 'stool'} ,
-{'id': 25, 'name': 'storage box'} ,
-{'id': 153, 'name': 'giraffe'} ,
-{'id': 52, 'name': 'pen/pencil'} ,
-{'id': 61, 'name': 'high heels'} ,
-{'id': 340, 'name': 'mangosteen'} ,
-{'id': 22, 'name': 'bracelet'} ,
-{'id': 155, 'name': 'piano'} ,
-{'id': 162, 'name': 'vent'} ,
-{'id': 75, 'name': 'laptop'} ,
-{'id': 236, 'name': 'toaster'} ,
-{'id': 231, 'name': 'fire truck'} ,
-{'id': 42, 'name': 'basket'} ,
-{'id': 150, 'name': 'zebra'} ,
-{'id': 124, 'name': 'head phone'} ,
-{'id': 90, 'name': 'sheep'} ,
-{'id': 322, 'name': 'steak'} ,
-{'id': 39, 'name': 'couch'} ,
-{'id': 209, 'name': 'toothbrush'} ,
-{'id': 59, 'name': 'bicycle'} ,
-{'id': 336, 'name': 'red cabbage'} ,
-{'id': 228, 'name': 'golf ball'} ,
-{'id': 120, 'name': 'tomato'} ,
-{'id': 132, 'name': 'computer box'} ,
-{'id': 8, 'name': 'cup'} ,
-{'id': 183, 'name': 'basketball'} ,
-{'id': 298, 'name': 'butterfly'} ,
-{'id': 250, 'name': 'garlic'} ,
-{'id': 12, 'name': 'desk'} ,
-{'id': 141, 'name': 'microwave'} ,
-{'id': 171, 'name': 'strawberry'} ,
-{'id': 200, 'name': 'kettle'} ,
-{'id': 63, 'name': 'van'} ,
-{'id': 300, 'name': 'cheese'} ,
-{'id': 215, 'name': 'marker'} ,
-{'id': 100, 'name': 'blackboard/whiteboard'} ,
-{'id': 186, 'name': 'printer'} ,
-{'id': 333, 'name': 'bread/bun'} ,
-{'id': 243, 'name': 'penguin'} ,
-{'id': 364, 'name': 'iron'} ,
-{'id': 180, 'name': 'ladder'} ,
-{'id': 34, 'name': 'flag'} ,
-{'id': 78, 'name': 'cell phone'} ,
-{'id': 97, 'name': 'fan'} ,
-{'id': 224, 'name': 'scale'} ,
-{'id': 151, 'name': 'duck'} ,
-{'id': 319, 'name': 'flute'} ,
-{'id': 156, 'name': 'stop sign'} ,
-{'id': 290, 'name': 'rickshaw'} ,
-{'id': 128, 'name': 'sailboat'} ,
-{'id': 165, 'name': 'tennis racket'} ,
-{'id': 241, 'name': 'cigar'} ,
-{'id': 101, 'name': 'balloon'} ,
-{'id': 308, 'name': 'hair drier'} ,
-{'id': 167, 'name': 'skating and skiing shoes'} ,
-{'id': 237, 'name': 'helicopter'} ,
-{'id': 65, 'name': 'sink'} ,
-{'id': 129, 'name': 'tangerine'} ,
-{'id': 330, 'name': 'crab'} ,
-{'id': 320, 'name': 'measuring cup'} ,
-{'id': 260, 'name': 'fishing rod'} ,
-{'id': 346, 'name': 'saw'} ,
-{'id': 216, 'name': 'ship'} ,
-{'id': 46, 'name': 'coffee table'} ,
-{'id': 194, 'name': 'facial mask'} ,
-{'id': 281, 'name': 'stapler'} ,
-{'id': 118, 'name': 'refrigerator'} ,
-{'id': 40, 'name': 'belt'} ,
-{'id': 349, 'name': 'starfish'} ,
-{'id': 87, 'name': 'hanger'} ,
-{'id': 116, 'name': 'baseball glove'} ,
-{'id': 261, 'name': 'cherry'} ,
-{'id': 334, 'name': 'baozi'} ,
-{'id': 267, 'name': 'screwdriver'} ,
-{'id': 158, 'name': 'converter'} ,
-{'id': 335, 'name': 'lion'} ,
-{'id': 170, 'name': 'baseball'} ,
-{'id': 111, 'name': 'skis'} ,
-{'id': 136, 'name': 'broccoli'} ,
-{'id': 342, 'name': 'eraser'} ,
-{'id': 337, 'name': 'polar bear'} ,
-{'id': 139, 'name': 'shovel'} ,
-{'id': 193, 'name': 'extension cord'} ,
-{'id': 284, 'name': 'goldfish'} ,
-{'id': 174, 'name': 'pepper'} ,
-{'id': 138, 'name': 'stroller'} ,
-{'id': 328, 'name': 'yak'} ,
-{'id': 83, 'name': 'clock'} ,
-{'id': 235, 'name': 'tricycle'} ,
-{'id': 248, 'name': 'parking meter'} ,
-{'id': 274, 'name': 'trophy'} ,
-{'id': 324, 'name': 'binoculars'} ,
-{'id': 51, 'name': 'traffic light'} ,
-{'id': 314, 'name': 'donkey'} ,
-{'id': 45, 'name': 'barrel/bucket'} ,
-{'id': 292, 'name': 'pomegranate'} ,
-{'id': 13, 'name': 'handbag'} ,
-{'id': 262, 'name': 'tablet'} ,
-{'id': 68, 'name': 'apple'} ,
-{'id': 226, 'name': 'cabbage'} ,
-{'id': 23, 'name': 'flower'} ,
-{'id': 58, 'name': 'faucet'} ,
-{'id': 206, 'name': 'tong'} ,
-{'id': 291, 'name': 'trombone'} ,
-{'id': 160, 'name': 'carrot'} ,
-{'id': 172, 'name': 'bow tie'} ,
-{'id': 122, 'name': 'tent'} ,
-{'id': 163, 'name': 'cookies'} ,
-{'id': 115, 'name': 'remote'} ,
-{'id': 175, 'name': 'coffee machine'} ,
-{'id': 238, 'name': 'green beans'} ,
-{'id': 233, 'name': 'cello'} ,
-{'id': 28, 'name': 'wine glass'} ,
-{'id': 295, 'name': 'mushroom'} ,
-{'id': 344, 'name': 'scallop'} ,
-{'id': 125, 'name': 'lantern'} ,
-{'id': 123, 'name': 'shampoo/shower gel'} ,
-{'id': 285, 'name': 'meat balls'} ,
-{'id': 266, 'name': 'key'} ,
-{'id': 296, 'name': 'calculator'} ,
-{'id': 168, 'name': 'scissors'} ,
-{'id': 103, 'name': 'cymbal'} ,
-{'id': 6, 'name': 'bottle'} ,
-{'id': 264, 'name': 'nuts'} ,
-{'id': 234, 'name': 'notepaper'} ,
-{'id': 211, 'name': 'mango'} ,
-{'id': 287, 'name': 'toothpaste'} ,
-{'id': 196, 'name': 'chopsticks'} ,
-{'id': 140, 'name': 'baseball bat'} ,
-{'id': 244, 'name': 'hurdle'} ,
-{'id': 195, 'name': 'tennis ball'} ,
-{'id': 144, 'name': 'surveillance camera'} ,
-{'id': 271, 'name': 'volleyball'} ,
-{'id': 94, 'name': 'keyboard'} ,
-{'id': 339, 'name': 'seal'} ,
-{'id': 11, 'name': 'picture/frame'} ,
-{'id': 348, 'name': 'okra'} ,
-{'id': 191, 'name': 'sausage'} ,
-{'id': 166, 'name': 'candy'} ,
-{'id': 62, 'name': 'ring'} ,
-{'id': 311, 'name': 'dolphin'} ,
-{'id': 273, 'name': 'eggplant'} ,
-{'id': 84, 'name': 'drum'} ,
-{'id': 143, 'name': 'surfboard'} ,
-{'id': 288, 'name': 'antelope'} ,
-{'id': 204, 'name': 'clutch'} ,
-{'id': 207, 'name': 'slide'} ,
-{'id': 43, 'name': 'towel/napkin'} ,
-{'id': 352, 'name': 'durian'} ,
-{'id': 276, 'name': 'board eraser'} ,
-{'id': 315, 'name': 'electric drill'} ,
-{'id': 312, 'name': 'sushi'} ,
-{'id': 198, 'name': 'pie'} ,
-{'id': 106, 'name': 'pickup truck'} ,
-{'id': 176, 'name': 'bathtub'} ,
-{'id': 26, 'name': 'vase'} ,
-{'id': 133, 'name': 'elephant'} ,
-{'id': 256, 'name': 'sandwich'} ,
-{'id': 327, 'name': 'noodles'} ,
-{'id': 10, 'name': 'glasses'} ,
-{'id': 109, 'name': 'airplane'} ,
-{'id': 95, 'name': 'tripod'} ,
-{'id': 247, 'name': 'CD'} ,
-{'id': 121, 'name': 'machinery vehicle'} ,
-{'id': 365, 'name': 'flashlight'} ,
-{'id': 53, 'name': 'microphone'} ,
-{'id': 270, 'name': 'pliers'} ,
-{'id': 362, 'name': 'chainsaw'} ,
-{'id': 259, 'name': 'bear'} ,
-{'id': 197, 'name': 'electronic stove and gas stove'} ,
-{'id': 89, 'name': 'pot/pan'} ,
-{'id': 220, 'name': 'tape'} ,
-{'id': 338, 'name': 'lighter'} ,
-{'id': 177, 'name': 'snowboard'} ,
-{'id': 214, 'name': 'violin'} ,
-{'id': 217, 'name': 'chicken'} ,
-{'id': 2, 'name': 'sneakers'} ,
-{'id': 161, 'name': 'washing machine'} ,
-{'id': 131, 'name': 'kite'} ,
-{'id': 354, 'name': 'rabbit'} ,
-{'id': 86, 'name': 'bus'} ,
-{'id': 275, 'name': 'dates'} ,
-{'id': 282, 'name': 'camel'} ,
-{'id': 88, 'name': 'nightstand'} ,
-{'id': 179, 'name': 'grapes'} ,
-{'id': 229, 'name': 'pine apple'} ,
-{'id': 56, 'name': 'necklace'} ,
-{'id': 18, 'name': 'leather shoes'} ,
-{'id': 358, 'name': 'hoverboard'} ,
-{'id': 345, 'name': 'pencil case'} ,
-{'id': 359, 'name': 'pasta'} ,
-{'id': 157, 'name': 'radiator'} ,
-{'id': 201, 'name': 'hamburger'} ,
-{'id': 268, 'name': 'globe'} ,
-{'id': 332, 'name': 'barbell'} ,
-{'id': 329, 'name': 'mop'} ,
-{'id': 252, 'name': 'horn'} ,
-{'id': 350, 'name': 'eagle'} ,
-{'id': 169, 'name': 'folder'} ,
-{'id': 137, 'name': 'toilet'} ,
-{'id': 5, 'name': 'lamp'} ,
-{'id': 27, 'name': 'bench'} ,
-{'id': 249, 'name': 'swan'} ,
-{'id': 76, 'name': 'knife'} ,
-{'id': 341, 'name': 'comb'} ,
-{'id': 64, 'name': 'watch'} ,
-{'id': 105, 'name': 'telephone'} ,
-{'id': 3, 'name': 'chair'} ,
-{'id': 33, 'name': 'boat'} ,
-{'id': 107, 'name': 'orange'} ,
-{'id': 60, 'name': 'bread'} ,
-{'id': 147, 'name': 'cat'} ,
-{'id': 135, 'name': 'gas stove'} ,
-{'id': 307, 'name': 'papaya'} ,
-{'id': 227, 'name': 'router/modem'} ,
-{'id': 357, 'name': 'asparagus'} ,
-{'id': 73, 'name': 'motorcycle'} ,
-{'id': 77, 'name': 'traffic sign'} ,
-{'id': 67, 'name': 'fish'} ,
-{'id': 326, 'name': 'radish'} ,
-{'id': 213, 'name': 'egg'} ,
-{'id': 203, 'name': 'cucumber'} ,
-{'id': 17, 'name': 'helmet'} ,
-{'id': 110, 'name': 'luggage'} ,
-{'id': 80, 'name': 'truck'} ,
-{'id': 199, 'name': 'frisbee'} ,
-{'id': 232, 'name': 'peach'} ,
-{'id': 1, 'name': 'person'} ,
-{'id': 29, 'name': 'boots'} ,
-{'id': 310, 'name': 'chips'} ,
-{'id': 142, 'name': 'skateboard'} ,
-{'id': 44, 'name': 'slippers'} ,
-{'id': 4, 'name': 'hat'} ,
-{'id': 178, 'name': 'suitcase'} ,
-{'id': 24, 'name': 'tv'} ,
-{'id': 119, 'name': 'train'} ,
-{'id': 82, 'name': 'power outlet'} ,
-{'id': 245, 'name': 'swing'} ,
-{'id': 15, 'name': 'book'} ,
-{'id': 294, 'name': 'jellyfish'} ,
-{'id': 192, 'name': 'fire extinguisher'} ,
-{'id': 212, 'name': 'deer'} ,
-{'id': 181, 'name': 'pear'} ,
-{'id': 347, 'name': 'table tennis paddle'} ,
-{'id': 113, 'name': 'trolley'} ,
-{'id': 91, 'name': 'guitar'} ,
-{'id': 202, 'name': 'golf club'} ,
-{'id': 221, 'name': 'wheelchair'} ,
-{'id': 254, 'name': 'saxophone'} ,
-{'id': 117, 'name': 'paper towel'} ,
-{'id': 303, 'name': 'race car'} ,
-{'id': 240, 'name': 'carriage'} ,
-{'id': 246, 'name': 'radio'} ,
-{'id': 318, 'name': 'parrot'} ,
-{'id': 251, 'name': 'french fries'} ,
-{'id': 98, 'name': 'dog'} ,
-{'id': 112, 'name': 'soccer'} ,
-{'id': 355, 'name': 'french horn'} ,
-{'id': 79, 'name': 'paddle'} ,
-{'id': 283, 'name': 'lettuce'} ,
-{'id': 9, 'name': 'car'} ,
-{'id': 258, 'name': 'kiwi fruit'} ,
-{'id': 325, 'name': 'llama'} ,
-{'id': 187, 'name': 'billiards'} ,
-{'id': 210, 'name': 'facial cleanser'} ,
-{'id': 81, 'name': 'cow'} ,
-{'id': 331, 'name': 'microscope'} ,
-{'id': 148, 'name': 'lemon'} ,
-{'id': 302, 'name': 'pomelo'} ,
-{'id': 85, 'name': 'fork'} ,
-{'id': 154, 'name': 'pumpkin'} ,
-{'id': 289, 'name': 'shrimp'} ,
-{'id': 71, 'name': 'teddy bear'} ,
-{'id': 184, 'name': 'potato'} ,
-{'id': 102, 'name': 'air conditioner'} ,
-{'id': 208, 'name': 'hot dog'} ,
-{'id': 222, 'name': 'plum'} ,
-{'id': 316, 'name': 'spring rolls'} ,
-{'id': 230, 'name': 'crane'} ,
-{'id': 149, 'name': 'liquid soap'} ,
-{'id': 55, 'name': 'canned'} ,
-{'id': 35, 'name': 'speaker'} ,
-{'id': 108, 'name': 'banana'} ,
-{'id': 297, 'name': 'treadmill'} ,
-{'id': 99, 'name': 'spoon'} ,
-{'id': 104, 'name': 'mouse'} ,
-{'id': 182, 'name': 'american football'} ,
-{'id': 299, 'name': 'egg tart'} ,
-{'id': 127, 'name': 'cleaning products'} ,
-{'id': 313, 'name': 'urinal'} ,
-{'id': 286, 'name': 'medal'} ,
-{'id': 239, 'name': 'brush'} ,
-{'id': 96, 'name': 'hockey'} ,
-{'id': 279, 'name': 'dumbbell'} ,
-{'id': 32, 'name': 'umbrella'} ,
-{'id': 272, 'name': 'hammer'} ,
-{'id': 16, 'name': 'plate'} ,
-{'id': 21, 'name': 'potted plant'} ,
-{'id': 242, 'name': 'earphone'} ,
-{'id': 70, 'name': 'candle'} ,
-{'id': 185, 'name': 'paint brush'} ,
-{'id': 48, 'name': 'toy'} ,
-{'id': 130, 'name': 'pizza'} ,
-{'id': 255, 'name': 'trumpet'} ,
-{'id': 361, 'name': 'hotair balloon'} ,
-{'id': 188, 'name': 'fire hydrant'} ,
-{'id': 50, 'name': 'bed'} ,
-{'id': 253, 'name': 'avocado'} ,
-{'id': 293, 'name': 'coconut'} ,
-{'id': 257, 'name': 'cue'} ,
-{'id': 280, 'name': 'hamimelon'} ,
-{'id': 66, 'name': 'horse'} ,
-{'id': 173, 'name': 'pigeon'} ,
-{'id': 190, 'name': 'projector'} ,
-{'id': 69, 'name': 'camera'} ,
-{'id': 30, 'name': 'bowl'} ,
-{'id': 269, 'name': 'broom'} ,
-{'id': 343, 'name': 'pitaya'} ,
-{'id': 305, 'name': 'tuba'} ,
-{'id': 309, 'name': 'green onion'} ,
-{'id': 363, 'name': 'lobster'} ,
-{'id': 225, 'name': 'watermelon'} ,
-{'id': 47, 'name': 'suv'} ,
-{'id': 31, 'name': 'dining table'} ,
-{'id': 54, 'name': 'sandals'} ,
-{'id': 351, 'name': 'monkey'} ,
-{'id': 218, 'name': 'onion'} ,
-{'id': 36, 'name': 'trash bin/can'} ,
-{'id': 20, 'name': 'glove'} ,
-{'id': 277, 'name': 'rice'} ,
-{'id': 152, 'name': 'sports car'} ,
-{'id': 360, 'name': 'target'} ,
-{'id': 205, 'name': 'blender'} ,
-{'id': 19, 'name': 'pillow'} ,
-{'id': 72, 'name': 'cake'} ,
-{'id': 93, 'name': 'tea pot'} ,
-{'id': 353, 'name': 'game board'} ,
-{'id': 38, 'name': 'backpack'} ,
-{'id': 356, 'name': 'ambulance'} ,
-{'id': 146, 'name': 'life saver'} ,
-{'id': 189, 'name': 'goose'} ,
-{'id': 278, 'name': 'tape measure/ruler'} ,
-{'id': 92, 'name': 'traffic cone'} ,
-{'id': 134, 'name': 'toiletries'} ,
-{'id': 114, 'name': 'oven'} ,
-{'id': 317, 'name': 'tortoise/turtle'} ,
-{'id': 265, 'name': 'corn'} ,
-{'id': 126, 'name': 'donut'} ,
-{'id': 57, 'name': 'mirror'} ,
-{'id': 7, 'name': 'cabinet/shelf'} ,
-{'id': 263, 'name': 'green vegetables'} ,
-{'id': 159, 'name': 'tissue '} ,
-{'id': 321, 'name': 'shark'} ,
-{'id': 301, 'name': 'pig'} ,
-{'id': 41, 'name': 'carpet'} ,
-{'id': 304, 'name': 'rice cooker'} ,
-{'id': 323, 'name': 'poker card'} ,
-]
-
-def _get_builtin_metadata(version):
- if version == 'v1':
- id_to_name = {x['id']: x['name'] for x in categories_v1}
- else:
- assert 0, version
- thing_dataset_id_to_contiguous_id = {i + 1: i for i in range(365)}
- thing_classes = [id_to_name[k] for k in sorted(id_to_name)]
- return {
- "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id,
- "thing_classes": thing_classes}
-
-_PREDEFINED_SPLITS_OBJECTS365 = {
- "objects365_train": ("objects365/train", "objects365/annotations/objects365_train.json"),
- "objects365_val": ("objects365/val", "objects365/annotations/objects365_val.json"),
-}
-
-for key, (image_root, json_file) in _PREDEFINED_SPLITS_OBJECTS365.items():
- register_coco_instances(
- key,
- _get_builtin_metadata('v1'),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/dla.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/dla.py
deleted file mode 100644
index 9f15f840355571b6d02d5534fa8a9b6b8cb22c70..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/dla.py
+++ /dev/null
@@ -1,479 +0,0 @@
-import numpy as np
-import math
-from os.path import join
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn.functional as F
-from torch import nn
-import torch.utils.model_zoo as model_zoo
-
-from detectron2.modeling.backbone.resnet import (
- BasicStem, BottleneckBlock, DeformBottleneckBlock)
-from detectron2.layers import (
- Conv2d,
- DeformConv,
- FrozenBatchNorm2d,
- ModulatedDeformConv,
- ShapeSpec,
- get_norm,
-)
-
-from detectron2.modeling.backbone.backbone import Backbone
-from detectron2.modeling.backbone.build import BACKBONE_REGISTRY
-from detectron2.modeling.backbone.fpn import FPN
-
-__all__ = [
- "BottleneckBlock",
- "DeformBottleneckBlock",
- "BasicStem",
-]
-
-DCNV1 = False
-
-HASH = {
- 34: 'ba72cf86',
- 60: '24839fc4',
-}
-
-def get_model_url(data, name, hash):
- return join('http://dl.yf.io/dla/models', data, '{}-{}.pth'.format(name, hash))
-
-class BasicBlock(nn.Module):
- def __init__(self, inplanes, planes, stride=1, dilation=1, norm='BN'):
- super(BasicBlock, self).__init__()
- self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=3,
- stride=stride, padding=dilation,
- bias=False, dilation=dilation)
- self.bn1 = get_norm(norm, planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
- stride=1, padding=dilation,
- bias=False, dilation=dilation)
- self.bn2 = get_norm(norm, planes)
- self.stride = stride
-
- def forward(self, x, residual=None):
- if residual is None:
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-class Bottleneck(nn.Module):
- expansion = 2
-
- def __init__(self, inplanes, planes, stride=1, dilation=1, norm='BN'):
- super(Bottleneck, self).__init__()
- expansion = Bottleneck.expansion
- bottle_planes = planes // expansion
- self.conv1 = nn.Conv2d(inplanes, bottle_planes,
- kernel_size=1, bias=False)
- self.bn1 = get_norm(norm, bottle_planes)
- self.conv2 = nn.Conv2d(bottle_planes, bottle_planes, kernel_size=3,
- stride=stride, padding=dilation,
- bias=False, dilation=dilation)
- self.bn2 = get_norm(norm, bottle_planes)
- self.conv3 = nn.Conv2d(bottle_planes, planes,
- kernel_size=1, bias=False)
- self.bn3 = get_norm(norm, planes)
- self.relu = nn.ReLU(inplace=True)
- self.stride = stride
-
- def forward(self, x, residual=None):
- if residual is None:
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-class Root(nn.Module):
- def __init__(self, in_channels, out_channels, kernel_size, residual, norm='BN'):
- super(Root, self).__init__()
- self.conv = nn.Conv2d(
- in_channels, out_channels, 1,
- stride=1, bias=False, padding=(kernel_size - 1) // 2)
- self.bn = get_norm(norm, out_channels)
- self.relu = nn.ReLU(inplace=True)
- self.residual = residual
-
- def forward(self, *x):
- children = x
- x = self.conv(torch.cat(x, 1))
- x = self.bn(x)
- if self.residual:
- x += children[0]
- x = self.relu(x)
-
- return x
-
-
-class Tree(nn.Module):
- def __init__(self, levels, block, in_channels, out_channels, stride=1,
- level_root=False, root_dim=0, root_kernel_size=1,
- dilation=1, root_residual=False, norm='BN'):
- super(Tree, self).__init__()
- if root_dim == 0:
- root_dim = 2 * out_channels
- if level_root:
- root_dim += in_channels
- if levels == 1:
- self.tree1 = block(in_channels, out_channels, stride,
- dilation=dilation, norm=norm)
- self.tree2 = block(out_channels, out_channels, 1,
- dilation=dilation, norm=norm)
- else:
- self.tree1 = Tree(levels - 1, block, in_channels, out_channels,
- stride, root_dim=0,
- root_kernel_size=root_kernel_size,
- dilation=dilation, root_residual=root_residual,
- norm=norm)
- self.tree2 = Tree(levels - 1, block, out_channels, out_channels,
- root_dim=root_dim + out_channels,
- root_kernel_size=root_kernel_size,
- dilation=dilation, root_residual=root_residual,
- norm=norm)
- if levels == 1:
- self.root = Root(root_dim, out_channels, root_kernel_size,
- root_residual, norm=norm)
- self.level_root = level_root
- self.root_dim = root_dim
- self.downsample = None
- self.project = None
- self.levels = levels
- if stride > 1:
- self.downsample = nn.MaxPool2d(stride, stride=stride)
- if in_channels != out_channels:
- self.project = nn.Sequential(
- nn.Conv2d(in_channels, out_channels,
- kernel_size=1, stride=1, bias=False),
- get_norm(norm, out_channels)
- )
-
- def forward(self, x, residual=None, children=None):
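-        # Recursively aggregate the two subtrees: tree1 processes the input with a
-        # projected/downsampled residual; at the leaf level, tree1 and tree2 outputs
-        # plus any skip "children" are fused by the Root node.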
- children = [] if children is None else children
- bottom = self.downsample(x) if self.downsample else x
- residual = self.project(bottom) if self.project else bottom
- if self.level_root:
- children.append(bottom)
- x1 = self.tree1(x, residual)
- if self.levels == 1:
- x2 = self.tree2(x1)
- x = self.root(x2, x1, *children)
- else:
- children.append(x1)
- x = self.tree2(x1, children=children)
- return x
-
-class DLA(nn.Module):
- def __init__(self, num_layers, levels, channels,
- block=BasicBlock, residual_root=False, norm='BN'):
- """
- Args:
- """
- super(DLA, self).__init__()
- self.norm = norm
- self.channels = channels
- self.base_layer = nn.Sequential(
- nn.Conv2d(3, channels[0], kernel_size=7, stride=1,
- padding=3, bias=False),
- get_norm(self.norm, channels[0]),
- nn.ReLU(inplace=True))
- self.level0 = self._make_conv_level(
- channels[0], channels[0], levels[0])
- self.level1 = self._make_conv_level(
- channels[0], channels[1], levels[1], stride=2)
- self.level2 = Tree(levels[2], block, channels[1], channels[2], 2,
- level_root=False,
- root_residual=residual_root, norm=norm)
- self.level3 = Tree(levels[3], block, channels[2], channels[3], 2,
- level_root=True, root_residual=residual_root,
- norm=norm)
- self.level4 = Tree(levels[4], block, channels[3], channels[4], 2,
- level_root=True, root_residual=residual_root,
- norm=norm)
- self.level5 = Tree(levels[5], block, channels[4], channels[5], 2,
- level_root=True, root_residual=residual_root,
- norm=norm)
- self.load_pretrained_model(
- data='imagenet', name='dla{}'.format(num_layers),
- hash=HASH[num_layers])
-
- def load_pretrained_model(self, data, name, hash):
- model_url = get_model_url(data, name, hash)
- model_weights = model_zoo.load_url(model_url)
- num_classes = len(model_weights[list(model_weights.keys())[-1]])
- self.fc = nn.Conv2d(
- self.channels[-1], num_classes,
- kernel_size=1, stride=1, padding=0, bias=True)
- print('Loading pretrained')
- self.load_state_dict(model_weights, strict=False)
-
- def _make_conv_level(self, inplanes, planes, convs, stride=1, dilation=1):
- modules = []
- for i in range(convs):
- modules.extend([
- nn.Conv2d(inplanes, planes, kernel_size=3,
- stride=stride if i == 0 else 1,
- padding=dilation, bias=False, dilation=dilation),
- get_norm(self.norm, planes),
- nn.ReLU(inplace=True)])
- inplanes = planes
- return nn.Sequential(*modules)
-
- def forward(self, x):
- y = []
- x = self.base_layer(x)
- for i in range(6):
- x = getattr(self, 'level{}'.format(i))(x)
- y.append(x)
- return y
-
-
-def fill_up_weights(up):
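-    # Initialize a grouped transposed convolution with bilinear upsampling weights,
-    # copied to every channel.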
- w = up.weight.data
- f = math.ceil(w.size(2) / 2)
- c = (2 * f - 1 - f % 2) / (2. * f)
- for i in range(w.size(2)):
- for j in range(w.size(3)):
- w[0, 0, i, j] = \
- (1 - math.fabs(i / f - c)) * (1 - math.fabs(j / f - c))
- for c in range(1, w.size(0)):
- w[c, 0, :, :] = w[0, 0, :, :]
-
-
-class _DeformConv(nn.Module):
- def __init__(self, chi, cho, norm='BN'):
- super(_DeformConv, self).__init__()
- self.actf = nn.Sequential(
- get_norm(norm, cho),
- nn.ReLU(inplace=True)
- )
- if DCNV1:
- self.offset = Conv2d(
- chi, 18, kernel_size=3, stride=1,
- padding=1, dilation=1)
- self.conv = DeformConv(
- chi, cho, kernel_size=(3,3), stride=1, padding=1,
- dilation=1, deformable_groups=1)
- else:
- self.offset = Conv2d(
- chi, 27, kernel_size=3, stride=1,
- padding=1, dilation=1)
- self.conv = ModulatedDeformConv(
- chi, cho, kernel_size=3, stride=1, padding=1,
- dilation=1, deformable_groups=1)
- nn.init.constant_(self.offset.weight, 0)
- nn.init.constant_(self.offset.bias, 0)
-
- def forward(self, x):
- if DCNV1:
- offset = self.offset(x)
- x = self.conv(x, offset)
- else:
- offset_mask = self.offset(x)
- offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1)
- offset = torch.cat((offset_x, offset_y), dim=1)
- mask = mask.sigmoid()
- x = self.conv(x, offset, mask)
- x = self.actf(x)
- return x
-
-
-class IDAUp(nn.Module):
- def __init__(self, o, channels, up_f, norm='BN'):
- super(IDAUp, self).__init__()
- for i in range(1, len(channels)):
- c = channels[i]
- f = int(up_f[i])
- proj = _DeformConv(c, o, norm=norm)
- node = _DeformConv(o, o, norm=norm)
-
- up = nn.ConvTranspose2d(o, o, f * 2, stride=f,
- padding=f // 2, output_padding=0,
- groups=o, bias=False)
- fill_up_weights(up)
-
- setattr(self, 'proj_' + str(i), proj)
- setattr(self, 'up_' + str(i), up)
- setattr(self, 'node_' + str(i), node)
-
- def forward(self, layers, startp, endp):
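-        # Upsample each level to the resolution of the previous one and fuse in place:
-        # layers[i] = node(up(proj(layers[i])) + layers[i - 1]).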
- for i in range(startp + 1, endp):
- upsample = getattr(self, 'up_' + str(i - startp))
- project = getattr(self, 'proj_' + str(i - startp))
- layers[i] = upsample(project(layers[i]))
- node = getattr(self, 'node_' + str(i - startp))
- layers[i] = node(layers[i] + layers[i - 1])
-
-
-class DLAUp(nn.Module):
- def __init__(self, startp, channels, scales, in_channels=None, norm='BN'):
- super(DLAUp, self).__init__()
- self.startp = startp
- if in_channels is None:
- in_channels = channels
- self.channels = channels
- channels = list(channels)
- scales = np.array(scales, dtype=int)
- for i in range(len(channels) - 1):
- j = -i - 2
- setattr(self, 'ida_{}'.format(i),
- IDAUp(channels[j], in_channels[j:],
- scales[j:] // scales[j], norm=norm))
- scales[j + 1:] = scales[j]
- in_channels[j + 1:] = [channels[j] for _ in channels[j + 1:]]
-
- def forward(self, layers):
-        out = [layers[-1]]  # start from the deepest (stride-32) level
- for i in range(len(layers) - self.startp - 1):
- ida = getattr(self, 'ida_{}'.format(i))
-            ida(layers, len(layers) - i - 2, len(layers))
- out.insert(0, layers[-1])
- return out
-
-DLA_CONFIGS = {
- 34: ([1, 1, 1, 2, 2, 1], [16, 32, 64, 128, 256, 512], BasicBlock),
- 60: ([1, 1, 1, 2, 3, 1], [16, 32, 128, 256, 512, 1024], Bottleneck)
-}
-
-
-class DLASeg(Backbone):
- def __init__(self, num_layers, out_features, use_dla_up=True,
- ms_output=False, norm='BN'):
- super(DLASeg, self).__init__()
- # depth = 34
- levels, channels, Block = DLA_CONFIGS[num_layers]
- self.base = DLA(num_layers=num_layers,
- levels=levels, channels=channels, block=Block, norm=norm)
- down_ratio = 4
- self.first_level = int(np.log2(down_ratio))
- self.ms_output = ms_output
- self.last_level = 5 if not self.ms_output else 6
- channels = self.base.channels
- scales = [2 ** i for i in range(len(channels[self.first_level:]))]
- self.use_dla_up = use_dla_up
- if self.use_dla_up:
- self.dla_up = DLAUp(
- self.first_level, channels[self.first_level:], scales,
- norm=norm)
- out_channel = channels[self.first_level]
- if not self.ms_output: # stride 4 DLA
- self.ida_up = IDAUp(
- out_channel, channels[self.first_level:self.last_level],
- [2 ** i for i in range(self.last_level - self.first_level)],
- norm=norm)
- self._out_features = out_features
- self._out_feature_channels = {
- 'dla{}'.format(i): channels[i] for i in range(6)}
- self._out_feature_strides = {
- 'dla{}'.format(i): 2 ** i for i in range(6)}
- self._size_divisibility = 32
-
- @property
- def size_divisibility(self):
- return self._size_divisibility
-
- def forward(self, x):
- x = self.base(x)
- if self.use_dla_up:
- x = self.dla_up(x)
- if not self.ms_output: # stride 4 dla
- y = []
- for i in range(self.last_level - self.first_level):
- y.append(x[i].clone())
- self.ida_up(y, 0, len(y))
- ret = {}
- for i in range(self.last_level - self.first_level):
- out_feature = 'dla{}'.format(i)
- if out_feature in self._out_features:
- ret[out_feature] = y[i]
- else:
- ret = {}
- st = self.first_level if self.use_dla_up else 0
- for i in range(self.last_level - st):
- out_feature = 'dla{}'.format(i + st)
- if out_feature in self._out_features:
- ret[out_feature] = x[i]
-
- return ret
-
-
-@BACKBONE_REGISTRY.register()
-def build_dla_backbone(cfg, input_shape):
- """
- Create a DLA backbone instance from config.
-
- Returns:
- DLASeg: a :class:`DLASeg` instance.
- """
- return DLASeg(
- out_features=cfg.MODEL.DLA.OUT_FEATURES,
- num_layers=cfg.MODEL.DLA.NUM_LAYERS,
- use_dla_up=cfg.MODEL.DLA.USE_DLA_UP,
- ms_output=cfg.MODEL.DLA.MS_OUTPUT,
- norm=cfg.MODEL.DLA.NORM)
-
-class LastLevelP6P7(nn.Module):
- """
- This module is used in RetinaNet to generate the extra layers P6 and P7 from
- the C5 feature (here taken from the 'dla5' output).
- """
-
- def __init__(self, in_channels, out_channels):
- super().__init__()
- self.num_levels = 2
- self.in_feature = "dla5"
- self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)
- self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)
- for module in [self.p6, self.p7]:
- weight_init.c2_xavier_fill(module)
-
- def forward(self, c5):
- p6 = self.p6(c5)
- p7 = self.p7(F.relu(p6))
- return [p6, p7]
-
-@BACKBONE_REGISTRY.register()
-def build_retinanet_dla_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_dla_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- in_channels_p6p7 = bottom_up.output_shape()['dla5'].channels
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelP6P7(in_channels_p6p7, out_channels),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
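For reference, a minimal sketch of how the backbone registered above could be built through detectron2's registry. It assumes detectron2 is installed and that the module above has been imported so `build_dla_backbone` is registered; the `MODEL.DLA.*` keys mirror the ones the function reads, and the values shown are illustrative defaults rather than configuration shipped with this file.

from detectron2.config import CfgNode as CN, get_cfg
from detectron2.layers import ShapeSpec
from detectron2.modeling import build_backbone

cfg = get_cfg()
cfg.MODEL.DLA = CN()
cfg.MODEL.DLA.NUM_LAYERS = 34           # picks the DLA-34 entry of DLA_CONFIGS
cfg.MODEL.DLA.OUT_FEATURES = ["dla2"]   # stride-4 output of DLASeg
cfg.MODEL.DLA.USE_DLA_UP = True
cfg.MODEL.DLA.MS_OUTPUT = False
cfg.MODEL.DLA.NORM = "BN"
cfg.MODEL.BACKBONE.NAME = "build_dla_backbone"

backbone = build_backbone(cfg, ShapeSpec(channels=3))
print(backbone.output_shape())  # channels and strides per requested 'dlaX' feature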
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/analyze_model.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/analyze_model.py
deleted file mode 100644
index 8e38f8b71eb3b8d1e2b670e7f01a796ec2ea4b7e..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/analyze_model.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import logging
-import numpy as np
-from collections import Counter
-import tqdm
-from fvcore.nn import flop_count_table # can also try flop_count_str
-
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import CfgNode, LazyConfig, get_cfg, instantiate
-from detectron2.data import build_detection_test_loader
-from detectron2.engine import default_argument_parser
-from detectron2.modeling import build_model
-from detectron2.utils.analysis import (
- FlopCountAnalysis,
- activation_count_operators,
- parameter_count_table,
-)
-from detectron2.utils.logger import setup_logger
-
-logger = logging.getLogger("detectron2")
-
-
-def setup(args):
- if args.config_file.endswith(".yaml"):
- cfg = get_cfg()
- cfg.merge_from_file(args.config_file)
- cfg.DATALOADER.NUM_WORKERS = 0
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- else:
- cfg = LazyConfig.load(args.config_file)
- cfg = LazyConfig.apply_overrides(cfg, args.opts)
- setup_logger(name="fvcore")
- setup_logger()
- return cfg
-
-
-def do_flop(cfg):
- if isinstance(cfg, CfgNode):
- data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
- model = build_model(cfg)
- DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
- else:
- data_loader = instantiate(cfg.dataloader.test)
- model = instantiate(cfg.model)
- model.to(cfg.train.device)
- DetectionCheckpointer(model).load(cfg.train.init_checkpoint)
- model.eval()
-
- counts = Counter()
- total_flops = []
- for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa
- flops = FlopCountAnalysis(model, data)
- if idx > 0:
- flops.unsupported_ops_warnings(False).uncalled_modules_warnings(False)
- counts += flops.by_operator()
- total_flops.append(flops.total())
-
- logger.info("Flops table computed from only one input sample:\n" + flop_count_table(flops))
- logger.info(
- "Average GFlops for each type of operators:\n"
- + str([(k, v / (idx + 1) / 1e9) for k, v in counts.items()])
- )
- logger.info(
- "Total GFlops: {:.1f}±{:.1f}".format(np.mean(total_flops) / 1e9, np.std(total_flops) / 1e9)
- )
-
-
-def do_activation(cfg):
- if isinstance(cfg, CfgNode):
- data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
- model = build_model(cfg)
- DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
- else:
- data_loader = instantiate(cfg.dataloader.test)
- model = instantiate(cfg.model)
- model.to(cfg.train.device)
- DetectionCheckpointer(model).load(cfg.train.init_checkpoint)
- model.eval()
-
- counts = Counter()
- total_activations = []
- for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa
- count = activation_count_operators(model, data)
- counts += count
- total_activations.append(sum(count.values()))
- logger.info(
- "(Million) Activations for Each Type of Operators:\n"
- + str([(k, v / (idx + 1)) for k, v in counts.items()])
- )
- logger.info(
- "Total (Million) Activations: {}±{}".format(
- np.mean(total_activations), np.std(total_activations)
- )
- )
-
-
-def do_parameter(cfg):
- if isinstance(cfg, CfgNode):
- model = build_model(cfg)
- else:
- model = instantiate(cfg.model)
- logger.info("Parameter Count:\n" + parameter_count_table(model, max_depth=5))
-
-
-def do_structure(cfg):
- if isinstance(cfg, CfgNode):
- model = build_model(cfg)
- else:
- model = instantiate(cfg.model)
- logger.info("Model Structure:\n" + str(model))
-
-
-if __name__ == "__main__":
- parser = default_argument_parser(
- epilog="""
-Examples:
-
-To show parameters of a model:
-$ ./analyze_model.py --tasks parameter \\
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
-
-Flops and activations are data-dependent, therefore inputs and model weights
-are needed to count them:
-
-$ ./analyze_model.py --num-inputs 100 --tasks flop \\
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \\
- MODEL.WEIGHTS /path/to/model.pkl
-"""
- )
- parser.add_argument(
- "--tasks",
- choices=["flop", "activation", "parameter", "structure"],
- required=True,
- nargs="+",
- )
- parser.add_argument(
- "-n",
- "--num-inputs",
- default=100,
- type=int,
- help="number of inputs used to compute statistics for flops/activations, "
- "both are data dependent.",
- )
- args = parser.parse_args()
- assert not args.eval_only
- assert args.num_gpus == 1
-
- cfg = setup(args)
-
- for task in args.tasks:
- {
- "flop": do_flop,
- "activation": do_activation,
- "parameter": do_parameter,
- "structure": do_structure,
- }[task](cfg)
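Since flop counts are data dependent, `do_flop` above averages `FlopCountAnalysis` totals over several loader samples. Below is a tiny self-contained sketch of the same fvcore calls on a toy model; the model and input shapes are illustrative only, not taken from any config in this repository.

import torch
import torch.nn as nn
from fvcore.nn import FlopCountAnalysis, flop_count_table

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 8, 3, padding=1))
flops = FlopCountAnalysis(model, torch.randn(1, 3, 64, 64))

print(f"total GFlops: {flops.total() / 1e9:.4f}")
print(flops.by_operator())      # the Counter that do_flop accumulates across inputs
print(flop_count_table(flops))  # the per-module table logged for the first sample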
diff --git a/spaces/PAIR/PAIR-Diffusion/cldm/logger.py b/spaces/PAIR/PAIR-Diffusion/cldm/logger.py
deleted file mode 100644
index fd2798af63e43c8d048c043aa83dc140925e2dea..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/cldm/logger.py
+++ /dev/null
@@ -1,233 +0,0 @@
-import os
-
-import numpy as np
-import torch
-import torchvision
-from PIL import Image
-from pytorch_lightning.callbacks import Callback
-import pytorch_lightning as pl
-from pytorch_lightning.utilities.distributed import rank_zero_only
-from omegaconf import OmegaConf
-
-# class ImageLogger(Callback):
-# def __init__(self, batch_frequency=2000, max_images=4, clamp=True, increase_log_steps=True,
-# rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False,
-# log_images_kwargs=None):
-# super().__init__()
-# self.rescale = rescale
-# self.batch_freq = batch_frequency
-# self.max_images = max_images
-# if not increase_log_steps:
-# self.log_steps = [self.batch_freq]
-# self.clamp = clamp
-# self.disabled = disabled
-# self.log_on_batch_idx = log_on_batch_idx
-# self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}
-# self.log_first_step = log_first_step
-
-# @rank_zero_only
-# def log_local(self, save_dir, split, images, global_step, current_epoch, batch_idx):
-# root = os.path.join(save_dir, "image_log", split)
-# for k in images:
-# grid = torchvision.utils.make_grid(images[k], nrow=4)
-# if self.rescale:
-# grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w
-# grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)
-# grid = grid.numpy()
-# grid = (grid * 255).astype(np.uint8)
-# filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(k, global_step, current_epoch, batch_idx)
-# path = os.path.join(root, filename)
-# os.makedirs(os.path.split(path)[0], exist_ok=True)
-# Image.fromarray(grid).save(path)
-
-# def log_img(self, pl_module, batch, batch_idx, split="train"):
-# check_idx = batch_idx # if self.log_on_batch_idx else pl_module.global_step
-# if (self.check_frequency(check_idx) and # batch_idx % self.batch_freq == 0
-# hasattr(pl_module, "log_images") and
-# callable(pl_module.log_images) and
-# self.max_images > 0):
-# logger = type(pl_module.logger)
-
-# is_train = pl_module.training
-# if is_train:
-# pl_module.eval()
-
-# with torch.no_grad():
-# images = pl_module.log_images(batch, split=split, **self.log_images_kwargs)
-
-# for k in images:
-# N = min(images[k].shape[0], self.max_images)
-# images[k] = images[k][:N]
-# if isinstance(images[k], torch.Tensor):
-# images[k] = images[k].detach().cpu()
-# if self.clamp:
-# images[k] = torch.clamp(images[k], -1., 1.)
-
-# self.log_local(pl_module.logger.save_dir, split, images,
-# pl_module.global_step, pl_module.current_epoch, batch_idx)
-
-# if is_train:
-# pl_module.train()
-
-# def check_frequency(self, check_idx):
-# return check_idx % self.batch_freq == 0
-
-# def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
-# if not self.disabled:
-# self.log_img(pl_module, batch, batch_idx, split="train")
-
-
-class SetupCallback(Callback):
- def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config):
- super().__init__()
- self.resume = resume
- self.now = now
- self.logdir = logdir
- self.ckptdir = ckptdir
- self.cfgdir = cfgdir
- self.config = config
- self.lightning_config = lightning_config
-
- def on_keyboard_interrupt(self, trainer, pl_module):
- if trainer.global_rank == 0:
- print("Summoning checkpoint.")
- ckpt_path = os.path.join(self.ckptdir, "last.ckpt")
- trainer.save_checkpoint(ckpt_path)
-
- def on_pretrain_routine_start(self, trainer, pl_module):
- if trainer.global_rank == 0:
- # Create logdirs and save configs
- os.makedirs(self.logdir, exist_ok=True)
- os.makedirs(self.ckptdir, exist_ok=True)
- os.makedirs(self.cfgdir, exist_ok=True)
-
- if "callbacks" in self.lightning_config:
- if 'metrics_over_trainsteps_checkpoint' in self.lightning_config['callbacks']:
- os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True)
- print("Project config")
- print(OmegaConf.to_yaml(self.config))
- OmegaConf.save(self.config,
- os.path.join(self.cfgdir, "{}-project.yaml".format(self.now)))
-
- print("Lightning config")
- print(OmegaConf.to_yaml(self.lightning_config))
- OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}),
- os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now)))
-
- else:
- # ModelCheckpoint callback created log directory --- remove it
- if not self.resume and os.path.exists(self.logdir):
- dst, name = os.path.split(self.logdir)
- dst = os.path.join(dst, "child_runs", name)
- os.makedirs(os.path.split(dst)[0], exist_ok=True)
- try:
- os.rename(self.logdir, dst)
- except FileNotFoundError:
- pass
-
-
-class ImageLogger(Callback):
- def __init__(self, batch_frequency, max_images, clamp=True, increase_log_steps=True,
- rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False,
- log_images_kwargs=None):
- super().__init__()
- self.rescale = rescale
- self.batch_freq = batch_frequency
- self.max_images = max_images
- self.logger_log_images = {
- pl.loggers.TestTubeLogger: self._testtube,
- }
- self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)]
- if not increase_log_steps:
- self.log_steps = [self.batch_freq]
- self.clamp = clamp
- self.disabled = disabled
- self.log_on_batch_idx = log_on_batch_idx
- self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}
- self.log_first_step = log_first_step
-
- @rank_zero_only
- def _testtube(self, pl_module, images, batch_idx, split):
- for k in images:
- grid = torchvision.utils.make_grid(images[k])
- grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w
-
- tag = f"{split}/{k}"
- pl_module.logger.experiment.add_image(
- tag, grid,
- global_step=pl_module.global_step)
-
- @rank_zero_only
- def log_local(self, save_dir, split, images,
- global_step, current_epoch, batch_idx):
- root = os.path.join(save_dir, "images", split)
- for k in images:
- grid = torchvision.utils.make_grid(images[k], nrow=4)
- if self.rescale:
- grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w
- grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)
- grid = grid.numpy()
- grid = (grid * 255).astype(np.uint8)
- filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(
- k,
- global_step,
- current_epoch,
- batch_idx)
- path = os.path.join(root, filename)
- os.makedirs(os.path.split(path)[0], exist_ok=True)
- Image.fromarray(grid).save(path)
-
- def log_img(self, pl_module, batch, batch_idx, split="train"):
- check_idx = batch_idx if self.log_on_batch_idx else pl_module.global_step
- if (self.check_frequency(check_idx) and # batch_idx % self.batch_freq == 0
- hasattr(pl_module, "log_images") and
- callable(pl_module.log_images) and
- self.max_images > 0):
- logger = type(pl_module.logger)
-
- is_train = pl_module.training
- if is_train:
- pl_module.eval()
-
- with torch.no_grad():
- images = pl_module.log_images(batch, split=split, **self.log_images_kwargs)
-
- for k in images:
- N = min(images[k].shape[0], self.max_images)
- images[k] = images[k][:N]
- if isinstance(images[k], torch.Tensor):
- images[k] = images[k].detach().cpu()
- if self.clamp:
- images[k] = torch.clamp(images[k], -1., 1.)
-
- self.log_local(pl_module.logger.save_dir, split, images,
- pl_module.global_step, pl_module.current_epoch, batch_idx)
-
- logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None)
- logger_log_images(pl_module, images, pl_module.global_step, split)
-
- if is_train:
- pl_module.train()
-
- def check_frequency(self, check_idx):
- if ((check_idx % self.batch_freq) == 0 or (check_idx in self.log_steps)) and (
- check_idx > 0 or self.log_first_step):
- try:
- self.log_steps.pop(0)
- except IndexError as e:
- print(e)
- pass
- return True
- return False
-
- def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
- if not self.disabled and (pl_module.global_step > 0 or self.log_first_step):
- self.log_img(pl_module, batch, batch_idx, split="train")
-
- def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
- # if not self.disabled and pl_module.global_step > 0:
- # self.log_img(pl_module, batch, batch_idx, split="val")
- # if hasattr(pl_module, 'calibrate_grad_norm'):
- # if (pl_module.calibrate_grad_norm and batch_idx % 25 == 0) and batch_idx > 0:
- # self.log_gradients(trainer, pl_module, batch_idx=batch_idx)
- pass
\ No newline at end of file
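A standalone sketch of the logging schedule `check_frequency` implements: with `increase_log_steps` the callback also fires at powers of two below `batch_frequency`, so images appear early in training before settling into the fixed cadence. The pop of `log_steps` done by the original method is omitted here for brevity.

import numpy as np

batch_freq = 2000
log_steps = [2 ** n for n in range(int(np.log2(batch_freq)) + 1)]  # 1, 2, 4, ..., 1024

def should_log(step, log_first_step=False):
    return ((step % batch_freq) == 0 or step in log_steps) and (step > 0 or log_first_step)

print([s for s in range(4001) if should_log(s)])
# [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2000, 4000]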
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/pixel_group.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/pixel_group.py
deleted file mode 100644
index 2143c75f835a467c802fc3c37ecd3ac0f85bcda4..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/pixel_group.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numpy as np
-import torch
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['pixel_group'])
-
-
-def pixel_group(score, mask, embedding, kernel_label, kernel_contour,
- kernel_region_num, distance_threshold):
- """Group pixels into text instances, which is widely used text detection
- methods.
-
- Arguments:
- score (np.array or Tensor): The foreground score with size hxw.
- mask (np.array or Tensor): The foreground mask with size hxw.
- embedding (np.array or Tensor): The embedding with size hxwxc to
- distinguish instances.
- kernel_label (np.array or Tensor): The instance kernel index with
- size hxw.
- kernel_contour (np.array or Tensor): The kernel contour with size hxw.
- kernel_region_num (int): The instance kernel region number.
- distance_threshold (float): The embedding distance threshold between
- kernel and pixel in one instance.
-
- Returns:
- pixel_assignment (List[List[float]]): The instance coordinate list.
- Each element consists of averaged confidence, pixel number, and
- coordinates (x_i, y_i for all pixels) in order.
- """
- assert isinstance(score, (torch.Tensor, np.ndarray))
- assert isinstance(mask, (torch.Tensor, np.ndarray))
- assert isinstance(embedding, (torch.Tensor, np.ndarray))
- assert isinstance(kernel_label, (torch.Tensor, np.ndarray))
- assert isinstance(kernel_contour, (torch.Tensor, np.ndarray))
- assert isinstance(kernel_region_num, int)
- assert isinstance(distance_threshold, float)
-
- if isinstance(score, np.ndarray):
- score = torch.from_numpy(score)
- if isinstance(mask, np.ndarray):
- mask = torch.from_numpy(mask)
- if isinstance(embedding, np.ndarray):
- embedding = torch.from_numpy(embedding)
- if isinstance(kernel_label, np.ndarray):
- kernel_label = torch.from_numpy(kernel_label)
- if isinstance(kernel_contour, np.ndarray):
- kernel_contour = torch.from_numpy(kernel_contour)
-
- if torch.__version__ == 'parrots':
- label = ext_module.pixel_group(
- score,
- mask,
- embedding,
- kernel_label,
- kernel_contour,
- kernel_region_num=kernel_region_num,
- distance_threshold=distance_threshold)
- label = label.tolist()
- label = label[0]
- list_index = kernel_region_num
- pixel_assignment = []
- for x in range(kernel_region_num):
- pixel_assignment.append(
- np.array(
- label[list_index:list_index + int(label[x])],
- dtype=np.float64))
- list_index = list_index + int(label[x])
- else:
- pixel_assignment = ext_module.pixel_group(score, mask, embedding,
- kernel_label, kernel_contour,
- kernel_region_num,
- distance_threshold)
- return pixel_assignment
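As the assertions above show, `pixel_group` accepts each array argument as either a numpy array or a torch tensor and converts numpy inputs with `torch.from_numpy` before calling the extension. A compact sketch of that convention, detached from the C++ op itself:

import numpy as np
import torch

def to_tensor(x):
    # mirror the numpy-or-tensor dispatch used by pixel_group
    return torch.from_numpy(x) if isinstance(x, np.ndarray) else x

score = np.random.rand(16, 16).astype(np.float32)
mask = (score > 0.5)
score_t, mask_t = to_tensor(score), to_tensor(mask)
print(score_t.dtype, mask_t.dtype)  # torch.float32 torch.bool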
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/dm_head.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/dm_head.py
deleted file mode 100644
index 19c963923126b53ce22f60813540a35badf24b3d..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/dm_head.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer
-
-from ..builder import HEADS
-from .decode_head import BaseDecodeHead
-
-
-class DCM(nn.Module):
- """Dynamic Convolutional Module used in DMNet.
-
- Args:
- filter_size (int): The filter size of generated convolution kernel
- used in Dynamic Convolutional Module.
- fusion (bool): Add one conv to fuse DCM output feature.
- in_channels (int): Input channels.
- channels (int): Channels after modules, before conv_seg.
- conv_cfg (dict | None): Config of conv layers.
- norm_cfg (dict | None): Config of norm layers.
- act_cfg (dict): Config of activation layers.
- """
-
- def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg,
- norm_cfg, act_cfg):
- super(DCM, self).__init__()
- self.filter_size = filter_size
- self.fusion = fusion
- self.in_channels = in_channels
- self.channels = channels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1,
- 0)
-
- self.input_redu_conv = ConvModule(
- self.in_channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- if self.norm_cfg is not None:
- self.norm = build_norm_layer(self.norm_cfg, self.channels)[1]
- else:
- self.norm = None
- self.activate = build_activation_layer(self.act_cfg)
-
- if self.fusion:
- self.fusion_conv = ConvModule(
- self.channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, x):
- """Forward function."""
- generated_filter = self.filter_gen_conv(
- F.adaptive_avg_pool2d(x, self.filter_size))
- x = self.input_redu_conv(x)
- b, c, h, w = x.shape
- # [1, b * c, h, w], c = self.channels
- x = x.view(1, b * c, h, w)
- # [b * c, 1, filter_size, filter_size]
- generated_filter = generated_filter.view(b * c, 1, self.filter_size,
- self.filter_size)
- pad = (self.filter_size - 1) // 2
- if (self.filter_size - 1) % 2 == 0:
- p2d = (pad, pad, pad, pad)
- else:
- p2d = (pad + 1, pad, pad + 1, pad)
- x = F.pad(input=x, pad=p2d, mode='constant', value=0)
- # [1, b * c, h, w]
- output = F.conv2d(input=x, weight=generated_filter, groups=b * c)
- # [b, c, h, w]
- output = output.view(b, c, h, w)
- if self.norm is not None:
- output = self.norm(output)
- output = self.activate(output)
-
- if self.fusion:
- output = self.fusion_conv(output)
-
- return output
-
-
-@HEADS.register_module()
-class DMHead(BaseDecodeHead):
- """Dynamic Multi-scale Filters for Semantic Segmentation.
-
- This head is the implementation of
- DMNet (Dynamic Multi-scale Filters for Semantic Segmentation, ICCV 2019).
-
- Args:
- filter_sizes (tuple[int]): The size of generated convolutional filters
- used in Dynamic Convolutional Module. Default: (1, 3, 5, 7).
- fusion (bool): Add one conv to fuse DCM output feature.
- """
-
- def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs):
- super(DMHead, self).__init__(**kwargs)
- assert isinstance(filter_sizes, (list, tuple))
- self.filter_sizes = filter_sizes
- self.fusion = fusion
- dcm_modules = []
- for filter_size in self.filter_sizes:
- dcm_modules.append(
- DCM(filter_size,
- self.fusion,
- self.in_channels,
- self.channels,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- self.dcm_modules = nn.ModuleList(dcm_modules)
- self.bottleneck = ConvModule(
- self.in_channels + len(filter_sizes) * self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- dcm_outs = [x]
- for dcm_module in self.dcm_modules:
- dcm_outs.append(dcm_module(x))
- dcm_outs = torch.cat(dcm_outs, dim=1)
- output = self.bottleneck(dcm_outs)
- output = self.cls_seg(output)
- return output
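The reshaping comments in `DCM.forward` describe a depthwise "dynamic filter" trick: the batch dimension is folded into the channel dimension so one grouped convolution applies a different generated kernel to every (sample, channel) pair. A self-contained sketch of just that step, with illustrative sizes:

import torch
import torch.nn.functional as F

b, c, h, w, k = 2, 4, 8, 8, 3
x = torch.randn(b, c, h, w)
filters = torch.randn(b, c, k, k)            # one generated kernel per sample and channel

x_flat = x.view(1, b * c, h, w)              # [1, b*c, h, w]
weight = filters.view(b * c, 1, k, k)        # [b*c, 1, k, k]
pad = (k - 1) // 2
out = F.conv2d(F.pad(x_flat, (pad, pad, pad, pad)), weight, groups=b * c)
print(out.view(b, c, h, w).shape)            # torch.Size([2, 4, 8, 8])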
diff --git a/spaces/Paaz/gpt2-lyrics/style.css b/spaces/Paaz/gpt2-lyrics/style.css
deleted file mode 100644
index 00f89aa902e0b52ca76e7a3e0679172790f9568c..0000000000000000000000000000000000000000
--- a/spaces/Paaz/gpt2-lyrics/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
\ No newline at end of file
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/bytevectors.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/bytevectors.go
deleted file mode 100644
index 76fb555b727336762d172abf3d91380984411e5d..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/bytevectors.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-111.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-111.go
deleted file mode 100644
index a3194cd912525493d9a021f7c7d61737277f77ed..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-111.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/modal-transforms.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/modal-transforms.go
deleted file mode 100644
index 2ef44a69600f29cc6a823b97356b4f84aa3aee2f..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/modal-transforms.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/model_fine.py b/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/model_fine.py
deleted file mode 100644
index 6179a851319692b10df0d69b00910ad36cee8685..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/model_fine.py
+++ /dev/null
@@ -1,149 +0,0 @@
-"""
-Much of this code is adapted from Andrej Karpathy's NanoGPT
-(https://github.com/karpathy/nanoGPT)
-"""
-from dataclasses import dataclass
-import math
-
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-from .model import GPT, GPTConfig, MLP
-
-
-class NonCausalSelfAttention(nn.Module):
- def __init__(self, config):
- super().__init__()
- assert config.n_embd % config.n_head == 0
- # key, query, value projections for all heads, but in a batch
- self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.bias)
- # output projection
- self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias)
- # regularization
- self.attn_dropout = nn.Dropout(config.dropout)
- self.resid_dropout = nn.Dropout(config.dropout)
- self.n_head = config.n_head
- self.n_embd = config.n_embd
- self.dropout = config.dropout
- # flash attention make GPU go brrrrr but support is only in PyTorch nightly and still a bit scary
- self.flash = (
- hasattr(torch.nn.functional, "scaled_dot_product_attention") and self.dropout == 0.0
- )
-
- def forward(self, x):
- B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd)
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
- k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
-
- # non-causal self-attention: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
- if self.flash:
- # efficient attention using Flash Attention CUDA kernels
- y = torch.nn.functional.scaled_dot_product_attention(
- q, k, v, attn_mask=None, dropout_p=self.dropout, is_causal=False
- )
- else:
- # manual implementation of attention
- att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
- att = F.softmax(att, dim=-1)
- att = self.attn_dropout(att)
- y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
- y = (
- y.transpose(1, 2).contiguous().view(B, T, C)
- ) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_dropout(self.c_proj(y))
- return y
-
-
-class FineBlock(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.ln_1 = nn.LayerNorm(config.n_embd)
- self.attn = NonCausalSelfAttention(config)
- self.ln_2 = nn.LayerNorm(config.n_embd)
- self.mlp = MLP(config)
-
- def forward(self, x):
- x = x + self.attn(self.ln_1(x))
- x = x + self.mlp(self.ln_2(x))
- return x
-
-
-class FineGPT(GPT):
- def __init__(self, config):
- super().__init__(config)
- del self.lm_head
- self.config = config
- self.n_codes_total = config.n_codes_total
- self.transformer = nn.ModuleDict(
- dict(
- wtes=nn.ModuleList(
- [
- nn.Embedding(config.input_vocab_size, config.n_embd)
- for _ in range(config.n_codes_total)
- ]
- ),
- wpe=nn.Embedding(config.block_size, config.n_embd),
- drop=nn.Dropout(config.dropout),
- h=nn.ModuleList([FineBlock(config) for _ in range(config.n_layer)]),
- ln_f=nn.LayerNorm(config.n_embd),
- )
- )
- self.lm_heads = nn.ModuleList(
- [
- nn.Linear(config.n_embd, config.output_vocab_size, bias=False)
- for _ in range(config.n_codes_given, self.n_codes_total)
- ]
- )
- for i in range(self.n_codes_total - config.n_codes_given):
- self.transformer.wtes[i + 1].weight = self.lm_heads[i].weight
-
- def forward(self, pred_idx, idx):
- device = idx.device
- b, t, codes = idx.size()
- assert (
- t <= self.config.block_size
- ), f"Cannot forward sequence of length {t}, block size is only {self.config.block_size}"
- assert pred_idx > 0, "cannot predict 0th codebook"
- assert codes == self.n_codes_total, (b, t, codes)
- pos = torch.arange(0, t, dtype=torch.long, device=device).unsqueeze(0) # shape (1, t)
-
- # forward the GPT model itself
- tok_embs = [
- wte(idx[:, :, i]).unsqueeze(-1) for i, wte in enumerate(self.transformer.wtes)
- ] # list of token embeddings, each of shape (b, t, n_embd, 1)
- tok_emb = torch.cat(tok_embs, dim=-1)
- pos_emb = self.transformer.wpe(pos) # position embeddings of shape (1, t, n_embd)
- x = tok_emb[:, :, :, : pred_idx + 1].sum(dim=-1)
- x = self.transformer.drop(x + pos_emb)
- for block in self.transformer.h:
- x = block(x)
- x = self.transformer.ln_f(x)
- logits = self.lm_heads[pred_idx - self.config.n_codes_given](x)
- return logits
-
- def get_num_params(self, non_embedding=True):
- """
- Return the number of parameters in the model.
- For non-embedding count (default), the position embeddings get subtracted.
- The token embeddings would too, except due to the parameter sharing these
- params are actually used as weights in the final layer, so we include them.
- """
- n_params = sum(p.numel() for p in self.parameters())
- if non_embedding:
- for wte in self.transformer.wtes:
- n_params -= wte.weight.numel()
- n_params -= self.transformer.wpe.weight.numel()
- return n_params
-
-
-@dataclass
-class FineGPTConfig(GPTConfig):
- n_codes_total: int = 8
- n_codes_given: int = 1
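`FineGPT.__init__` ties each extra embedding table to the matching output head (`wtes[i + 1].weight = lm_heads[i].weight`), which is why `get_num_params` keeps the token embeddings in the count. A minimal sketch of that weight tying and its effect on parameter counting:

import torch.nn as nn

vocab, dim = 100, 16
wte = nn.Embedding(vocab, dim)
head = nn.Linear(dim, vocab, bias=False)
head.weight = wte.weight  # tie: both modules reference the same tensor

model = nn.ModuleDict({"wte": wte, "head": head})
print(sum(p.numel() for p in model.parameters()))  # 1600, not 3200: shared tensors count once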
diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/input.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/input.tsx
deleted file mode 100644
index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/components/ui/input.tsx
+++ /dev/null
@@ -1,25 +0,0 @@
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-
-export interface InputProps
- extends React.InputHTMLAttributes<HTMLInputElement> {}
-
-const Input = React.forwardRef<HTMLInputElement, InputProps>(
- ({ className, type, ...props }, ref) => {
- return (
- <input type={type} className={cn(className)} ref={ref} {...props} />
- )
- }
-)
-Input.displayName = 'Input'
-
-export { Input }
diff --git a/spaces/Pengyey/bingo-chuchu/src/lib/bots/bing/index.ts b/spaces/Pengyey/bingo-chuchu/src/lib/bots/bing/index.ts
deleted file mode 100644
index c75c69f94af8c3db92d4c90d465c219a2af72a4d..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/lib/bots/bing/index.ts
+++ /dev/null
@@ -1,432 +0,0 @@
-import { fetch, WebSocket, debug } from '@/lib/isomorphic'
-import WebSocketAsPromised from 'websocket-as-promised'
-import {
- SendMessageParams,
- BingConversationStyle,
- ConversationResponse,
- ChatResponseMessage,
- ConversationInfo,
- InvocationEventType,
- ChatError,
- ErrorCode,
- ChatUpdateCompleteResponse,
- ImageInfo,
- KBlobResponse
-} from './types'
-
-import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils'
-import { WatchDog, createChunkDecoder } from '@/lib/utils'
-
-type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }>
-
-const OPTIONS_SETS = [
- 'nlu_direct_response_filter',
- 'deepleo',
- 'disable_emoji_spoken_text',
- 'responsible_ai_policy_235',
- 'enablemm',
- 'iycapbing',
- 'iyxapbing',
- 'objopinion',
- 'rweasgv2',
- 'dagslnv1',
- 'dv3sugg',
- 'autosave',
- 'iyoloxap',
- 'iyoloneutral',
- 'clgalileo',
- 'gencontentv3',
-]
-
-export class BingWebBot {
- protected conversationContext?: ConversationInfo
- protected cookie: string
- protected ua: string
- protected endpoint = ''
- private lastText = ''
- private asyncTasks: Array<Promise<any>> = []
-
- constructor(opts: {
- cookie: string
- ua: string
- bingConversationStyle?: BingConversationStyle
- conversationContext?: ConversationInfo
- }) {
- const { cookie, ua, conversationContext } = opts
- this.cookie = cookie?.includes(';') ? cookie : `_EDGE_V=1; _U=${cookie}`
- this.ua = ua
- this.conversationContext = conversationContext
- }
-
- static buildChatRequest(conversation: ConversationInfo) {
- const optionsSets = OPTIONS_SETS
- if (conversation.conversationStyle === BingConversationStyle.Precise) {
- optionsSets.push('h3precise')
- } else if (conversation.conversationStyle === BingConversationStyle.Creative) {
- optionsSets.push('h3imaginative')
- }
- return {
- arguments: [
- {
- source: 'cib',
- optionsSets,
- allowedMessageTypes: [
- 'ActionRequest',
- 'Chat',
- 'Context',
- 'InternalSearchQuery',
- 'InternalSearchResult',
- 'Disengaged',
- 'InternalLoaderMessage',
- 'Progress',
- 'RenderCardRequest',
- 'SemanticSerp',
- 'GenerateContentQuery',
- 'SearchQuery',
- ],
- sliceIds: [
- 'winmuid1tf',
- 'anssupfor_c',
- 'imgchatgptv2',
- 'tts2cf',
- 'contansperf',
- 'mlchatpc8500w',
- 'mlchatpc2',
- 'ctrlworkpay',
- 'winshortmsgtf',
- 'cibctrl',
- 'sydtransctrl',
- 'sydconfigoptc',
- '0705trt4',
- '517opinion',
- '628ajcopus0',
- '330uaugs0',
- '529rwea',
- '0626snptrcs0',
- '424dagslnv1',
- ],
- isStartOfSession: conversation.invocationId === 0,
- message: {
- author: 'user',
- inputMethod: 'Keyboard',
- text: conversation.prompt,
- imageUrl: conversation.imageUrl,
- messageType: 'Chat',
- },
- conversationId: conversation.conversationId,
- conversationSignature: conversation.conversationSignature,
- participant: { id: conversation.clientId },
- },
- ],
- invocationId: conversation.invocationId.toString(),
- target: 'chat',
- type: InvocationEventType.StreamInvocation,
- }
- }
-
- async createConversation(): Promise<ConversationResponse> {
- const headers = {
- 'Accept-Encoding': 'gzip, deflate, br, zsdch',
- 'User-Agent': this.ua,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- cookie: this.cookie,
- }
-
- let resp: ConversationResponse | undefined
- try {
- const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' })
- if (response.status === 404) {
- throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR)
- }
- resp = await response.json() as ConversationResponse
- } catch (err) {
- console.error('create conversation error', err)
- }
-
- if (!resp?.result) {
- throw new ChatError('Your VPS or proxy may have been blocked. If in doubt, see https://github.com/weaigc/bingo for help', ErrorCode.BING_IP_FORBIDDEN)
- }
-
- const { value, message } = resp.result || {}
- if (value !== 'Success') {
- const errorMsg = `${value}: ${message}`
- if (value === 'UnauthorizedRequest') {
- if (/fetch failed/i.test(message || '')) {
- throw new ChatError(errorMsg, ErrorCode.BING_IP_FORBIDDEN)
- }
- throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED)
- }
- if (value === 'TryLater') {
- throw new ChatError(errorMsg, ErrorCode.BING_TRY_LATER)
- }
- if (value === 'Forbidden') {
- throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN)
- }
- throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR)
- }
- return resp
- }
-
- private async createContext(conversationStyle: BingConversationStyle) {
- if (!this.conversationContext) {
- const conversation = await this.createConversation()
- this.conversationContext = {
- conversationId: conversation.conversationId,
- conversationSignature: conversation.conversationSignature,
- clientId: conversation.clientId,
- invocationId: 0,
- conversationStyle,
- prompt: '',
- }
- }
- return this.conversationContext
- }
-
- async sendMessage(params: Params) {
- try {
- await this.createContext(params.options.bingConversationStyle)
- Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl })
- return this.sydneyProxy(params)
- } catch (error) {
- params.onEvent({
- type: 'ERROR',
- error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR),
- })
- }
- }
-
- private async sydneyProxy(params: Params) {
- const abortController = new AbortController()
- const response = await fetch(this.endpoint + '/api/sydney', {
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json',
- },
- signal: abortController.signal,
- body: JSON.stringify(this.conversationContext!)
- })
- if (response.status !== 200) {
- params.onEvent({
- type: 'ERROR',
- error: new ChatError(
- 'Unknown error',
- ErrorCode.UNKOWN_ERROR,
- ),
- })
- }
- params.signal?.addEventListener('abort', () => {
- abortController.abort()
- })
-
- const textDecoder = createChunkDecoder()
- for await (const chunk of streamAsyncIterable(response.body!)) {
- this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk)))
- }
- }
-
- async sendWs() {
- const wsConfig: ConstructorParameters<typeof WebSocketAsPromised>[1] = {
- packMessage: websocketUtils.packMessage,
- unpackMessage: websocketUtils.unpackMessage,
- createWebSocket: (url) => new WebSocket(url, {
- headers: {
- 'accept-language': 'zh-CN,zh;q=0.9',
- 'cache-control': 'no-cache',
- 'User-Agent': this.ua,
- pragma: 'no-cache',
- cookie: this.cookie,
- }
- })
- }
- const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig)
-
- wsp.open().then(() => {
- wsp.sendPacked({ protocol: 'json', version: 1 })
- wsp.sendPacked({ type: 6 })
- wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!))
- })
-
- return wsp
- }
-
- private async useWs(params: Params) {
- const wsp = await this.sendWs()
- const watchDog = new WatchDog()
- wsp.onUnpackedMessage.addListener((events) => {
- watchDog.watch(() => {
- wsp.sendPacked({ type: 6 })
- })
- this.parseEvents(params, events)
- })
-
- wsp.onClose.addListener(() => {
- watchDog.reset()
- params.onEvent({ type: 'DONE' })
- wsp.removeAllListeners()
- })
-
- params.signal?.addEventListener('abort', () => {
- wsp.removeAllListeners()
- wsp.close()
- })
- }
-
- private async createImage(prompt: string, id: string) {
- try {
- const headers = {
- 'Accept-Encoding': 'gzip, deflate, br, zsdch',
- 'User-Agent': this.ua,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- cookie: this.cookie,
- }
- const query = new URLSearchParams({
- prompt,
- id
- })
- const response = await fetch(this.endpoint + '/api/image?' + query.toString(),
- {
- method: 'POST',
- headers,
- mode: 'cors',
- credentials: 'include'
- })
- .then(res => res.text())
- if (response) {
- this.lastText += '\n' + response
- }
- } catch (err) {
- console.error('Create Image Error', err)
- }
- }
-
- private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) {
- const imageInfo: ImageInfo = {}
- let imageBase64: string | undefined = undefined
- const knowledgeRequest = {
- imageInfo,
- knowledgeRequest: {
- invokedSkills: [
- 'ImageById'
- ],
- subscriptionId: 'Bing.Chat.Multimodal',
- invokedSkillsRequestData: {
- enableFaceBlur: true
- },
- convoData: {
- convoid: this.conversationContext?.conversationId,
- convotone: conversationStyle,
- }
- },
- }
-
- if (imageUrl.startsWith('data:image/')) {
- imageBase64 = imageUrl.replace('data:image/', '');
- const partIndex = imageBase64.indexOf(',')
- if (partIndex) {
- imageBase64 = imageBase64.substring(partIndex + 1)
- }
- } else {
- imageInfo.url = imageUrl
- }
- return { knowledgeRequest, imageBase64 }
- }
-
- async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise<KBlobResponse | undefined> {
- if (!imageUrl) {
- return
- }
- await this.createContext(conversationStyle)
- const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle)
-
- const response = await fetch(this.endpoint + '/api/kblob',
- {
- headers: {
- 'Content-Type': 'application/json',
- },
- method: 'POST',
- mode: 'cors',
- credentials: 'include',
- body: JSON.stringify(payload),
- })
- .then(res => res.json())
- .catch(e => {
- console.log('Error', e)
- })
- return response
- }
-
- private async generateContent(message: ChatResponseMessage) {
- if (message.contentType === 'IMAGE') {
- this.asyncTasks.push(this.createImage(message.text, message.messageId))
- }
- }
-
- private async parseEvents(params: Params, events: any) {
- const conversation = this.conversationContext!
-
- events?.forEach(async (event: ChatUpdateCompleteResponse) => {
- debug('bing event', event)
- if (event.type === 3) {
- await Promise.all(this.asyncTasks)
- this.asyncTasks = []
- params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } })
- params.onEvent({ type: 'DONE' })
- conversation.invocationId = parseInt(event.invocationId, 10) + 1
- } else if (event.type === 1) {
- const messages = event.arguments[0].messages
- if (messages) {
- const text = convertMessageToMarkdown(messages[0])
- this.lastText = text
- params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } })
- }
- } else if (event.type === 2) {
- const messages = event.item.messages as ChatResponseMessage[] | undefined
- if (!messages) {
- params.onEvent({
- type: 'ERROR',
- error: new ChatError(
- event.item.result.error || 'Unknown error',
- event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT
- : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA)
- : ErrorCode.UNKOWN_ERROR
- ),
- })
- return
- }
- const limited = messages.some((message) =>
- message.contentOrigin === 'TurnLimiter'
- || message.messageType === 'Disengaged'
- )
- if (limited) {
- params.onEvent({
- type: 'ERROR',
- error: new ChatError(
- 'Sorry, you have reached chat limit in this conversation.',
- ErrorCode.CONVERSATION_LIMIT,
- ),
- })
- return
- }
-
- const lastMessage = event.item.messages.at(-1) as ChatResponseMessage
- const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE')
- if (specialMessage) {
- this.generateContent(specialMessage)
- }
-
- if (lastMessage) {
- const text = convertMessageToMarkdown(lastMessage)
- this.lastText = text
- params.onEvent({
- type: 'UPDATE_ANSWER',
- data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions },
- })
- }
- }
- })
- }
-
- resetConversation() {
- this.conversationContext = undefined
- }
-}
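`buildKnowledgeApiPayload` above branches on whether the image reference is a `data:` URL (inline base64 payload) or a plain URL. The same branching, sketched in Python for clarity; the helper name and return shape are illustrative rather than part of the original TypeScript API.

def split_image_url(image_url: str):
    image_info, image_base64 = {}, None
    if image_url.startswith("data:image/"):
        # keep only the payload after the "data:image/<type>;base64," prefix
        _, _, image_base64 = image_url.partition(",")
    else:
        image_info["url"] = image_url
    return image_info, image_base64

print(split_image_url("data:image/png;base64,iVBORw0KGgo=")[1])  # iVBORw0KGgo=
print(split_image_url("https://example.com/cat.png")[0])         # {'url': 'https://example.com/cat.png'}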
diff --git a/spaces/PhotoPranab/Joeythemonster-anything-midjourney-v-4-1/README.md b/spaces/PhotoPranab/Joeythemonster-anything-midjourney-v-4-1/README.md
deleted file mode 100644
index cf31fa376ebc4f713058b1d98bcab4c16e69f88e..0000000000000000000000000000000000000000
--- a/spaces/PhotoPranab/Joeythemonster-anything-midjourney-v-4-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Joeythemonster Anything Midjourney V 4 1
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups_test.py b/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups_test.py
deleted file mode 100644
index f3edf15811b5035ee82f21e54e87b7e87ce413eb..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups_test.py
+++ /dev/null
@@ -1,138 +0,0 @@
-
-import os
-import shutil
-import hashlib
-import time
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' # change this to your Google Drive path
- LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
- WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
- weights_exist = False
- files_to_copy = []
- weights_to_copy = []
-
- def handle_files(root, files, is_weight_files=False):
- nonlocal weights_exist
- for filename in files:
- filepath = os.path.join(root, filename)
- if filename.endswith('.pth') and is_weight_files:
- weights_exist = True
- backup_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- else:
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created folder: {backup_folderpath}', flush=True)
- if is_weight_files:
- weights_to_copy.append((filepath, backup_filepath))
- else:
- files_to_copy.append((filepath, backup_filepath))
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'logs')):
- handle_files(root, files)
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- handle_files(root, files, True)
-
- # Copy files in batches
- total_files = len(files_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(files_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
- # Report progress every 5 seconds or after every 100 files, whichever is less frequent
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying file {i} of {total_files} ({i * 100 / total_files:.2f}%)', end="")
- start_time = time.time()
- print(f'\nImported {len(files_to_copy)} files from Google Drive backup')
-
- # Copy weights in batches
- total_weights = len(weights_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(weights_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
- # Report progress every 5 seconds or after every 100 files, whichever is less frequent
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying weight file {i} of {total_weights} ({i * 100 / total_weights:.2f}%)', end="")
- start_time = time.time()
- if weights_exist:
- print(f'\nImported {len(weights_to_copy)} weight files')
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("\nNo weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def backup_files():
- print("\n Starting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except:
- last_backup_timestamps = {}
-
- while True:
- updated = False
- files_to_copy = []
- files_to_delete = []
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
-
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
-
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- files_to_copy.append((filepath, backup_filepath)) # add to list of files to copy
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- files_to_delete.append(backup_filepath) # add to list of files to delete
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- # Copy files in batches
- if files_to_copy:
- for source, dest in files_to_copy:
- shutil.copy2(source, dest)
- print(f'Copied or updated {len(files_to_copy)} files')
-
- # Delete files in batches
- if files_to_delete:
- for file in files_to_delete:
- os.remove(file)
- print(f'Deleted {len(files_to_delete)} files')
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
- time.sleep(15) # wait for 15 seconds before checking again
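`backup_files` detects changes by remembering each file's modification time from the previous pass and copying only files whose mtime moved forward. A condensed sketch of that scheme, with placeholder paths:

import os
import shutil

def sync_once(src_root, dst_root, last_mtimes):
    # last_mtimes: dict mapping source path -> mtime recorded on the previous pass
    copied = 0
    for root, _, files in os.walk(src_root):
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(dst_root, os.path.relpath(src, src_root))
            mtime = os.path.getmtime(src)
            if last_mtimes.get(src, 0) < mtime:
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
                last_mtimes[src] = mtime
                copied += 1
    return copied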
diff --git a/spaces/Ramse/TTS_Hindi/modules/commons/.ipynb_checkpoints/ssim-checkpoint.py b/spaces/Ramse/TTS_Hindi/modules/commons/.ipynb_checkpoints/ssim-checkpoint.py
deleted file mode 100644
index 0d0241f267ef58b24979e022b05f2a9adf768826..0000000000000000000000000000000000000000
--- a/spaces/Ramse/TTS_Hindi/modules/commons/.ipynb_checkpoints/ssim-checkpoint.py
+++ /dev/null
@@ -1,391 +0,0 @@
-# '''
-# https://github.com/One-sixth/ms_ssim_pytorch/blob/master/ssim.py
-# '''
-#
-# import torch
-# import torch.jit
-# import torch.nn.functional as F
-#
-#
-# @torch.jit.script
-# def create_window(window_size: int, sigma: float, channel: int):
-# '''
-# Create 1-D gauss kernel
-# :param window_size: the size of gauss kernel
-# :param sigma: sigma of normal distribution
-# :param channel: input channel
-# :return: 1D kernel
-# '''
-# coords = torch.arange(window_size, dtype=torch.float)
-# coords -= window_size // 2
-#
-# g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
-# g /= g.sum()
-#
-# g = g.reshape(1, 1, 1, -1).repeat(channel, 1, 1, 1)
-# return g
-#
-#
-# @torch.jit.script
-# def _gaussian_filter(x, window_1d, use_padding: bool):
-# '''
-# Blur input with 1-D kernel
-# :param x: batch of tensors to be blured
-# :param window_1d: 1-D gauss kernel
-# :param use_padding: padding image before conv
-# :return: blured tensors
-# '''
-# C = x.shape[1]
-# padding = 0
-# if use_padding:
-# window_size = window_1d.shape[3]
-# padding = window_size // 2
-# out = F.conv2d(x, window_1d, stride=1, padding=(0, padding), groups=C)
-# out = F.conv2d(out, window_1d.transpose(2, 3), stride=1, padding=(padding, 0), groups=C)
-# return out
-#
-#
-# @torch.jit.script
-# def ssim(X, Y, window, data_range: float, use_padding: bool = False):
-# '''
-# Calculate ssim index for X and Y
-# :param X: images [B, C, H, N_bins]
-# :param Y: images [B, C, H, N_bins]
-# :param window: 1-D gauss kernel
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param use_padding: padding image before conv
-# :return:
-# '''
-#
-# K1 = 0.01
-# K2 = 0.03
-# compensation = 1.0
-#
-# C1 = (K1 * data_range) ** 2
-# C2 = (K2 * data_range) ** 2
-#
-# mu1 = _gaussian_filter(X, window, use_padding)
-# mu2 = _gaussian_filter(Y, window, use_padding)
-# sigma1_sq = _gaussian_filter(X * X, window, use_padding)
-# sigma2_sq = _gaussian_filter(Y * Y, window, use_padding)
-# sigma12 = _gaussian_filter(X * Y, window, use_padding)
-#
-# mu1_sq = mu1.pow(2)
-# mu2_sq = mu2.pow(2)
-# mu1_mu2 = mu1 * mu2
-#
-# sigma1_sq = compensation * (sigma1_sq - mu1_sq)
-# sigma2_sq = compensation * (sigma2_sq - mu2_sq)
-# sigma12 = compensation * (sigma12 - mu1_mu2)
-#
-# cs_map = (2 * sigma12 + C2) / (sigma1_sq + sigma2_sq + C2)
-# # Fixes an issue where negative cs_map values caused ms_ssim to output NaN.
-# cs_map = cs_map.clamp_min(0.)
-# ssim_map = ((2 * mu1_mu2 + C1) / (mu1_sq + mu2_sq + C1)) * cs_map
-#
-# ssim_val = ssim_map.mean(dim=(1, 2, 3)) # reduce along CHW
-# cs = cs_map.mean(dim=(1, 2, 3))
-#
-# return ssim_val, cs
-#
-#
-# @torch.jit.script
-# def ms_ssim(X, Y, window, data_range: float, weights, use_padding: bool = False, eps: float = 1e-8):
-# '''
-# interface of ms-ssim
-# :param X: a batch of images, (N,C,H,W)
-# :param Y: a batch of images, (N,C,H,W)
-# :param window: 1-D gauss kernel
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param weights: weights for different levels
-# :param use_padding: padding image before conv
-# :param eps: use for avoid grad nan.
-# :return:
-# '''
-# levels = weights.shape[0]
-# cs_vals = []
-# ssim_vals = []
-# for _ in range(levels):
-# ssim_val, cs = ssim(X, Y, window=window, data_range=data_range, use_padding=use_padding)
-# # Used to fix an issue: when c = a ** b and a is 0, c.backward() causes a.grad to become inf.
-# ssim_val = ssim_val.clamp_min(eps)
-# cs = cs.clamp_min(eps)
-# cs_vals.append(cs)
-#
-# ssim_vals.append(ssim_val)
-# padding = (X.shape[2] % 2, X.shape[3] % 2)
-# X = F.avg_pool2d(X, kernel_size=2, stride=2, padding=padding)
-# Y = F.avg_pool2d(Y, kernel_size=2, stride=2, padding=padding)
-#
-# cs_vals = torch.stack(cs_vals, dim=0)
-# ms_ssim_val = torch.prod((cs_vals[:-1] ** weights[:-1].unsqueeze(1)) * (ssim_vals[-1] ** weights[-1]), dim=0)
-# return ms_ssim_val
-#
-#
-# class SSIM(torch.jit.ScriptModule):
-# __constants__ = ['data_range', 'use_padding']
-#
-# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False):
-# '''
-# :param window_size: the size of gauss kernel
-# :param window_sigma: sigma of normal distribution
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param channel: input channels (default: 3)
-# :param use_padding: padding image before conv
-# '''
-# super().__init__()
-# assert window_size % 2 == 1, 'Window size must be odd.'
-# window = create_window(window_size, window_sigma, channel)
-# self.register_buffer('window', window)
-# self.data_range = data_range
-# self.use_padding = use_padding
-#
-# @torch.jit.script_method
-# def forward(self, X, Y):
-# r = ssim(X, Y, window=self.window, data_range=self.data_range, use_padding=self.use_padding)
-# return r[0]
-#
-#
-# class MS_SSIM(torch.jit.ScriptModule):
-# __constants__ = ['data_range', 'use_padding', 'eps']
-#
-# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False, weights=None,
-# levels=None, eps=1e-8):
-# '''
-# class for ms-ssim
-# :param window_size: the size of gauss kernel
-# :param window_sigma: sigma of normal distribution
-# :param data_range: value range of input images. (usually 1.0 or 255)
-# :param channel: input channels
-# :param use_padding: padding image before conv
-# :param weights: weights for different levels. (default [0.0448, 0.2856, 0.3001, 0.2363, 0.1333])
-# :param levels: number of downsampling
-# :param eps: used to fix an issue: when c = a ** b and a is 0, c.backward() causes a.grad to become inf.
-# '''
-# super().__init__()
-# assert window_size % 2 == 1, 'Window size must be odd.'
-# self.data_range = data_range
-# self.use_padding = use_padding
-# self.eps = eps
-#
-# window = create_window(window_size, window_sigma, channel)
-# self.register_buffer('window', window)
-#
-# if weights is None:
-# weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]
-# weights = torch.tensor(weights, dtype=torch.float)
-#
-# if levels is not None:
-# weights = weights[:levels]
-# weights = weights / weights.sum()
-#
-# self.register_buffer('weights', weights)
-#
-# @torch.jit.script_method
-# def forward(self, X, Y):
-# return ms_ssim(X, Y, window=self.window, data_range=self.data_range, weights=self.weights,
-# use_padding=self.use_padding, eps=self.eps)
-#
-#
-# if __name__ == '__main__':
-# print('Simple Test')
-# im = torch.randint(0, 255, (5, 3, 256, 256), dtype=torch.float, device='cuda')
-# img1 = im / 255
-# img2 = img1 * 0.5
-#
-# losser = SSIM(data_range=1.).cuda()
-# loss = losser(img1, img2).mean()
-#
-# losser2 = MS_SSIM(data_range=1.).cuda()
-# loss2 = losser2(img1, img2).mean()
-#
-# print(loss.item())
-# print(loss2.item())
-#
-# if __name__ == '__main__':
-# print('Training Test')
-# import cv2
-# import torch.optim
-# import numpy as np
-# import imageio
-# import time
-#
-# out_test_video = False
-# # It is better not to write a GIF directly (the file gets very large); write an MKV first and convert it to GIF with ffmpeg
-# video_use_gif = False
-#
-# im = cv2.imread('test_img1.jpg', 1)
-# t_im = torch.from_numpy(im).cuda().permute(2, 0, 1).float()[None] / 255.
-#
-# if out_test_video:
-# if video_use_gif:
-# fps = 0.5
-# out_wh = (im.shape[1] // 2, im.shape[0] // 2)
-# suffix = '.gif'
-# else:
-# fps = 5
-# out_wh = (im.shape[1], im.shape[0])
-# suffix = '.mkv'
-# video_last_time = time.perf_counter()
-# video = imageio.get_writer('ssim_test' + suffix, fps=fps)
-#
-# # Test SSIM
-# print('Training SSIM')
-# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255.
-# rand_im.requires_grad = True
-# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8)
-# losser = SSIM(data_range=1., channel=t_im.shape[1]).cuda()
-# ssim_score = 0
-# while ssim_score < 0.999:
-# optim.zero_grad()
-# loss = losser(rand_im, t_im)
-# (-loss).sum().backward()
-# ssim_score = loss.item()
-# optim.step()
-# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0]
-# r_im = cv2.putText(r_im, 'ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2)
-#
-# if out_test_video:
-# if time.perf_counter() - video_last_time > 1. / fps:
-# video_last_time = time.perf_counter()
-# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB)
-# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA)
-# if isinstance(out_frame, cv2.UMat):
-# out_frame = out_frame.get()
-# video.append_data(out_frame)
-#
-# cv2.imshow('ssim', r_im)
-# cv2.setWindowTitle('ssim', 'ssim %f' % ssim_score)
-# cv2.waitKey(1)
-#
-# if out_test_video:
-# video.close()
-#
-#     # test MS-SSIM
-# if out_test_video:
-# if video_use_gif:
-# fps = 0.5
-# out_wh = (im.shape[1] // 2, im.shape[0] // 2)
-# suffix = '.gif'
-# else:
-# fps = 5
-# out_wh = (im.shape[1], im.shape[0])
-# suffix = '.mkv'
-# video_last_time = time.perf_counter()
-# video = imageio.get_writer('ms_ssim_test' + suffix, fps=fps)
-#
-# print('Training MS_SSIM')
-# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255.
-# rand_im.requires_grad = True
-# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8)
-# losser = MS_SSIM(data_range=1., channel=t_im.shape[1]).cuda()
-# ssim_score = 0
-# while ssim_score < 0.999:
-# optim.zero_grad()
-# loss = losser(rand_im, t_im)
-# (-loss).sum().backward()
-# ssim_score = loss.item()
-# optim.step()
-# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0]
-# r_im = cv2.putText(r_im, 'ms_ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2)
-#
-# if out_test_video:
-# if time.perf_counter() - video_last_time > 1. / fps:
-# video_last_time = time.perf_counter()
-# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB)
-# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA)
-# if isinstance(out_frame, cv2.UMat):
-# out_frame = out_frame.get()
-# video.append_data(out_frame)
-#
-# cv2.imshow('ms_ssim', r_im)
-# cv2.setWindowTitle('ms_ssim', 'ms_ssim %f' % ssim_score)
-# cv2.waitKey(1)
-#
-# if out_test_video:
-# video.close()
-
-"""
-Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim
-"""
-
-import torch
-import torch.nn.functional as F
-from torch.autograd import Variable
-import numpy as np
-from math import exp
-
-
-def gaussian(window_size, sigma):
- gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)])
- return gauss / gauss.sum()
-
-
-def create_window(window_size, channel):
- _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
- _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
- window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous())
- return window
-
-
-def _ssim(img1, img2, window, window_size, channel, size_average=True):
- mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel)
- mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel)
-
- mu1_sq = mu1.pow(2)
- mu2_sq = mu2.pow(2)
- mu1_mu2 = mu1 * mu2
-
- sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq
- sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq
- sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2
-
- C1 = 0.01 ** 2
- C2 = 0.03 ** 2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
-
- if size_average:
- return ssim_map.mean()
- else:
- return ssim_map.mean(1)
-
-
-class SSIM(torch.nn.Module):
- def __init__(self, window_size=11, size_average=True):
- super(SSIM, self).__init__()
- self.window_size = window_size
- self.size_average = size_average
- self.channel = 1
- self.window = create_window(window_size, self.channel)
-
- def forward(self, img1, img2):
- (_, channel, _, _) = img1.size()
-
- if channel == self.channel and self.window.data.type() == img1.data.type():
- window = self.window
- else:
- window = create_window(self.window_size, channel)
-
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
-
- self.window = window
- self.channel = channel
-
- return _ssim(img1, img2, window, self.window_size, channel, self.size_average)
-
-
-window = None
-
-
-def ssim(img1, img2, window_size=11, size_average=True):
- (_, channel, _, _) = img1.size()
- global window
- if window is None:
- window = create_window(window_size, channel)
- if img1.is_cuda:
- window = window.cuda(img1.get_device())
- window = window.type_as(img1)
- return _ssim(img1, img2, window, window_size, channel, size_average)
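
# Illustrative usage sketch (added for clarity, not part of the deleted file):
# calling the SSIM module and the ssim() helper defined above, assuming two
# image batches of shape (N, C, H, W) with values in [0, 1].
import torch

img_a = torch.rand(2, 3, 64, 64)
img_b = (img_a * 0.9 + 0.05).clamp(0, 1)   # a slightly perturbed copy

metric = SSIM(window_size=11, size_average=True)
print("module form:", metric(img_a, img_b).item())            # mean SSIM over the batch
print("functional form:", ssim(img_a, img_b, window_size=11).item())
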
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/wheel.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/wheel.py
deleted file mode 100644
index c79941398a2c1d502e60cd0dd0703d8c0530a30f..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/wheel.py
+++ /dev/null
@@ -1,738 +0,0 @@
-"""Support for installing and building the "wheel" binary package format.
-"""
-
-import collections
-import compileall
-import contextlib
-import csv
-import importlib
-import logging
-import os.path
-import re
-import shutil
-import sys
-import warnings
-from base64 import urlsafe_b64encode
-from email.message import Message
-from itertools import chain, filterfalse, starmap
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- BinaryIO,
- Callable,
- Dict,
- Generator,
- Iterable,
- Iterator,
- List,
- NewType,
- Optional,
- Sequence,
- Set,
- Tuple,
- Union,
- cast,
-)
-from zipfile import ZipFile, ZipInfo
-
-from pip._vendor.distlib.scripts import ScriptMaker
-from pip._vendor.distlib.util import get_export_entry
-from pip._vendor.packaging.utils import canonicalize_name
-
-from pip._internal.exceptions import InstallationError
-from pip._internal.locations import get_major_minor_version
-from pip._internal.metadata import (
- BaseDistribution,
- FilesystemWheel,
- get_wheel_distribution,
-)
-from pip._internal.models.direct_url import DIRECT_URL_METADATA_NAME, DirectUrl
-from pip._internal.models.scheme import SCHEME_KEYS, Scheme
-from pip._internal.utils.filesystem import adjacent_tmp_file, replace
-from pip._internal.utils.misc import captured_stdout, ensure_dir, hash_file, partition
-from pip._internal.utils.unpacking import (
- current_umask,
- is_within_directory,
- set_extracted_file_to_default_mode_plus_executable,
- zip_item_is_executable,
-)
-from pip._internal.utils.wheel import parse_wheel
-
-if TYPE_CHECKING:
- from typing import Protocol
-
- class File(Protocol):
- src_record_path: "RecordPath"
- dest_path: str
- changed: bool
-
- def save(self) -> None:
- pass
-
-
-logger = logging.getLogger(__name__)
-
-RecordPath = NewType("RecordPath", str)
-InstalledCSVRow = Tuple[RecordPath, str, Union[int, str]]
-
-
-def rehash(path: str, blocksize: int = 1 << 20) -> Tuple[str, str]:
- """Return (encoded_digest, length) for path using hashlib.sha256()"""
- h, length = hash_file(path, blocksize)
- digest = "sha256=" + urlsafe_b64encode(h.digest()).decode("latin1").rstrip("=")
- return (digest, str(length))
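
# Standalone sketch (illustration only, not part of wheel.py): the RECORD hash
# format that rehash() produces, i.e. a sha256 digest, urlsafe-base64 encoded,
# with the '=' padding stripped.
import hashlib
from base64 import urlsafe_b64encode

def record_digest(data: bytes) -> str:
    digest = hashlib.sha256(data).digest()
    return "sha256=" + urlsafe_b64encode(digest).decode("latin1").rstrip("=")

print(record_digest(b"print('hello')\n"))   # -> 'sha256=' plus 43 base64 characters
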
-
-
-def csv_io_kwargs(mode: str) -> Dict[str, Any]:
- """Return keyword arguments to properly open a CSV file
- in the given mode.
- """
- return {"mode": mode, "newline": "", "encoding": "utf-8"}
-
-
-def fix_script(path: str) -> bool:
- """Replace #!python with #!/path/to/python
- Return True if file was changed.
- """
- # XXX RECORD hashes will need to be updated
- assert os.path.isfile(path)
-
- with open(path, "rb") as script:
- firstline = script.readline()
- if not firstline.startswith(b"#!python"):
- return False
- exename = sys.executable.encode(sys.getfilesystemencoding())
- firstline = b"#!" + exename + os.linesep.encode("ascii")
- rest = script.read()
- with open(path, "wb") as script:
- script.write(firstline)
- script.write(rest)
- return True
-
-
-def wheel_root_is_purelib(metadata: Message) -> bool:
- return metadata.get("Root-Is-Purelib", "").lower() == "true"
-
-
-def get_entrypoints(dist: BaseDistribution) -> Tuple[Dict[str, str], Dict[str, str]]:
- console_scripts = {}
- gui_scripts = {}
- for entry_point in dist.iter_entry_points():
- if entry_point.group == "console_scripts":
- console_scripts[entry_point.name] = entry_point.value
- elif entry_point.group == "gui_scripts":
- gui_scripts[entry_point.name] = entry_point.value
- return console_scripts, gui_scripts
-
-
-def message_about_scripts_not_on_PATH(scripts: Sequence[str]) -> Optional[str]:
- """Determine if any scripts are not on PATH and format a warning.
- Returns a warning message if one or more scripts are not on PATH,
- otherwise None.
- """
- if not scripts:
- return None
-
- # Group scripts by the path they were installed in
- grouped_by_dir: Dict[str, Set[str]] = collections.defaultdict(set)
- for destfile in scripts:
- parent_dir = os.path.dirname(destfile)
- script_name = os.path.basename(destfile)
- grouped_by_dir[parent_dir].add(script_name)
-
- # We don't want to warn for directories that are on PATH.
- not_warn_dirs = [
- os.path.normcase(i).rstrip(os.sep)
- for i in os.environ.get("PATH", "").split(os.pathsep)
- ]
- # If an executable sits with sys.executable, we don't warn for it.
- # This covers the case of venv invocations without activating the venv.
- not_warn_dirs.append(os.path.normcase(os.path.dirname(sys.executable)))
- warn_for: Dict[str, Set[str]] = {
- parent_dir: scripts
- for parent_dir, scripts in grouped_by_dir.items()
- if os.path.normcase(parent_dir) not in not_warn_dirs
- }
- if not warn_for:
- return None
-
- # Format a message
- msg_lines = []
- for parent_dir, dir_scripts in warn_for.items():
- sorted_scripts: List[str] = sorted(dir_scripts)
- if len(sorted_scripts) == 1:
- start_text = "script {} is".format(sorted_scripts[0])
- else:
- start_text = "scripts {} are".format(
- ", ".join(sorted_scripts[:-1]) + " and " + sorted_scripts[-1]
- )
-
- msg_lines.append(
- "The {} installed in '{}' which is not on PATH.".format(
- start_text, parent_dir
- )
- )
-
- last_line_fmt = (
- "Consider adding {} to PATH or, if you prefer "
- "to suppress this warning, use --no-warn-script-location."
- )
- if len(msg_lines) == 1:
- msg_lines.append(last_line_fmt.format("this directory"))
- else:
- msg_lines.append(last_line_fmt.format("these directories"))
-
- # Add a note if any directory starts with ~
- warn_for_tilde = any(
- i[0] == "~" for i in os.environ.get("PATH", "").split(os.pathsep) if i
- )
- if warn_for_tilde:
- tilde_warning_msg = (
- "NOTE: The current PATH contains path(s) starting with `~`, "
- "which may not be expanded by all applications."
- )
- msg_lines.append(tilde_warning_msg)
-
- # Returns the formatted multiline message
- return "\n".join(msg_lines)
-
-
-def _normalized_outrows(
- outrows: Iterable[InstalledCSVRow],
-) -> List[Tuple[str, str, str]]:
- """Normalize the given rows of a RECORD file.
-
- Items in each row are converted into str. Rows are then sorted to make
- the value more predictable for tests.
-
- Each row is a 3-tuple (path, hash, size) and corresponds to a record of
- a RECORD file (see PEP 376 and PEP 427 for details). For the rows
- passed to this function, the size can be an integer as an int or string,
- or the empty string.
- """
- # Normally, there should only be one row per path, in which case the
- # second and third elements don't come into play when sorting.
- # However, in cases in the wild where a path might happen to occur twice,
- # we don't want the sort operation to trigger an error (but still want
- # determinism). Since the third element can be an int or string, we
- # coerce each element to a string to avoid a TypeError in this case.
- # For additional background, see--
- # https://github.com/pypa/pip/issues/5868
- return sorted(
- (record_path, hash_, str(size)) for record_path, hash_, size in outrows
- )
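
# Tiny illustration (not part of wheel.py) of the normalization above: sizes
# are coerced to str so rows with int and empty-string sizes sort without a
# TypeError, and sorting keeps the RECORD output deterministic.
rows = [("pkg/b.py", "sha256=abc", 10), ("pkg/a.py", "", "")]
print(sorted((path, hash_, str(size)) for path, hash_, size in rows))
# -> [('pkg/a.py', '', ''), ('pkg/b.py', 'sha256=abc', '10')]
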
-
-
-def _record_to_fs_path(record_path: RecordPath, lib_dir: str) -> str:
- return os.path.join(lib_dir, record_path)
-
-
-def _fs_to_record_path(path: str, lib_dir: str) -> RecordPath:
- # On Windows, do not handle relative paths if they belong to different
- # logical disks
- if os.path.splitdrive(path)[0].lower() == os.path.splitdrive(lib_dir)[0].lower():
- path = os.path.relpath(path, lib_dir)
-
- path = path.replace(os.path.sep, "/")
- return cast("RecordPath", path)
-
-
-def get_csv_rows_for_installed(
- old_csv_rows: List[List[str]],
- installed: Dict[RecordPath, RecordPath],
- changed: Set[RecordPath],
- generated: List[str],
- lib_dir: str,
-) -> List[InstalledCSVRow]:
- """
- :param installed: A map from archive RECORD path to installation RECORD
- path.
- """
- installed_rows: List[InstalledCSVRow] = []
- for row in old_csv_rows:
- if len(row) > 3:
- logger.warning("RECORD line has more than three elements: %s", row)
- old_record_path = cast("RecordPath", row[0])
- new_record_path = installed.pop(old_record_path, old_record_path)
- if new_record_path in changed:
- digest, length = rehash(_record_to_fs_path(new_record_path, lib_dir))
- else:
- digest = row[1] if len(row) > 1 else ""
- length = row[2] if len(row) > 2 else ""
- installed_rows.append((new_record_path, digest, length))
- for f in generated:
- path = _fs_to_record_path(f, lib_dir)
- digest, length = rehash(f)
- installed_rows.append((path, digest, length))
- for installed_record_path in installed.values():
- installed_rows.append((installed_record_path, "", ""))
- return installed_rows
-
-
-def get_console_script_specs(console: Dict[str, str]) -> List[str]:
- """
- Given the mapping from entrypoint name to callable, return the relevant
- console script specs.
- """
- # Don't mutate caller's version
- console = console.copy()
-
- scripts_to_generate = []
-
- # Special case pip and setuptools to generate versioned wrappers
- #
- # The issue is that some projects (specifically, pip and setuptools) use
- # code in setup.py to create "versioned" entry points - pip2.7 on Python
- # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into
- # the wheel metadata at build time, and so if the wheel is installed with
- # a *different* version of Python the entry points will be wrong. The
- # correct fix for this is to enhance the metadata to be able to describe
- # such versioned entry points, but that won't happen till Metadata 2.0 is
- # available.
- # In the meantime, projects using versioned entry points will either have
- # incorrect versioned entry points, or they will not be able to distribute
- # "universal" wheels (i.e., they will need a wheel per Python version).
- #
- # Because setuptools and pip are bundled with _ensurepip and virtualenv,
- # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we
- # override the versioned entry points in the wheel and generate the
- # correct ones. This code is purely a short-term measure until Metadata 2.0
- # is available.
- #
- # To add the level of hack in this section of code, in order to support
- # ensurepip this code will look for an ``ENSUREPIP_OPTIONS`` environment
- # variable which will control which version scripts get installed.
- #
- # ENSUREPIP_OPTIONS=altinstall
- # - Only pipX.Y and easy_install-X.Y will be generated and installed
- # ENSUREPIP_OPTIONS=install
- # - pipX.Y, pipX, easy_install-X.Y will be generated and installed. Note
- # that this option is technically if ENSUREPIP_OPTIONS is set and is
- # not altinstall
- # DEFAULT
- # - The default behavior is to install pip, pipX, pipX.Y, easy_install
- # and easy_install-X.Y.
- pip_script = console.pop("pip", None)
- if pip_script:
- if "ENSUREPIP_OPTIONS" not in os.environ:
- scripts_to_generate.append("pip = " + pip_script)
-
- if os.environ.get("ENSUREPIP_OPTIONS", "") != "altinstall":
- scripts_to_generate.append(
- "pip{} = {}".format(sys.version_info[0], pip_script)
- )
-
- scripts_to_generate.append(f"pip{get_major_minor_version()} = {pip_script}")
- # Delete any other versioned pip entry points
- pip_ep = [k for k in console if re.match(r"pip(\d+(\.\d+)?)?$", k)]
- for k in pip_ep:
- del console[k]
- easy_install_script = console.pop("easy_install", None)
- if easy_install_script:
- if "ENSUREPIP_OPTIONS" not in os.environ:
- scripts_to_generate.append("easy_install = " + easy_install_script)
-
- scripts_to_generate.append(
- "easy_install-{} = {}".format(
- get_major_minor_version(), easy_install_script
- )
- )
- # Delete any other versioned easy_install entry points
- easy_install_ep = [
- k for k in console if re.match(r"easy_install(-\d+\.\d+)?$", k)
- ]
- for k in easy_install_ep:
- del console[k]
-
- # Generate the console entry points specified in the wheel
- scripts_to_generate.extend(starmap("{} = {}".format, console.items()))
-
- return scripts_to_generate
-
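# Hypothetical illustration (not part of wheel.py) of the spec strings built
# above: each entry is "<name> = <module>:<callable>", the format that
# distlib's ScriptMaker.make_multiple() consumes.
console = {"mytool": "mypkg.cli:main"}
print(["{} = {}".format(name, target) for name, target in console.items()])
# -> ['mytool = mypkg.cli:main']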
-
-class ZipBackedFile:
- def __init__(
- self, src_record_path: RecordPath, dest_path: str, zip_file: ZipFile
- ) -> None:
- self.src_record_path = src_record_path
- self.dest_path = dest_path
- self._zip_file = zip_file
- self.changed = False
-
- def _getinfo(self) -> ZipInfo:
- return self._zip_file.getinfo(self.src_record_path)
-
- def save(self) -> None:
- # directory creation is lazy and after file filtering
- # to ensure we don't install empty dirs; empty dirs can't be
- # uninstalled.
- parent_dir = os.path.dirname(self.dest_path)
- ensure_dir(parent_dir)
-
- # When we open the output file below, any existing file is truncated
- # before we start writing the new contents. This is fine in most
- # cases, but can cause a segfault if pip has loaded a shared
- # object (e.g. from pyopenssl through its vendored urllib3)
- # Since the shared object is mmap'd an attempt to call a
- # symbol in it will then cause a segfault. Unlinking the file
- # allows writing of new contents while allowing the process to
- # continue to use the old copy.
- if os.path.exists(self.dest_path):
- os.unlink(self.dest_path)
-
- zipinfo = self._getinfo()
-
- with self._zip_file.open(zipinfo) as f:
- with open(self.dest_path, "wb") as dest:
- shutil.copyfileobj(f, dest)
-
- if zip_item_is_executable(zipinfo):
- set_extracted_file_to_default_mode_plus_executable(self.dest_path)
-
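# Minimal standalone sketch (not part of wheel.py) of the unlink-then-write
# pattern ZipBackedFile.save() uses above: deleting the old file first leaves
# any mmap'd copy of it intact instead of truncating it out from under a
# shared object the process has already loaded.
import os

def overwrite_without_truncating(path: str, data: bytes) -> None:
    if os.path.exists(path):
        os.unlink(path)          # the old inode lives on for open handles and mmaps
    with open(path, "wb") as f:
        f.write(data)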
-
-class ScriptFile:
- def __init__(self, file: "File") -> None:
- self._file = file
- self.src_record_path = self._file.src_record_path
- self.dest_path = self._file.dest_path
- self.changed = False
-
- def save(self) -> None:
- self._file.save()
- self.changed = fix_script(self.dest_path)
-
-
-class MissingCallableSuffix(InstallationError):
- def __init__(self, entry_point: str) -> None:
- super().__init__(
- "Invalid script entry point: {} - A callable "
- "suffix is required. Cf https://packaging.python.org/"
- "specifications/entry-points/#use-for-scripts for more "
- "information.".format(entry_point)
- )
-
-
-def _raise_for_invalid_entrypoint(specification: str) -> None:
- entry = get_export_entry(specification)
- if entry is not None and entry.suffix is None:
- raise MissingCallableSuffix(str(entry))
-
-
-class PipScriptMaker(ScriptMaker):
- def make(
- self, specification: str, options: Optional[Dict[str, Any]] = None
- ) -> List[str]:
- _raise_for_invalid_entrypoint(specification)
- return super().make(specification, options)
-
-
-def _install_wheel(
- name: str,
- wheel_zip: ZipFile,
- wheel_path: str,
- scheme: Scheme,
- pycompile: bool = True,
- warn_script_location: bool = True,
- direct_url: Optional[DirectUrl] = None,
- requested: bool = False,
-) -> None:
- """Install a wheel.
-
- :param name: Name of the project to install
- :param wheel_zip: open ZipFile for wheel being installed
- :param scheme: Distutils scheme dictating the install directories
- :param req_description: String used in place of the requirement, for
- logging
- :param pycompile: Whether to byte-compile installed Python files
- :param warn_script_location: Whether to check that scripts are installed
- into a directory on PATH
- :raises UnsupportedWheel:
- * when the directory holds an unpacked wheel with incompatible
- Wheel-Version
- * when the .dist-info dir does not match the wheel
- """
- info_dir, metadata = parse_wheel(wheel_zip, name)
-
- if wheel_root_is_purelib(metadata):
- lib_dir = scheme.purelib
- else:
- lib_dir = scheme.platlib
-
- # Record details of the files moved
- # installed = files copied from the wheel to the destination
- # changed = files changed while installing (scripts #! line typically)
- # generated = files newly generated during the install (script wrappers)
- installed: Dict[RecordPath, RecordPath] = {}
- changed: Set[RecordPath] = set()
- generated: List[str] = []
-
- def record_installed(
- srcfile: RecordPath, destfile: str, modified: bool = False
- ) -> None:
- """Map archive RECORD paths to installation RECORD paths."""
- newpath = _fs_to_record_path(destfile, lib_dir)
- installed[srcfile] = newpath
- if modified:
- changed.add(newpath)
-
- def is_dir_path(path: RecordPath) -> bool:
- return path.endswith("/")
-
- def assert_no_path_traversal(dest_dir_path: str, target_path: str) -> None:
- if not is_within_directory(dest_dir_path, target_path):
- message = (
- "The wheel {!r} has a file {!r} trying to install"
- " outside the target directory {!r}"
- )
- raise InstallationError(
- message.format(wheel_path, target_path, dest_dir_path)
- )
-
- def root_scheme_file_maker(
- zip_file: ZipFile, dest: str
- ) -> Callable[[RecordPath], "File"]:
- def make_root_scheme_file(record_path: RecordPath) -> "File":
- normed_path = os.path.normpath(record_path)
- dest_path = os.path.join(dest, normed_path)
- assert_no_path_traversal(dest, dest_path)
- return ZipBackedFile(record_path, dest_path, zip_file)
-
- return make_root_scheme_file
-
- def data_scheme_file_maker(
- zip_file: ZipFile, scheme: Scheme
- ) -> Callable[[RecordPath], "File"]:
- scheme_paths = {key: getattr(scheme, key) for key in SCHEME_KEYS}
-
- def make_data_scheme_file(record_path: RecordPath) -> "File":
- normed_path = os.path.normpath(record_path)
- try:
- _, scheme_key, dest_subpath = normed_path.split(os.path.sep, 2)
- except ValueError:
- message = (
- "Unexpected file in {}: {!r}. .data directory contents"
- " should be named like: '/'."
- ).format(wheel_path, record_path)
- raise InstallationError(message)
-
- try:
- scheme_path = scheme_paths[scheme_key]
- except KeyError:
- valid_scheme_keys = ", ".join(sorted(scheme_paths))
- message = (
- "Unknown scheme key used in {}: {} (for file {!r}). .data"
- " directory contents should be in subdirectories named"
- " with a valid scheme key ({})"
- ).format(wheel_path, scheme_key, record_path, valid_scheme_keys)
- raise InstallationError(message)
-
- dest_path = os.path.join(scheme_path, dest_subpath)
- assert_no_path_traversal(scheme_path, dest_path)
- return ZipBackedFile(record_path, dest_path, zip_file)
-
- return make_data_scheme_file
-
- def is_data_scheme_path(path: RecordPath) -> bool:
- return path.split("/", 1)[0].endswith(".data")
-
- paths = cast(List[RecordPath], wheel_zip.namelist())
- file_paths = filterfalse(is_dir_path, paths)
- root_scheme_paths, data_scheme_paths = partition(is_data_scheme_path, file_paths)
-
- make_root_scheme_file = root_scheme_file_maker(wheel_zip, lib_dir)
- files: Iterator[File] = map(make_root_scheme_file, root_scheme_paths)
-
- def is_script_scheme_path(path: RecordPath) -> bool:
- parts = path.split("/", 2)
- return len(parts) > 2 and parts[0].endswith(".data") and parts[1] == "scripts"
-
- other_scheme_paths, script_scheme_paths = partition(
- is_script_scheme_path, data_scheme_paths
- )
-
- make_data_scheme_file = data_scheme_file_maker(wheel_zip, scheme)
- other_scheme_files = map(make_data_scheme_file, other_scheme_paths)
- files = chain(files, other_scheme_files)
-
- # Get the defined entry points
- distribution = get_wheel_distribution(
- FilesystemWheel(wheel_path),
- canonicalize_name(name),
- )
- console, gui = get_entrypoints(distribution)
-
- def is_entrypoint_wrapper(file: "File") -> bool:
- # EP, EP.exe and EP-script.py are scripts generated for
- # entry point EP by setuptools
- path = file.dest_path
- name = os.path.basename(path)
- if name.lower().endswith(".exe"):
- matchname = name[:-4]
- elif name.lower().endswith("-script.py"):
- matchname = name[:-10]
- elif name.lower().endswith(".pya"):
- matchname = name[:-4]
- else:
- matchname = name
- # Ignore setuptools-generated scripts
- return matchname in console or matchname in gui
-
- script_scheme_files: Iterator[File] = map(
- make_data_scheme_file, script_scheme_paths
- )
- script_scheme_files = filterfalse(is_entrypoint_wrapper, script_scheme_files)
- script_scheme_files = map(ScriptFile, script_scheme_files)
- files = chain(files, script_scheme_files)
-
- for file in files:
- file.save()
- record_installed(file.src_record_path, file.dest_path, file.changed)
-
- def pyc_source_file_paths() -> Generator[str, None, None]:
- # We de-duplicate installation paths, since there can be overlap (e.g.
- # file in .data maps to same location as file in wheel root).
- # Sorting installation paths makes it easier to reproduce and debug
- # issues related to permissions on existing files.
- for installed_path in sorted(set(installed.values())):
- full_installed_path = os.path.join(lib_dir, installed_path)
- if not os.path.isfile(full_installed_path):
- continue
- if not full_installed_path.endswith(".py"):
- continue
- yield full_installed_path
-
- def pyc_output_path(path: str) -> str:
- """Return the path the pyc file would have been written to."""
- return importlib.util.cache_from_source(path)
-
- # Compile all of the pyc files for the installed files
- if pycompile:
- with captured_stdout() as stdout:
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore")
- for path in pyc_source_file_paths():
- success = compileall.compile_file(path, force=True, quiet=True)
- if success:
- pyc_path = pyc_output_path(path)
- assert os.path.exists(pyc_path)
- pyc_record_path = cast(
- "RecordPath", pyc_path.replace(os.path.sep, "/")
- )
- record_installed(pyc_record_path, pyc_path)
- logger.debug(stdout.getvalue())
-
- maker = PipScriptMaker(None, scheme.scripts)
-
- # Ensure old scripts are overwritten.
- # See https://github.com/pypa/pip/issues/1800
- maker.clobber = True
-
- # Ensure we don't generate any variants for scripts because this is almost
- # never what somebody wants.
- # See https://bitbucket.org/pypa/distlib/issue/35/
- maker.variants = {""}
-
- # This is required because otherwise distlib creates scripts that are not
- # executable.
- # See https://bitbucket.org/pypa/distlib/issue/32/
- maker.set_mode = True
-
- # Generate the console and GUI entry points specified in the wheel
- scripts_to_generate = get_console_script_specs(console)
-
- gui_scripts_to_generate = list(starmap("{} = {}".format, gui.items()))
-
- generated_console_scripts = maker.make_multiple(scripts_to_generate)
- generated.extend(generated_console_scripts)
-
- generated.extend(maker.make_multiple(gui_scripts_to_generate, {"gui": True}))
-
- if warn_script_location:
- msg = message_about_scripts_not_on_PATH(generated_console_scripts)
- if msg is not None:
- logger.warning(msg)
-
- generated_file_mode = 0o666 & ~current_umask()
-
- @contextlib.contextmanager
- def _generate_file(path: str, **kwargs: Any) -> Generator[BinaryIO, None, None]:
- with adjacent_tmp_file(path, **kwargs) as f:
- yield f
- os.chmod(f.name, generated_file_mode)
- replace(f.name, path)
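
# Hedged standalone sketch (not part of wheel.py) of the pattern _generate_file()
# implements: write to an adjacent temporary file, then atomically replace the
# target so the generated file is never observed half-written. The real helper
# also fixes the file mode before replacing; that step is omitted here.
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.replace(tmp_path, path)   # atomic rename into place
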
-
- dest_info_dir = os.path.join(lib_dir, info_dir)
-
- # Record pip as the installer
- installer_path = os.path.join(dest_info_dir, "INSTALLER")
- with _generate_file(installer_path) as installer_file:
- installer_file.write(b"pip\n")
- generated.append(installer_path)
-
- # Record the PEP 610 direct URL reference
- if direct_url is not None:
- direct_url_path = os.path.join(dest_info_dir, DIRECT_URL_METADATA_NAME)
- with _generate_file(direct_url_path) as direct_url_file:
- direct_url_file.write(direct_url.to_json().encode("utf-8"))
- generated.append(direct_url_path)
-
- # Record the REQUESTED file
- if requested:
- requested_path = os.path.join(dest_info_dir, "REQUESTED")
- with open(requested_path, "wb"):
- pass
- generated.append(requested_path)
-
- record_text = distribution.read_text("RECORD")
- record_rows = list(csv.reader(record_text.splitlines()))
-
- rows = get_csv_rows_for_installed(
- record_rows,
- installed=installed,
- changed=changed,
- generated=generated,
- lib_dir=lib_dir,
- )
-
- # Record details of all files installed
- record_path = os.path.join(dest_info_dir, "RECORD")
-
- with _generate_file(record_path, **csv_io_kwargs("w")) as record_file:
- # Explicitly cast to typing.IO[str] as a workaround for the mypy error:
- # "writer" has incompatible type "BinaryIO"; expected "_Writer"
- writer = csv.writer(cast("IO[str]", record_file))
- writer.writerows(_normalized_outrows(rows))
-
-
-@contextlib.contextmanager
-def req_error_context(req_description: str) -> Generator[None, None, None]:
- try:
- yield
- except InstallationError as e:
- message = "For req: {}. {}".format(req_description, e.args[0])
- raise InstallationError(message) from e
-
-
-def install_wheel(
- name: str,
- wheel_path: str,
- scheme: Scheme,
- req_description: str,
- pycompile: bool = True,
- warn_script_location: bool = True,
- direct_url: Optional[DirectUrl] = None,
- requested: bool = False,
-) -> None:
- with ZipFile(wheel_path, allowZip64=True) as z:
- with req_error_context(req_description):
- _install_wheel(
- name=name,
- wheel_zip=z,
- wheel_path=wheel_path,
- scheme=scheme,
- pycompile=pycompile,
- warn_script_location=warn_script_location,
- direct_url=direct_url,
- requested=requested,
- )
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py
deleted file mode 100644
index 194564e761ddae165b39ef6598877e2e3820af0a..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from typing import List, TypeVar
-
-T = TypeVar("T")
-
-
-class Stack(List[T]):
- """A small shim over builtin list."""
-
- @property
- def top(self) -> T:
- """Get top of stack."""
- return self[-1]
-
- def push(self, item: T) -> None:
- """Push an item on to the stack (append in stack nomenclature)."""
- self.append(item)
diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/__init__.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ricecake123/RVC-demo/infer-web.py b/spaces/Ricecake123/RVC-demo/infer-web.py
deleted file mode 100644
index 7de75cc5ac0624b0b66acf62eb330222cc5a5d6a..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/infer-web.py
+++ /dev/null
@@ -1,1991 +0,0 @@
-import os
-import shutil
-import sys
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import traceback, pdb
-import warnings
-
-import numpy as np
-import torch
-
-os.environ["OPENBLAS_NUM_THREADS"] = "1"
-os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1"
-import logging
-import threading
-from random import shuffle
-from subprocess import Popen
-from time import sleep
-
-import faiss
-import ffmpeg
-import gradio as gr
-import soundfile as sf
-from config import Config
-from fairseq import checkpoint_utils
-from i18n import I18nAuto
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-from infer_uvr5 import _audio_pre_, _audio_pre_new
-from my_utils import load_audio
-from train.process_ckpt import change_info, extract_small_model, merge, show_info
-from vc_infer_pipeline import VC
-from sklearn.cluster import MiniBatchKMeans
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-
-
-tmp = os.path.join(now_dir, "TEMP")
-shutil.rmtree(tmp, ignore_errors=True)
-shutil.rmtree(
- "%s/runtime/Lib/site-packages/lib.infer_pack" % (now_dir), ignore_errors=True
-)
-shutil.rmtree("%s/runtime/Lib/site-packages/uvr5_pack" % (now_dir), ignore_errors=True)
-os.makedirs(tmp, exist_ok=True)
-os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True)
-os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True)
-os.environ["TEMP"] = tmp
-warnings.filterwarnings("ignore")
-torch.manual_seed(114514)
-
-
-config = Config()
-i18n = I18nAuto()
-i18n.print()
-# Check whether there is an NVIDIA GPU that can be used for training and accelerated inference
-ngpu = torch.cuda.device_count()
-gpu_infos = []
-mem = []
-if_gpu_ok = False
-
-if torch.cuda.is_available() or ngpu != 0:
- for i in range(ngpu):
- gpu_name = torch.cuda.get_device_name(i)
- if any(
- value in gpu_name.upper()
- for value in [
- "10",
- "16",
- "20",
- "30",
- "40",
- "A2",
- "A3",
- "A4",
- "P4",
- "A50",
- "500",
- "A60",
- "70",
- "80",
- "90",
- "M4",
- "T4",
- "TITAN",
- ]
- ):
- # A10#A100#V100#A40#P40#M40#K80#A4500
-            if_gpu_ok = True  # at least one usable NVIDIA GPU
- gpu_infos.append("%s\t%s" % (i, gpu_name))
- mem.append(
- int(
- torch.cuda.get_device_properties(i).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- )
-if if_gpu_ok and len(gpu_infos) > 0:
- gpu_info = "\n".join(gpu_infos)
- default_batch_size = min(mem) // 2
-else:
- gpu_info = i18n("很遗憾您这没有能用的显卡来支持您训练")
- default_batch_size = 1
-gpus = "-".join([i[0] for i in gpu_infos])
-
-
-class ToolButton(gr.Button, gr.components.FormComponent):
- """Small button with single emoji as text, fits inside gradio forms"""
-
- def __init__(self, **kwargs):
- super().__init__(variant="tool", **kwargs)
-
- def get_block_name(self):
- return "button"
-
-
-hubert_model = None
-
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-
-weight_root = "weights"
-weight_uvr5_root = "uvr5_weights"
-index_root = "logs"
-names = []
-for name in os.listdir(weight_root):
- if name.endswith(".pth"):
- names.append(name)
-index_paths = []
-for root, dirs, files in os.walk(index_root, topdown=False):
- for name in files:
- if name.endswith(".index") and "trained" not in name:
- index_paths.append("%s/%s" % (root, name))
-uvr5_names = []
-for name in os.listdir(weight_uvr5_root):
- if name.endswith(".pth") or "onnx" in name:
- uvr5_names.append(name.replace(".pth", ""))
-
-
-def vc_single(
- sid,
- input_audio_path,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
-): # spk_item, input_audio0, vc_transform0,f0_file,f0method0
- global tgt_sr, net_g, vc, hubert_model, version
- if input_audio_path is None:
- return "You need to upload an audio", None
- f0_up_key = int(f0_up_key)
- try:
- audio = load_audio(input_audio_path, 16000)
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
- if not hubert_model:
- load_hubert()
- if_f0 = cpt.get("f0", 1)
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
- if file_index != ""
- else file_index2
-        )  # guard against users picking the wrong file: automatically replace "trained" with "added"
- # file_big_npy = (
- # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- # )
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=f0_file,
- )
- if tgt_sr != resample_sr >= 16000:
- tgt_sr = resample_sr
- index_info = (
- "Using index:%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % (
- index_info,
- times[0],
- times[1],
- times[2],
- ), (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
-
-
-def vc_multi(
- sid,
- dir_path,
- opt_root,
- paths,
- f0_up_key,
- f0_method,
- file_index,
- file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- format1,
-):
- try:
- dir_path = (
- dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-        )  # in case the copied path has stray spaces, quotes or newlines at either end
- opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- os.makedirs(opt_root, exist_ok=True)
- try:
- if dir_path != "":
- paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)]
- else:
- paths = [path.name for path in paths]
- except:
- traceback.print_exc()
- paths = [path.name for path in paths]
- infos = []
- for path in paths:
- info, opt = vc_single(
- sid,
- path,
- f0_up_key,
- None,
- f0_method,
- file_index,
- file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- )
- if "Success" in info:
- try:
- tgt_sr, audio_opt = opt
- if format1 in ["wav", "flac"]:
- sf.write(
- "%s/%s.%s" % (opt_root, os.path.basename(path), format1),
- audio_opt,
- tgt_sr,
- )
- else:
- path = "%s/%s.wav" % (opt_root, os.path.basename(path))
- sf.write(
- path,
- audio_opt,
- tgt_sr,
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format1)
- )
- except:
- info += traceback.format_exc()
- infos.append("%s->%s" % (os.path.basename(path), info))
- yield "\n".join(infos)
- yield "\n".join(infos)
- except:
- yield traceback.format_exc()
-
-
-def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0):
- infos = []
- try:
- inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- save_root_vocal = (
- save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- save_root_ins = (
- save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- if model_name == "onnx_dereverb_By_FoxJoy":
- from MDXNet import MDXNetDereverb
-
- pre_fun = MDXNetDereverb(15)
- else:
- func = _audio_pre_ if "DeEcho" not in model_name else _audio_pre_new
- pre_fun = func(
- agg=int(agg),
- model_path=os.path.join(weight_uvr5_root, model_name + ".pth"),
- device=config.device,
- is_half=config.is_half,
- )
- if inp_root != "":
- paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)]
- else:
- paths = [path.name for path in paths]
- for path in paths:
- inp_path = os.path.join(inp_root, path)
- need_reformat = 1
- done = 0
- try:
- info = ffmpeg.probe(inp_path, cmd="ffprobe")
- if (
- info["streams"][0]["channels"] == 2
- and info["streams"][0]["sample_rate"] == "44100"
- ):
- need_reformat = 0
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- done = 1
- except:
- need_reformat = 1
- traceback.print_exc()
- if need_reformat == 1:
- tmp_path = "%s/%s.reformatted.wav" % (tmp, os.path.basename(inp_path))
- os.system(
- "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y"
- % (inp_path, tmp_path)
- )
- inp_path = tmp_path
- try:
- if done == 0:
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- infos.append("%s->Success" % (os.path.basename(inp_path)))
- yield "\n".join(infos)
- except:
- infos.append(
- "%s->%s" % (os.path.basename(inp_path), traceback.format_exc())
- )
- yield "\n".join(infos)
- except:
- infos.append(traceback.format_exc())
- yield "\n".join(infos)
- finally:
- try:
- if model_name == "onnx_dereverb_By_FoxJoy":
- del pre_fun.pred.model
- del pre_fun.pred.model_
- else:
- del pre_fun.model
- del pre_fun
- except:
- traceback.print_exc()
- print("clean_empty_cache")
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- yield "\n".join(infos)
-
-
-# Globally, each tab can only have one active voice model
-def get_vc(sid, to_return_protect0, to_return_protect1):
- global n_spk, tgt_sr, net_g, vc, cpt, version
- if sid == "" or sid == []:
- global hubert_model
-        if hubert_model is not None:  # because of polling, check whether sid switched from a loaded model to no model
- print("clean_empty_cache")
- del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt
- hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-            ### the cleanup below does not free memory properly unless it is done this convoluted way
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g, cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- cpt = None
- return {"visible": False, "__type__": "update"}
- person = "%s/%s" % (weight_root, sid)
- print("loading %s" % person)
- cpt = torch.load(person, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 0:
- to_return_protect0 = to_return_protect1 = {
- "visible": False,
- "value": 0.5,
- "__type__": "update",
- }
- else:
- to_return_protect0 = {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- }
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- return (
- {"visible": True, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1,
- )
-
-
-def change_choices():
- names = []
- for name in os.listdir(weight_root):
- if name.endswith(".pth"):
- names.append(name)
- index_paths = []
- for root, dirs, files in os.walk(index_root, topdown=False):
- for name in files:
- if name.endswith(".index") and "trained" not in name:
- index_paths.append("%s/%s" % (root, name))
- return {"choices": sorted(names), "__type__": "update"}, {
- "choices": sorted(index_paths),
- "__type__": "update",
- }
-
-
-def clean():
- return {"value": "", "__type__": "update"}
-
-
-sr_dict = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-def if_done(done, p):
- while 1:
- if p.poll() is None:
- sleep(0.5)
- else:
- break
- done[0] = True
-
-
-def if_done_multi(done, ps):
- while 1:
-        # poll() == None means the process has not finished yet
-        # keep waiting as long as any process is still running
- flag = 1
- for p in ps:
- if p.poll() is None:
- flag = 0
- sleep(0.5)
- break
- if flag == 1:
- break
- done[0] = True
-
-
-def preprocess_dataset(trainset_dir, exp_dir, sr, n_p):
- sr = sr_dict[sr]
- os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
- f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w")
- f.close()
- cmd = (
- config.python_cmd
- + " trainset_preprocess_pipeline_print.py %s %s %s %s/logs/%s "
- % (trainset_dir, sr, n_p, now_dir, exp_dir)
- + str(config.noparallel)
- )
- print(cmd)
- p = Popen(cmd, shell=True) # , stdin=PIPE, stdout=PIPE,stderr=PIPE,cwd=now_dir
-    ### gradio's Popen read only returns output once the whole run has finished (without gradio it streams line by line), so progress is streamed by re-reading a log file on a timer
- done = [False]
- threading.Thread(
- target=if_done,
- args=(
- done,
- p,
- ),
- ).start()
- while 1:
- with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
- yield (f.read())
- sleep(1)
- if done[0]:
- break
- with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-
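# Simplified, hypothetical sketch of the pattern used above (the real code has
# the worker script write the log and watches the process from a helper thread):
# stream a slow subprocess's progress by periodically re-reading its log file,
# since gradio only shows Popen output once the whole command has finished.
import time
from subprocess import Popen

def run_and_tail(cmd: str, log_path: str):
    open(log_path, "w").close()                   # make sure the log exists
    p = Popen(cmd + " >> %s 2>&1" % log_path, shell=True)
    while True:
        with open(log_path, "r") as f:
            yield f.read()                        # latest full log contents
        if p.poll() is not None:                  # process has finished
            break
        time.sleep(1)
    with open(log_path, "r") as f:
        yield f.read()                            # final log after exit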
-
-# but2.click(extract_f0,[gpus6,np7,f0method8,if_f0_3,trainset_dir4],[info2])
-def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19):
- gpus = gpus.split("-")
- os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
- f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w")
- f.close()
- if if_f0:
- cmd = config.python_cmd + " extract_f0_print.py %s/logs/%s %s %s" % (
- now_dir,
- exp_dir,
- n_p,
- f0method,
- )
- print(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir) # , stdin=PIPE, stdout=PIPE,stderr=PIPE
-        ### gradio's Popen read only returns output once the whole run has finished (without gradio it streams line by line), so progress is streamed by re-reading a log file on a timer
- done = [False]
- threading.Thread(
- target=if_done,
- args=(
- done,
- p,
- ),
- ).start()
- while 1:
- with open(
- "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r"
- ) as f:
- yield (f.read())
- sleep(1)
- if done[0]:
- break
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-        #### spawn a separate process for each part
- """
- n_part=int(sys.argv[1])
- i_part=int(sys.argv[2])
- i_gpu=sys.argv[3]
- exp_dir=sys.argv[4]
- os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu)
- """
- leng = len(gpus)
- ps = []
- for idx, n_g in enumerate(gpus):
- cmd = (
- config.python_cmd
- + " extract_feature_print.py %s %s %s %s %s/logs/%s %s"
- % (
- config.device,
- leng,
- idx,
- n_g,
- now_dir,
- exp_dir,
- version19,
- )
- )
- print(cmd)
- p = Popen(
- cmd, shell=True, cwd=now_dir
- ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
- ps.append(p)
-    ### gradio's Popen read only returns output once the whole run has finished (without gradio it streams line by line), so progress is streamed by re-reading a log file on a timer
- done = [False]
- threading.Thread(
- target=if_done_multi,
- args=(
- done,
- ps,
- ),
- ).start()
- while 1:
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- yield (f.read())
- sleep(1)
- if done[0]:
- break
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-
-
-def change_sr2(sr2, if_f0_3, version19):
- path_str = "" if version19 == "v1" else "_v2"
- f0_str = "f0" if if_f0_3 else ""
- if_pretrained_generator_exist = os.access(
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK
- )
- if_pretrained_discriminator_exist = os.access(
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK
- )
- if not if_pretrained_generator_exist:
- print(
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2),
- "not exist, will not use pretrained model",
- )
- if not if_pretrained_discriminator_exist:
- print(
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2),
- "not exist, will not use pretrained model",
- )
- return (
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)
- if if_pretrained_generator_exist
- else "",
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)
- if if_pretrained_discriminator_exist
- else "",
- )
-
-
-def change_version19(sr2, if_f0_3, version19):
- path_str = "" if version19 == "v1" else "_v2"
- if sr2 == "32k" and version19 == "v1":
- sr2 = "40k"
- to_return_sr2 = (
- {"choices": ["40k", "48k"], "__type__": "update", "value": sr2}
- if version19 == "v1"
- else {"choices": ["40k", "48k", "32k"], "__type__": "update", "value": sr2}
- )
- f0_str = "f0" if if_f0_3 else ""
- if_pretrained_generator_exist = os.access(
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK
- )
- if_pretrained_discriminator_exist = os.access(
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK
- )
- if not if_pretrained_generator_exist:
- print(
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2),
- "not exist, will not use pretrained model",
- )
- if not if_pretrained_discriminator_exist:
- print(
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2),
- "not exist, will not use pretrained model",
- )
- return (
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)
- if if_pretrained_generator_exist
- else "",
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)
- if if_pretrained_discriminator_exist
- else "",
- to_return_sr2,
- )
-
-
-def change_f0(if_f0_3, sr2, version19): # f0method8,pretrained_G14,pretrained_D15
- path_str = "" if version19 == "v1" else "_v2"
- if_pretrained_generator_exist = os.access(
- "pretrained%s/f0G%s.pth" % (path_str, sr2), os.F_OK
- )
- if_pretrained_discriminator_exist = os.access(
- "pretrained%s/f0D%s.pth" % (path_str, sr2), os.F_OK
- )
- if not if_pretrained_generator_exist:
- print(
- "pretrained%s/f0G%s.pth" % (path_str, sr2),
- "not exist, will not use pretrained model",
- )
- if not if_pretrained_discriminator_exist:
- print(
- "pretrained%s/f0D%s.pth" % (path_str, sr2),
- "not exist, will not use pretrained model",
- )
- if if_f0_3:
- return (
- {"visible": True, "__type__": "update"},
- "pretrained%s/f0G%s.pth" % (path_str, sr2)
- if if_pretrained_generator_exist
- else "",
- "pretrained%s/f0D%s.pth" % (path_str, sr2)
- if if_pretrained_discriminator_exist
- else "",
- )
- return (
- {"visible": False, "__type__": "update"},
- ("pretrained%s/G%s.pth" % (path_str, sr2))
- if if_pretrained_generator_exist
- else "",
- ("pretrained%s/D%s.pth" % (path_str, sr2))
- if if_pretrained_discriminator_exist
- else "",
- )
-
-
-# but3.click(click_train,[exp_dir1,sr2,if_f0_3,save_epoch10,total_epoch11,batch_size12,if_save_latest13,pretrained_G14,pretrained_D15,gpus16])
-def click_train(
- exp_dir1,
- sr2,
- if_f0_3,
- spk_id5,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
-):
-    # generate the filelist
- exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- os.makedirs(exp_dir, exist_ok=True)
- gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir)
- feature_dir = (
- "%s/3_feature256" % (exp_dir)
- if version19 == "v1"
- else "%s/3_feature768" % (exp_dir)
- )
- if if_f0_3:
- f0_dir = "%s/2a_f0" % (exp_dir)
- f0nsf_dir = "%s/2b-f0nsf" % (exp_dir)
- names = (
- set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
- & set([name.split(".")[0] for name in os.listdir(feature_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
- )
- else:
- names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
- [name.split(".")[0] for name in os.listdir(feature_dir)]
- )
- opt = []
- for name in names:
- if if_f0_3:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- f0_dir.replace("\\", "\\\\"),
- name,
- f0nsf_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- else:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- fea_dim = 256 if version19 == "v1" else 768
- if if_f0_3:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
- )
- else:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, spk_id5)
- )
- shuffle(opt)
- with open("%s/filelist.txt" % exp_dir, "w") as f:
- f.write("\n".join(opt))
- print("write filelist done")
-    # generate config  # no config generation is needed here
- # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0"
- print("use gpus:", gpus16)
- if pretrained_G14 == "":
- print("no pretrained Generator")
- if pretrained_D15 == "":
- print("no pretrained Discriminator")
- if gpus16:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- gpus16,
- total_epoch11,
- save_epoch10,
- "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "",
- "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == i18n("是") else 0,
- 1 if if_cache_gpu17 == i18n("是") else 0,
- 1 if if_save_every_weights18 == i18n("是") else 0,
- version19,
- )
- )
- else:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- total_epoch11,
- save_epoch10,
- "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "\b",
- "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "\b",
- 1 if if_save_latest13 == i18n("是") else 0,
- 1 if if_cache_gpu17 == i18n("是") else 0,
- 1 if if_save_every_weights18 == i18n("是") else 0,
- version19,
- )
- )
- print(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- p.wait()
- return "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log"
-
-
-# but4.click(train_index, [exp_dir1], info3)
-def train_index(exp_dir1, version19):
- exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- os.makedirs(exp_dir, exist_ok=True)
- feature_dir = (
- "%s/3_feature256" % (exp_dir)
- if version19 == "v1"
- else "%s/3_feature768" % (exp_dir)
- )
- if not os.path.exists(feature_dir):
- return "请先进行特征提取!"
- listdir_res = list(os.listdir(feature_dir))
- if len(listdir_res) == 0:
- return "请先进行特征提取!"
- infos = []
- npys = []
- for name in sorted(listdir_res):
- phone = np.load("%s/%s" % (feature_dir, name))
- npys.append(phone)
- big_npy = np.concatenate(npys, 0)
- big_npy_idx = np.arange(big_npy.shape[0])
- np.random.shuffle(big_npy_idx)
- big_npy = big_npy[big_npy_idx]
- if big_npy.shape[0] > 2e5:
- # if(1):
- infos.append("Trying doing kmeans %s shape to 10k centers." % big_npy.shape[0])
- yield "\n".join(infos)
- try:
- big_npy = (
- MiniBatchKMeans(
- n_clusters=10000,
- verbose=True,
- batch_size=256 * config.n_cpu,
- compute_labels=False,
- init="random",
- )
- .fit(big_npy)
- .cluster_centers_
- )
- except:
- info = traceback.format_exc()
- print(info)
- infos.append(info)
- yield "\n".join(infos)
-
- np.save("%s/total_fea.npy" % exp_dir, big_npy)
- n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
- infos.append("%s,%s" % (big_npy.shape, n_ivf))
- yield "\n".join(infos)
- index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
- # index = faiss.index_factory(256if version19=="v1"else 768, "IVF%s,PQ128x4fs,RFlat"%n_ivf)
- infos.append("training")
- yield "\n".join(infos)
- index_ivf = faiss.extract_index_ivf(index) #
- index_ivf.nprobe = 1
- index.train(big_npy)
- faiss.write_index(
- index,
- "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- # faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19))
- infos.append("adding")
- yield "\n".join(infos)
- batch_size_add = 8192
- for i in range(0, big_npy.shape[0], batch_size_add):
- index.add(big_npy[i : i + batch_size_add])
- faiss.write_index(
- index,
- "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- infos.append(
- "成功构建索引,added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
- )
- # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19))
- # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index"%(n_ivf,version19))
- yield "\n".join(infos)
-
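# Hedged standalone sketch (not part of infer-web.py) of the faiss workflow
# train_index() uses above: build an "IVF<nlist>,Flat" index, train the coarse
# quantizer, add the feature vectors, and write the index to disk. The
# dimension 256 and the output file name are illustrative.
import faiss
import numpy as np

feats = np.random.rand(5000, 256).astype("float32")
n_ivf = min(int(16 * np.sqrt(feats.shape[0])), feats.shape[0] // 39)
index = faiss.index_factory(feats.shape[1], "IVF%s,Flat" % n_ivf)
faiss.extract_index_ivf(index).nprobe = 1
index.train(feats)   # learn the n_ivf coarse centroids
index.add(feats)     # the UI above adds vectors in 8192-vector batches
faiss.write_index(index, "example_IVF%s_Flat.index" % n_ivf)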
-
-# but5.click(train1key, [exp_dir1, sr2, if_f0_3, trainset_dir4, spk_id5, gpus6, np7, f0method8, save_epoch10, total_epoch11, batch_size12, if_save_latest13, pretrained_G14, pretrained_D15, gpus16, if_cache_gpu17], info3)
-def train1key(
- exp_dir1,
- sr2,
- if_f0_3,
- trainset_dir4,
- spk_id5,
- np7,
- f0method8,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
-):
- infos = []
-
- def get_info_str(strr):
- infos.append(strr)
- return "\n".join(infos)
-
- model_log_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- preprocess_log_path = "%s/preprocess.log" % model_log_dir
- extract_f0_feature_log_path = "%s/extract_f0_feature.log" % model_log_dir
- gt_wavs_dir = "%s/0_gt_wavs" % model_log_dir
- feature_dir = (
- "%s/3_feature256" % model_log_dir
- if version19 == "v1"
- else "%s/3_feature768" % model_log_dir
- )
-
- os.makedirs(model_log_dir, exist_ok=True)
- ######### step 1: preprocess the dataset
- open(preprocess_log_path, "w").close()
- cmd = (
- config.python_cmd
- + " trainset_preprocess_pipeline_print.py %s %s %s %s "
- % (trainset_dir4, sr_dict[sr2], np7, model_log_dir)
- + str(config.noparallel)
- )
- yield get_info_str(i18n("step1:正在处理数据"))
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True)
- p.wait()
- with open(preprocess_log_path, "r") as f:
- print(f.read())
- ######### step 2a: extract pitch (f0)
- open(extract_f0_feature_log_path, "w").close()
- if if_f0_3:
- yield get_info_str("step2a:正在提取音高")
- cmd = config.python_cmd + " extract_f0_print.py %s %s %s" % (
- model_log_dir,
- np7,
- f0method8,
- )
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- p.wait()
- with open(extract_f0_feature_log_path, "r") as f:
- print(f.read())
- else:
- yield get_info_str(i18n("step2a:无需提取音高"))
- ####### step 2b: extract features
- yield get_info_str(i18n("step2b:正在提取特征"))
- gpus = gpus16.split("-")
- leng = len(gpus)
- ps = []
- for idx, n_g in enumerate(gpus):
- cmd = config.python_cmd + " extract_feature_print.py %s %s %s %s %s %s" % (
- config.device,
- leng,
- idx,
- n_g,
- model_log_dir,
- version19,
- )
- yield get_info_str(cmd)
- p = Popen(
- cmd, shell=True, cwd=now_dir
- ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
- ps.append(p)
- for p in ps:
- p.wait()
- with open(extract_f0_feature_log_path, "r") as f:
- print(f.read())
- ####### step 3a: train the model
- yield get_info_str(i18n("step3a:正在训练模型"))
- # generate the filelist
- if if_f0_3:
- f0_dir = "%s/2a_f0" % model_log_dir
- f0nsf_dir = "%s/2b-f0nsf" % model_log_dir
- names = (
- set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
- & set([name.split(".")[0] for name in os.listdir(feature_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
- )
- else:
- names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
- [name.split(".")[0] for name in os.listdir(feature_dir)]
- )
- opt = []
- for name in names:
- if if_f0_3:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- f0_dir.replace("\\", "\\\\"),
- name,
- f0nsf_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- else:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- fea_dim = 256 if version19 == "v1" else 768
- if if_f0_3:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
- )
- else:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, spk_id5)
- )
- shuffle(opt)
- with open("%s/filelist.txt" % model_log_dir, "w") as f:
- f.write("\n".join(opt))
- yield get_info_str("write filelist done")
- if gpus16:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- gpus16,
- total_epoch11,
- save_epoch10,
- "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "",
- "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == i18n("是") else 0,
- 1 if if_cache_gpu17 == i18n("是") else 0,
- 1 if if_save_every_weights18 == i18n("是") else 0,
- version19,
- )
- )
- else:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- total_epoch11,
- save_epoch10,
- "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "",
- "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == i18n("是") else 0,
- 1 if if_cache_gpu17 == i18n("是") else 0,
- 1 if if_save_every_weights18 == i18n("是") else 0,
- version19,
- )
- )
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- p.wait()
- yield get_info_str(i18n("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log"))
- ####### step 3b: train the retrieval index
- npys = []
- listdir_res = list(os.listdir(feature_dir))
- for name in sorted(listdir_res):
- phone = np.load("%s/%s" % (feature_dir, name))
- npys.append(phone)
- big_npy = np.concatenate(npys, 0)
-
- big_npy_idx = np.arange(big_npy.shape[0])
- np.random.shuffle(big_npy_idx)
- big_npy = big_npy[big_npy_idx]
-
- if big_npy.shape[0] > 2e5:
- # if(1):
- info = "Trying doing kmeans %s shape to 10k centers." % big_npy.shape[0]
- print(info)
- yield get_info_str(info)
- try:
- big_npy = (
- MiniBatchKMeans(
- n_clusters=10000,
- verbose=True,
- batch_size=256 * config.n_cpu,
- compute_labels=False,
- init="random",
- )
- .fit(big_npy)
- .cluster_centers_
- )
- except:
- info = traceback.format_exc()
- print(info)
- yield get_info_str(info)
-
- np.save("%s/total_fea.npy" % model_log_dir, big_npy)
-
- # n_ivf = big_npy.shape[0] // 39
- n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
- yield get_info_str("%s,%s" % (big_npy.shape, n_ivf))
- index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
- yield get_info_str("training index")
- index_ivf = faiss.extract_index_ivf(index) #
- index_ivf.nprobe = 1
- index.train(big_npy)
- faiss.write_index(
- index,
- "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- yield get_info_str("adding index")
- batch_size_add = 8192
- for i in range(0, big_npy.shape[0], batch_size_add):
- index.add(big_npy[i : i + batch_size_add])
- faiss.write_index(
- index,
- "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- yield get_info_str(
- "成功构建索引, added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
- )
- yield get_info_str(i18n("全流程结束!"))
-
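-
-# train1key is a generator: each yield returns the cumulative log string that
-# the UI displays. A minimal driver sketch (all argument values below are
-# placeholders, and the pretrained paths assume a v1 40k f0 model):
-def one_click_training_sketch():
-    steps = train1key(
-        "mi-test", "40k", True, "/path/to/trainset", 0, 4, "harvest",
-        5, 20, 8, i18n("否"), "pretrained/f0G40k.pth", "pretrained/f0D40k.pth",
-        "0", i18n("否"), i18n("否"), "v1",
-    )
-    for msg in steps:
-        print(msg.splitlines()[-1])  # show only the newest log line
-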
-
-# ckpt_path2.change(change_info_,[ckpt_path2],[sr__,if_f0__])
-def change_info_(ckpt_path):
- if not os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")):
- return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
- try:
- with open(
- ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r"
- ) as f:
- info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1])
- sr, f0 = info["sample_rate"], info["if_f0"]
- version = "v2" if ("version" in info and info["version"] == "v2") else "v1"
- return sr, str(f0), version
- except:
- traceback.print_exc()
- return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
-
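-
-# A stricter variant of the parse above, as a sketch: ast.literal_eval accepts
-# only Python literals, so a malformed or tampered train.log line raises an
-# error instead of executing code. It is a drop-in only when the logged info
-# is a plain literal dict, which is what change_info_ expects.
-def read_train_log_info(log_path):
-    import ast
-    with open(log_path, "r") as f:
-        first_line = f.read().strip("\n").split("\n")[0]
-    return ast.literal_eval(first_line.split("\t")[-1])
-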
-
-def export_onnx(ModelPath, ExportedPath):
- cpt = torch.load(ModelPath, map_location="cpu")
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- vec_channels = 256 if cpt.get("version", "v1") == "v1" else 768
-
- test_phone = torch.rand(1, 200, vec_channels) # hidden unit
- test_phone_lengths = torch.tensor([200]).long() # hidden unit lengths (apparently unused)
- test_pitch = torch.randint(size=(1, 200), low=5, high=255) # fundamental frequency (in Hz)
- test_pitchf = torch.rand(1, 200) # NSF fundamental frequency
- test_ds = torch.LongTensor([0]) # speaker ID
- test_rnd = torch.rand(1, 192, 200) # noise (adds a random factor)
-
- device = "cpu" # 导出时设备(不影响使用模型)
-
- net_g = SynthesizerTrnMsNSFsidM(
- *cpt["config"], is_half=False, version=cpt.get("version", "v1")
- ) # export in fp32 (supporting fp16 in C++ would require manually re-arranging memory, so fp16 is skipped for now)
- net_g.load_state_dict(cpt["weight"], strict=False)
- input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
- output_names = [
- "audio",
- ]
- # net_g.construct_spkmixmap(n_speaker)  (multi-speaker mix-track export)
- torch.onnx.export(
- net_g,
- (
- test_phone.to(device),
- test_phone_lengths.to(device),
- test_pitch.to(device),
- test_pitchf.to(device),
- test_ds.to(device),
- test_rnd.to(device),
- ),
- ExportedPath,
- dynamic_axes={
- "phone": [1],
- "pitch": [1],
- "pitchf": [1],
- "rnd": [2],
- },
- do_constant_folding=False,
- opset_version=13,
- verbose=False,
- input_names=input_names,
- output_names=output_names,
- )
- return "Finished"
-
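-
-# A minimal sketch of driving the exported graph with onnxruntime. Shapes and
-# dtypes mirror the dummy tensors above; 768 feature channels assume a v2
-# model (256 for v1), and the 200-frame length is arbitrary thanks to the
-# dynamic axes declared in the export.
-def run_exported_onnx(onnx_path):
-    import numpy as np
-    import onnxruntime as ort
-
-    sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
-    feeds = {
-        "phone": np.random.rand(1, 200, 768).astype(np.float32),
-        "phone_lengths": np.array([200], dtype=np.int64),
-        "pitch": np.random.randint(5, 255, size=(1, 200)).astype(np.int64),
-        "pitchf": np.random.rand(1, 200).astype(np.float32),
-        "ds": np.array([0], dtype=np.int64),
-        "rnd": np.random.rand(1, 192, 200).astype(np.float32),
-    }
-    (audio,) = sess.run(["audio"], feeds)
-    return audio
-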
-
-with gr.Blocks() as app:
- gr.Markdown(
- value=i18n(
- "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. 如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE."
- )
- )
- with gr.Tabs():
- with gr.TabItem(i18n("模型推理")):
- with gr.Row():
- sid0 = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names))
- refresh_button = gr.Button(i18n("刷新音色列表和索引路径"), variant="primary")
- clean_button = gr.Button(i18n("卸载音色省显存"), variant="primary")
- spk_item = gr.Slider(
- minimum=0,
- maximum=2333,
- step=1,
- label=i18n("请选择说话人id"),
- value=0,
- visible=False,
- interactive=True,
- )
- clean_button.click(fn=clean, inputs=[], outputs=[sid0])
- with gr.Group():
- gr.Markdown(
- value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ")
- )
- with gr.Row():
- with gr.Column():
- vc_transform0 = gr.Number(
- label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0
- )
- input_audio0 = gr.Textbox(
- label=i18n("输入待处理音频文件路径(默认是正确格式示例)"),
- value="E:\\codes\\py39\\test-20230416b\\todo-songs\\冬之花clip1.wav",
- )
- f0method0 = gr.Radio(
- label=i18n(
- "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"
- ),
- choices=["pm", "harvest", "crepe"],
- value="pm",
- interactive=True,
- )
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"),
- value=3,
- step=1,
- interactive=True,
- )
- with gr.Column():
- file_index1 = gr.Textbox(
- label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"),
- value="",
- interactive=True,
- )
- file_index2 = gr.Dropdown(
- label=i18n("自动检测index路径,下拉式选择(dropdown)"),
- choices=sorted(index_paths),
- interactive=True,
- )
- refresh_button.click(
- fn=change_choices, inputs=[], outputs=[sid0, file_index2]
- )
- # file_big_npy1 = gr.Textbox(
- # label=i18n("特征文件路径"),
- # value="E:\\codes\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
- # interactive=True,
- # )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("检索特征占比"),
- value=0.75,
- interactive=True,
- )
- with gr.Column():
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label=i18n("后处理重采样至最终采样率,0为不进行重采样"),
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"),
- value=0.25,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label=i18n(
- "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"
- ),
- value=0.33,
- step=0.01,
- interactive=True,
- )
- f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调"))
- but0 = gr.Button(i18n("转换"), variant="primary")
- with gr.Row():
- vc_output1 = gr.Textbox(label=i18n("输出信息"))
- vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)"))
- but0.click(
- vc_single,
- [
- spk_item,
- input_audio0,
- vc_transform0,
- f0_file,
- f0method0,
- file_index1,
- file_index2,
- # file_big_npy1,
- index_rate1,
- filter_radius0,
- resample_sr0,
- rms_mix_rate0,
- protect0,
- ],
- [vc_output1, vc_output2],
- )
- with gr.Group():
- gr.Markdown(
- value=i18n("批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ")
- )
- with gr.Row():
- with gr.Column():
- vc_transform1 = gr.Number(
- label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0
- )
- opt_input = gr.Textbox(label=i18n("指定输出文件夹"), value="opt")
- f0method1 = gr.Radio(
- label=i18n(
- "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"
- ),
- choices=["pm", "harvest", "crepe"],
- value="pm",
- interactive=True,
- )
- filter_radius1 = gr.Slider(
- minimum=0,
- maximum=7,
- label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"),
- value=3,
- step=1,
- interactive=True,
- )
- with gr.Column():
- file_index3 = gr.Textbox(
- label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"),
- value="",
- interactive=True,
- )
- file_index4 = gr.Dropdown(
- label=i18n("自动检测index路径,下拉式选择(dropdown)"),
- choices=sorted(index_paths),
- interactive=True,
- )
- refresh_button.click(
- fn=lambda: change_choices()[1],
- inputs=[],
- outputs=file_index4,
- )
- # file_big_npy2 = gr.Textbox(
- # label=i18n("特征文件路径"),
- # value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
- # interactive=True,
- # )
- index_rate2 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("检索特征占比"),
- value=1,
- interactive=True,
- )
- with gr.Column():
- resample_sr1 = gr.Slider(
- minimum=0,
- maximum=48000,
- label=i18n("后处理重采样至最终采样率,0为不进行重采样"),
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"),
- value=1,
- interactive=True,
- )
- protect1 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label=i18n(
- "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"
- ),
- value=0.33,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- dir_input = gr.Textbox(
- label=i18n("输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)"),
- value="E:\codes\py39\\test-20230416b\\todo-songs",
- )
- inputs = gr.File(
- file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹")
- )
- with gr.Row():
- format1 = gr.Radio(
- label=i18n("导出文件格式"),
- choices=["wav", "flac", "mp3", "m4a"],
- value="flac",
- interactive=True,
- )
- but1 = gr.Button(i18n("转换"), variant="primary")
- vc_output3 = gr.Textbox(label=i18n("输出信息"))
- but1.click(
- vc_multi,
- [
- spk_item,
- dir_input,
- opt_input,
- inputs,
- vc_transform1,
- f0method1,
- file_index3,
- file_index4,
- # file_big_npy2,
- index_rate2,
- filter_radius1,
- resample_sr1,
- rms_mix_rate1,
- protect1,
- format1,
- ],
- [vc_output3],
- )
- sid0.change(
- fn=get_vc,
- inputs=[sid0, protect0, protect1],
- outputs=[spk_item, protect0, protect1],
- )
- with gr.TabItem(i18n("伴奏人声分离&去混响&去回声")):
- with gr.Group():
- gr.Markdown(
- value=i18n(
- "人声伴奏分离批量处理, 使用UVR5模型。 合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 模型分为三类: 1、保留人声:不带和声的音频选这个,对主人声保留比HP5更好。内置HP2和HP3两个模型,HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点; 2、仅保留主人声:带和声的音频选这个,对主人声可能有削弱。内置HP5一个模型; 3、去混响、去延迟模型(by FoxJoy): (1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响; (234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底,DeReverb额外去除混响,可去除单声道混响,但是对高频重的板式混响去不干净。 去混响/去延迟,附: 1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍; 2、MDX-Net-Dereverb模型挺慢的; 3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。"
- )
- )
- with gr.Row():
- with gr.Column():
- dir_wav_input = gr.Textbox(
- label=i18n("输入待处理音频文件夹路径"),
- value="E:\\codes\\py39\\test-20230416b\\todo-songs\\todo-songs",
- )
- wav_inputs = gr.File(
- file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹")
- )
- with gr.Column():
- model_choose = gr.Dropdown(label=i18n("模型"), choices=uvr5_names)
- agg = gr.Slider(
- minimum=0,
- maximum=20,
- step=1,
- label="人声提取激进程度",
- value=10,
- interactive=True,
- visible=False, # not exposed for adjustment yet
- )
- opt_vocal_root = gr.Textbox(
- label=i18n("指定输出主人声文件夹"), value="opt"
- )
- opt_ins_root = gr.Textbox(
- label=i18n("指定输出非主人声文件夹"), value="opt"
- )
- format0 = gr.Radio(
- label=i18n("导出文件格式"),
- choices=["wav", "flac", "mp3", "m4a"],
- value="flac",
- interactive=True,
- )
- but2 = gr.Button(i18n("转换"), variant="primary")
- vc_output4 = gr.Textbox(label=i18n("输出信息"))
- but2.click(
- uvr,
- [
- model_choose,
- dir_wav_input,
- opt_vocal_root,
- wav_inputs,
- opt_ins_root,
- agg,
- format0,
- ],
- [vc_output4],
- )
- with gr.TabItem(i18n("训练")):
- gr.Markdown(
- value=i18n(
- "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. "
- )
- )
- with gr.Row():
- exp_dir1 = gr.Textbox(label=i18n("输入实验名"), value="mi-test")
- sr2 = gr.Radio(
- label=i18n("目标采样率"),
- choices=["40k", "48k"],
- value="40k",
- interactive=True,
- )
- if_f0_3 = gr.Radio(
- label=i18n("模型是否带音高指导(唱歌一定要, 语音可以不要)"),
- choices=[True, False],
- value=True,
- interactive=True,
- )
- version19 = gr.Radio(
- label=i18n("版本"),
- choices=["v1", "v2"],
- value="v1",
- interactive=True,
- visible=True,
- )
- np7 = gr.Slider(
- minimum=0,
- maximum=config.n_cpu,
- step=1,
- label=i18n("提取音高和处理数据使用的CPU进程数"),
- value=int(np.ceil(config.n_cpu / 1.5)),
- interactive=True,
- )
- with gr.Group(): # single speaker only for now; up to 4 speakers planned # data preprocessing
- gr.Markdown(
- value=i18n(
- "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. "
- )
- )
- with gr.Row():
- trainset_dir4 = gr.Textbox(
- label=i18n("输入训练文件夹路径"), value="E:\\语音音频+标注\\米津玄师\\src"
- )
- spk_id5 = gr.Slider(
- minimum=0,
- maximum=4,
- step=1,
- label=i18n("请指定说话人id"),
- value=0,
- interactive=True,
- )
- but1 = gr.Button(i18n("处理数据"), variant="primary")
- info1 = gr.Textbox(label=i18n("输出信息"), value="")
- but1.click(
- preprocess_dataset, [trainset_dir4, exp_dir1, sr2, np7], [info1]
- )
- with gr.Group():
- gr.Markdown(value=i18n("step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)"))
- with gr.Row():
- with gr.Column():
- gpus6 = gr.Textbox(
- label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"),
- value=gpus,
- interactive=True,
- )
- gpu_info9 = gr.Textbox(label=i18n("显卡信息"), value=gpu_info)
- with gr.Column():
- f0method8 = gr.Radio(
- label=i18n(
- "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢"
- ),
- choices=["pm", "harvest", "dio"],
- value="harvest",
- interactive=True,
- )
- but2 = gr.Button(i18n("特征提取"), variant="primary")
- info2 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
- but2.click(
- extract_f0_feature,
- [gpus6, np7, f0method8, if_f0_3, exp_dir1, version19],
- [info2],
- )
- with gr.Group():
- gr.Markdown(value=i18n("step3: 填写训练设置, 开始训练模型和索引"))
- with gr.Row():
- save_epoch10 = gr.Slider(
- minimum=0,
- maximum=50,
- step=1,
- label=i18n("保存频率save_every_epoch"),
- value=5,
- interactive=True,
- )
- total_epoch11 = gr.Slider(
- minimum=0,
- maximum=1000,
- step=1,
- label=i18n("总训练轮数total_epoch"),
- value=20,
- interactive=True,
- )
- batch_size12 = gr.Slider(
- minimum=1,
- maximum=40,
- step=1,
- label=i18n("每张显卡的batch_size"),
- value=default_batch_size,
- interactive=True,
- )
- if_save_latest13 = gr.Radio(
- label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),
- choices=[i18n("是"), i18n("否")],
- value=i18n("否"),
- interactive=True,
- )
- if_cache_gpu17 = gr.Radio(
- label=i18n(
- "是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速"
- ),
- choices=[i18n("是"), i18n("否")],
- value=i18n("否"),
- interactive=True,
- )
- if_save_every_weights18 = gr.Radio(
- label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),
- choices=[i18n("是"), i18n("否")],
- value=i18n("否"),
- interactive=True,
- )
- with gr.Row():
- pretrained_G14 = gr.Textbox(
- label=i18n("加载预训练底模G路径"),
- value="pretrained/f0G40k.pth",
- interactive=True,
- )
- pretrained_D15 = gr.Textbox(
- label=i18n("加载预训练底模D路径"),
- value="pretrained/f0D40k.pth",
- interactive=True,
- )
- sr2.change(
- change_sr2,
- [sr2, if_f0_3, version19],
- [pretrained_G14, pretrained_D15],
- )
- version19.change(
- change_version19,
- [sr2, if_f0_3, version19],
- [pretrained_G14, pretrained_D15, sr2],
- )
- if_f0_3.change(
- change_f0,
- [if_f0_3, sr2, version19],
- [f0method8, pretrained_G14, pretrained_D15],
- )
- gpus16 = gr.Textbox(
- label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"),
- value=gpus,
- interactive=True,
- )
- but3 = gr.Button(i18n("训练模型"), variant="primary")
- but4 = gr.Button(i18n("训练特征索引"), variant="primary")
- but5 = gr.Button(i18n("一键训练"), variant="primary")
- info3 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=10)
- but3.click(
- click_train,
- [
- exp_dir1,
- sr2,
- if_f0_3,
- spk_id5,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
- ],
- info3,
- )
- but4.click(train_index, [exp_dir1, version19], info3)
- but5.click(
- train1key,
- [
- exp_dir1,
- sr2,
- if_f0_3,
- trainset_dir4,
- spk_id5,
- np7,
- f0method8,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
- ],
- info3,
- )
-
- with gr.TabItem(i18n("ckpt处理")):
- with gr.Group():
- gr.Markdown(value=i18n("模型融合, 可用于测试音色融合"))
- with gr.Row():
- ckpt_a = gr.Textbox(label=i18n("A模型路径"), value="", interactive=True)
- ckpt_b = gr.Textbox(label=i18n("B模型路径"), value="", interactive=True)
- alpha_a = gr.Slider(
- minimum=0,
- maximum=1,
- label=i18n("A模型权重"),
- value=0.5,
- interactive=True,
- )
- with gr.Row():
- sr_ = gr.Radio(
- label=i18n("目标采样率"),
- choices=["40k", "48k"],
- value="40k",
- interactive=True,
- )
- if_f0_ = gr.Radio(
- label=i18n("模型是否带音高指导"),
- choices=[i18n("是"), i18n("否")],
- value=i18n("是"),
- interactive=True,
- )
- info__ = gr.Textbox(
- label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True
- )
- name_to_save0 = gr.Textbox(
- label=i18n("保存的模型名不带后缀"),
- value="",
- max_lines=1,
- interactive=True,
- )
- version_2 = gr.Radio(
- label=i18n("模型版本型号"),
- choices=["v1", "v2"],
- value="v1",
- interactive=True,
- )
- with gr.Row():
- but6 = gr.Button(i18n("融合"), variant="primary")
- info4 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
- but6.click(
- merge,
- [
- ckpt_a,
- ckpt_b,
- alpha_a,
- sr_,
- if_f0_,
- info__,
- name_to_save0,
- version_2,
- ],
- info4,
- ) # def merge(path1,path2,alpha1,sr,f0,info):
- with gr.Group():
- gr.Markdown(value=i18n("修改模型信息(仅支持weights文件夹下提取的小模型文件)"))
- with gr.Row():
- ckpt_path0 = gr.Textbox(
- label=i18n("模型路径"), value="", interactive=True
- )
- info_ = gr.Textbox(
- label=i18n("要改的模型信息"), value="", max_lines=8, interactive=True
- )
- name_to_save1 = gr.Textbox(
- label=i18n("保存的文件名, 默认空为和源文件同名"),
- value="",
- max_lines=8,
- interactive=True,
- )
- with gr.Row():
- but7 = gr.Button(i18n("修改"), variant="primary")
- info5 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
- but7.click(change_info, [ckpt_path0, info_, name_to_save1], info5)
- with gr.Group():
- gr.Markdown(value=i18n("查看模型信息(仅支持weights文件夹下提取的小模型文件)"))
- with gr.Row():
- ckpt_path1 = gr.Textbox(
- label=i18n("模型路径"), value="", interactive=True
- )
- but8 = gr.Button(i18n("查看"), variant="primary")
- info6 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
- but8.click(show_info, [ckpt_path1], info6)
- with gr.Group():
- gr.Markdown(
- value=i18n(
- "模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况"
- )
- )
- with gr.Row():
- ckpt_path2 = gr.Textbox(
- label=i18n("模型路径"),
- value="E:\\codes\\py39\\logs\\mi-test_f0_48k\\G_23333.pth",
- interactive=True,
- )
- save_name = gr.Textbox(
- label=i18n("保存名"), value="", interactive=True
- )
- sr__ = gr.Radio(
- label=i18n("目标采样率"),
- choices=["32k", "40k", "48k"],
- value="40k",
- interactive=True,
- )
- if_f0__ = gr.Radio(
- label=i18n("模型是否带音高指导,1是0否"),
- choices=["1", "0"],
- value="1",
- interactive=True,
- )
- version_1 = gr.Radio(
- label=i18n("模型版本型号"),
- choices=["v1", "v2"],
- value="v2",
- interactive=True,
- )
- info___ = gr.Textbox(
- label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True
- )
- but9 = gr.Button(i18n("提取"), variant="primary")
- info7 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
- ckpt_path2.change(
- change_info_, [ckpt_path2], [sr__, if_f0__, version_1]
- )
- but9.click(
- extract_small_model,
- [ckpt_path2, save_name, sr__, if_f0__, info___, version_1],
- info7,
- )
-
- with gr.TabItem(i18n("Onnx导出")):
- with gr.Row():
- ckpt_dir = gr.Textbox(label=i18n("RVC模型路径"), value="", interactive=True)
- with gr.Row():
- onnx_dir = gr.Textbox(
- label=i18n("Onnx输出路径"), value="", interactive=True
- )
- with gr.Row():
- infoOnnx = gr.Label(label="info")
- with gr.Row():
- butOnnx = gr.Button(i18n("导出Onnx模型"), variant="primary")
- butOnnx.click(export_onnx, [ckpt_dir, onnx_dir], infoOnnx)
-
- tab_faq = i18n("常见问题解答")
- with gr.TabItem(tab_faq):
- try:
- if tab_faq == "常见问题解答":
- with open("docs/faq.md", "r", encoding="utf8") as f:
- info = f.read()
- else:
- with open("docs/faq_en.md", "r", encoding="utf8") as f:
- info = f.read()
- gr.Markdown(value=info)
- except:
- gr.Markdown(traceback.format_exc())
-
- # with gr.TabItem(i18n("招募音高曲线前端编辑器")):
- # gr.Markdown(value=i18n("加开发群联系我xxxxx"))
- # with gr.TabItem(i18n("点击查看交流、问题反馈群号")):
- # gr.Markdown(value=i18n("xxxxx"))
-
- if config.iscolab:
- app.queue(concurrency_count=511, max_size=1022).launch(share=True)
- else:
- app.queue(concurrency_count=511, max_size=1022).launch(
- server_name="0.0.0.0",
- inbrowser=not config.noautoopen,
- server_port=config.listen_port,
- quiet=True,
- )
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/max_iou_assigner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/max_iou_assigner.py
deleted file mode 100644
index 5cf4c4b4b450f87dfb99c3d33d8ed83d3e5cfcb3..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/max_iou_assigner.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..iou_calculators import build_iou_calculator
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-
-@BBOX_ASSIGNERS.register_module()
-class MaxIoUAssigner(BaseAssigner):
- """Assign a corresponding gt bbox or background to each bbox.
-
- Each proposal will be assigned `-1` or a semi-positive integer
- indicating the ground truth index.
-
- - -1: negative sample, no assigned gt
- - semi-positive integer: positive sample, index (0-based) of assigned gt
-
- Args:
- pos_iou_thr (float): IoU threshold for positive bboxes.
- neg_iou_thr (float or tuple): IoU threshold for negative bboxes.
- min_pos_iou (float): Minimum iou for a bbox to be considered as a
- positive bbox. Positive samples can have smaller IoU than
- pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
- gt_max_assign_all (bool): Whether to assign all bboxes with the same
- highest overlap with some gt to that gt.
- ignore_iof_thr (float): IoF threshold for ignoring bboxes (if
- `gt_bboxes_ignore` is specified). Negative values mean not
- ignoring any bboxes.
- ignore_wrt_candidates (bool): Whether to compute the iof between
- `bboxes` and `gt_bboxes_ignore`, or the contrary.
- match_low_quality (bool): Whether to allow low quality matches. This is
- usually allowed for RPN and single stage detectors, but not allowed
- in the second stage. Details are demonstrated in Step 4.
- gpu_assign_thr (int): The upper bound of the number of GT for GPU
- assign. When the number of gt is above this threshold, will assign
- on CPU device. Negative values mean not assign on CPU.
- """
-
- def __init__(self,
- pos_iou_thr,
- neg_iou_thr,
- min_pos_iou=.0,
- gt_max_assign_all=True,
- ignore_iof_thr=-1,
- ignore_wrt_candidates=True,
- match_low_quality=True,
- gpu_assign_thr=-1,
- iou_calculator=dict(type='BboxOverlaps2D')):
- self.pos_iou_thr = pos_iou_thr
- self.neg_iou_thr = neg_iou_thr
- self.min_pos_iou = min_pos_iou
- self.gt_max_assign_all = gt_max_assign_all
- self.ignore_iof_thr = ignore_iof_thr
- self.ignore_wrt_candidates = ignore_wrt_candidates
- self.gpu_assign_thr = gpu_assign_thr
- self.match_low_quality = match_low_quality
- self.iou_calculator = build_iou_calculator(iou_calculator)
-
- def assign(self, bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
- """Assign gt to bboxes.
-
- This method assign a gt bbox to every bbox (proposal/anchor), each bbox
- will be assigned with -1, or a semi-positive number. -1 means negative
- sample, semi-positive number is the index (0-based) of assigned gt.
- The assignment is done in following steps, the order matters.
-
- 1. assign every bbox to the background
- 2. assign proposals whose iou with all gts < neg_iou_thr to 0
- 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr,
- assign it to that gt
- 4. for each gt bbox, assign its nearest proposals (may be more than
- one) to itself
-
- Args:
- bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4).
- gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
-
- Example:
- >>> self = MaxIoUAssigner(0.5, 0.5)
- >>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
- >>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
- >>> assign_result = self.assign(bboxes, gt_bboxes)
- >>> expected_gt_inds = torch.LongTensor([1, 0])
- >>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
- """
- assign_on_cpu = True if (self.gpu_assign_thr > 0) and (
- gt_bboxes.shape[0] > self.gpu_assign_thr) else False
- # compute overlap and assign gt on CPU when number of GT is large
- if assign_on_cpu:
- device = bboxes.device
- bboxes = bboxes.cpu()
- gt_bboxes = gt_bboxes.cpu()
- if gt_bboxes_ignore is not None:
- gt_bboxes_ignore = gt_bboxes_ignore.cpu()
- if gt_labels is not None:
- gt_labels = gt_labels.cpu()
-
- overlaps = self.iou_calculator(gt_bboxes, bboxes)
-
- if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None
- and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0):
- if self.ignore_wrt_candidates:
- ignore_overlaps = self.iou_calculator(
- bboxes, gt_bboxes_ignore, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=1)
- else:
- ignore_overlaps = self.iou_calculator(
- gt_bboxes_ignore, bboxes, mode='iof')
- ignore_max_overlaps, _ = ignore_overlaps.max(dim=0)
- overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1
-
- assign_result = self.assign_wrt_overlaps(overlaps, gt_labels)
- if assign_on_cpu:
- assign_result.gt_inds = assign_result.gt_inds.to(device)
- assign_result.max_overlaps = assign_result.max_overlaps.to(device)
- if assign_result.labels is not None:
- assign_result.labels = assign_result.labels.to(device)
- return assign_result
-
- def assign_wrt_overlaps(self, overlaps, gt_labels=None):
- """Assign w.r.t. the overlaps of bboxes with gts.
-
- Args:
- overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes,
- shape(k, n).
- gt_labels (Tensor, optional): Labels of k gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
- """
- num_gts, num_bboxes = overlaps.size(0), overlaps.size(1)
-
- # 1. assign -1 by default
- assigned_gt_inds = overlaps.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
-
- if num_gts == 0 or num_bboxes == 0:
- # No ground truth or boxes, return empty assignment
- max_overlaps = overlaps.new_zeros((num_bboxes, ))
- if num_gts == 0:
- # No truth, assign everything to background
- assigned_gt_inds[:] = 0
- if gt_labels is None:
- assigned_labels = None
- else:
- assigned_labels = overlaps.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- return AssignResult(
- num_gts,
- assigned_gt_inds,
- max_overlaps,
- labels=assigned_labels)
-
- # for each anchor, which gt best overlaps with it
- # for each anchor, the max iou of all gts
- max_overlaps, argmax_overlaps = overlaps.max(dim=0)
- # for each gt, which anchor best overlaps with it
- # for each gt, the max iou of all proposals
- gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1)
-
- # 2. assign negative: below
- # the negative inds are set to be 0
- if isinstance(self.neg_iou_thr, float):
- assigned_gt_inds[(max_overlaps >= 0)
- & (max_overlaps < self.neg_iou_thr)] = 0
- elif isinstance(self.neg_iou_thr, tuple):
- assert len(self.neg_iou_thr) == 2
- assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0])
- & (max_overlaps < self.neg_iou_thr[1])] = 0
-
- # 3. assign positive: above positive IoU threshold
- pos_inds = max_overlaps >= self.pos_iou_thr
- assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1
-
- if self.match_low_quality:
- # Low-quality matching will overwrite the assigned_gt_inds assigned
- # in Step 3. Thus, the assigned gt might not be the best one for
- # prediction.
- # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2,
- # bbox 1 will be assigned as the best target for bbox A in step 3.
- # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's
- # assigned_gt_inds will be overwritten to be gt bbox 2.
- # This might be the reason that it is not used in ROI Heads.
- for i in range(num_gts):
- if gt_max_overlaps[i] >= self.min_pos_iou:
- if self.gt_max_assign_all:
- max_iou_inds = overlaps[i, :] == gt_max_overlaps[i]
- assigned_gt_inds[max_iou_inds] = i + 1
- else:
- assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1
-
- if gt_labels is not None:
- assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1)
- pos_inds = torch.nonzero(
- assigned_gt_inds > 0, as_tuple=False).squeeze()
- if pos_inds.numel() > 0:
- assigned_labels[pos_inds] = gt_labels[
- assigned_gt_inds[pos_inds] - 1]
- else:
- assigned_labels = None
-
- return AssignResult(
- num_gts, assigned_gt_inds, max_overlaps, labels=assigned_labels)
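-
-
-# A small numeric sketch of the Step 4 overwrite discussed above (values are
-# illustrative). Proposal 0 overlaps gt 1 best, but it is also gt 2's best
-# proposal, so match_low_quality reassigns it to gt 2.
-def _low_quality_match_sketch():
-    assigner = MaxIoUAssigner(pos_iou_thr=0.5, neg_iou_thr=0.5)
-    overlaps = torch.tensor([[0.9, 0.2, 0.1],   # gt 1 vs the three proposals
-                             [0.8, 0.3, 0.0]])  # gt 2 vs the three proposals
-    result = assigner.assign_wrt_overlaps(overlaps)
-    assert result.gt_inds.tolist() == [2, 0, 0]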
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/logger.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/logger.py
deleted file mode 100644
index 6fc6e6b438a73e857ba6f173594985807cb88b30..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/utils/logger.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import logging
-
-from mmcv.utils import get_logger
-
-
-def get_root_logger(log_file=None, log_level=logging.INFO):
- """Get root logger.
-
- Args:
- log_file (str, optional): File path of log. Defaults to None.
- log_level (int, optional): The level of logger.
- Defaults to logging.INFO.
-
- Returns:
- :obj:`logging.Logger`: The obtained logger
- """
- logger = get_logger(name='mmdet', log_file=log_file, log_level=log_level)
-
- return logger
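-
-
-# A minimal usage sketch; the log file argument is optional and illustrative:
-def _root_logger_sketch():
-    logger = get_root_logger(log_file=None, log_level=logging.INFO)
-    logger.info('mmdet root logger ready')  # named 'mmdet', shared repo-wide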
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/base_dense_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/base_dense_head.py
deleted file mode 100644
index de11e4a2197b1dfe241ce7a66daa1907a8fc5661..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/base_dense_head.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-import torch.nn as nn
-
-
-class BaseDenseHead(nn.Module, metaclass=ABCMeta):
- """Base class for DenseHeads."""
-
- def __init__(self):
- super(BaseDenseHead, self).__init__()
-
- @abstractmethod
- def loss(self, **kwargs):
- """Compute losses of the head."""
- pass
-
- @abstractmethod
- def get_bboxes(self, **kwargs):
- """Transform network output for a batch into bbox predictions."""
- pass
-
- def forward_train(self,
- x,
- img_metas,
- gt_bboxes,
- gt_labels=None,
- gt_bboxes_ignore=None,
- proposal_cfg=None,
- **kwargs):
- """
- Args:
- x (list[Tensor]): Features from FPN.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes (Tensor): Ground truth bboxes of the image,
- shape (num_gts, 4).
- gt_labels (Tensor): Ground truth labels of each box,
- shape (num_gts,).
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4).
- proposal_cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used
-
- Returns:
- tuple:
- losses: (dict[str, Tensor]): A dictionary of loss components.
- proposal_list (list[Tensor]): Proposals of each image.
- """
- outs = self(x)
- if gt_labels is None:
- loss_inputs = outs + (gt_bboxes, img_metas)
- else:
- loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
- losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
- if proposal_cfg is None:
- return losses
- else:
- proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg)
- return losses, proposal_list
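-
-
-# A skeletal subclass, as a sketch of the contract forward_train relies on for
-# the gt_labels=None (RPN-like) path: forward() returns a tuple of outputs,
-# loss() consumes them together with the ground truth, and get_bboxes() turns
-# them into proposals. The 1x1 conv and the loss below are placeholders, not a
-# real detector head.
-class _ToyDenseHead(BaseDenseHead):
-    def __init__(self, in_channels=256, num_anchors=9):
-        super(_ToyDenseHead, self).__init__()
-        self.conv = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)
-
-    def forward(self, feats):
-        return ([self.conv(x) for x in feats],)
-
-    def loss(self, bbox_preds, gt_bboxes, img_metas, gt_bboxes_ignore=None):
-        return dict(loss_bbox=sum(p.abs().mean() for p in bbox_preds))
-
-    def get_bboxes(self, bbox_preds, img_metas, cfg=None):
-        return [p.flatten(1).t() for p in bbox_preds]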
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test.sh b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test.sh
deleted file mode 100644
index d9a85e7a0d3b7c96b060f473d41254b37a382fcb..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_base/test.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/usr/bin/env bash
-
-work_path=$(dirname $0)
-PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \
-python -m torch.distributed.launch --nproc_per_node=8 \
- tools/test.py ${work_path}/test_config_h32.py \
- ${work_path}/ckpt/latest.pth \
- --launcher pytorch \
- --eval mIoU \
- 2>&1 | tee -a ${work_path}/log.txt
diff --git a/spaces/SLAYEROFALL3050/AudioGenerator/README.md b/spaces/SLAYEROFALL3050/AudioGenerator/README.md
deleted file mode 100644
index 27bbf7a5185ea121bbfb1e91ed2e49f15ff816cb..0000000000000000000000000000000000000000
--- a/spaces/SLAYEROFALL3050/AudioGenerator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Music Generation using ML
-emoji: 🧐
-colorFrom: indigo
-colorTo: green
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/SE_module.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/SE_module.py
deleted file mode 100644
index f370fd4e1fb777306e37f4a7c7be99bd0fbca64a..0000000000000000000000000000000000000000
--- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/models/layers/SE_module.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# -----------------------------------------------------
-# Copyright (c) Shanghai Jiao Tong University. All rights reserved.
-# Written by Jiefeng Li (jeff.lee.sjtu@gmail.com)
-# -----------------------------------------------------
-
-from torch import nn
-
-
-class SELayer(nn.Module):
- def __init__(self, channel, reduction=1):
- super(SELayer, self).__init__()
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- self.fc = nn.Sequential(
- nn.Linear(channel, channel // reduction),
- nn.ReLU(inplace=True),
- nn.Linear(channel // reduction, channel),
- nn.Sigmoid()
- )
-
- def forward(self, x):
- b, c, _, _ = x.size()
- y = self.avg_pool(x).view(b, c)
- y = self.fc(y).view(b, c, 1, 1)
- return x * y
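-
-
-# A quick shape sketch (sizes are arbitrary): the layer squeezes each channel
-# to a scalar, learns a per-channel gate, and rescales the input.
-def _se_layer_sketch():
-    import torch
-    se = SELayer(channel=64, reduction=4)
-    x = torch.randn(2, 64, 32, 32)   # (batch, channels, height, width)
-    assert se(x).shape == x.shape    # same shape, channels re-weighted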
diff --git a/spaces/Sourabh2/detectron2-segmentation/README.md b/spaces/Sourabh2/detectron2-segmentation/README.md
deleted file mode 100644
index 8f98c487ef97d8e279478689da4379750919feda..0000000000000000000000000000000000000000
--- a/spaces/Sourabh2/detectron2-segmentation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Detectron2 Segmentation
-emoji: 🐠
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SpacesExamples/secret-example/main.py b/spaces/SpacesExamples/secret-example/main.py
deleted file mode 100644
index 5ef99b7d3a7905e030c415fdd73b9166ee88a753..0000000000000000000000000000000000000000
--- a/spaces/SpacesExamples/secret-example/main.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from typing import Union
-
-from fastapi import FastAPI
-import os
-
-app = FastAPI()
-
-
-@app.get("/")
-def read_root():
- return {"Hello EXAMPLE": os.environ.get("EXAMPLE"),
- "Hello SECRET_EXAMPLE": os.environ.get("SECRET_EXAMPLE")
- }
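-
-
-# To try it locally (module name and port are assumptions), run:
-#   uvicorn main:app --host 0.0.0.0 --port 7860
-# GET / then echoes EXAMPLE and SECRET_EXAMPLE, or null for unset variables.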
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_lexers.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_lexers.py
deleted file mode 100644
index 000b8fe6fd98c4017d5be56448cad68798b087a4..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/tests/test_lexers.py
+++ /dev/null
@@ -1,184 +0,0 @@
-"""Test lexers module"""
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-from unittest import TestCase
-from pygments import __version__ as pygments_version
-from pygments.token import Token
-from pygments.lexers import BashLexer
-
-from .. import lexers
-
-pyg214 = tuple(int(x) for x in pygments_version.split(".")[:2]) >= (2, 14)
-
-
-class TestLexers(TestCase):
- """Collection of lexers tests"""
- def setUp(self):
- self.lexer = lexers.IPythonLexer()
- self.bash_lexer = BashLexer()
-
- def testIPythonLexer(self):
- fragment = '!echo $HOME\n'
- bash_tokens = [
- (Token.Operator, '!'),
- ]
- bash_tokens.extend(self.bash_lexer.get_tokens(fragment[1:]))
- ipylex_token = list(self.lexer.get_tokens(fragment))
- assert bash_tokens[:-1] == ipylex_token[:-1]
-
- fragment_2 = "!" + fragment
- tokens_2 = [
- (Token.Operator, '!!'),
- ] + bash_tokens[1:]
- assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1]
-
- fragment_2 = '\t %%!\n' + fragment[1:]
- tokens_2 = [
- (Token.Text, '\t '),
- (Token.Operator, '%%!'),
- (Token.Text, '\n'),
- ] + bash_tokens[1:]
- assert tokens_2 == list(self.lexer.get_tokens(fragment_2))
-
- fragment_2 = 'x = ' + fragment
- tokens_2 = [
- (Token.Name, 'x'),
- (Token.Text, ' '),
- (Token.Operator, '='),
- (Token.Text, ' '),
- ] + bash_tokens
- assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1]
-
- fragment_2 = 'x, = ' + fragment
- tokens_2 = [
- (Token.Name, 'x'),
- (Token.Punctuation, ','),
- (Token.Text, ' '),
- (Token.Operator, '='),
- (Token.Text, ' '),
- ] + bash_tokens
- assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1]
-
- fragment_2 = 'x, = %sx ' + fragment[1:]
- tokens_2 = [
- (Token.Name, 'x'),
- (Token.Punctuation, ','),
- (Token.Text, ' '),
- (Token.Operator, '='),
- (Token.Text, ' '),
- (Token.Operator, '%'),
- (Token.Keyword, 'sx'),
- (Token.Text, ' '),
- ] + bash_tokens[1:]
- if tokens_2[7] == (Token.Text, " ") and pyg214: # pygments 2.14+
- tokens_2[7] = (Token.Text.Whitespace, " ")
- assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1]
-
- fragment_2 = 'f = %R function () {}\n'
- tokens_2 = [
- (Token.Name, 'f'),
- (Token.Text, ' '),
- (Token.Operator, '='),
- (Token.Text, ' '),
- (Token.Operator, '%'),
- (Token.Keyword, 'R'),
- (Token.Text, ' function () {}\n'),
- ]
- assert tokens_2 == list(self.lexer.get_tokens(fragment_2))
-
- fragment_2 = '\t%%xyz\n$foo\n'
- tokens_2 = [
- (Token.Text, '\t'),
- (Token.Operator, '%%'),
- (Token.Keyword, 'xyz'),
- (Token.Text, '\n$foo\n'),
- ]
- assert tokens_2 == list(self.lexer.get_tokens(fragment_2))
-
- fragment_2 = '%system?\n'
- tokens_2 = [
- (Token.Operator, '%'),
- (Token.Keyword, 'system'),
- (Token.Operator, '?'),
- (Token.Text, '\n'),
- ]
- assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1]
-
- fragment_2 = 'x != y\n'
- tokens_2 = [
- (Token.Name, 'x'),
- (Token.Text, ' '),
- (Token.Operator, '!='),
- (Token.Text, ' '),
- (Token.Name, 'y'),
- (Token.Text, '\n'),
- ]
- assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1]
-
- fragment_2 = ' ?math.sin\n'
- tokens_2 = [
- (Token.Text, ' '),
- (Token.Operator, '?'),
- (Token.Text, 'math.sin'),
- (Token.Text, '\n'),
- ]
- assert tokens_2[:-1] == list(self.lexer.get_tokens(fragment_2))[:-1]
-
- fragment = ' *int*?\n'
- tokens = [
- (Token.Text, ' *int*'),
- (Token.Operator, '?'),
- (Token.Text, '\n'),
- ]
- assert tokens == list(self.lexer.get_tokens(fragment))
-
- fragment = '%%writefile -a foo.py\nif a == b:\n pass'
- tokens = [
- (Token.Operator, '%%writefile'),
- (Token.Text, ' -a foo.py\n'),
- (Token.Keyword, 'if'),
- (Token.Text, ' '),
- (Token.Name, 'a'),
- (Token.Text, ' '),
- (Token.Operator, '=='),
- (Token.Text, ' '),
- (Token.Name, 'b'),
- (Token.Punctuation, ':'),
- (Token.Text, '\n'),
- (Token.Text, ' '),
- (Token.Keyword, 'pass'),
- (Token.Text, '\n'),
- ]
- if tokens[10] == (Token.Text, "\n") and pyg214: # pygments 2.14+
- tokens[10] = (Token.Text.Whitespace, "\n")
- assert tokens[:-1] == list(self.lexer.get_tokens(fragment))[:-1]
-
- fragment = '%%timeit\nmath.sin(0)'
- tokens = [
- (Token.Operator, '%%timeit\n'),
- (Token.Name, 'math'),
- (Token.Operator, '.'),
- (Token.Name, 'sin'),
- (Token.Punctuation, '('),
- (Token.Literal.Number.Integer, '0'),
- (Token.Punctuation, ')'),
- (Token.Text, '\n'),
- ]
-
- fragment = '%%HTML\n<div>foo</div>\n'
- tokens = [
- (Token.Operator, '%%HTML'),
- (Token.Text, '<div>foo</div>\n'),
- ]
- assert tokens == list(self.lexer.get_tokens(fragment))
diff --git a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/utils.py b/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/utils.py
deleted file mode 100644
index 5f98aafadb83a9f341d6d9d3401c6c3101485b4e..0000000000000000000000000000000000000000
--- a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/utils.py
+++ /dev/null
@@ -1,356 +0,0 @@
-import os
-import glob
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logger = logging.getLogger(__name__)
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
- if (
- optimizer is not None
- and not skip_optimizer
- and checkpoint_dict["optimizer"] is not None
- ):
- optimizer.load_state_dict(checkpoint_dict["optimizer"])
- elif optimizer is None and not skip_optimizer:
- # else: when running inference from a resumed checkpoint, disable this branch and enable the one above
- new_opt_dict = optimizer.state_dict()
- new_opt_dict_params = new_opt_dict["param_groups"][0]["params"]
- new_opt_dict["param_groups"] = checkpoint_dict["optimizer"]["param_groups"]
- new_opt_dict["param_groups"][0]["params"] = new_opt_dict_params
- optimizer.load_state_dict(new_opt_dict)
-
- saved_state_dict = checkpoint_dict["model"]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
-
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- # assert "emb_g" not in k
- new_state_dict[k] = saved_state_dict[k]
- assert saved_state_dict[k].shape == v.shape, (
- saved_state_dict[k].shape,
- v.shape,
- )
- except:
- # For upgrading from the old version
- if "ja_bert_proj" in k:
- v = torch.zeros_like(v)
- logger.warn(
- f"Seems you are using the old version of the model, the {k} is automatically set to zero for backward compatibility"
- )
- else:
- logger.error(f"{k} is not in the checkpoint")
-
- new_state_dict[k] = v
-
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
-
- logger.info(
- "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration)
- )
-
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save(
- {
- "model": state_dict,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def summarize(
- writer,
- global_step,
- scalars={},
- histograms={},
- images={},
- audios={},
- audio_sampling_rate=22050,
-):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats="HWC")
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- return x
-
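-
-# Resuming from the newest generator checkpoint, as a sketch (the run
-# directory name is an illustrative assumption):
-def resume_sketch(net_g, optim_g):
-    ckpt = latest_checkpoint_path("./logs/mymodel", "G_*.pth")
-    return load_checkpoint(ckpt, net_g, optim_g, skip_optimizer=False)
-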
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(
- alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
- )
- fig.colorbar(im, ax=ax)
- xlabel = "Decoder timestep"
- if info is not None:
- xlabel += "\n\n" + info
- plt.xlabel(xlabel)
- plt.ylabel("Encoder timestep")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding="utf-8") as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-c",
- "--config",
- type=str,
- default="./configs/base.json",
- help="JSON file for configuration",
- )
- parser.add_argument("-m", "--model", type=str, required=True, help="Model name")
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r", encoding="utf-8") as f:
- data = f.read()
- with open(config_save_path, "w", encoding="utf-8") as f:
- f.write(data)
- else:
- with open(config_save_path, "r", vencoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def clean_checkpoints(path_to_models="logs/44k/", n_ckpts_to_keep=2, sort_by_time=True):
- """Freeing up space by deleting saved ckpts
-
- Arguments:
- path_to_models -- Path to the model directory
- n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth
- sort_by_time -- True -> chronologically delete ckpts
- False -> lexicographically delete ckpts
- """
- import re
-
- ckpts_files = [
- f
- for f in os.listdir(path_to_models)
- if os.path.isfile(os.path.join(path_to_models, f))
- ]
-
- def name_key(_f):
- return int(re.compile("._(\\d+)\\.pth").match(_f).group(1))
-
- def time_key(_f):
- return os.path.getmtime(os.path.join(path_to_models, _f))
-
- sort_key = time_key if sort_by_time else name_key
-
- def x_sorted(_x):
- return sorted(
- [f for f in ckpts_files if f.startswith(_x) and not f.endswith("_0.pth")],
- key=sort_key,
- )
-
- to_del = [
- os.path.join(path_to_models, fn)
- for fn in (x_sorted("G")[:-n_ckpts_to_keep] + x_sorted("D")[:-n_ckpts_to_keep])
- ]
-
- def del_info(fn):
- return logger.info(f".. Free up space by deleting ckpt {fn}")
-
- def del_routine(x):
- return [os.remove(x), del_info(x)]
-
- [del_routine(fn) for fn in to_del]
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r", encoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r", encoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn(
- "{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- )
- )
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn(
- "git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]
- )
- )
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
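-
-
-# A small usage sketch (keys are illustrative, not from a real config file):
-# nested dicts become nested HParams, so values read like attributes.
-def _hparams_sketch():
-    hps = HParams(train={"learning_rate": 2e-4}, model={"hidden_channels": 192})
-    assert hps.train.learning_rate == 2e-4
-    assert "model" in hps and hps["model"].hidden_channels == 192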
diff --git a/spaces/TM9450/Income_prediction/app.py b/spaces/TM9450/Income_prediction/app.py
deleted file mode 100644
index f5090dcf86d32f6ae9efc23f702c896faa946dea..0000000000000000000000000000000000000000
--- a/spaces/TM9450/Income_prediction/app.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import joblib
-import pandas as pd
-import streamlit as st
-
-
-EDU_DICT = {'Preschool': 1,
- '1st-4th': 2,
- '5th-6th': 3,
- '7th-8th': 4,
- '9th': 5,
- '10th': 6,
- '11th': 7,
- '12th': 8,
- 'HS-grad': 9,
- 'Some-college': 10,
- 'Assoc-voc': 11,
- 'Assoc-acdm': 12,
- 'Bachelors': 13,
- 'Masters': 14,
- 'Prof-school': 15,
- 'Doctorate': 16
- }
-
-model = joblib.load('model.joblib')
-unique_values = joblib.load('unique_values.joblib')
-
-unique_class = unique_values["workclass"]
-unique_education = unique_values["education"]
-unique_marital_status = unique_values["marital.status"]
-unique_relationship = unique_values["relationship"]
-unique_occupation = unique_values["occupation"]
-unique_sex = unique_values["sex"]
-unique_race = unique_values["race"]
-unique_country = unique_values["native.country"]
-
-def main():
- st.title("Adult Income")
-
- with st.form("questionaire"):
- age = st.slider("Age", min_value=10, max_value=100)
- workclass = st.selectbox("Workclass", options=unique_class)
- education = st.selectbox("Education", options=unique_education)
- Marital_Status = st.selectbox("Marital_Status", options=unique_marital_status)
- occupation = st.selectbox("Occupation", options=unique_occupation)
- relationship = st.selectbox("Relationship", options=unique_relationship)
- race = st.selectbox("Race", options=unique_race)
- sex = st.selectbox("Sex", options=unique_sex)
- hours_per_week = st.slider("Hours_per_week", min_value=1, max_value=100)
- native_country = st.selectbox("Native_country", options=unique_country)
-
- # clicked==True only when the button is clicked
- clicked = st.form_submit_button("Predict income")
- if clicked:
- result=model.predict(pd.DataFrame({"age": [age],
- "workclass": [workclass],
- "education": [EDU_DICT[education]],
- "marital.status": [Marital_Status],
- "occupation": [occupation],
- "relationship": [relationship],
- "race": [race],
- "sex": [sex],
- "hours.per.week": [hours_per_week],
- "native.country": [native_country]}))
- # Show prediction
- result = '>50K' if result[0] == 1 else '<=50K'
- st.success("Your predicted income is "+result)
-
-# Run main()
-#บางคนเขาไม่อยากรันเลยใส่if ไว้
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/archive_util.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/archive_util.py
deleted file mode 100644
index 7f9e1e00ccdb0e67a5601db4707a1cfa46cbc96f..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/archive_util.py
+++ /dev/null
@@ -1,280 +0,0 @@
-"""distutils.archive_util
-
-Utility functions for creating archive files (tarballs, zip files,
-that sort of thing)."""
-
-import os
-from warnings import warn
-import sys
-
-try:
- import zipfile
-except ImportError:
- zipfile = None
-
-
-from .errors import DistutilsExecError
-from .spawn import spawn
-from .dir_util import mkpath
-from ._log import log
-
-try:
- from pwd import getpwnam
-except ImportError:
- getpwnam = None
-
-try:
- from grp import getgrnam
-except ImportError:
- getgrnam = None
-
-
-def _get_gid(name):
- """Returns a gid, given a group name."""
- if getgrnam is None or name is None:
- return None
- try:
- result = getgrnam(name)
- except KeyError:
- result = None
- if result is not None:
- return result[2]
- return None
-
-
-def _get_uid(name):
- """Returns an uid, given a user name."""
- if getpwnam is None or name is None:
- return None
- try:
- result = getpwnam(name)
- except KeyError:
- result = None
- if result is not None:
- return result[2]
- return None
-
-
-def make_tarball(
- base_name, base_dir, compress="gzip", verbose=0, dry_run=0, owner=None, group=None
-):
- """Create a (possibly compressed) tar file from all the files under
- 'base_dir'.
-
- 'compress' must be "gzip" (the default), "bzip2", "xz", "compress", or
- None. ("compress" will be deprecated in Python 3.2)
-
- 'owner' and 'group' can be used to define an owner and a group for the
- archive that is being built. If not provided, the current owner and group
- will be used.
-
- The output tar file will be named 'base_dir' + ".tar", possibly plus
- the appropriate compression extension (".gz", ".bz2", ".xz" or ".Z").
-
- Returns the output filename.
- """
- tar_compression = {
- 'gzip': 'gz',
- 'bzip2': 'bz2',
- 'xz': 'xz',
- None: '',
- 'compress': '',
- }
- compress_ext = {'gzip': '.gz', 'bzip2': '.bz2', 'xz': '.xz', 'compress': '.Z'}
-
- # flags for compression program, each element of list will be an argument
- if compress is not None and compress not in compress_ext.keys():
- raise ValueError(
- "bad value for 'compress': must be None, 'gzip', 'bzip2', "
- "'xz' or 'compress'"
- )
-
- archive_name = base_name + '.tar'
- if compress != 'compress':
- archive_name += compress_ext.get(compress, '')
-
- mkpath(os.path.dirname(archive_name), dry_run=dry_run)
-
- # creating the tarball
- import tarfile # late import so Python build itself doesn't break
-
- log.info('Creating tar archive')
-
- uid = _get_uid(owner)
- gid = _get_gid(group)
-
- def _set_uid_gid(tarinfo):
- if gid is not None:
- tarinfo.gid = gid
- tarinfo.gname = group
- if uid is not None:
- tarinfo.uid = uid
- tarinfo.uname = owner
- return tarinfo
-
- if not dry_run:
- tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress])
- try:
- tar.add(base_dir, filter=_set_uid_gid)
- finally:
- tar.close()
-
- # compression using `compress`
- if compress == 'compress':
- warn("'compress' is deprecated.", DeprecationWarning)
- # the option varies depending on the platform
- compressed_name = archive_name + compress_ext[compress]
- if sys.platform == 'win32':
- cmd = [compress, archive_name, compressed_name]
- else:
- cmd = [compress, '-f', archive_name]
- spawn(cmd, dry_run=dry_run)
- return compressed_name
-
- return archive_name
-
-
-def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): # noqa: C901
- """Create a zip file from all the files under 'base_dir'.
-
- The output zip file will be named 'base_name' + ".zip". Uses either the
- "zipfile" Python module (if available) or the InfoZIP "zip" utility
- (if installed and found on the default search path). If neither tool is
- available, raises DistutilsExecError. Returns the name of the output zip
- file.
- """
- zip_filename = base_name + ".zip"
- mkpath(os.path.dirname(zip_filename), dry_run=dry_run)
-
- # If zipfile module is not available, try spawning an external
- # 'zip' command.
- if zipfile is None:
- if verbose:
- zipoptions = "-r"
- else:
- zipoptions = "-rq"
-
- try:
- spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run)
- except DistutilsExecError:
- # XXX really should distinguish between "couldn't find
- # external 'zip' command" and "zip failed".
- raise DistutilsExecError(
- (
- "unable to create zip file '%s': "
- "could neither import the 'zipfile' module nor "
- "find a standalone zip utility"
- )
- % zip_filename
- )
-
- else:
- log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir)
-
- if not dry_run:
- try:
- zip = zipfile.ZipFile(
- zip_filename, "w", compression=zipfile.ZIP_DEFLATED
- )
- except RuntimeError:
- zip = zipfile.ZipFile(zip_filename, "w", compression=zipfile.ZIP_STORED)
-
- with zip:
- if base_dir != os.curdir:
- path = os.path.normpath(os.path.join(base_dir, ''))
- zip.write(path, path)
- log.info("adding '%s'", path)
- for dirpath, dirnames, filenames in os.walk(base_dir):
- for name in dirnames:
- path = os.path.normpath(os.path.join(dirpath, name, ''))
- zip.write(path, path)
- log.info("adding '%s'", path)
- for name in filenames:
- path = os.path.normpath(os.path.join(dirpath, name))
- if os.path.isfile(path):
- zip.write(path, path)
- log.info("adding '%s'", path)
-
- return zip_filename
-
-
-ARCHIVE_FORMATS = {
- 'gztar': (make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"),
- 'bztar': (make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"),
- 'xztar': (make_tarball, [('compress', 'xz')], "xz'ed tar-file"),
- 'ztar': (make_tarball, [('compress', 'compress')], "compressed tar file"),
- 'tar': (make_tarball, [('compress', None)], "uncompressed tar file"),
- 'zip': (make_zipfile, [], "ZIP file"),
-}
-
-
-def check_archive_formats(formats):
- """Returns the first format from the 'format' list that is unknown.
-
- If all formats are known, returns None
- """
- for format in formats:
- if format not in ARCHIVE_FORMATS:
- return format
- return None
-
-
-def make_archive(
- base_name,
- format,
- root_dir=None,
- base_dir=None,
- verbose=0,
- dry_run=0,
- owner=None,
- group=None,
-):
- """Create an archive file (eg. zip or tar).
-
- 'base_name' is the name of the file to create, minus any format-specific
- extension; 'format' is the archive format: one of "zip", "tar", "gztar",
- "bztar", "xztar", or "ztar".
-
- 'root_dir' is a directory that will be the root directory of the
- archive; ie. we typically chdir into 'root_dir' before creating the
- archive. 'base_dir' is the directory where we start archiving from;
- ie. 'base_dir' will be the common prefix of all files and
- directories in the archive. 'root_dir' and 'base_dir' both default
- to the current directory. Returns the name of the archive file.
-
- 'owner' and 'group' are used when creating a tar archive. By default,
- uses the current owner and group.
- """
- save_cwd = os.getcwd()
- if root_dir is not None:
- log.debug("changing into '%s'", root_dir)
- base_name = os.path.abspath(base_name)
- if not dry_run:
- os.chdir(root_dir)
-
- if base_dir is None:
- base_dir = os.curdir
-
- kwargs = {'dry_run': dry_run}
-
- try:
- format_info = ARCHIVE_FORMATS[format]
- except KeyError:
- raise ValueError("unknown archive format '%s'" % format)
-
- func = format_info[0]
- for arg, val in format_info[1]:
- kwargs[arg] = val
-
- if format != 'zip':
- kwargs['owner'] = owner
- kwargs['group'] = group
-
- try:
- filename = func(base_name, base_dir, **kwargs)
- finally:
- if root_dir is not None:
- log.debug("changing back to '%s'", save_cwd)
- os.chdir(save_cwd)
-
- return filename
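The `make_archive` helper above is the module's public entry point; its docstring describes the `format`, `root_dir`, and `base_dir` arguments. A minimal, self-contained sketch of the same call pattern, using the standard library's `shutil.make_archive` (which mirrors this distutils helper) and a hypothetical `my_project` directory:

```python
# Sketch of the make_archive() call pattern described in the docstring above,
# using the standard library's shutil.make_archive, which mirrors the
# distutils helper. 'my_project' is a hypothetical directory name.
import shutil

archive_path = shutil.make_archive(
    base_name="release",    # output name without the format-specific extension
    format="gztar",         # also accepts 'zip', 'tar', 'bztar', 'xztar'
    root_dir=".",           # directory to chdir into before archiving
    base_dir="my_project",  # common prefix of every path stored in the archive
)
print(archive_path)         # e.g. 'release.tar.gz'
```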
diff --git a/spaces/Tetel/chat/SydneyGPT/SydneyGPTUtils.py b/spaces/Tetel/chat/SydneyGPT/SydneyGPTUtils.py
deleted file mode 100644
index 4c328e9390fceca307217c15aed13f1285f5eb6f..0000000000000000000000000000000000000000
--- a/spaces/Tetel/chat/SydneyGPT/SydneyGPTUtils.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from SydneyGPT.SydneyGPT import Chatbot
-try:
- import EdgeGPT.EdgeGPT as EdgeGPT_module
- from EdgeGPT.EdgeUtils import Query as BaseQuery
-except ImportError:
- import EdgeGPT as EdgeGPT_module
- from EdgeUtils import Query as BaseQuery
-
-
-create_method = EdgeGPT_module.Chatbot.create
-
-
-async def new_create(*args, **kwargs):
- monkey_create = EdgeGPT_module.Chatbot.create
- try:
- EdgeGPT_module.Chatbot.create = create_method
- gpt_bot_create = Chatbot.create(*args, **kwargs)
- return await gpt_bot_create
- finally:
- EdgeGPT_module.Chatbot.create = monkey_create
-
-
-EdgeGPT_module.Chatbot.create = staticmethod(new_create)
-
-
-class Query(BaseQuery):
- pass
-
diff --git a/spaces/ThomasSimonini/ML-Agents-SnowballTarget/README.md b/spaces/ThomasSimonini/ML-Agents-SnowballTarget/README.md
deleted file mode 100644
index 3bffc4f8f3d9dbf8ba17faac41a1927c649de599..0000000000000000000000000000000000000000
--- a/spaces/ThomasSimonini/ML-Agents-SnowballTarget/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: ML Agents SnowballTarget
-emoji: ❄️
-colorFrom: red
-colorTo: white
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/VickyKira/NASAGPT/client/js/sidebar-toggler.js b/spaces/VickyKira/NASAGPT/client/js/sidebar-toggler.js
deleted file mode 100644
index b23f94e3bfba5bac53432e1b557765736dabbab4..0000000000000000000000000000000000000000
--- a/spaces/VickyKira/NASAGPT/client/js/sidebar-toggler.js
+++ /dev/null
@@ -1,34 +0,0 @@
-const sidebar = document.querySelector(".sidebar");
-const menuButton = document.querySelector(".menu-button");
-
-function toggleSidebar(event) {
- if (sidebar.classList.contains("shown")) {
- hideSidebar(event.target);
- } else {
- showSidebar(event.target);
- }
- window.scrollTo(0, 0);
-}
-
-function showSidebar(target) {
- sidebar.classList.add("shown");
- target.classList.add("rotated");
- document.body.style.overflow = "hidden";
-}
-
-function hideSidebar(target) {
- sidebar.classList.remove("shown");
- target.classList.remove("rotated");
- document.body.style.overflow = "auto";
-}
-
-menuButton.addEventListener("click", toggleSidebar);
-
-document.body.addEventListener('click', function(event) {
- if (event.target.matches('.conversation-title')) {
- const menuButtonStyle = window.getComputedStyle(menuButton);
- if (menuButtonStyle.display !== 'none') {
- hideSidebar(menuButton);
- }
- }
-});
diff --git a/spaces/Vignesh2496/project/app.py b/spaces/Vignesh2496/project/app.py
deleted file mode 100644
index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000
--- a/spaces/Vignesh2496/project/app.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import os
-import re
-import requests
-import json
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY')
-PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID')
-
-PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID')
-play_ht_api_get_audio_url = "https://play.ht/api/v2/tts"
-
-
-template = """You are a helpful assistant to answer user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-headers = {
- "accept": "text/event-stream",
- "content-type": "application/json",
- "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY,
- "X-USER-ID": PLAY_HT_USER_ID
-}
-
-
-def get_payload(text):
- return {
- "text": text,
- "voice": PLAY_HT_VOICE_ID,
- "quality": "medium",
- "output_format": "mp3",
- "speed": 1,
- "sample_rate": 24000,
- "seed": None,
- "temperature": None
- }
-
-def get_generated_audio(text):
- payload = get_payload(text)
- generated_response = {}
- try:
- response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers)
- response.raise_for_status()
- generated_response["type"]= 'SUCCESS'
- generated_response["response"] = response.text
- except requests.exceptions.RequestException as e:
- generated_response["type"]= 'ERROR'
- try:
- response_text = json.loads(response.text)
- if response_text['error_message']:
- generated_response["response"] = response_text['error_message']
- else:
- generated_response["response"] = response.text
- except Exception as e:
- generated_response["response"] = response.text
- except Exception as e:
- generated_response["type"]= 'ERROR'
- generated_response["response"] = response.text
- return generated_response
-
-def extract_urls(text):
- # Define the regex pattern for URLs
- url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*'
-
- # Find all occurrences of URLs in the text
- urls = re.findall(url_pattern, text)
-
- return urls
-
-def get_audio_reply_for_question(text):
- generated_audio_event = get_generated_audio(text)
- # get_generated_audio returns the events as a string; extract the audio URL from it
- final_response = {
- "audio_url": '',
- "message": ''
- }
- if generated_audio_event["type"] == 'SUCCESS':
- audio_urls = extract_urls(generated_audio_event["response"])
- if len(audio_urls) == 0:
- final_response['message'] = "No audio file link found in generated event"
- else:
- final_response['audio_url'] = audio_urls[-1]
- else:
- final_response['message'] = generated_audio_event['response']
- return final_response
-
-def download_url(url):
- try:
- # Send a GET request to the URL to fetch the content
- final_response = {
- 'content':'',
- 'error':''
- }
- response = requests.get(url)
- # Check if the request was successful (status code 200)
- if response.status_code == 200:
- final_response['content'] = response.content
- else:
- final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}"
- except Exception as e:
- final_response['error'] = f"Failed to download the URL. Error: {e}"
- return final_response
-
-def get_filename_from_url(url):
- # Use os.path.basename() to extract the file name from the URL
- file_name = os.path.basename(url)
- return file_name
-
-def get_text_response(user_message):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-def get_text_response_and_audio_response(user_message):
- response = get_text_response(user_message) # Getting the reply from Open AI
- audio_reply_for_question_response = get_audio_reply_for_question(response)
- final_response = {
- 'output_file_path': '',
- 'message':''
- }
- audio_url = audio_reply_for_question_response['audio_url']
- if audio_url:
- output_file_path=get_filename_from_url(audio_url)
- download_url_response = download_url(audio_url)
- audio_content = download_url_response['content']
- if audio_content:
- with open(output_file_path, "wb") as audio_file:
- audio_file.write(audio_content)
- final_response['output_file_path'] = output_file_path
- else:
- final_response['message'] = download_url_response['error']
- else:
- final_response['message'] = audio_reply_for_question_response['message']
- return final_response
-
-def chat_bot_response(message, history):
- text_and_audio_response = get_text_response_and_audio_response(message)
- output_file_path = text_and_audio_response['output_file_path']
- if output_file_path:
- return (text_and_audio_response['output_file_path'],)
- else:
- return text_and_audio_response['message']
-
-demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"])
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/Wauplin/space_to_dataset_saver/README.md b/spaces/Wauplin/space_to_dataset_saver/README.md
deleted file mode 100644
index ec4034f28c28dad0cefab0dfc8eda5340aeb4f04..0000000000000000000000000000000000000000
--- a/spaces/Wauplin/space_to_dataset_saver/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Space to Dataset Saver
-emoji: 🌍
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
----
-
- Demo of saving data from a Space to a Dataset. The goal is to provide reusable snippets of code.
-
-- Documentation: https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads
-- Space: https://huggingface.co/spaces/Wauplin/space_to_dataset_saver/
-- JSON dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-json
-- Image dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image
-- Image (zipped) dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image-zip
\ No newline at end of file
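The scheduled-uploads guide linked above is the pattern this Space demonstrates. A minimal sketch, assuming a recent `huggingface_hub` release that provides `CommitScheduler` and write access to a hypothetical dataset repo `your-username/example-dataset`:

```python
# Minimal scheduled-upload sketch (assumes huggingface_hub with CommitScheduler
# and a hypothetical dataset repo 'your-username/example-dataset').
import json
from pathlib import Path
from uuid import uuid4

from huggingface_hub import CommitScheduler

data_dir = Path("json_dataset")
data_dir.mkdir(exist_ok=True)
data_file = data_dir / "train.json"

# Commit the local folder to the dataset repo every 5 minutes in the background.
scheduler = CommitScheduler(
    repo_id="your-username/example-dataset",
    repo_type="dataset",
    folder_path=data_dir,
    path_in_repo="data",
    every=5,  # minutes
)

def save_record(record: dict) -> None:
    # Hold the scheduler's lock so a commit never captures a half-written file.
    with scheduler.lock:
        with data_file.open("a") as f:
            f.write(json.dumps({"id": str(uuid4()), **record}) + "\n")

save_record({"input": "hello", "output": "world"})
```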
diff --git a/spaces/Wayben/ChatGPT/assets/Kelpy-Codos.js b/spaces/Wayben/ChatGPT/assets/Kelpy-Codos.js
deleted file mode 100644
index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000
--- a/spaces/Wayben/ChatGPT/assets/Kelpy-Codos.js
+++ /dev/null
@@ -1,76 +0,0 @@
-// ==UserScript==
-// @name Kelpy Codos
-// @namespace https://github.com/Keldos-Li/Kelpy-Codos
-// @version 1.0.5
-// @author Keldos; https://keldos.me/
-// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially.
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22)
-// @license GPL-3.0
-// @grant none
-// ==/UserScript==
-
-(function () {
- 'use strict';
-
- function addCopyButton(pre) {
- var code = pre.querySelector('code');
- if (!code) {
- return; // if no <code> element is found, do not add a copy button
- }
- var firstChild = code.firstChild;
- if (!firstChild) {
- return; // if the <code> element has no child nodes, do not add a button
- }
- var button = document.createElement('button');
- button.textContent = '\uD83D\uDCCE'; // use the 📎 symbol as the text of the "copy" button
- button.style.position = 'relative';
- button.style.float = 'right';
- button.style.fontSize = '1em'; // optional: adjust the button size
- button.style.background = 'none'; // optional: remove the background color
- button.style.border = 'none'; // optional: remove the border
- button.style.cursor = 'pointer'; // optional: show a pointer cursor
- button.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
- range.setStartBefore(firstChild); // start the range just before the first child node
- var selection = window.getSelection();
- selection.removeAllRanges();
- selection.addRange(range);
-
- try {
- var success = document.execCommand('copy');
- if (success) {
- button.textContent = '\u2714';
- setTimeout(function () {
- button.textContent = '\uD83D\uDCCE'; // restore the button to the "copy" icon
- }, 2000);
- } else {
- button.textContent = '\u2716';
- }
- } catch (e) {
- console.error(e);
- button.textContent = '\u2716';
- }
-
- selection.removeAllRanges();
- });
- code.insertBefore(button, firstChild); // insert the button before the first child element
- }
-
- function handleNewElements(mutationsList, observer) {
- for (var mutation of mutationsList) {
- if (mutation.type === 'childList') {
- for (var node of mutation.addedNodes) {
- if (node.nodeName === 'PRE') {
- addCopyButton(node);
- }
- }
- }
- }
- }
-
- var observer = new MutationObserver(handleNewElements);
- observer.observe(document.documentElement, { childList: true, subtree: true });
-
- document.querySelectorAll('pre').forEach(addCopyButton);
-})();
diff --git a/spaces/WhyLIM/ChatGPT-academic/functional_crazy.py b/spaces/WhyLIM/ChatGPT-academic/functional_crazy.py
deleted file mode 100644
index d5375733317a8344d7340e7c4098c60bffb538d6..0000000000000000000000000000000000000000
--- a/spaces/WhyLIM/ChatGPT-academic/functional_crazy.py
+++ /dev/null
@@ -1,68 +0,0 @@
-
-def get_crazy_functionals():
- from crazy_functions.读文章写摘要 import 读文章写摘要
- from crazy_functions.生成函数注释 import 批量生成函数注释
- from crazy_functions.解析项目源代码 import 解析项目本身
- from crazy_functions.解析项目源代码 import 解析一个Python项目
- from crazy_functions.解析项目源代码 import 解析一个C项目的头文件
- from crazy_functions.解析项目源代码 import 解析一个C项目
- from crazy_functions.高级功能函数模板 import 高阶功能模板函数
-
- return {
- "[实验] 请解析并解构此项目本身": {
- "Function": 解析项目本身
- },
- "[实验] 解析整个py项目(配合input输入框)": {
- "Color": "stop", # 按钮颜色
- "Function": 解析一个Python项目
- },
- "[实验] 解析整个C++项目头文件(配合input输入框)": {
- "Color": "stop", # 按钮颜色
- "Function": 解析一个C项目的头文件
- },
- "[实验] 解析整个C++项目(配合input输入框)": {
- "Color": "stop", # 按钮颜色
- "Function": 解析一个C项目
- },
- "[实验] 读tex论文写摘要(配合input输入框)": {
- "Color": "stop", # 按钮颜色
- "Function": 读文章写摘要
- },
- "[实验] 批量生成函数注释(配合input输入框)": {
- "Color": "stop", # 按钮颜色
- "Function": 批量生成函数注释
- },
- "[实验] 实验功能函数模板": {
- "Color": "stop", # 按钮颜色
- "Function": 高阶功能模板函数
- },
- }
-
-def on_file_uploaded(files, chatbot, txt):
- if len(files) == 0: return chatbot, txt
- import shutil, os, time, glob
- from toolbox import extract_archive
- try: shutil.rmtree('./private_upload/')
- except: pass
- time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
- os.makedirs(f'private_upload/{time_tag}', exist_ok=True)
- for file in files:
- file_origin_name = os.path.basename(file.orig_name)
- shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
- extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
- dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
- moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
- txt = f'private_upload/{time_tag}'
- moved_files_str = '\t\n\n'.join(moved_files)
- chatbot.append(['我上传了文件,请查收',
- f'[Local Message] 收到以下文件: \n\n{moved_files_str}\n\n调用路径参数已自动修正到: \n\n{txt}\n\n现在您可以直接选择任意实现性功能'])
- return chatbot, txt
-
-def on_report_generated(files, chatbot):
- from toolbox import find_recent_files
- report_files = find_recent_files('gpt_log')
- if len(report_files) == 0: return report_files, chatbot
- # files.extend(report_files)
- chatbot.append(['汇总报告如何远程获取?', '汇总报告已经添加到右侧文件上传区,请查收。'])
- return report_files, chatbot
-
diff --git a/spaces/Wootang01/paraphraser_three/app.py b/spaces/Wootang01/paraphraser_three/app.py
deleted file mode 100644
index e55d952e7a15ef08a8d1226104aa09ff865d55c3..0000000000000000000000000000000000000000
--- a/spaces/Wootang01/paraphraser_three/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import streamlit as st
-import torch
-import sacremoses
-from transformers import pipeline
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-from transformers import FSMTForConditionalGeneration, FSMTTokenizer
-
-st.title("Paraphraser Three -- Back Translation")
-st.write("Paraphrase means to express meaning using different words. Back Translation refers to the method by which the computer paraphrases.")
-st.write("Write or paste an English language sentence below, and enter. The machine will translate your sentence into another language using one language model. The machine will then translate that sentence into English using another language model.")
-
-user_input = st.text_area("Input sentence.")
-
-def load_en2de():
- en2de = pipeline("translation_en_to_de", model="t5-base")
- return en2de
-
-def load_de2en():
- model_name = "facebook/wmt19-de-en"
- tokenizer = FSMTTokenizer.from_pretrained(model_name)
- model_de_to_en = FSMTForConditionalGeneration.from_pretrained(model_name)
- return tokenizer, model_de_to_en
-
-en2de = load_en2de()
-tokenizer_de2en, de2en = load_de2en()
-
-en_to_de_output = en2de(user_input)
-translated_text = en_to_de_output[0]['translation_text']
-
-input_ids = tokenizer_de2en.encode(translated_text, return_tensors="pt")
-output_ids = de2en.generate(input_ids)[0]
-augmented_text = tokenizer_de2en.decode(output_ids, skip_special_tokens=True)
-
-st.write("Paraphrased sentence: ", augmented_text)
-
-
diff --git a/spaces/XzJosh/Bella-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/Bella-Bert-VITS2/preprocess_text.py
deleted file mode 100644
index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Bella-Bert-VITS2/preprocess_text.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
-stage = [1,2,3]
-
-transcription_path = 'filelists/genshin.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
- except Exception as error:
- print("err!", utt, error)
-
-if 2 in stage:
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
-
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list+=utts[:val_per_spk]
- train_list+=utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list+=val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path, encoding='utf-8'))
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
diff --git a/spaces/XzJosh/Diana-Bert-VITS2/text/cleaner.py b/spaces/XzJosh/Diana-Bert-VITS2/text/cleaner.py
deleted file mode 100644
index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Diana-Bert-VITS2/text/cleaner.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from text import chinese, cleaned_text_to_sequence
-
-
-language_module_map = {
- 'ZH': chinese
-}
-
-
-def clean_text(text, language):
- language_module = language_module_map[language]
- norm_text = language_module.text_normalize(text)
- phones, tones, word2ph = language_module.g2p(norm_text)
- return norm_text, phones, tones, word2ph
-
-def clean_text_bert(text, language):
- language_module = language_module_map[language]
- norm_text = language_module.text_normalize(text)
- phones, tones, word2ph = language_module.g2p(norm_text)
- bert = language_module.get_bert_feature(norm_text, word2ph)
- return phones, tones, bert
-
-def text_to_sequence(text, language):
- norm_text, phones, tones, word2ph = clean_text(text, language)
- return cleaned_text_to_sequence(phones, tones, language)
-
-if __name__ == '__main__':
- pass
diff --git a/spaces/XzJosh/nanami-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/nanami-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
deleted file mode 100644
index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/nanami-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-language:
-- zh
-tags:
-- bert
-license: "apache-2.0"
----
-
-# Please use 'Bert' related functions to load this model!
-
-## Chinese BERT with Whole Word Masking
- To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
-
-**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
-Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
-
- This repository is developed based on: https://github.com/google-research/bert
-
- You may also be interested in:
-- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
-- Chinese MacBERT: https://github.com/ymcui/MacBERT
-- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
-- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
-- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
-
-More resources by HFL: https://github.com/ymcui/HFL-Anthology
-
-## Citation
- If you find the technical report or resources useful, please cite the following technical report in your paper.
-- Primary: https://arxiv.org/abs/2004.13922
-```
-@inproceedings{cui-etal-2020-revisiting,
- title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
- author = "Cui, Yiming and
- Che, Wanxiang and
- Liu, Ting and
- Qin, Bing and
- Wang, Shijin and
- Hu, Guoping",
- booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
- month = nov,
- year = "2020",
- address = "Online",
- publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
- pages = "657--668",
-}
-```
-- Secondary: https://arxiv.org/abs/1906.08101
-```
-@article{chinese-bert-wwm,
- title={Pre-Training with Whole Word Masking for Chinese BERT},
- author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
- journal={arXiv preprint arXiv:1906.08101},
- year={2019}
- }
-```
\ No newline at end of file
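The card above asks users to load this checkpoint with the BERT classes rather than the RoBERTa ones. A minimal loading sketch, assuming the `transformers` library and access to the `hfl/chinese-roberta-wwm-ext-large` checkpoint on the Hub:

```python
# Minimal loading sketch following the card's advice to use BERT classes
# (assumes the transformers library and access to the Hub checkpoint).
import torch
from transformers import BertTokenizer, BertModel

model_id = "hfl/chinese-roberta-wwm-ext-large"
tokenizer = BertTokenizer.from_pretrained(model_id)
model = BertModel.from_pretrained(model_id)

# Encode a short Chinese sentence ("Hello, world") and pull contextual embeddings.
inputs = tokenizer("你好,世界", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)
```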
diff --git a/spaces/XzJosh/nine1-Bert-VITS2/train_ms.py b/spaces/XzJosh/nine1-Bert-VITS2/train_ms.py
deleted file mode 100644
index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/nine1-Bert-VITS2/train_ms.py
+++ /dev/null
@@ -1,402 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-import shutil
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-import commons
-import utils
-from data_utils import (
- TextAudioSpeakerLoader,
- TextAudioSpeakerCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
- DurationDiscriminator,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-torch.backends.cudnn.benchmark = True
-torch.backends.cuda.matmul.allow_tf32 = True
-torch.backends.cudnn.allow_tf32 = True
-torch.set_float32_matmul_precision('medium')
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '65280'
-
- hps = utils.get_hparams()
- if not hps.cont:
- shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth')
- shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth')
- shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth')
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32, 300, 400, 500, 600, 700, 800, 900, 1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
- collate_fn = TextAudioSpeakerCollate()
- train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler)
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
- batch_size=1, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
- if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True:
- print("Using noise scaled MAS for VITS2")
- use_noise_scaled_mas = True
- mas_noise_scale_initial = 0.01
- noise_scale_delta = 2e-6
- else:
- print("Using normal MAS for VITS1")
- use_noise_scaled_mas = False
- mas_noise_scale_initial = 0.0
- noise_scale_delta = 0.0
- if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True:
- print("Using duration discriminator for VITS2")
- use_duration_discriminator = True
- net_dur_disc = DurationDiscriminator(
- hps.model.hidden_channels,
- hps.model.hidden_channels,
- 3,
- 0.1,
- gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0,
- ).cuda(rank)
- if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True:
- if hps.data.n_speakers == 0:
- raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model")
- use_spk_conditioned_encoder = True
- else:
- print("Using normal encoder for VITS1")
- use_spk_conditioned_encoder = False
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- mas_noise_scale_initial = mas_noise_scale_initial,
- noise_scale_delta = noise_scale_delta,
- **hps.model).cuda(rank)
-
- freeze_enc = getattr(hps.model, "freeze_enc", False)
- if freeze_enc:
- print("freeze encoder !!!")
- for param in net_g.enc_p.parameters():
- param.requires_grad = False
-
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, net_g.parameters()),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- if net_dur_disc is not None:
- optim_dur_disc = torch.optim.AdamW(
- net_dur_disc.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- else:
- optim_dur_disc = None
- net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
- if net_dur_disc is not None:
- net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True)
-
- pretrain_dir = None
- if pretrain_dir is None:
- try:
- if net_dur_disc is not None:
- _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont)
- _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g,
- optim_g, skip_optimizer=not hps.cont)
- _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
- optim_d, skip_optimizer=not hps.cont)
-
- epoch_str = max(epoch_str, 1)
- global_step = (epoch_str - 1) * len(train_loader)
- except Exception as e:
- print(e)
- epoch_str = 1
- global_step = 0
- else:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g,
- optim_g, True)
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d,
- optim_d, True)
-
-
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- if net_dur_disc is not None:
- scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
- else:
- scheduler_dur_disc = None
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
- if net_dur_disc is not None:
- scheduler_dur_disc.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d, net_dur_disc = nets
- optim_g, optim_d, optim_dur_disc = optims
- scheduler_g, scheduler_d, scheduler_dur_disc = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- if net_dur_disc is not None:
- net_dur_disc.train()
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)):
- if net_g.module.use_noise_scaled_mas:
- current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step
- net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0)
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
- speakers = speakers.cuda(rank, non_blocking=True)
- tone = tone.cuda(rank, non_blocking=True)
- language = language.cuda(rank, non_blocking=True)
- bert = bert.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, l_length, attn, ids_slice, x_mask, z_mask, \
- (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach())
- with autocast(enabled=False):
- # TODO: the mean should probably be taken over the mask, but for now just mean over everything
- loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g)
- loss_dur_disc_all = loss_dur_disc
- optim_dur_disc.zero_grad()
- scaler.scale(loss_dur_disc_all).backward()
- scaler.unscale_(optim_dur_disc)
- grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None)
- scaler.step(optim_dur_disc)
-
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- if net_dur_disc is not None:
- loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g)
- loss_gen_all += loss_dur_gen
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr,
- "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
- scalar_dict.update(
- {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
-
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- if net_dur_disc is not None:
- utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)))
- keep_ckpts = getattr(hps.train, 'keep_ckpts', 5)
- if keep_ckpts > 0:
- utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True)
-
-
- global_step += 1
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- image_dict = {}
- audio_dict = {}
- print("Evaluating ...")
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader):
- x, x_lengths = x.cuda(), x_lengths.cuda()
- spec, spec_lengths = spec.cuda(), spec_lengths.cuda()
- y, y_lengths = y.cuda(), y_lengths.cuda()
- speakers = speakers.cuda()
- bert = bert.cuda()
- tone = tone.cuda()
- language = language.cuda()
- for use_sdp in [True, False]:
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0)
- y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict.update({
- f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- })
- audio_dict.update({
- f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]]
- })
- image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/experimental/rl/__init__.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/experimental/rl/__init__.py
deleted file mode 100644
index 7b338d3173e12d478b6b6d6fd0e50650a0ab5a4c..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/experimental/rl/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .value_guided_sampling import ValueGuidedRLPipeline
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py
deleted file mode 100644
index d77e71653078dfb206f267f889334d1ed7b7da8b..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py
+++ /dev/null
@@ -1,461 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Callable, List, Optional, Union
-
-import torch
-
-import PIL
-from diffusers.utils import is_accelerate_available
-from packaging import version
-from transformers import CLIPFeatureExtractor, CLIPVisionModelWithProjection
-
-from ...configuration_utils import FrozenDict
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...pipeline_utils import DiffusionPipeline
-from ...schedulers import (
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
-)
-from ...utils import deprecate, logging
-from . import StableDiffusionPipelineOutput
-from .safety_checker import StableDiffusionSafetyChecker
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-class StableDiffusionImageVariationPipeline(DiffusionPipeline):
- r"""
- Pipeline to generate variations from an input image using Stable Diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- image_encoder ([`CLIPVisionModelWithProjection`]):
- Frozen CLIP image-encoder. Stable Diffusion Image Variation uses the vision portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection),
- specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- image_encoder: CLIPVisionModelWithProjection,
- unet: UNet2DConditionModel,
- scheduler: Union[
- DDIMScheduler,
- PNDMScheduler,
- LMSDiscreteScheduler,
- EulerDiscreteScheduler,
- EulerAncestralDiscreteScheduler,
- DPMSolverMultistepScheduler,
- ],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
-
- if safety_checker is None and requires_safety_checker:
- logger.warn(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
- version.parse(unet.config._diffusers_version).base_version
- ) < version.parse("0.9.0.dev0")
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
- deprecation_message = (
- "The configuration file of the unet has set the default `sample_size` to smaller than"
- " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the"
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
- " the `unet/config.json` file"
- )
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(unet.config)
- new_config["sample_size"] = 64
- unet._internal_dict = FrozenDict(new_config)
-
- self.register_modules(
- vae=vae,
- image_encoder=image_encoder,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- r"""
- Enable sliced attention computation.
-
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
-
- Args:
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
- `attention_head_dim` must be a multiple of `slice_size`.
- """
- if slice_size == "auto":
- if isinstance(self.unet.config.attention_head_dim, int):
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- else:
- # if `attention_head_dim` is a list, take the smallest head size
- slice_size = min(self.unet.config.attention_head_dim)
-
- self.unet.set_attention_slice(slice_size)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing
- def disable_attention_slicing(self):
- r"""
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
- back to computing attention in one step.
- """
- # set slice_size = `None` to disable `attention slicing`
- self.enable_attention_slicing(None)
-
- def enable_sequential_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
- text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
- `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
- """
- if is_accelerate_available():
- from accelerate import cpu_offload
- else:
- raise ImportError("Please install accelerate via `pip install accelerate`")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- for cpu_offloaded_model in [self.unet, self.image_encoder, self.vae, self.safety_checker]:
- if cpu_offloaded_model is not None:
- cpu_offload(cpu_offloaded_model, device)
-
- @property
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
- return self.device
- for module in self.unet.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- def _encode_image(self, image, device, num_images_per_prompt, do_classifier_free_guidance):
- dtype = next(self.image_encoder.parameters()).dtype
-
- if not isinstance(image, torch.Tensor):
- image = self.feature_extractor(images=image, return_tensors="pt").pixel_values
-
- image = image.to(device=device, dtype=dtype)
- image_embeddings = self.image_encoder(image).image_embeds
- image_embeddings = image_embeddings.unsqueeze(1)
-
- # duplicate image embeddings for each generation per prompt, using mps friendly method
- bs_embed, seq_len, _ = image_embeddings.shape
- image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1)
- image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- if do_classifier_free_guidance:
- uncond_embeddings = torch.zeros_like(image_embeddings)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- image_embeddings = torch.cat([uncond_embeddings, image_embeddings])
-
- return image_embeddings
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
- def run_safety_checker(self, image, device, dtype):
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
- )
- else:
- has_nsfw_concept = None
- return image, has_nsfw_concept
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
- image = (image / 2 + 0.5).clamp(0, 1)
-        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- def check_inputs(self, image, height, width, callback_steps):
- if (
- not isinstance(image, torch.Tensor)
- and not isinstance(image, PIL.Image.Image)
- and not isinstance(image, list)
- ):
- raise ValueError(
- f"`image` has to be of type `torch.FloatTensor` or `PIL.Image.Image` or `list` but is {type(image)}"
- )
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
- if latents is None:
- if device.type == "mps":
- # randn does not work reproducibly on mps
- latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
- else:
- latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- @torch.no_grad()
- def __call__(
- self,
- image: Union[PIL.Image.Image, List[PIL.Image.Image], torch.FloatTensor],
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[torch.Generator] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: Optional[int] = 1,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- image (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`):
- The image or images to guide the image generation. If you provide a tensor, it needs to comply with the
- configuration of
- [this](https://huggingface.co/lambdalabs/sd-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json)
- `CLIPFeatureExtractor`
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages generating images that are closely linked to the input `image`,
- usually at the expense of lower image quality.
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(image, height, width, callback_steps)
-
- # 2. Define call parameters
- if isinstance(image, PIL.Image.Image):
- batch_size = 1
- elif isinstance(image, list):
- batch_size = len(image)
- else:
- batch_size = image.shape[0]
- device = self._execution_device
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input image
- image_embeddings = self._encode_image(image, device, num_images_per_prompt, do_classifier_free_guidance)
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- image_embeddings.dtype,
- device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 7. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 8. Post-processing
- image = self.decode_latents(latents)
-
- # 9. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, device, image_embeddings.dtype)
-
- # 10. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
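
A minimal usage sketch for an image-variation pipeline like the one removed above. The checkpoint id, the StableDiffusionImageVariationPipeline class from the diffusers package, and the file names are illustrative assumptions, not part of this Space.

import torch
from PIL import Image
from diffusers import StableDiffusionImageVariationPipeline

# load an image-variation checkpoint (assumed id) and move it to the GPU
pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()  # lower peak memory at a small speed cost

init_image = Image.open("input.png").convert("RGB")  # hypothetical input file
result = pipe(init_image, guidance_scale=7.5, num_inference_steps=50)
result.images[0].save("variation.png")
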
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_sde_ve_flax.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_sde_ve_flax.py
deleted file mode 100644
index d1f762bc90c471d6bbc7f33e5854d014b1e25667..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_sde_ve_flax.py
+++ /dev/null
@@ -1,276 +0,0 @@
-# Copyright 2022 Google Brain and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch
-
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import flax
-import jax.numpy as jnp
-from jax import random
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from .scheduling_utils_flax import FlaxSchedulerMixin, FlaxSchedulerOutput, broadcast_to_shape_from_left
-
-
-@flax.struct.dataclass
-class ScoreSdeVeSchedulerState:
- # setable values
- timesteps: Optional[jnp.ndarray] = None
- discrete_sigmas: Optional[jnp.ndarray] = None
- sigmas: Optional[jnp.ndarray] = None
-
- @classmethod
- def create(cls):
- return cls()
-
-
-@dataclass
-class FlaxSdeVeOutput(FlaxSchedulerOutput):
- """
- Output class for the ScoreSdeVeScheduler's step function output.
-
- Args:
- state (`ScoreSdeVeSchedulerState`):
- prev_sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- prev_sample_mean (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images):
- Mean averaged `prev_sample`. Same as `prev_sample`, only mean-averaged over previous timesteps.
- """
-
- state: ScoreSdeVeSchedulerState
- prev_sample: jnp.ndarray
- prev_sample_mean: Optional[jnp.ndarray] = None
-
-
-class FlaxScoreSdeVeScheduler(FlaxSchedulerMixin, ConfigMixin):
- """
- The variance exploding stochastic differential equation (SDE) scheduler.
-
- For more information, see the original paper: https://arxiv.org/abs/2011.13456
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- snr (`float`):
- coefficient weighting the step from the model_output sample (from the network) to the random noise.
- sigma_min (`float`):
- initial noise scale for sigma sequence in sampling procedure. The minimum sigma should mirror the
- distribution of the data.
- sigma_max (`float`): maximum value used for the range of continuous timesteps passed into the model.
- sampling_eps (`float`): the end value of sampling, where timesteps decrease progressively from 1 to
- epsilon.
- correct_steps (`int`): number of correction steps performed on a produced sample.
- """
-
- @property
- def has_state(self):
- return True
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 2000,
- snr: float = 0.15,
- sigma_min: float = 0.01,
- sigma_max: float = 1348.0,
- sampling_eps: float = 1e-5,
- correct_steps: int = 1,
- ):
- pass
-
- def create_state(self):
- state = ScoreSdeVeSchedulerState.create()
- return self.set_sigmas(
- state,
- self.config.num_train_timesteps,
- self.config.sigma_min,
- self.config.sigma_max,
- self.config.sampling_eps,
- )
-
- def set_timesteps(
- self, state: ScoreSdeVeSchedulerState, num_inference_steps: int, shape: Tuple = (), sampling_eps: float = None
- ) -> ScoreSdeVeSchedulerState:
- """
- Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance.
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation).
-
- """
- sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps
-
- timesteps = jnp.linspace(1, sampling_eps, num_inference_steps)
- return state.replace(timesteps=timesteps)
-
- def set_sigmas(
- self,
- state: ScoreSdeVeSchedulerState,
- num_inference_steps: int,
- sigma_min: float = None,
- sigma_max: float = None,
- sampling_eps: float = None,
- ) -> ScoreSdeVeSchedulerState:
- """
- Sets the noise scales used for the diffusion chain. Supporting function to be run before inference.
-
- The sigmas control the weight of the `drift` and `diffusion` components of sample update.
-
- Args:
- state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance.
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- sigma_min (`float`, optional):
- initial noise scale value (overrides value given at Scheduler instantiation).
- sigma_max (`float`, optional): final noise scale value (overrides value given at Scheduler instantiation).
- sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation).
- """
- sigma_min = sigma_min if sigma_min is not None else self.config.sigma_min
- sigma_max = sigma_max if sigma_max is not None else self.config.sigma_max
- sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps
- if state.timesteps is None:
- state = self.set_timesteps(state, num_inference_steps, sampling_eps)
-
- discrete_sigmas = jnp.exp(jnp.linspace(jnp.log(sigma_min), jnp.log(sigma_max), num_inference_steps))
- sigmas = jnp.array([sigma_min * (sigma_max / sigma_min) ** t for t in state.timesteps])
-
- return state.replace(discrete_sigmas=discrete_sigmas, sigmas=sigmas)
-
- def get_adjacent_sigma(self, state, timesteps, t):
- return jnp.where(timesteps == 0, jnp.zeros_like(t), state.discrete_sigmas[timesteps - 1])
-
- def step_pred(
- self,
- state: ScoreSdeVeSchedulerState,
- model_output: jnp.ndarray,
- timestep: int,
- sample: jnp.ndarray,
- key: random.KeyArray,
- return_dict: bool = True,
- ) -> Union[FlaxSdeVeOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance.
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
- generator: random number generator.
- return_dict (`bool`): option for returning tuple rather than FlaxSdeVeOutput class
-
- Returns:
- [`FlaxSdeVeOutput`] or `tuple`: [`FlaxSdeVeOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
-
- """
- if state.timesteps is None:
- raise ValueError(
- "`state.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler"
- )
-
- timestep = timestep * jnp.ones(
- sample.shape[0],
- )
-        timesteps = (timestep * (len(state.timesteps) - 1)).astype(jnp.int32)
-
- sigma = state.discrete_sigmas[timesteps]
- adjacent_sigma = self.get_adjacent_sigma(state, timesteps, timestep)
- drift = jnp.zeros_like(sample)
- diffusion = (sigma**2 - adjacent_sigma**2) ** 0.5
-
- # equation 6 in the paper: the model_output modeled by the network is grad_x log pt(x)
- # also equation 47 shows the analog from SDE models to ancestral sampling methods
- diffusion = diffusion.flatten()
- diffusion = broadcast_to_shape_from_left(diffusion, sample.shape)
- drift = drift - diffusion**2 * model_output
-
-        # equation 6: sample noise for the diffusion term of the SDE
- key = random.split(key, num=1)
- noise = random.normal(key=key, shape=sample.shape)
- prev_sample_mean = sample - drift # subtract because `dt` is a small negative timestep
- # TODO is the variable diffusion the correct scaling term for the noise?
- prev_sample = prev_sample_mean + diffusion * noise # add impact of diffusion field g
-
- if not return_dict:
- return (prev_sample, prev_sample_mean, state)
-
- return FlaxSdeVeOutput(prev_sample=prev_sample, prev_sample_mean=prev_sample_mean, state=state)
-
- def step_correct(
- self,
- state: ScoreSdeVeSchedulerState,
- model_output: jnp.ndarray,
- sample: jnp.ndarray,
- key: random.KeyArray,
- return_dict: bool = True,
- ) -> Union[FlaxSdeVeOutput, Tuple]:
- """
- Correct the predicted sample based on the output model_output of the network. This is often run repeatedly
- after making the prediction for the previous timestep.
-
- Args:
- state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance.
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
- generator: random number generator.
- return_dict (`bool`): option for returning tuple rather than FlaxSdeVeOutput class
-
- Returns:
- [`FlaxSdeVeOutput`] or `tuple`: [`FlaxSdeVeOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
-
- """
- if state.timesteps is None:
- raise ValueError(
- "`state.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler"
- )
-
- # For small batch sizes, the paper "suggest replacing norm(z) with sqrt(d), where d is the dim. of z"
- # sample noise for correction
- key = random.split(key, num=1)
- noise = random.normal(key=key, shape=sample.shape)
-
- # compute step size from the model_output, the noise, and the snr
- grad_norm = jnp.linalg.norm(model_output)
- noise_norm = jnp.linalg.norm(noise)
- step_size = (self.config.snr * noise_norm / grad_norm) ** 2 * 2
- step_size = step_size * jnp.ones(sample.shape[0])
-
- # compute corrected sample: model_output term and noise term
- step_size = step_size.flatten()
- step_size = broadcast_to_shape_from_left(step_size, sample.shape)
- prev_sample_mean = sample + step_size * model_output
- prev_sample = prev_sample_mean + ((step_size * 2) ** 0.5) * noise
-
- if not return_dict:
- return (prev_sample, state)
-
- return FlaxSdeVeOutput(prev_sample=prev_sample, state=state)
-
- def __len__(self):
- return self.config.num_train_timesteps
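
A minimal predictor-corrector sampling sketch for the Flax SDE-VE scheduler above: each timestep runs `correct_steps` corrector updates (step_correct) followed by one predictor update (step_pred). The score_model function is a placeholder for a trained score network, and the import path assumes the scheduler is consumed through the diffusers package.

import jax
import jax.numpy as jnp
from diffusers.schedulers.scheduling_sde_ve_flax import FlaxScoreSdeVeScheduler

num_inference_steps = 100
scheduler = FlaxScoreSdeVeScheduler(num_train_timesteps=2000)
state = scheduler.create_state()
state = scheduler.set_timesteps(state, num_inference_steps)
state = scheduler.set_sigmas(state, num_inference_steps)

key = jax.random.PRNGKey(0)
key, noise_key = jax.random.split(key)
sample = jax.random.normal(noise_key, (1, 3, 64, 64)) * scheduler.config.sigma_max

def score_model(x, t):
    # placeholder: a trained network would predict grad_x log p_t(x) here
    return jnp.zeros_like(x)

for t in state.timesteps:
    model_output = score_model(sample, t)
    for _ in range(scheduler.config.correct_steps):  # corrector updates
        key, step_key = jax.random.split(key)
        sample = scheduler.step_correct(state, model_output, sample, step_key).prev_sample
    key, step_key = jax.random.split(key)  # predictor update
    out = scheduler.step_pred(state, model_output, t, sample, step_key)
    sample, state = out.prev_sample, out.state
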
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md
deleted file mode 100644
index 9bab709cae689ba3b92dd52f7fbcc0c6926f4a38..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/CONTRIBUTING.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Contributing to detectron2
-
-## Issues
-We use GitHub issues to track public bugs and questions.
-Please make sure to follow one of the
-[issue templates](https://github.com/facebookresearch/detectron2/issues/new/choose)
-when reporting any issues.
-
-Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## Pull Requests
-We actively welcome pull requests.
-
-However, if you're adding any significant features (e.g. > 50 lines), please
-make sure to discuss with maintainers about your motivation and proposals in an issue
-before sending a PR. This is to save your time so you don't spend time on a PR that we'll not accept.
-
-We do not always accept new features, and we take the following
-factors into consideration:
-
-1. Whether the same feature can be achieved without modifying detectron2.
- Detectron2 is designed so that you can implement many extensions from the outside, e.g.
- those in [projects](https://github.com/facebookresearch/detectron2/tree/master/projects).
- * If some part of detectron2 is not extensible enough, you can also bring up a more general issue to
-      improve it. Such a feature request may be useful to more users.
-2. Whether the feature is potentially useful to a large audience (e.g. an impactful detection paper, a popular dataset,
- a significant speedup, a widely useful utility),
- or only to a small portion of users (e.g., a less-known paper, an improvement not in the object
- detection field, a trick that's not very popular in the community, code to handle a non-standard type of data)
-    * Additional models, datasets, and new tasks are by default not added to detectron2 before they
- receive significant popularity in the community.
- We sometimes accept such features in `projects/`, or as a link in `projects/README.md`.
-3. Whether the proposed solution has a good design / interface. This can be discussed in the issue prior to PRs, or
- in the form of a draft PR.
-4. Whether the proposed solution adds extra mental/practical overhead to users who don't
- need such feature.
-5. Whether the proposed solution breaks existing APIs.
-
-To add a feature to an existing function/class `Func`, there are always two approaches:
-(1) add new arguments to `Func`; (2) write a new `Func_with_new_feature`.
-To meet the above criteria, we often prefer approach (2), because:
-
-1. It does not involve modifying or potentially breaking existing code.
-2. It does not add overhead to users who do not need the new feature.
-3. Adding new arguments to a function/class is not scalable w.r.t. all the possible new research ideas in the future.
-
-When sending a PR, please do:
-
-1. If a PR contains multiple orthogonal changes, split it to several PRs.
-2. If you've added code that should be tested, add tests.
-3. For PRs that need experiments (e.g. adding a new model or new methods),
- you don't need to update model zoo, but do provide experiment results in the description of the PR.
-4. If APIs are changed, update the documentation.
-5. We use the [Google style docstrings](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) in python.
-6. Make sure your code lints with `./dev/linter.sh`.
-
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Facebook's open source projects.
-
-Complete your CLA here:
-
-## License
-By contributing to detectron2, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/Yntec/PrintingPress/README.md b/spaces/Yntec/PrintingPress/README.md
deleted file mode 100644
index 2966bd77e82fd4b0ab2aeb34f754e5649c10314b..0000000000000000000000000000000000000000
--- a/spaces/Yntec/PrintingPress/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Printing Press 540 Models
-emoji: 👩🎨👨🎨
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: Omnibus/maximum_multiplier_places
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/a-v-bely/russian-task-generator/utilities_language_general/rus_constants.py b/spaces/a-v-bely/russian-task-generator/utilities_language_general/rus_constants.py
deleted file mode 100644
index c1bf05f79422cd0855767e12d5ed405e4e2b8345..0000000000000000000000000000000000000000
--- a/spaces/a-v-bely/russian-task-generator/utilities_language_general/rus_constants.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import json
-import spacy
-import gensim
-import pymorphy2
-import streamlit as st
-from transformers import pipeline
-
-
-@st.cache_resource
-def load_morph():
- _morph = pymorphy2.MorphAnalyzer(lang='ru')
- return _morph
-
-
-@st.cache_resource
-def load_w2v(model_path):
- _w2v_model = gensim.models.KeyedVectors.load_word2vec_format(model_path, binary=True)
- return _w2v_model
-
-
-@st.cache_resource
-def load_spacy():
- _nlp = spacy.load('ru_core_news_lg')
- return _nlp
-
-
-@st.cache_resource
-def load_bert():
- return pipeline("fill-mask", model="a-v-white/ruBert-base-finetuned-russian-moshkov-child-corpus-pro")
-
-
-nlp = load_spacy()
-morph = load_morph()
-w2v_model1_path = r'model1.gz'
-w2v_model2_path = r'model2.gz'
-
-# Upload stop list
-stop_list = set()
-with open(r'language_data/stop_words.txt', 'r', encoding='utf-8') as read_file:
- for line in read_file:
- stop_list.add(line.strip())
-
-# Upload minimums
-a1_path, a1_target_set = r'language_data/A1_MINIMUM.txt', set()
-a2_path, a2_target_set = r'language_data/A2_MINIMUM.txt', set()
-b1_path, b1_target_set = r'language_data/B1_MINIMUM.txt', set()
-b2_path, b2_target_set = r'language_data/B2_MINIMUM.txt', set()
-c1_path, c1_target_set = r'language_data/C1_MINIMUM.txt', set()
-c2_path, c2_target_set = r'language_data/C2_MINIMUM.txt', set()
-minimums_paths = (a1_path, a2_path, b1_path, b2_path)
-minimums_sets = (a1_target_set, a2_target_set, b1_target_set, b2_target_set, c1_target_set, c2_target_set)
-for i in range(len(minimums_paths)):
- with open(minimums_paths[i], 'r', encoding='utf-8') as read_file:
- for line in read_file:
- minimums_sets[i].add(line.strip())
-
-a1_distractor_set = a1_target_set
-a2_distractor_set = a2_target_set.union(a1_target_set)
-b1_distractor_set = b1_target_set.union(a2_target_set)
-b2_distractor_set = b2_target_set.union(b1_target_set)
-c1_distractor_set = c1_target_set.union(b2_target_set)
-c2_distractor_set = c2_target_set.union(c1_target_set)
-
-with open('language_data/phrases.json', 'r', encoding='utf-8') as f:
- PHRASES = set(json.load(f)['PHRASES'])
-
-SIMILARITY_VALUES_w2v = {'A1': 1.0, 'A2': 1.0, 'B1': 1.0, 'B2': 1.0, 'C1': 1.0, 'C2': 1.0, 'Без уровня': 1.0}
-SIMILARITY_VALUES_bert = {'A1': 1.0, 'A2': 1.0, 'B1': 1.0, 'B2': 1.0, 'C1': 1.0, 'C2': 1.0, 'Без уровня': 1.0}
-
-BAD_USER_TARGET_WORDS = []
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/base_roi_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/base_roi_head.py
deleted file mode 100644
index 2d61cc08007924c61b4a53d7fbc6e6fedfd68f08..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/base_roi_head.py
+++ /dev/null
@@ -1,103 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-import torch.nn as nn
-
-from ..builder import build_shared_head
-
-
-class BaseRoIHead(nn.Module, metaclass=ABCMeta):
- """Base class for RoIHeads."""
-
- def __init__(self,
- bbox_roi_extractor=None,
- bbox_head=None,
- mask_roi_extractor=None,
- mask_head=None,
- shared_head=None,
- train_cfg=None,
- test_cfg=None):
- super(BaseRoIHead, self).__init__()
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
- if shared_head is not None:
- self.shared_head = build_shared_head(shared_head)
-
- if bbox_head is not None:
- self.init_bbox_head(bbox_roi_extractor, bbox_head)
-
- if mask_head is not None:
- self.init_mask_head(mask_roi_extractor, mask_head)
-
- self.init_assigner_sampler()
-
- @property
- def with_bbox(self):
- """bool: whether the RoI head contains a `bbox_head`"""
- return hasattr(self, 'bbox_head') and self.bbox_head is not None
-
- @property
- def with_mask(self):
- """bool: whether the RoI head contains a `mask_head`"""
- return hasattr(self, 'mask_head') and self.mask_head is not None
-
- @property
- def with_shared_head(self):
- """bool: whether the RoI head contains a `shared_head`"""
- return hasattr(self, 'shared_head') and self.shared_head is not None
-
- @abstractmethod
- def init_weights(self, pretrained):
- """Initialize the weights in head.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- pass
-
- @abstractmethod
- def init_bbox_head(self):
- """Initialize ``bbox_head``"""
- pass
-
- @abstractmethod
- def init_mask_head(self):
- """Initialize ``mask_head``"""
- pass
-
- @abstractmethod
- def init_assigner_sampler(self):
- """Initialize assigner and sampler."""
- pass
-
- @abstractmethod
- def forward_train(self,
- x,
- img_meta,
- proposal_list,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=None,
- gt_masks=None,
- **kwargs):
- """Forward function during training."""
-
- async def async_simple_test(self, x, img_meta, **kwargs):
- """Asynchronized test function."""
- raise NotImplementedError
-
- def simple_test(self,
- x,
- proposal_list,
- img_meta,
- proposals=None,
- rescale=False,
- **kwargs):
- """Test without augmentation."""
-
- def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs):
- """Test with augmentations.
-
- If rescale is False, then returned bboxes and masks will fit the scale
- of imgs[0].
- """
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/iou3d.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/iou3d.py
deleted file mode 100644
index 6fc71979190323f44c09f8b7e1761cf49cd2d76b..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/iou3d.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'iou3d_boxes_iou_bev_forward', 'iou3d_nms_forward',
- 'iou3d_nms_normal_forward'
-])
-
-
-def boxes_iou_bev(boxes_a, boxes_b):
- """Calculate boxes IoU in the Bird's Eye View.
-
- Args:
- boxes_a (torch.Tensor): Input boxes a with shape (M, 5).
- boxes_b (torch.Tensor): Input boxes b with shape (N, 5).
-
- Returns:
- ans_iou (torch.Tensor): IoU result with shape (M, N).
- """
- ans_iou = boxes_a.new_zeros(
- torch.Size((boxes_a.shape[0], boxes_b.shape[0])))
-
- ext_module.iou3d_boxes_iou_bev_forward(boxes_a.contiguous(),
- boxes_b.contiguous(), ans_iou)
-
- return ans_iou
-
-
-def nms_bev(boxes, scores, thresh, pre_max_size=None, post_max_size=None):
- """NMS function GPU implementation (for BEV boxes). The overlap of two
- boxes for IoU calculation is defined as the exact overlapping area of the
- two boxes. In this function, one can also set ``pre_max_size`` and
- ``post_max_size``.
-
- Args:
- boxes (torch.Tensor): Input boxes with the shape of [N, 5]
- ([x1, y1, x2, y2, ry]).
- scores (torch.Tensor): Scores of boxes with the shape of [N].
- thresh (float): Overlap threshold of NMS.
- pre_max_size (int, optional): Max size of boxes before NMS.
- Default: None.
- post_max_size (int, optional): Max size of boxes after NMS.
- Default: None.
-
- Returns:
- torch.Tensor: Indexes after NMS.
- """
- assert boxes.size(1) == 5, 'Input boxes shape should be [N, 5]'
- order = scores.sort(0, descending=True)[1]
-
- if pre_max_size is not None:
- order = order[:pre_max_size]
- boxes = boxes[order].contiguous()
-
- keep = torch.zeros(boxes.size(0), dtype=torch.long)
- num_out = ext_module.iou3d_nms_forward(boxes, keep, thresh)
- keep = order[keep[:num_out].cuda(boxes.device)].contiguous()
- if post_max_size is not None:
- keep = keep[:post_max_size]
- return keep
-
-
-def nms_normal_bev(boxes, scores, thresh):
- """Normal NMS function GPU implementation (for BEV boxes). The overlap of
- two boxes for IoU calculation is defined as the exact overlapping area of
- the two boxes WITH their yaw angle set to 0.
-
- Args:
- boxes (torch.Tensor): Input boxes with shape (N, 5).
- scores (torch.Tensor): Scores of predicted boxes with shape (N).
- thresh (float): Overlap threshold of NMS.
-
- Returns:
- torch.Tensor: Remaining indices with scores in descending order.
- """
- assert boxes.shape[1] == 5, 'Input boxes shape should be [N, 5]'
- order = scores.sort(0, descending=True)[1]
-
- boxes = boxes[order].contiguous()
-
- keep = torch.zeros(boxes.size(0), dtype=torch.long)
- num_out = ext_module.iou3d_nms_normal_forward(boxes, keep, thresh)
- return order[keep[:num_out].cuda(boxes.device)].contiguous()
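
A short usage sketch for the BEV IoU and NMS helpers above. It assumes a CUDA build of mmcv with the compiled `_ext` ops available; the box values are made up.

import torch

# two heavily overlapping boxes and one distant box, in [x1, y1, x2, y2, ry] format
boxes = torch.tensor([
    [0.0, 0.0, 4.0, 2.0, 0.0],
    [0.1, 0.1, 4.1, 2.1, 0.0],
    [10.0, 10.0, 14.0, 12.0, 0.0],
], device="cuda")
scores = torch.tensor([0.9, 0.8, 0.7], device="cuda")

iou = boxes_iou_bev(boxes, boxes)          # (3, 3) pairwise BEV IoU matrix
keep = nms_bev(boxes, scores, thresh=0.5)  # the two overlapping boxes collapse to one index
print(iou.shape, keep)
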
diff --git a/spaces/airsat/dalle-mini/README.md b/spaces/airsat/dalle-mini/README.md
deleted file mode 100644
index ee4fed8ac832c90c53ffdf7ad01795a7edb01e5a..0000000000000000000000000000000000000000
--- a/spaces/airsat/dalle-mini/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: DALL·E mini
-emoji: 🥑
-colorFrom: blue
-colorTo: blue
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/akhaliq/deeplab2/model/builder_test.py b/spaces/akhaliq/deeplab2/model/builder_test.py
deleted file mode 100644
index 6fd603127caf05c0c72bc892c8bb93a7c81393be..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/model/builder_test.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Tests for model.builder."""
-
-import os
-from absl.testing import parameterized
-
-import tensorflow as tf
-
-from google.protobuf import text_format
-from deeplab2 import config_pb2
-from deeplab2.model import builder
-from deeplab2.model.decoder import motion_deeplab_decoder
-from deeplab2.model.encoder import axial_resnet_instances
-from deeplab2.model.encoder import mobilenet
-# resources dependency
-
-
-_CONFIG_PATH = 'deeplab2/configs/example'
-
-
-def _read_proto_file(filename, proto):
- filename = filename # OSS: removed internal filename loading.
- with tf.io.gfile.GFile(filename, 'r') as proto_file:
- return text_format.ParseLines(proto_file, proto)
-
-
-class BuilderTest(tf.test.TestCase, parameterized.TestCase):
-
- def test_resnet50_encoder_creation(self):
- backbone_options = config_pb2.ModelOptions.BackboneOptions(
- name='resnet50', output_stride=32)
- encoder = builder.create_encoder(
- backbone_options,
- tf.keras.layers.experimental.SyncBatchNormalization)
- self.assertIsInstance(encoder, axial_resnet_instances.ResNet50)
-
- @parameterized.parameters('mobilenet_v3_large', 'mobilenet_v3_small')
- def test_mobilenet_encoder_creation(self, model_name):
- backbone_options = config_pb2.ModelOptions.BackboneOptions(
- name=model_name, use_squeeze_and_excite=True, output_stride=32)
- encoder = builder.create_encoder(
- backbone_options,
- tf.keras.layers.experimental.SyncBatchNormalization)
- self.assertIsInstance(encoder, mobilenet.MobileNet)
-
- def test_resnet_encoder_creation(self):
- backbone_options = config_pb2.ModelOptions.BackboneOptions(
- name='max_deeplab_s', output_stride=32)
- encoder = builder.create_resnet_encoder(
- backbone_options,
- bn_layer=tf.keras.layers.experimental.SyncBatchNormalization)
- self.assertIsInstance(encoder, axial_resnet_instances.MaXDeepLabS)
-
- def test_decoder_creation(self):
- proto_filename = os.path.join(
- _CONFIG_PATH, 'example_kitti-step_motion_deeplab.textproto')
- model_options = _read_proto_file(proto_filename, config_pb2.ModelOptions())
- motion_decoder = builder.create_decoder(
- model_options, tf.keras.layers.experimental.SyncBatchNormalization,
- ignore_label=255)
- self.assertIsInstance(motion_decoder,
- motion_deeplab_decoder.MotionDeepLabDecoder)
-
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/alex-mindspace/gpt-agents/swarmai/utils/__init__.py b/spaces/alex-mindspace/gpt-agents/swarmai/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py
deleted file mode 100644
index b149ed79b0a1d5808a7e392876c2f5aae4b5057c..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
-from .initialise import init, deinit, reinit, colorama_text
-from .ansi import Fore, Back, Style, Cursor
-from .ansitowin32 import AnsiToWin32
-
-__version__ = '0.4.4'
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/bbcode.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/bbcode.py
deleted file mode 100644
index 35a37328ec7d835ae510a7a9b0127bb9b790b3c1..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/bbcode.py
+++ /dev/null
@@ -1,108 +0,0 @@
-"""
- pygments.formatters.bbcode
- ~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- BBcode formatter.
-
- :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.util import get_bool_opt
-
-__all__ = ['BBCodeFormatter']
-
-
-class BBCodeFormatter(Formatter):
- """
- Format tokens with BBcodes. These formatting codes are used by many
- bulletin boards, so you can highlight your sourcecode with pygments before
- posting it there.
-
- This formatter has no support for background colors and borders, as there
- are no common BBcode tags for that.
-
- Some board systems (e.g. phpBB) don't support colors in their [code] tag,
- so you can't use the highlighting together with that tag.
- Text in a [code] tag usually is shown with a monospace font (which this
- formatter can do with the ``monofont`` option) and no spaces (which you
- need for indentation) are removed.
-
- Additional options accepted:
-
- `style`
- The style to use, can be a string or a Style subclass (default:
- ``'default'``).
-
- `codetag`
- If set to true, put the output into ``[code]`` tags (default:
- ``false``)
-
- `monofont`
- If set to true, add a tag to show the code with a monospace font
- (default: ``false``).
- """
- name = 'BBCode'
- aliases = ['bbcode', 'bb']
- filenames = []
-
- def __init__(self, **options):
- Formatter.__init__(self, **options)
- self._code = get_bool_opt(options, 'codetag', False)
- self._mono = get_bool_opt(options, 'monofont', False)
-
- self.styles = {}
- self._make_styles()
-
- def _make_styles(self):
- for ttype, ndef in self.style:
- start = end = ''
- if ndef['color']:
- start += '[color=#%s]' % ndef['color']
- end = '[/color]' + end
- if ndef['bold']:
- start += '[b]'
- end = '[/b]' + end
- if ndef['italic']:
- start += '[i]'
- end = '[/i]' + end
- if ndef['underline']:
- start += '[u]'
- end = '[/u]' + end
- # there are no common BBcodes for background-color and border
-
- self.styles[ttype] = start, end
-
- def format_unencoded(self, tokensource, outfile):
- if self._code:
- outfile.write('[code]')
- if self._mono:
- outfile.write('[font=monospace]')
-
- lastval = ''
- lasttype = None
-
- for ttype, value in tokensource:
- while ttype not in self.styles:
- ttype = ttype.parent
- if ttype == lasttype:
- lastval += value
- else:
- if lastval:
- start, end = self.styles[lasttype]
- outfile.write(''.join((start, lastval, end)))
- lastval = value
- lasttype = ttype
-
- if lastval:
- start, end = self.styles[lasttype]
- outfile.write(''.join((start, lastval, end)))
-
- if self._mono:
- outfile.write('[/font]')
- if self._code:
- outfile.write('[/code]')
- if self._code or self._mono:
- outfile.write('\n')
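
A short usage sketch for the BBCode formatter above, using the standalone pygments package rather than pip's vendored copy.

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import BBCodeFormatter

code = "def add(a, b):\n    return a + b\n"
# codetag/monofont wrap the highlighted source in [code]...[/code] and a monospace [font] tag
print(highlight(code, PythonLexer(), BBCodeFormatter(codetag=True, monofont=True)))
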
diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Networks/Transformer.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Networks/Transformer.py
deleted file mode 100644
index e1ce4582b9ca2d9ac5b6ab3720ab9e6e1581c719..0000000000000000000000000000000000000000
--- a/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Networks/Transformer.py
+++ /dev/null
@@ -1,845 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT license.
-
-import copy
-import json
-import math
-import re
-import collections
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd import Variable
-from torch.nn.parameter import Parameter
-
-
-def gelu(x):
- return (
- 0.5
- * x
- * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
- )
-
-
-def swish(x):
- return x * torch.sigmoid(x)
-
-
-class LayerNorm(nn.Module):
- "Construct a layernorm module in the OpenAI style (epsilon inside the square root)."
-
- def __init__(self, n_state, e=1e-5):
- super(LayerNorm, self).__init__()
- self.g = nn.Parameter(torch.ones(n_state))
- self.b = nn.Parameter(torch.zeros(n_state))
- self.e = e
-
- """
- Input:
- x: n_state-dim
- Output:
- o: n_state-dim
- """
-
- def forward(self, x):
- u = x.mean(-1, keepdim=True)
- s = (x - u).pow(2).mean(-1, keepdim=True)
- x = (x - u) / torch.sqrt(s + self.e)
- return self.g * x + self.b
-
-
-"""
- Convolution
- nx is the last input dim
- nf is the last output dim
-"""
-
-
-class Conv1D(nn.Module):
- def __init__(self, nf, nx):
- super(Conv1D, self).__init__()
- self.nf = nf
- w = torch.empty(nx, nf)
- nn.init.normal_(w, std=0.02)
- self.w = Parameter(w)
- self.b = Parameter(torch.zeros(nf))
-
- """
- Input:
- x: batch x len x nx
- Output:
- x: batch x len x nf
- """
-
- def forward(self, x):
- size_out = x.size()[:-1] + (self.nf,)
- x = torch.addmm(self.b, x.view(-1, x.size(-1)), self.w)
- x = x.view(*size_out)
- return x
-
-
-class PositionalEmbedding(nn.Module):
- def __init__(self, opt, demb):
- super(PositionalEmbedding, self).__init__()
- self.demb = demb
- inv_freq = 1 / (10000 ** (torch.arange(0.0, demb, 2.0) / demb))
- self.pos_discount = float(opt["TRANSFORMER_POS_DISCOUNT"])
- self.register_buffer("inv_freq", inv_freq)
-
- """
- Input:
- pos_seq: len
- Output:
- pos_emb: len x demb
- """
-
- def forward(self, pos_seq):
- sinusoid_inp = torch.ger(pos_seq, self.inv_freq)
- pos_emb = (
- torch.cat([sinusoid_inp.sin(), sinusoid_inp.cos()], dim=-1)
- / self.pos_discount
- )
- return pos_emb
-
-
-"""
- Splitter
-"""
-
-
-class Splitter(nn.Module):
- def __init__(self, nx):
- super(Splitter, self).__init__()
- self.nx = nx
- self.augmenter = Conv1D(nx * 3, nx)
-
- """
- Input:
- x: batch x len x nx
- Output:
- query,key,value: batch x len x nx
- """
-
- def forward(self, x):
- x = self.augmenter(x)
- # x: batch x len x (3 x nx)
-
- query, key, value = x.split(self.nx, dim=2)
- # query,key,value: batch x len x nx
-
- return query, key, value
-
-
-"""
- Multi-head Attention
-"""
-
-
-class Attention(nn.Module):
- """
- nx: input dimension
- """
-
- def __init__(self, nx, opt):
- super(Attention, self).__init__()
- n_state = nx # in Attention: n_state=768 (nx=n_embd)
- # [switch nx => n_state from Block to Attention to keep identical to TF implem]
- n_head = int(opt["TRANSFORMER_HEAD"])
- resid_pdrop = opt["TRANSFORMER_RESIDUAL_DROPOUT"]
- attn_pdrop = opt["TRANSFORMER_ATTENTION_DROPOUT"]
- use_cuda = opt["cuda"]
-
- assert n_state % n_head == 0
- # if mask is needed, uncomment this
-        self.maxlen = 2048  # beyond this length the causal mask is rebuilt on the fly in _attn
- self.mask = (
- Variable(
- torch.tril(torch.ones(self.maxlen, self.maxlen)).view(
- 1, 1, self.maxlen, self.maxlen
- ),
- requires_grad=False,
- ).cuda()
- if use_cuda
- else Variable(
- torch.tril(torch.ones(self.maxlen, self.maxlen)).view(
- 1, 1, self.maxlen, self.maxlen
- ),
- requires_grad=False,
- )
- )
- self.n_head = n_head
- self.c_proj = Conv1D(n_state, nx)
- self.attn_dropout = nn.Dropout(attn_pdrop)
- self.resid_dropout = nn.Dropout(resid_pdrop)
- self.use_cuda = use_cuda
-
- """
- Input:
- q: batch x n_head x len x dim
- k: batch x n_head x dim x kv_len
- v: batch x n_head x kv_len x dim
- x_mask: batch x kv_len # key and value's mask (if not None, used for encoder's self-attention and decoder's src-tgt attention)
- one_dir_visible: only sees previous history (used for decoder's self-attention)
- return_attn_weight: if true, also return the attention weights
- Output:
-        a: batch x n_head x len x (n_state/n_head)
-        attn_weight (if return_attn_weight): batch x n_head x len x kv_len
- """
-
- def _attn(self, q, k, v, x_mask, one_dir_visible, return_attn_weight):
- w = torch.matmul(q, k)
- # batch x n_head x len x kv_len
- w = w / math.sqrt(v.size(-1))
-
- mask = None
- if one_dir_visible: # mask "seeing the future"
- if w.size(-2) <= self.maxlen and w.size(-1) <= self.maxlen:
- mask = (
- self.mask[:, :, : w.size(-2), : w.size(-1)].cuda()
- if self.use_cuda
- else self.mask[:, :, : w.size(-2), : w.size(-1)]
- )
- else:
- mask = (
- Variable(
- torch.tril(torch.ones(w.size(-2), w.size(-1))).view(
- 1, 1, w.size(-2), w.size(-1)
- ),
- requires_grad=False,
- ).cuda()
- if self.use_cuda
- else Variable(
- torch.tril(torch.ones(w.size(-2), w.size(-1))).view(
- 1, 1, w.size(-2), w.size(-1)
- ),
- requires_grad=False,
- )
- )
-
- if x_mask is not None:
- mask = x_mask.unsqueeze(1).unsqueeze(1).expand_as(w).float()
- # batch x n_head x len x kv_len
-
- if mask is not None:
- w = w * mask + -1e9 * (1 - mask)
-
- w_prob = nn.Softmax(dim=-1)(w)
- w_prob = self.attn_dropout(w_prob)
- if return_attn_weight:
- return torch.matmul(w_prob, v), w
- else:
- return torch.matmul(w_prob, v)
-
- def merge_heads(self, x):
- x = x.permute(0, 2, 1, 3).contiguous()
- new_x_shape = x.size()[:-2] + (x.size(-2) * x.size(-1),)
- return x.view(*new_x_shape) # in Tensorflow implem: fct merge_states
-
- """
- Input:
- x: batch x len x dim
- Output:
-        not k: batch x n_head x len x (dim/n_head)
-        k: batch x n_head x (dim/n_head) x len
- """
-
- def split_heads(self, x, k=False):
- new_x_shape = x.size()[:-1] + (self.n_head, x.size(-1) // self.n_head)
- x = x.view(*new_x_shape) # in Tensorflow implem: fct split_states
- if k:
- return x.permute(0, 2, 3, 1)
- else:
- return x.permute(0, 2, 1, 3)
-
- """
- Input:
- query: batch x len x n_state
- key, value: batch x kv_len x n_state
- x_mask: batch x kv_len # key and value's mask (if not None, used for encoder's self-attention and decoder's src-tgt attention)
- one_dir_visible: only sees previous history (used for decoder's self-attention)
- return_attn_weight: if true, also return the attention weights
- Output:
- a: batch x len x n_state
- attn_weight (if return_attn_weight): batch x len x kv_len
- """
-
- def forward(
- self, query, key, value, x_mask, one_dir_visible=False, return_attn_weight=False
- ):
- query = self.split_heads(query)
- # batch x n_head x len x (n_state/n_head)
-
- key = self.split_heads(key, k=True)
- # batch x n_head x (n_state/n_head) x kv_len
-
- value = self.split_heads(value)
- # batch x n_head x kv_len x (n_state/n_head)
-
- out = self._attn(query, key, value, x_mask, one_dir_visible, return_attn_weight)
-
- if return_attn_weight:
- a, attn_weight = out
- # a: batch x n_head x len x (n_state/n_head)
- # attn_weight: batch x n_head x len x kv_len
- attn_weight = attn_weight.permute(0, 2, 3, 1).contiguous()
- # batch x len x kv_len x n_head
- attn_weight = torch.sum(attn_weight, dim=3)
- # batch x len x kv_len
- else:
- a = out
- # batch x n_head x len x (n_state/n_head)
-
- a = self.merge_heads(a)
- # batch x len x n_state
-
- a = self.c_proj(a)
- # batch x len x n_state
-
- a = self.resid_dropout(a)
- # batch x len x n_state
-
- if return_attn_weight:
- return a, attn_weight
- else:
- return a
-
-
-"""
- Two-layer network
-"""
-
-
-class MLP(nn.Module):
- """
- Input:
- n_state: intermediate dim
- """
-
- def __init__(self, n_state, opt): # in MLP: n_state=3072 (4 * n_embd)
- super(MLP, self).__init__()
- nx = int(opt["transformer_embed_dim"])
- resid_pdrop = opt["TRANSFORMER_RESIDUAL_DROPOUT"]
- self.c_fc = Conv1D(n_state, nx)
- self.c_proj = Conv1D(nx, n_state)
- self.dropout = nn.Dropout(resid_pdrop)
-
- """
- Input:
- x: batch x len x nx
- Output: batch x len x nx
- """
-
- def forward(self, x):
- h = F.relu(self.c_fc(x))
- h2 = self.c_proj(h)
- return self.dropout(h2)
-
-
-"""
- One encoder block of transformer
-"""
-
-
-class EncoderBlock(nn.Module):
- def __init__(self, opt):
- super(EncoderBlock, self).__init__()
- nx = int(opt["transformer_embed_dim"])
- self.one_dir_visible = False
- if "transformer_encoder_one_dir_visible" in opt:
- self.one_dir_visible = opt["transformer_encoder_one_dir_visible"]
- self.splitter = Splitter(nx)
- self.attn = Attention(nx, opt)
- self.ln_1 = LayerNorm(nx)
- self.mlp = MLP(4 * nx, opt)
- self.ln_2 = LayerNorm(nx)
-
- """
- Input:
- x: batch x len x n_state
- x_mask: batch x len (1 means there's something)
- Output:
- h: batch x len x n_state
- """
-
- def forward(self, x, x_mask):
- query, key, value = self.splitter(x)
- if self.one_dir_visible:
- # in this case, use triangle masking, as it's one_direction
- a = self.attn(query, key, value, None, one_dir_visible=True)
- else:
- # in this case, use x_mask for attention masking
- a = self.attn(query, key, value, x_mask, one_dir_visible=False)
-
- n = self.ln_1(x + a) # residual
- m = self.mlp(n)
- h = self.ln_2(n + m)
- return h
-
-
-"""
-    One decoder block of transformer
-"""
-
-
-class DecoderBlock(nn.Module):
- def __init__(self, opt):
- super(DecoderBlock, self).__init__()
- nx = int(opt["transformer_embed_dim"])
- self.decoder_splitter = Splitter(nx)
- self.self_attn = Attention(nx, opt)
- self.cross_attn = Attention(nx, opt)
- self.ln_1 = LayerNorm(nx)
- self.ln_2 = LayerNorm(nx)
- self.mlp = MLP(4 * nx, opt)
- self.ln_3 = LayerNorm(nx)
-
- """
- Input:
- x_mask: batch x len, mask for encoder's input
- y: batch x len x n_state (decoder part)
- enc_key: batch x encoder_len x n_state
- enc_value: batch x encoder_len x n_state
- lang_model: whether it's for language model training (no encoder part is used)
- Output:
- h: batch x len x n_state
- """
-
- def forward(self, x_mask, y, enc_key, enc_value, lang_model=False):
- query, key, value = self.decoder_splitter(y)
- # batch x len x n_state
-
- # self-attention
- a = self.self_attn(query, key, value, None, one_dir_visible=True)
- # batch x len x n_state
-
- n = self.ln_1(y + a) # residual
-
- # seq2seq
- if not lang_model:
- # src-tgt attention
- o = self.cross_attn(n, enc_key, enc_value, x_mask)
- p = self.ln_2(n + o) # residual
- # batch x len x n_state
- else: # language model
- p = n
-
- m = self.mlp(p)
- h = self.ln_3(p + m)
- return h
-
-
-"""
- Embedder
-"""
-
-
-class Embedder(nn.Module):
- """
- Input:
- vocab: size of vocabulary
- """
-
- def __init__(self, opt, embed=None):
- super(Embedder, self).__init__()
- n_state = int(opt["transformer_embed_dim"]) # n_state
- embed_dropout_rate = opt["TRANSFORMER_EMBED_DROPOUT"]
- if embed is None:
- self.embed = nn.Embedding(opt["vocab_size"], n_state)
- nn.init.normal_(self.embed.weight, std=0.02)
- else:
- self.embed = embed
- self.drop = nn.Dropout(embed_dropout_rate)
- self.pos_emb = PositionalEmbedding(opt, n_state)
- self.use_cuda = opt["cuda"]
-
- """
- Input:
- x: batch x len (word_id)
- Output:
- h: batch x len x n_state
- """
-
- def forward(self, x):
- x_emb = self.embed(x)
- batch_size = x.shape[0]
- x_len = x.shape[1]
- x_pos = self.pos_emb(
- torch.arange(x_len).type(
- torch.cuda.FloatTensor if self.use_cuda else torch.FloatTensor
- )
- ) # len x n_state
- x_pos = (
- Variable(
- x_pos.unsqueeze(0).repeat(batch_size, 1, 1), requires_grad=False
- ).cuda()
- if self.use_cuda
- else Variable(
- x_pos.unsqueeze(0).repeat(batch_size, 1, 1), requires_grad=False
- )
- )
- x_input = x_emb + x_pos
- h = self.drop(x_input)
- return h
-
-
-"""
- Transformer encoder
-"""
-
-
-class TransformerEncoder(nn.Module):
- """
- Input:
- embed: (if not None) pre-computed vocab embeddings
- """
-
- def __init__(self, opt, embed=None):
- super(TransformerEncoder, self).__init__()
- vocab = int(opt["vocab_size"])
- n_state = int(opt["transformer_embed_dim"])
- n_layer = int(opt["TRANSFORMER_LAYER"])
- if "vae_z_scale_factor" in opt:
- self.vae_z_scale_factor = float(opt["vae_z_scale_factor"])
-
- self.embedder = Embedder(opt, embed)
- block = EncoderBlock(opt)
- self.blocks = nn.ModuleList([copy.deepcopy(block) for _ in range(n_layer)])
- self.use_cuda = opt["cuda"]
-
- """
- Input:
- x: batch x len (word_id)
- z (optional): batch x len x n_state (for VAE)
- Output:
- h: batch x len x n_state (word_id)
- """
-
- def forward(self, x, z=None):
- x_mask = ~x.eq(0) # 1 is PAD_id
- x_mask = x_mask.type(
- torch.cuda.FloatTensor if self.use_cuda else torch.FloatTensor
- )
-
- h = self.embedder(x)
- if z is not None:
- z *= self.vae_z_scale_factor
- h += z
-
- for block in self.blocks:
- h = block(h, x_mask)
- return h
-
-
-"""
- Transformer decoder
-"""
-
-
-class TransformerDecoder(nn.Module):
- """
- Input:
- embed: (if not None) pre-computed vocab embeddings
- """
-
- def __init__(self, opt, embed=None):
- super(TransformerDecoder, self).__init__()
- self.opt = opt
- vocab_size = int(opt["vocab_size"])
- n_state = int(opt["transformer_embed_dim"]) # n_state
- n_layer = int(opt["TRANSFORMER_LAYER"])
- self.embedder = Embedder(opt, embed)
- self.encoder_splitter = Splitter(n_state)
- block = DecoderBlock(opt)
- self.blocks = nn.ModuleList([copy.deepcopy(block) for _ in range(n_layer)])
- if embed is None:
- self.linear = Conv1D(vocab_size, n_state)
- else:
- self.linear = nn.Linear(n_state, vocab_size, bias=False)
- if (
- "FINETUNE_RETRAIN_SOFTMAX" not in opt
- ): # if FINETUNE_RETRAIN_SOFTMAX, linear needs to be separately trained
- self.linear.weight = embed.weight # share weight
- self.use_cuda = opt["cuda"]
-
- """
- Input:
- x: batch x encoder_len (word id)
- x_out: batch x encoder_len x n_state
- y: batch x len (word_id) (decoder part)
- lang_model: whether it's for language model training (no encoder part is used)
- Output:
- prob: batch x len x vocab_size (probabilities after softmax)
- """
-
- def forward(self, x, x_out, y, lang_model=False):
- # seq2seq
- if not lang_model:
- _, enc_key, enc_value = self.encoder_splitter(x_out)
- # enc_key: batch x encoder_len x n_state
- # enc_value: batch x encoder_len x n_state
-
- x_mask = ~x.eq(0) # 0 is PAD_id; mask is 1 for non-pad tokens
- x_mask = x_mask.type(
- torch.cuda.FloatTensor if self.use_cuda else torch.FloatTensor
- )
- else:
- enc_key = None
- enc_value = None
- x_mask = None
-
- h = self.embedder(y)
- for block in self.blocks:
- h = block(x_mask, h, enc_key, enc_value, lang_model)
- prob = F.softmax(self.linear(h), dim=-1)
- return prob
-
-
-class TransformerBeam:
- """
- Input:
- encoder: TransformerEncoder class
- decoder: TransformerDecoder class
- begin_id: word id of the sentence-begin token
- vocab: list of words
- """
-
- def __init__(self, opt, encoder, decoder, begin_id, vocab):
- self.encoder = encoder
- self.decoder = decoder
- self.opt = opt
- self.max_sent_len = int(opt["max_sent_len"])
- self.begin_id = begin_id
- self.vocab = vocab
- self.beam_width = int(opt["beam_width"])
- self.use_cuda = opt["cuda"]
-
- # each candidate is (idx, prob, 0/1, position/wordid)
- def merge_candidates(self, cand_A, cand_B):
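- # two-pointer merge of two probability-sorted candidate lists, keeping at most beam_width items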
- C = []
- pA, lA, pB, lB = 0, len(cand_A), 0, len(cand_B)
- lC = 0
- while (pA < lA or pB < lB) and (lC < self.beam_width):
- if pA < lA and (pB >= lB or cand_A[pA][1] > cand_B[pB][1]):
- C.append(cand_A[pA])
- pA += 1
- else:
- C.append(cand_B[pB])
- pB += 1
- lC += 1
- return C
-
- """
- Input:
- x = batch * encoder_len (word_ids) encoder's input
- k: top-k sampling
- Output:
- sents: a list with batch items; each item holds up to beam_width (sentence, log_prob) pairs, and each sentence has up to max_sent_len words
- """
-
- def topk(self, x, k):
- batch_size = x.shape[0]
- x_len = x.shape[1]
- x_out = self.encoder(x)
- # x_out: batch x encoder_len x n_state
-
- # sent_ids is the words for each of the batch_size sentences
- sent_ids = []
- for i in range(batch_size):
- sent_ids.append([self.begin_id])
-
- topk = 1
- MIN_GEN_LENGTH = 45
- if "MIN_GEN_LENGTH" in self.opt:
- MIN_GEN_LENGTH = int(self.opt["MIN_GEN_LENGTH"])
- for l in range(self.max_sent_len):
- y = (
- Variable(torch.LongTensor(sent_ids)).cuda()
- if self.use_cuda
- else Variable(torch.LongTensor(sent_ids))
- ) # batch_size x l
- decoder_outputs = self.decoder(x, x_out, y)
- probs = decoder_outputs[
- :, -1, :
- ] # batch_size x vocab_size (only take the last output)
- for i in range(batch_size):
- topk_probs, _ = torch.topk(probs[i], k)
- threshold = float(topk_probs[-1])
- probs[i][probs[i] < threshold] = 0.0
-
- samples = torch.multinomial(
- probs, 2
- ) # sample 2 since the first one may be the end-of-sentence token
- for i in range(batch_size):
- if l < MIN_GEN_LENGTH and self.vocab[int(samples[i, 0])] == "":
- sent_ids[i].append(int(samples[i, 1]))
- else:
- sent_ids[i].append(int(samples[i, 0]))
-
- sents = []
- for i in range(batch_size):
- utt = []
- for j in range(len(sent_ids[i])):
- w = self.vocab[sent_ids[i][j]]
- if w == "":
- continue
- if w == "":
- break
- utt.append(w)
- sents.append([(utt, 0)])
-
- return sents
-
- """
- Input:
- x = batch * encoder_len (word_ids) encoder's input
- Output:
- sents: a list with batch items; each item holds up to beam_width (sentence, log_prob) pairs, and each sentence has up to max_sent_len words
- """
-
- def beam_search(self, x):
- batch_size = x.shape[0]
- x_len = x.shape[1]
- x_out = self.encoder(x)
- # x_out: batch x encoder_len x n_state
-
- sents = []
- topk = 1
- history_nodes = [{}]
- end_nodes = {}
- for idx in range(batch_size):
- start_node = BeamSearchNode([self.begin_id], 0, 1)
- history_nodes[0][idx] = [start_node]
- end_nodes[idx] = []
-
- for l in range(self.max_sent_len):
- last_nodes = history_nodes[-1]
- if sum([len(nodes) for _, nodes in last_nodes.items()]) == 0: # no nodes left
- break
- ys = []
- x_outs = []
- xs = []
- for idx in range(batch_size):
- ys.extend([node.word_ids for node in last_nodes[idx]])
- x_outs.extend(
- [x_out[idx, :, :].unsqueeze(0) for node in last_nodes[idx]]
- )
- xs.extend([x[idx, :].unsqueeze(0) for node in last_nodes[idx]])
-
- ys = (
- Variable(torch.LongTensor(ys)).cuda()
- if self.use_cuda
- else Variable(torch.LongTensor(ys))
- ) # N x l
- x_outs = torch.cat(x_outs, dim=0) # N x x_len x n_state
- xs = torch.cat(xs, dim=0) # N x x_len
- probs = self.decoder(xs, x_outs, ys)
- log_probs = torch.log(
- probs[:, -1, :] + 1e-15
- ) # N x vocab_size (only take the last output)
-
- history_nodes.append({})
- p = 0
- for idx in range(batch_size):
- history_nodes[-1][idx] = []
- N = len(last_nodes[idx])
- if N == 0:
- continue
- log_prob = log_probs[p : p + N]
- p += N
- # log_prob = N x extended_vocab_size
-
- # generate
- candidates = []
- for k in range(N):
- logprobs, ids = torch.topk(log_prob[k], self.beam_width)
- candidates = self.merge_candidates(
- candidates, [(k, p, d) for p, d in zip(logprobs, ids)]
- )
-
- candidates = candidates[: self.beam_width]
- extended_nodes_in_last_nodes = set()
- for k in range(len(candidates)):
- h, logp, next_word_id = candidates[
- k
- ] # h means "the h-th node in last_nodes"
- logp = float(logp)
- next_word_id = int(next_word_id)
- prev_node = last_nodes[idx][h]
- next_wordids = prev_node.word_ids + [next_word_id]
- next_word = self.vocab[next_word_id]
-
- next_node = BeamSearchNode(
- next_wordids, prev_node.log_prob + logp, prev_node.length + 1
- )
- if next_node.duplicate == False: # no duplicate trigram generated
- extended_nodes_in_last_nodes.add(h)
- if next_word == "" or l == self.max_sent_len - 1:
- end_nodes[idx].append((next_node.eval(), next_node))
- else:
- history_nodes[-1][idx].append(next_node)
-
- special_words = ["", "", "", "", "", ""]
- for k in range(N):
- if k not in extended_nodes_in_last_nodes:
- node = last_nodes[idx][k]
- effective_word_count = sum(
- [
- 1
- for x in node.word_ids
- if self.vocab[x] not in special_words
- ]
- )
- if effective_word_count >= 5:
- end_nodes[idx].append((node.eval(), node))
-
- MIN_GEN_LENGTH = 45
- if "MIN_GEN_LENGTH" in self.opt:
- MIN_GEN_LENGTH = int(self.opt["MIN_GEN_LENGTH"])
- for idx in range(batch_size):
- t = len([w for w in end_nodes[idx] if w[1].length > MIN_GEN_LENGTH])
- if t > 0:
- end_nodes[idx] = [
- w for w in end_nodes[idx] if w[1].length > MIN_GEN_LENGTH
- ]
-
- end_nodes[idx].sort(key=lambda tup: tup[0], reverse=True)
- candidates = []
- for score, node in end_nodes[idx][:topk]:
- utt = [self.vocab[x] for x in node.word_ids]
- utt = [x for x in utt if x not in ["", ""]]
- candidates.append((utt, score))
- if len(candidates) == 0:
- candidates.append(("", 0))
- sents.append(candidates)
-
- return sents
-
-
-class BeamSearchNode(object):
- def __init__(self, word_ids, log_prob, length):
- self.word_ids = word_ids
- self.log_prob = log_prob
- self.length = length
-
- trigram_set = set()
- self.duplicate = False
-
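- # flag the hypothesis as a duplicate if it repeats any trigram of word ids (beam search will not extend it)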
- for i in range(2, len(word_ids)):
- trigram = (
- str(word_ids[i - 2])
- + " "
- + str(word_ids[i - 1])
- + " "
- + str(word_ids[i])
- )
- if trigram in trigram_set:
- self.duplicate = True
- break
- trigram_set.add(trigram)
-
- def eval(self):
- return self.log_prob / float(self.length - 1.0 + 1e-6)
-
- def __lt__(self, other):
- return self.length < other.length
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/.github/ISSUE_TEMPLATE/bug_report.md b/spaces/amarchheda/ChordDuplicate/portaudio/.github/ISSUE_TEMPLATE/bug_report.md
deleted file mode 100644
index 794c5e989b3e58595241a52197186b5482857690..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/.github/ISSUE_TEMPLATE/bug_report.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: ''
-labels: ''
-assignees: ''
-
----
-
-(Please use the mailing list for support requests and general discussion. This is only for actual bugs.)
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-**To Reproduce**
-Steps to reproduce the behavior. Include code if applicable.
-1.
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Actual behavior**
-What actually happened.
-Include a recording if helpful.
-Error messages or logs longer than a page should be attached as a .txt file.
-
-**Desktop (please complete the following information):**
- - OS: [e.g. Mac OS]
- - OS Version [e.g. 22]
- - PortAudio version: stable, nightly snapshot (which?), current (please give date and/or Git hash):
- - If Windows or Linux, which Host API (e.g. WASAPI):
-
-**Additional context**
-Add any other context about the problem here.
diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/noise.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/noise.py
deleted file mode 100644
index 768f0e9f73ea50b3262c643b712730f614488895..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/noise.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import torch
-import numpy as np
-from PIL import ImageOps
-import math
-from .animation import sample_to_cv2
-import cv2
-
-deforum_noise_gen = torch.Generator(device='cpu')
-
-# 2D Perlin noise in PyTorch https://gist.github.com/vadimkantorov/ac1b097753f217c5c11bc2ff396e0a57
-def rand_perlin_2d(shape, res, fade = lambda t: 6*t**5 - 15*t**4 + 10*t**3):
- delta = (res[0] / shape[0], res[1] / shape[1])
- d = (shape[0] // res[0], shape[1] // res[1])
-
- grid = torch.stack(torch.meshgrid(torch.arange(0, res[0], delta[0]), torch.arange(0, res[1], delta[1]), indexing='ij'), dim = -1) % 1
- angles = 2*math.pi*torch.rand(res[0]+1, res[1]+1, generator=deforum_noise_gen)
- gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim = -1)
-
- tile_grads = lambda slice1, slice2: gradients[slice1[0]:slice1[1], slice2[0]:slice2[1]].repeat_interleave(d[0], 0).repeat_interleave(d[1], 1)
- dot = lambda grad, shift: (torch.stack((grid[:shape[0],:shape[1],0] + shift[0], grid[:shape[0],:shape[1], 1] + shift[1] ), dim = -1) * grad[:shape[0], :shape[1]]).sum(dim = -1)
-
- n00 = dot(tile_grads([0, -1], [0, -1]), [0, 0])
- n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0])
- n01 = dot(tile_grads([0, -1],[1, None]), [0, -1])
- n11 = dot(tile_grads([1, None], [1, None]), [-1,-1])
- t = fade(grid[:shape[0], :shape[1]])
- return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1])
-
-def rand_perlin_2d_octaves(shape, res, octaves=1, persistence=0.5):
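- # sum several octaves of Perlin noise; each octave doubles the frequency and scales the amplitude by persistence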
- noise = torch.zeros(shape)
- frequency = 1
- amplitude = 1
- for _ in range(int(octaves)):
- noise += amplitude * rand_perlin_2d(shape, (frequency*res[0], frequency*res[1]))
- frequency *= 2
- amplitude *= persistence
- return noise
-
-def condition_noise_mask(noise_mask, invert_mask = False):
- if invert_mask:
- noise_mask = ImageOps.invert(noise_mask)
- noise_mask = np.array(noise_mask.convert("L"))
- noise_mask = noise_mask.astype(np.float32) / 255.0
- noise_mask = np.around(noise_mask, decimals=0)
- noise_mask = torch.from_numpy(noise_mask)
- #noise_mask = torch.round(noise_mask)
- return noise_mask
-
-def add_noise(sample, noise_amt: float, seed: int, noise_type: str, noise_args, noise_mask = None, invert_mask = False):
- deforum_noise_gen.manual_seed(seed) # Reproducibility
- sample2dshape = (sample.shape[0], sample.shape[1]) #sample is cv2, so height - width
- noise = torch.randn((sample.shape[2], sample.shape[0], sample.shape[1]), generator=deforum_noise_gen) # White noise
- if noise_type == 'perlin':
- # rand_perlin_2d_octaves is between -1 and 1, so we need to shift it to be between 0 and 1
- # print(sample.shape)
- noise = noise * ((rand_perlin_2d_octaves(sample2dshape, (int(noise_args[0]), int(noise_args[1])), octaves=noise_args[2], persistence=noise_args[3]) + torch.ones(sample2dshape)) / 2)
- if noise_mask is not None:
- noise_mask = condition_noise_mask(noise_mask, invert_mask)
- noise_to_add = sample_to_cv2(noise * noise_mask)
- else:
- noise_to_add = sample_to_cv2(noise)
- sample = cv2.addWeighted(sample, 1-noise_amt, noise_to_add, noise_amt, 0)
-
- return sample
diff --git a/spaces/armgabrielyan/search-in-video/utils.py b/spaces/armgabrielyan/search-in-video/utils.py
deleted file mode 100644
index 39b8db4f46d1df025e67eddd56da4cb789c40214..0000000000000000000000000000000000000000
--- a/spaces/armgabrielyan/search-in-video/utils.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from transformers import ViTFeatureExtractor
-import torchvision.transforms.functional as fn
-import torch as th
-
-
-def video2image(video, feature_extractor_name):
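- # sample 49 frames uniformly from the video, tile them into a 7x7 grid, and resize so the shorter side is 224 px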
- feature_extractor = ViTFeatureExtractor.from_pretrained(
- feature_extractor_name
- )
-
- vid = th.permute(video, (3, 0, 1, 2))
- samp = th.linspace(0, vid.shape[1]-1, 49, dtype=th.long)
- vid = vid[:, samp, :, :]
-
- im_l = list()
- for i in range(vid.shape[1]):
- im_l.append(vid[:, i, :, :])
-
- inputs = feature_extractor(im_l, return_tensors="pt")
-
- inputs = inputs['pixel_values']
-
- im_h = list()
- for i in range(7):
- im_v = th.cat((inputs[0+i*7, :, :, :],
- inputs[1+i*7, :, :, :],
- inputs[2+i*7, :, :, :],
- inputs[3+i*7, :, :, :],
- inputs[4+i*7, :, :, :],
- inputs[5+i*7, :, :, :],
- inputs[6+i*7, :, :, :]), 2)
- im_h.append(im_v)
- resize = fn.resize(th.cat(im_h, 1), size=[224])
-
- return resize
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/bark/inference_funcs.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/bark/inference_funcs.py
deleted file mode 100644
index f3d3fee9371fae0cd06187c967a5b0028940138e..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/bark/inference_funcs.py
+++ /dev/null
@@ -1,606 +0,0 @@
-import logging
-import os
-import re
-from glob import glob
-from typing import Dict, List
-
-import librosa
-import numpy as np
-import torch
-import torchaudio
-import tqdm
-from encodec.utils import convert_audio
-from scipy.special import softmax
-from torch.nn import functional as F
-
-from TTS.tts.layers.bark.hubert.hubert_manager import HubertManager
-from TTS.tts.layers.bark.hubert.kmeans_hubert import CustomHubert
-from TTS.tts.layers.bark.hubert.tokenizer import HubertTokenizer
-from TTS.tts.layers.bark.load_model import clear_cuda_cache, inference_mode
-
-logger = logging.getLogger(__name__)
-
-
-def _tokenize(tokenizer, text):
- return tokenizer.encode(text, add_special_tokens=False)
-
-
-def _detokenize(tokenizer, enc_text):
- return tokenizer.decode(enc_text)
-
-
-def _normalize_whitespace(text):
- return re.sub(r"\s+", " ", text).strip()
-
-
-def get_voices(extra_voice_dirs: List[str] = []): # pylint: disable=dangerous-default-value
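- # each subdirectory of an extra voice dir is a voice; collect its .npz prompts (falling back to raw audio files)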
- dirs = extra_voice_dirs
- voices: Dict[str, List[str]] = {}
- for d in dirs:
- subs = os.listdir(d)
- for sub in subs:
- subj = os.path.join(d, sub)
- if os.path.isdir(subj):
- voices[sub] = list(glob(f"{subj}/*.npz"))
- # fetch audio files if no npz files are found
- if len(voices[sub]) == 0:
- voices[sub] = list(glob(f"{subj}/*.wav")) + list(glob(f"{subj}/*.mp3"))
- return voices
-
-
-def load_npz(npz_file):
- x_history = np.load(npz_file)
- semantic = x_history["semantic_prompt"]
- coarse = x_history["coarse_prompt"]
- fine = x_history["fine_prompt"]
- return semantic, coarse, fine
-
-
-def load_voice(model, voice: str, extra_voice_dirs: List[str] = []): # pylint: disable=dangerous-default-value
- if voice == "random":
- return None, None, None
-
- voices = get_voices(extra_voice_dirs)
- paths = voices[voice]
-
- # bark only uses a single sample for cloning
- if len(paths) > 1:
- raise ValueError(f"Voice {voice} has multiple paths: {paths}")
-
- try:
- path = voices[voice]
- except KeyError as e:
- raise KeyError(f"Voice {voice} not found in {extra_voice_dirs}") from e
-
- if len(paths) == 1 and paths[0].endswith(".npz"):
- return load_npz(path[0])
-
- audio_path = paths[0]
- # replace the file extension with .npz
- output_path = os.path.splitext(audio_path)[0] + ".npz"
- generate_voice(audio=audio_path, model=model, output_path=output_path)
- return load_voice(model, voice, extra_voice_dirs)
-
-
-def zero_crossing_rate(audio, frame_length=1024, hop_length=512):
- zero_crossings = np.sum(np.abs(np.diff(np.sign(audio))) / 2)
- total_frames = 1 + int((len(audio) - frame_length) / hop_length)
- return zero_crossings / total_frames
-
-
-def compute_spectral_contrast(audio_data, sample_rate, n_bands=6, fmin=200.0):
- spectral_contrast = librosa.feature.spectral_contrast(y=audio_data, sr=sample_rate, n_bands=n_bands, fmin=fmin)
- return np.mean(spectral_contrast)
-
-
-def compute_average_bass_energy(audio_data, sample_rate, max_bass_freq=250):
- stft = librosa.stft(audio_data)
- power_spectrogram = np.abs(stft) ** 2
- frequencies = librosa.fft_frequencies(sr=sample_rate, n_fft=stft.shape[0])
- bass_mask = frequencies <= max_bass_freq
- bass_energy = power_spectrogram[np.ix_(bass_mask, np.arange(power_spectrogram.shape[1]))].mean()
- return bass_energy
-
-
-def generate_voice(
- audio,
- model,
- output_path,
-):
- """Generate a new voice from a given audio and text prompt.
-
- Args:
- audio (np.ndarray or str): The audio to use as a base for the new voice, or a path to an audio file.
- model (BarkModel): The BarkModel to use for generating the new voice.
- output_path (str): The path to save the generated voice to.
- """
- if isinstance(audio, str):
- audio, sr = torchaudio.load(audio)
- audio = convert_audio(audio, sr, model.config.sample_rate, model.encodec.channels)
- audio = audio.unsqueeze(0).to(model.device)
-
- with torch.no_grad():
- encoded_frames = model.encodec.encode(audio)
- codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1).squeeze() # [n_q, T]
-
- # move codes to cpu
- codes = codes.cpu().numpy()
-
- # generate semantic tokens
- # Load the HuBERT model
- hubert_manager = HubertManager()
- # hubert_manager.make_sure_hubert_installed(model_path=model.config.LOCAL_MODEL_PATHS["hubert"])
- hubert_manager.make_sure_tokenizer_installed(model_path=model.config.LOCAL_MODEL_PATHS["hubert_tokenizer"])
-
- hubert_model = CustomHubert(checkpoint_path=model.config.LOCAL_MODEL_PATHS["hubert"]).to(model.device)
-
- # Load the CustomTokenizer model
- tokenizer = HubertTokenizer.load_from_checkpoint(
- model.config.LOCAL_MODEL_PATHS["hubert_tokenizer"], map_location=model.device
- )
- # semantic_tokens = model.text_to_semantic(
- # text, max_gen_duration_s=seconds, top_k=50, top_p=0.95, temp=0.7
- # ) # not 100%
- semantic_vectors = hubert_model.forward(audio[0], input_sample_hz=model.config.sample_rate)
- semantic_tokens = tokenizer.get_token(semantic_vectors)
- semantic_tokens = semantic_tokens.cpu().numpy()
-
- np.savez(output_path, fine_prompt=codes, coarse_prompt=codes[:2, :], semantic_prompt=semantic_tokens)
-
-
-def generate_text_semantic(
- text,
- model,
- history_prompt=None,
- temp=0.7,
- top_k=None,
- top_p=None,
- silent=False,
- min_eos_p=0.2,
- max_gen_duration_s=None,
- allow_early_stop=True,
- base=None,
- use_kv_caching=True,
- **kwargs, # pylint: disable=unused-argument
-):
- """Generate semantic tokens from text.
-
- Args:
- text (str): The text to generate semantic tokens from.
- model (BarkModel): The BarkModel to use for generating the semantic tokens.
- history_prompt (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a prompt for the generation.
- temp (float): The temperature to use for the generation.
- top_k (int): The number of top tokens to consider for the generation.
- top_p (float): The cumulative probability to consider for the generation.
- silent (bool): Whether to silence the tqdm progress bar.
- min_eos_p (float): The minimum probability to consider for the end of sentence token.
- max_gen_duration_s (float): The maximum duration in seconds to generate for.
- allow_early_stop (bool): Whether to allow the generation to stop early.
- base (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a base for the generation.
- use_kv_caching (bool): Whether to use key-value caching for the generation.
- **kwargs: Additional keyword arguments. They are ignored.
-
- Returns:
- np.ndarray: The generated semantic tokens.
- """
- assert isinstance(text, str)
- text = _normalize_whitespace(text)
- assert len(text.strip()) > 0
- if all(v is not None for v in history_prompt) or base is not None:
- if history_prompt is not None:
- semantic_history = history_prompt[0]
- if base is not None:
- semantic_history = base[0]
- assert (
- isinstance(semantic_history, np.ndarray)
- and len(semantic_history.shape) == 1
- and len(semantic_history) > 0
- and semantic_history.min() >= 0
- and semantic_history.max() <= model.config.SEMANTIC_VOCAB_SIZE - 1
- )
- else:
- semantic_history = None
- encoded_text = np.array(_tokenize(model.tokenizer, text)) + model.config.TEXT_ENCODING_OFFSET
- if len(encoded_text) > 256:
- p = round((len(encoded_text) - 256) / len(encoded_text) * 100, 1)
- logger.warning(f"warning, text too long, lopping of last {p}%")
- encoded_text = encoded_text[:256]
- encoded_text = np.pad(
- encoded_text,
- (0, 256 - len(encoded_text)),
- constant_values=model.config.TEXT_PAD_TOKEN,
- mode="constant",
- )
- if semantic_history is not None:
- semantic_history = semantic_history.astype(np.int64)
- # lop off if history is too long, pad if needed
- semantic_history = semantic_history[-256:]
- semantic_history = np.pad(
- semantic_history,
- (0, 256 - len(semantic_history)),
- constant_values=model.config.SEMANTIC_PAD_TOKEN,
- mode="constant",
- )
- else:
- semantic_history = np.array([model.config.SEMANTIC_PAD_TOKEN] * 256)
- x = torch.from_numpy(
- np.hstack([encoded_text, semantic_history, np.array([model.config.SEMANTIC_INFER_TOKEN])]).astype(np.int64)
- )[None]
- assert x.shape[1] == 256 + 256 + 1
- with inference_mode():
- x = x.to(model.device)
- n_tot_steps = 768
- # custom tqdm updates since we don't know when eos will occur
- pbar = tqdm.tqdm(disable=silent, total=100)
- pbar_state = 0
- tot_generated_duration_s = 0
- kv_cache = None
- for n in range(n_tot_steps):
- if use_kv_caching and kv_cache is not None:
- x_input = x[:, [-1]]
- else:
- x_input = x
- logits, kv_cache = model.semantic_model(
- x_input, merge_context=True, use_cache=use_kv_caching, past_kv=kv_cache
- )
- relevant_logits = logits[0, 0, : model.config.SEMANTIC_VOCAB_SIZE]
- if allow_early_stop:
- relevant_logits = torch.hstack(
- (relevant_logits, logits[0, 0, [model.config.SEMANTIC_PAD_TOKEN]])
- ) # eos
- if top_p is not None:
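- # nucleus (top-p) filtering: mask out tokens outside the smallest set whose cumulative probability exceeds top_p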
- # faster to convert to numpy
- logits_device = relevant_logits.device
- logits_dtype = relevant_logits.type()
- relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy()
- sorted_indices = np.argsort(relevant_logits)[::-1]
- sorted_logits = relevant_logits[sorted_indices]
- cumulative_probs = np.cumsum(softmax(sorted_logits))
- sorted_indices_to_remove = cumulative_probs > top_p
- sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy()
- sorted_indices_to_remove[0] = False
- relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf
- relevant_logits = torch.from_numpy(relevant_logits)
- relevant_logits = relevant_logits.to(logits_device).type(logits_dtype)
- if top_k is not None:
- v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1)))
- relevant_logits[relevant_logits < v[-1]] = -float("Inf")
- probs = torch.softmax(relevant_logits / temp, dim=-1)
- item_next = torch.multinomial(probs, num_samples=1)
- if allow_early_stop and (
- item_next == model.config.SEMANTIC_VOCAB_SIZE or (min_eos_p is not None and probs[-1] >= min_eos_p)
- ):
- # eos found, so break
- pbar.update(100 - pbar_state)
- break
- x = torch.cat((x, item_next[None]), dim=1)
- tot_generated_duration_s += 1 / model.config.SEMANTIC_RATE_HZ
- if max_gen_duration_s is not None and tot_generated_duration_s > max_gen_duration_s:
- pbar.update(100 - pbar_state)
- break
- if n == n_tot_steps - 1:
- pbar.update(100 - pbar_state)
- break
- del logits, relevant_logits, probs, item_next
- req_pbar_state = np.min([100, int(round(100 * n / n_tot_steps))])
- if req_pbar_state > pbar_state:
- pbar.update(req_pbar_state - pbar_state)
- pbar_state = req_pbar_state
- pbar.close()
- out = x.detach().cpu().numpy().squeeze()[256 + 256 + 1 :]
- assert all(out >= 0) and all(out < model.config.SEMANTIC_VOCAB_SIZE)
- clear_cuda_cache()
- return out
-
-
-def _flatten_codebooks(arr, offset_size):
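- # offset each codebook row by a multiple of offset_size, then interleave frames column-major into a flat array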
- assert len(arr.shape) == 2
- arr = arr.copy()
- if offset_size is not None:
- for n in range(1, arr.shape[0]):
- arr[n, :] += offset_size * n
- flat_arr = arr.ravel("F")
- return flat_arr
-
-
-def generate_coarse(
- x_semantic,
- model,
- history_prompt=None,
- temp=0.7,
- top_k=None,
- top_p=None,
- silent=False,
- max_coarse_history=630, # min 60 (faster), max 630 (more context)
- sliding_window_len=60,
- base=None,
- use_kv_caching=True,
-):
- """Generate coarse audio codes from semantic tokens.
-
- Args:
- x_semantic (np.ndarray): The semantic tokens to generate coarse audio codes from.
- model (BarkModel): The BarkModel to use for generating the coarse audio codes.
- history_prompt (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a prompt for the generation.
- temp (float): The temperature to use for the generation.
- top_k (int): The number of top tokens to consider for the generation.
- top_p (float): The cumulative probability to consider for the generation.
- silent (bool): Whether to silence the tqdm progress bar.
- max_coarse_history (int): The maximum number of coarse audio codes to use as history.
- sliding_window_len (int): The length of the sliding window to use for the generation.
- base (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a base for the generation.
- use_kv_caching (bool): Whether to use key-value caching for the generation.
-
- Returns:
- np.ndarray: The generated coarse audio codes.
- """
- assert (
- isinstance(x_semantic, np.ndarray)
- and len(x_semantic.shape) == 1
- and len(x_semantic) > 0
- and x_semantic.min() >= 0
- and x_semantic.max() <= model.config.SEMANTIC_VOCAB_SIZE - 1
- )
- assert 60 <= max_coarse_history <= 630
- assert max_coarse_history + sliding_window_len <= 1024 - 256
- semantic_to_coarse_ratio = (
- model.config.COARSE_RATE_HZ / model.config.SEMANTIC_RATE_HZ * model.config.N_COARSE_CODEBOOKS
- )
- max_semantic_history = int(np.floor(max_coarse_history / semantic_to_coarse_ratio))
- if all(v is not None for v in history_prompt) or base is not None:
- if history_prompt is not None:
- x_history = history_prompt
- x_semantic_history = x_history[0]
- x_coarse_history = x_history[1]
- if base is not None:
- x_semantic_history = base[0]
- x_coarse_history = base[1]
- assert (
- isinstance(x_semantic_history, np.ndarray)
- and len(x_semantic_history.shape) == 1
- and len(x_semantic_history) > 0
- and x_semantic_history.min() >= 0
- and x_semantic_history.max() <= model.config.SEMANTIC_VOCAB_SIZE - 1
- and isinstance(x_coarse_history, np.ndarray)
- and len(x_coarse_history.shape) == 2
- and x_coarse_history.shape[0] == model.config.N_COARSE_CODEBOOKS
- and x_coarse_history.shape[-1] >= 0
- and x_coarse_history.min() >= 0
- and x_coarse_history.max() <= model.config.CODEBOOK_SIZE - 1
- and (
- round(x_coarse_history.shape[-1] / len(x_semantic_history), 1)
- == round(semantic_to_coarse_ratio / model.config.N_COARSE_CODEBOOKS, 1)
- )
- )
- x_coarse_history = (
- _flatten_codebooks(x_coarse_history, model.config.CODEBOOK_SIZE) + model.config.SEMANTIC_VOCAB_SIZE
- )
- # trim histories correctly
- n_semantic_hist_provided = np.min(
- [
- max_semantic_history,
- len(x_semantic_history) - len(x_semantic_history) % 2,
- int(np.floor(len(x_coarse_history) / semantic_to_coarse_ratio)),
- ]
- )
- n_coarse_hist_provided = int(round(n_semantic_hist_provided * semantic_to_coarse_ratio))
- x_semantic_history = x_semantic_history[-n_semantic_hist_provided:].astype(np.int32)
- x_coarse_history = x_coarse_history[-n_coarse_hist_provided:].astype(np.int32)
- # TODO: bit of a hack for time alignment (sounds better)
- x_coarse_history = x_coarse_history[:-2]
- else:
- x_semantic_history = np.array([], dtype=np.int32)
- x_coarse_history = np.array([], dtype=np.int32)
- # start loop
- n_steps = int(
- round(
- np.floor(len(x_semantic) * semantic_to_coarse_ratio / model.config.N_COARSE_CODEBOOKS)
- * model.config.N_COARSE_CODEBOOKS
- )
- )
- assert n_steps > 0 and n_steps % model.config.N_COARSE_CODEBOOKS == 0
- x_semantic = np.hstack([x_semantic_history, x_semantic]).astype(np.int32)
- x_coarse = x_coarse_history.astype(np.int32)
- base_semantic_idx = len(x_semantic_history)
- with inference_mode():
- x_semantic_in = torch.from_numpy(x_semantic)[None].to(model.device)
- x_coarse_in = torch.from_numpy(x_coarse)[None].to(model.device)
- n_window_steps = int(np.ceil(n_steps / sliding_window_len))
- n_step = 0
- for _ in tqdm.tqdm(range(n_window_steps), total=n_window_steps, disable=silent):
- semantic_idx = base_semantic_idx + int(round(n_step / semantic_to_coarse_ratio))
- # pad from right side
- x_in = x_semantic_in[:, np.max([0, semantic_idx - max_semantic_history]) :]
- x_in = x_in[:, :256]
- x_in = F.pad(
- x_in,
- (0, 256 - x_in.shape[-1]),
- "constant",
- model.config.COARSE_SEMANTIC_PAD_TOKEN,
- )
- x_in = torch.hstack(
- [
- x_in,
- torch.tensor([model.config.COARSE_INFER_TOKEN])[None].to(model.device),
- x_coarse_in[:, -max_coarse_history:],
- ]
- )
- kv_cache = None
- for _ in range(sliding_window_len):
- if n_step >= n_steps:
- continue
- is_major_step = n_step % model.config.N_COARSE_CODEBOOKS == 0
-
- if use_kv_caching and kv_cache is not None:
- x_input = x_in[:, [-1]]
- else:
- x_input = x_in
-
- logits, kv_cache = model.coarse_model(x_input, use_cache=use_kv_caching, past_kv=kv_cache)
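- # coarse codebooks are predicted in turn; shift the logit window to the codebook being generated at this step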
- logit_start_idx = (
- model.config.SEMANTIC_VOCAB_SIZE + (1 - int(is_major_step)) * model.config.CODEBOOK_SIZE
- )
- logit_end_idx = model.config.SEMANTIC_VOCAB_SIZE + (2 - int(is_major_step)) * model.config.CODEBOOK_SIZE
- relevant_logits = logits[0, 0, logit_start_idx:logit_end_idx]
- if top_p is not None:
- # faster to convert to numpy
- logits_device = relevant_logits.device
- logits_dtype = relevant_logits.type()
- relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy()
- sorted_indices = np.argsort(relevant_logits)[::-1]
- sorted_logits = relevant_logits[sorted_indices]
- cumulative_probs = np.cumsum(softmax(sorted_logits))
- sorted_indices_to_remove = cumulative_probs > top_p
- sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy()
- sorted_indices_to_remove[0] = False
- relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf
- relevant_logits = torch.from_numpy(relevant_logits)
- relevant_logits = relevant_logits.to(logits_device).type(logits_dtype)
- if top_k is not None:
- v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1)))
- relevant_logits[relevant_logits < v[-1]] = -float("Inf")
- probs = torch.nn.functional.softmax(relevant_logits / temp, dim=-1)
- item_next = torch.multinomial(probs, num_samples=1)
- item_next += logit_start_idx
- x_coarse_in = torch.cat((x_coarse_in, item_next[None]), dim=1)
- x_in = torch.cat((x_in, item_next[None]), dim=1)
- del logits, relevant_logits, probs, item_next
- n_step += 1
- del x_in
- del x_semantic_in
- gen_coarse_arr = x_coarse_in.detach().cpu().numpy().squeeze()[len(x_coarse_history) :]
- del x_coarse_in
- assert len(gen_coarse_arr) == n_steps
- gen_coarse_audio_arr = (
- gen_coarse_arr.reshape(-1, model.config.N_COARSE_CODEBOOKS).T - model.config.SEMANTIC_VOCAB_SIZE
- )
- for n in range(1, model.config.N_COARSE_CODEBOOKS):
- gen_coarse_audio_arr[n, :] -= n * model.config.CODEBOOK_SIZE
- clear_cuda_cache()
- return gen_coarse_audio_arr
-
-
-def generate_fine(
- x_coarse_gen,
- model,
- history_prompt=None,
- temp=0.5,
- silent=True,
- base=None,
-):
- """Generate full audio codes from coarse audio codes.
-
- Args:
- x_coarse_gen (np.ndarray): The coarse audio codes to generate full audio codes from.
- model (BarkModel): The BarkModel to use for generating the full audio codes.
- history_prompt (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a prompt for the generation.
- temp (float): The temperature to use for the generation.
- silent (bool): Whether to silence the tqdm progress bar.
- base (tuple): A tuple of (semantic_history, coarse_history, fine_history) to use as a base for the generation.
-
- Returns:
- np.ndarray: The generated full audio codes.
- """
- assert (
- isinstance(x_coarse_gen, np.ndarray)
- and len(x_coarse_gen.shape) == 2
- and 1 <= x_coarse_gen.shape[0] <= model.config.N_FINE_CODEBOOKS - 1
- and x_coarse_gen.shape[1] > 0
- and x_coarse_gen.min() >= 0
- and x_coarse_gen.max() <= model.config.CODEBOOK_SIZE - 1
- )
- if all(v is not None for v in history_prompt) or base is not None:
- if history_prompt is not None:
- x_fine_history = history_prompt[2]
- if base is not None:
- x_fine_history = base[2]
- assert (
- isinstance(x_fine_history, np.ndarray)
- and len(x_fine_history.shape) == 2
- and x_fine_history.shape[0] == model.config.N_FINE_CODEBOOKS
- and x_fine_history.shape[1] >= 0
- and x_fine_history.min() >= 0
- and x_fine_history.max() <= model.config.CODEBOOK_SIZE - 1
- )
- else:
- x_fine_history = None
- n_coarse = x_coarse_gen.shape[0]
- # make input arr
- in_arr = np.vstack(
- [
- x_coarse_gen,
- np.zeros((model.config.N_FINE_CODEBOOKS - n_coarse, x_coarse_gen.shape[1]))
- + model.config.CODEBOOK_SIZE, # padding
- ]
- ).astype(np.int32)
- # prepend history if available (max 512)
- if x_fine_history is not None:
- x_fine_history = x_fine_history.astype(np.int32)
- in_arr = np.hstack(
- [
- x_fine_history[:, -512:].astype(np.int32),
- in_arr,
- ]
- )
- n_history = x_fine_history[:, -512:].shape[1]
- else:
- n_history = 0
- n_remove_from_end = 0
- # need to pad if too short (since non-causal model)
- if in_arr.shape[1] < 1024:
- n_remove_from_end = 1024 - in_arr.shape[1]
- in_arr = np.hstack(
- [
- in_arr,
- np.zeros((model.config.N_FINE_CODEBOOKS, n_remove_from_end), dtype=np.int32)
- + model.config.CODEBOOK_SIZE,
- ]
- )
- # we can be lazy about fractional loop and just keep overwriting codebooks
- n_loops = np.max([0, int(np.ceil((x_coarse_gen.shape[1] - (1024 - n_history)) / 512))]) + 1
- with inference_mode():
- in_arr = torch.tensor(in_arr.T).to(model.device)
- for n in tqdm.tqdm(range(n_loops), disable=silent):
- start_idx = np.min([n * 512, in_arr.shape[0] - 1024])
- start_fill_idx = np.min([n_history + n * 512, in_arr.shape[0] - 512])
- rel_start_fill_idx = start_fill_idx - start_idx
- in_buffer = in_arr[start_idx : start_idx + 1024, :][None]
- for nn in range(n_coarse, model.config.N_FINE_CODEBOOKS):
- logits = model.fine_model(nn, in_buffer)
- if temp is None:
- relevant_logits = logits[0, rel_start_fill_idx:, : model.config.CODEBOOK_SIZE]
- codebook_preds = torch.argmax(relevant_logits, -1)
- else:
- relevant_logits = logits[0, :, : model.config.CODEBOOK_SIZE] / temp
- probs = F.softmax(relevant_logits, dim=-1)
- codebook_preds = torch.hstack(
- [torch.multinomial(probs[n], num_samples=1) for n in range(rel_start_fill_idx, 1024)]
- )
- in_buffer[0, rel_start_fill_idx:, nn] = codebook_preds
- del logits, codebook_preds
- # transfer over info into model_in and convert to numpy
- for nn in range(n_coarse, model.config.N_FINE_CODEBOOKS):
- in_arr[start_fill_idx : start_fill_idx + (1024 - rel_start_fill_idx), nn] = in_buffer[
- 0, rel_start_fill_idx:, nn
- ]
- del in_buffer
- gen_fine_arr = in_arr.detach().cpu().numpy().squeeze().T
- del in_arr
- gen_fine_arr = gen_fine_arr[:, n_history:]
- if n_remove_from_end > 0:
- gen_fine_arr = gen_fine_arr[:, :-n_remove_from_end]
- assert gen_fine_arr.shape[-1] == x_coarse_gen.shape[-1]
- clear_cuda_cache()
- return gen_fine_arr
-
-
-def codec_decode(fine_tokens, model):
- """Turn quantized audio codes into audio array using encodec."""
- arr = torch.from_numpy(fine_tokens)[None]
- arr = arr.to(model.device)
- arr = arr.transpose(0, 1)
- emb = model.encodec.quantizer.decode(arr)
- out = model.encodec.decoder(emb)
- audio_arr = out.detach().cpu().numpy().squeeze()
- return audio_arr
diff --git a/spaces/arxify/RVC-beta-v2-0618/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/arxify/RVC-beta-v2-0618/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
deleted file mode 100644
index 98d4e98b353008f81bde2c37e7da818763a992c9..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class HarvestF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate the F0 contour, filling unvoiced (zero) frames.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
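- # walk through the frames, filling each unvoiced run by interpolating between the neighbouring voiced values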
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i] # this assignment may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
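- # linearly interpolate the F0 track to target_len frames, treating values below 0.001 as unvoiced (NaN)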
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.harvest(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_ceil=self.f0_max,
- f0_floor=self.f0_min,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.harvest(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
diff --git a/spaces/arxify/RVC-beta-v2-0618/my_utils.py b/spaces/arxify/RVC-beta-v2-0618/my_utils.py
deleted file mode 100644
index a5258394b8ae5385daa665ab6ba6380507d4798a..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/my_utils.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import ffmpeg
-import numpy as np
-
-
-def load_audio(file, sr):
- try:
- # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- file = (
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- ) # strip spaces, quotes and newlines that users often copy along with the path
- out, _ = (
- ffmpeg.input(file, threads=0)
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
- .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
- )
- except Exception as e:
- raise RuntimeError(f"Failed to load audio: {e}")
-
- return np.frombuffer(out, np.float32).flatten()
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/__init__.py
deleted file mode 100644
index eec9f692ddb2117e5196f654f5ff6d5a1a44e786..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# flake8: noqa
-from .v5 import *
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/global_cmvn.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/global_cmvn.py
deleted file mode 100644
index e457ff176fee3b996da11f47e7dc61b81c445ba3..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/global_cmvn.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import numpy as np
-from fairseq.data.audio.feature_transforms import (
- AudioFeatureTransform,
- register_audio_feature_transform,
-)
-
-
-@register_audio_feature_transform("global_cmvn")
-class GlobalCMVN(AudioFeatureTransform):
- """Global CMVN (cepstral mean and variance normalization). The global mean
- and variance need to be pre-computed and stored in NumPy format (.npz)."""
-
- @classmethod
- def from_config_dict(cls, config=None):
- _config = {} if config is None else config
- return GlobalCMVN(_config.get("stats_npz_path"))
-
- def __init__(self, stats_npz_path):
- self.stats_npz_path = stats_npz_path
- stats = np.load(stats_npz_path)
- self.mean, self.std = stats["mean"], stats["std"]
-
- def __repr__(self):
- return self.__class__.__name__ + f'(stats_npz_path="{self.stats_npz_path}")'
-
- def __call__(self, x):
- x = np.subtract(x, self.mean)
- x = np.divide(x, self.std)
- return x
diff --git a/spaces/ashercn97/AsherTesting/extensions/ngrok/README.md b/spaces/ashercn97/AsherTesting/extensions/ngrok/README.md
deleted file mode 100644
index 0324bf9852408d9d2b86cc0165c2d548996f9c94..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/extensions/ngrok/README.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Adding an ingress URL through the ngrok Agent SDK for Python
-
-[ngrok](https://ngrok.com) is a globally distributed reverse proxy commonly used for quickly getting a public URL to a
-service running inside a private network, such as on your local laptop. The ngrok agent is usually
-deployed inside a private network and is used to communicate with the ngrok cloud service.
-
-By default, the authtoken in the NGROK_AUTHTOKEN environment variable will be used. Alternatively, one may be specified in
-the `settings.json` file; see the Examples below. Retrieve your authtoken on the [Auth Token page of your ngrok dashboard](https://dashboard.ngrok.com/get-started/your-authtoken); signing up is free.
-
-# Documentation
-
-For a list of all available options, see [the configuration documentation](https://ngrok.com/docs/ngrok-agent/config/) or [the connect example](https://github.com/ngrok/ngrok-py/blob/main/examples/ngrok-connect-full.py).
-
-The ngrok Python SDK is [on github here](https://github.com/ngrok/ngrok-py). A quickstart guide and a full API reference are included in the [ngrok-py Python API documentation](https://ngrok.github.io/ngrok-py/).
-
-# Running
-
-To enable ngrok, install the requirements and then add `--extension ngrok` to the command line options, for instance:
-
-```bash
-pip install -r extensions/ngrok/requirements.txt
-python server.py --extension ngrok
-```
-
-In the output you should then see something like this:
-
-```bash
-INFO:Loading the extension "ngrok"...
-INFO:Session created
-INFO:Created tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" with url "https://d83706cf7be7.ngrok.app"
-INFO:Tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" TCP forwarding to "localhost:7860"
-INFO:Ingress established at https://d83706cf7be7.ngrok.app
-```
-
-You can now access the webui via the url shown, in this case `https://d83706cf7be7.ngrok.app`. It is recommended to add some authentication to the ingress, see below.
-
-# Example Settings
-
-In `settings.json` add a `ngrok` key with a dictionary of options, for instance:
-
-To enable basic authentication:
-```json
-{
- "ngrok": {
- "basic_auth": "user:password"
- }
-}
-```
-
-To enable OAUTH authentication:
-```json
-{
- "ngrok": {
- "oauth_provider": "google",
- "oauth_allow_domains": "asdf.com",
- "oauth_allow_emails": "asdf@asdf.com"
- }
-}
-```
-
-To add an authtoken instead of using the NGROK_AUTHTOKEN environment variable:
-```json
-{
- "ngrok": {
- "authtoken": "",
- "authtoken_from_env":false
- }
-}
-```
\ No newline at end of file
diff --git a/spaces/atimughal662/InfoFusion/app.py b/spaces/atimughal662/InfoFusion/app.py
deleted file mode 100644
index d4bb1f140028f8d79d99dce983e4fd15522be605..0000000000000000000000000000000000000000
--- a/spaces/atimughal662/InfoFusion/app.py
+++ /dev/null
@@ -1 +0,0 @@
-generate.py
\ No newline at end of file
diff --git a/spaces/avans06/whisper-webui-translate/docs/options.md b/spaces/avans06/whisper-webui-translate/docs/options.md
deleted file mode 100644
index 378bdaf4087efbb1326834f8af5084282deca927..0000000000000000000000000000000000000000
--- a/spaces/avans06/whisper-webui-translate/docs/options.md
+++ /dev/null
@@ -1,153 +0,0 @@
-# Standard Options
-To transcribe or translate an audio file, you can either paste a URL from a website (all [websites](https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md)
-supported by YT-DLP will work, including YouTube), upload an audio file (choose "All Files (*.*)"
-in the file selector to select any file type, including video files), or use the microphone.
-
-For longer audio files (>10 minutes), it is recommended that you select Silero VAD (Voice Activity Detector) in the VAD option, especially if you are using the `large-v1` model. Note that `large-v2` is a lot more forgiving, but you may still want to use a VAD with a slightly higher "VAD - Max Merge Size (s)" (60 seconds or more).
-
-## Model
-Select the model that Whisper will use to transcribe the audio:
-
-| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
-|-----------|------------|--------------------|--------------------|---------------|----------------|
-| tiny | 39 M | tiny.en | tiny | ~1 GB | ~32x |
-| base | 74 M | base.en | base | ~1 GB | ~16x |
-| small | 244 M | small.en | small | ~2 GB | ~6x |
-| medium | 769 M | medium.en | medium | ~5 GB | ~2x |
-| large | 1550 M | N/A | large | ~10 GB | 1x |
-| large-v2 | 1550 M | N/A | large-v2 | ~10 GB | 1x |
-
-## Language
-
-Select the language, or leave it empty for Whisper to automatically detect it.
-
-Note that if the selected language and the language in the audio differs, Whisper may start to translate the audio to the selected
-language. For instance, if the audio is in English but you select Japanese, the model may translate the audio to Japanese.
-
-## Inputs
-The options "URL (YouTube, etc.)", "Upload Files" or "Micriphone Input" allows you to send an audio input to the model.
-
-### Multiple Files
-Note that the UI will only process either the given URL or the upload files (including microphone) - not both.
-
-But you can upload multiple files either through the "Upload files" option, or as a playlist on YouTube. Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section. When more than one file is processed, the UI will also generate a "All_Output" zip file containing all the text output files.
-
-## Task
-Select the task - either "transcribe" to transcribe the audio to text, or "translate" to translate it to English.
-
-## VAD
-Using a VAD will improve the timing accuracy of each transcribed line, as well as prevent Whisper from getting into an infinite
-loop detecting the same sentence over and over again. The downside is that this may come at a cost to text accuracy, especially
-with regard to unique words or names that appear in the audio. You can compensate for this by increasing the prompt window.
-
-Note that English is very well handled by Whisper, and it's less susceptible to issues surrounding bad timings and infinite loops.
-So you may only need to use a VAD for other languages, such as Japanese, or when the audio is very long.
-
-* none
- * Run whisper on the entire audio input
-* silero-vad
- * Use Silero VAD to detect sections that contain speech, and run Whisper independently on each section. Whisper is also run
- on the gaps between each speech section, by either expanding the section up to the max merge size, or running Whisper independently
- on the non-speech section.
-* silero-vad-expand-into-gaps
- * Use Silero VAD to detect sections that contain speech, and run Whisper independently on each section. Each speech section will be expanded
- such that it covers any adjacent non-speech sections. For instance, if an audio file of one minute contains the speech sections
- 00:00 - 00:10 (A) and 00:30 - 00:40 (B), the first section (A) will be expanded to 00:00 - 00:30, and (B) will be expanded to 00:30 - 01:00.
-* silero-vad-skip-gaps
- * As above, but sections that don't contain speech according to Silero will be skipped. This will be slightly faster, but
- may cause dialogue to be skipped.
-* periodic-vad
- * Create sections of speech every 'VAD - Max Merge Size' seconds. This is very fast and simple, but will potentially break
- a sentence or word in two.
-
-## VAD - Merge Window
-If set, any adjacent speech sections that are at most this number of seconds apart will be automatically merged.
-
-## VAD - Max Merge Size (s)
-Disables merging of adjacent speech sections if they are this number of seconds long.
-
-## VAD - Padding (s)
-The number of seconds (floating point) to add to the beginning and end of each speech section. Setting this to a number
-larger than zero ensures that Whisper is more likely to correctly transcribe a sentence in the beginning of
-a speech section. However, this also increases the probability of Whisper assigning the wrong timestamp
-to each transcribed line. The default value is 1 second.
-
-## VAD - Prompt Window (s)
-The text of a detected line will be included as a prompt to the next speech section, if the speech section starts at most this
-number of seconds after the line has finished. For instance, if a line ends at 10:00, and the next speech section starts at
-10:04, the line's text will be included if the prompt window is 4 seconds or more (10:04 - 10:00 = 4 seconds).
-
-Note that detected lines in gaps between speech sections will not be included in the prompt
-(if silero-vad or silero-vad-expand-into-gaps is used).
-
-## Diarization
-
-If checked, Pyannote will be used to detect speakers in the audio, and label them as (SPEAKER 00), (SPEAKER 01), etc.
-
-This requires a HuggingFace API key to function, which can be supplied with the `--auth_token` command line option for the CLI,
-set in the `config.json5` file for the GUI, or provided via the `HF_ACCESS_TOKEN` environment variable.
-
-## Diarization - Speakers
-
-The number of speakers to detect. If set to 0, Pyannote will attempt to detect the number of speakers automatically.
-
-# Command Line Options
-
-Both `app.py` and `cli.py` also accept command line options, such as the ability to enable parallel execution on multiple
-CPU/GPU cores, the default model name/VAD and so on. Consult the README in the root folder for more information.
-
-# Additional Options
-
-In addition to the above, there's also a "Full" options interface that allows you to set all the options available in the Whisper
-model. The options are as follows:
-
-## Initial Prompt
-Optional text to provide as a prompt for the first 30 seconds window. Whisper will attempt to use this as a starting point for the transcription, but you can
-also get creative and specify a style or format for the output of the transcription.
-
-For instance, if you use the prompt "hello how is it going always use lowercase no punctuation goodbye one two three start stop i you me they", Whisper will
-be biased to output lowercase letters and no punctuation, and may also be biased to output the words in the prompt more often.
-
-## Temperature
-The temperature to use when sampling. Default is 0 (zero). A higher temperature will result in more random output, while a lower temperature will be more deterministic.
-
-## Best Of - Non-zero temperature
-The number of candidates to sample from when sampling with non-zero temperature. Default is 5.
-
-## Beam Size - Zero temperature
-The number of beams to use in beam search when sampling with zero temperature. Default is 5.
-
-## Patience - Zero temperature
-The patience value to use in beam search when sampling with zero temperature. As in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search.
-
-## Length Penalty - Any temperature
-The token length penalty coefficient (alpha) to use when sampling with any temperature. As in https://arxiv.org/abs/1609.08144, uses simple length normalization by default.
-
-## Suppress Tokens - Comma-separated list of token IDs
-A comma-separated list of token IDs to suppress during sampling. The default value of "-1" will suppress most special characters except common punctuations.
-
-## Condition on previous text
-If True, provide the previous output of the model as a prompt for the next window. Disabling this may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop.
-
-## FP16
-Whether to perform inference in fp16. True by default.
-
-## Temperature increment on fallback
-The amount to increase the temperature by when falling back, i.e. when the decoding fails to meet either of the thresholds below. Default is 0.2.
-
-## Compression ratio threshold
-If the gzip compression ratio is higher than this value, treat the decoding as failed. Default is 2.4.
-
-## Logprob threshold
-If the average log probability is lower than this value, treat the decoding as failed. Default is -1.0.
-
-## No speech threshold
-If the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence. Default is 0.6.
-
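-These failure thresholds mirror arguments of the same name in openai-whisper's `transcribe()`. As a rough sketch of how they fit together (plain openai-whisper, shown only for illustration; not this WebUI's internal code):
-
-```python
-import whisper
-
-model = whisper.load_model("base")
-result = model.transcribe(
-    "audio.mp3",
-    # fallback ladder: retry decoding at a higher temperature when a threshold below is violated
-    temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
-    compression_ratio_threshold=2.4,  # treat the decode as failed above this gzip ratio
-    logprob_threshold=-1.0,           # ... or below this average log probability
-    no_speech_threshold=0.6,          # mark the segment as silence if <|nospeech|> is likely
-    condition_on_previous_text=True,
-)
-print(result["text"])
-```
-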
-## Diarization - Min Speakers
-
-The minimum number of speakers for Pyannote to detect.
-
-## Diarization - Max Speakers
-
-The maximum number of speakers for Pyannote to detect.
\ No newline at end of file
diff --git "a/spaces/awacke1/CardWriterPro/pages/6_\360\237\224\254_Model_Evaluation.py" "b/spaces/awacke1/CardWriterPro/pages/6_\360\237\224\254_Model_Evaluation.py"
deleted file mode 100644
index e3c4926a814f9f34980b77b5d8dc4277fd272d7e..0000000000000000000000000000000000000000
--- "a/spaces/awacke1/CardWriterPro/pages/6_\360\237\224\254_Model_Evaluation.py"
+++ /dev/null
@@ -1,66 +0,0 @@
-import streamlit as st
-from persist import persist, load_widget_state
-from pathlib import Path
-
-from middleMan import apply_view,writingPrompt
-
-global variable_output
-
-def main():
- cs_body()
-
-
-def cs_body():
-
- #stateVariable = 'Model_Eval'
- #help_text ='Detail the Evaluation Results for this model'
- #col1.header('Model Evaluation')
- st.markdown('# Evaluation')
- st.text_area(" This section describes the evaluation protocols and provides the results. ",help="Detail the Evaluation Results for this model")
- st.markdown('## Testing Data, Factors & Metrics:')
- left, right = st.columns([2,4])
-
- #st.markdown('### Model Description')
-
-
- with left:
- st.write("\n")
- st.write("\n")
- st.markdown('#### Testing Data:')
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- #st.write("\n")
- st.markdown('#### Factors:')
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.markdown('#### Metrics:')
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.markdown('#### Results:')
-
- with right:
- #soutput_jinja = parse_into_jinja_markdown()
- st.text_area("", help="Ideally this links to a Dataset Card.",key=persist("Testing_Data"))
- #st.write("\n")
- st.text_area("",help="What are the foreseeable characteristics that will influence how the model behaves? This includes domain and context, as well as population subgroups.",key=persist("Factors"))
- st.text_area("", help="What metrics will be used for evaluation in light of tradeoffs between different errors?", key=persist("Metrics"))
- st.text_area("", key=persist("Model_Results"))
-
-
-
-
-
-if __name__ == '__main__':
- load_widget_state()
- main()
\ No newline at end of file
diff --git a/spaces/awacke1/CodeParrot-Copilot-Alternative/app.py b/spaces/awacke1/CodeParrot-Copilot-Alternative/app.py
deleted file mode 100644
index d662e046ef498c0c8db358bb3ef41ef8ba20394b..0000000000000000000000000000000000000000
--- a/spaces/awacke1/CodeParrot-Copilot-Alternative/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/codeparrot/codeparrot").launch()
\ No newline at end of file
diff --git a/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/app.py b/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/app.py
deleted file mode 100644
index 5899945b09b198f95f85cbf06c9dc67124d211c7..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition/app.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import streamlit as st
-import numpy as np
-import pandas as pd
-import plotly.graph_objects as go
-from datetime import datetime
-from base64 import b64encode
-
-# Define general functions
-FOOD_LIST = {4: "🍔", 6: "🍟", 8: "🌮", 10: "🍕", 12: "🍩", 20: "🥗", 50: "🍣", 100: "🍾"}
-
-def roll_dice(num_rolls, dice_type):
- rolls = np.random.randint(1, dice_type + 1, size=num_rolls)
- return rolls
-
-def plot_tokens(health_tokens, coin_tokens):
- fig = go.Figure()
- fig.add_trace(go.Sankey(
- node = {
- "label": ["Health", "Coins"] + [FOOD_LIST[i] for i in DICE_TYPES],
- "pad": 15
- },
- link = {
- "source": [0, 1] + list(range(2, len(DICE_TYPES) + 2)),
- "target": [2] * len(DICE_TYPES) + [3 + i for i in range(len(DICE_TYPES))],
- "value": health_tokens + coin_tokens
- },
- ))
- st.plotly_chart(fig)
-
-# Define Streamlit app
-st.set_page_config(page_title="🍔🍟 Emojitrition 🌮🍕", page_icon=":game_die:")
-st.title("🍔🍟 Emojitrition 🌮🍕")
-
-# Sidebar
-username = st.sidebar.text_input("👤 Enter your username:")
-num_rolls = st.sidebar.slider("🔢 Choose the number of rolls:", 1, 100, 3)
-
-# Main content
-DICE_TYPES = [4, 6, 8, 10, 12, 20, 50, 100]
-history = {"health_tokens": [0], "coin_tokens": [0]}
-
-for dice_type in DICE_TYPES:
- rolls = roll_dice(num_rolls, dice_type)
- highest_rolls = sum(roll == dice_type for roll in rolls)
- coin_tokens_added = 0
-
- dice_results = [f"{FOOD_LIST[dice_type]} {roll}" for roll in rolls]
- st.write(f"🎰 Results for {dice_type}-sided slot machine: {' | '.join(dice_results)}")
-
- for roll in rolls:
- if roll == dice_type:
- st.write(f"🎉 Congratulations! You got the {FOOD_LIST[dice_type]} jackpot! 💰 Adding 3 coins.")
- coin_tokens_added += 3
- if roll == max(rolls):
- st.write(f"🎉 Congratulations! You got the {FOOD_LIST[dice_type]} maximum value! 💖 Adding 10 health tokens.")
- if dice_type == 100:
- history["health_tokens"].append(history["health_tokens"][-1] + 10)
-
- history[f"{dice_type}-sided slot machine jackpots"] = highest_rolls
- history["roll_history"] = {**history.get("roll_history", {}), dice_type: rolls}
- history["coin_tokens"].append(history["coin_tokens"][-1] + coin_tokens_added)
-
-plot_tokens(history["health_tokens"], history["coin_tokens"])
-
-df = pd.concat([pd.DataFrame(history["roll_history"]), pd.DataFrame(history["health_tokens"], columns=["Health Tokens"]), pd.DataFrame(history["coin_tokens"], columns=["Coin Tokens"])], axis=1)
-
-timestamp = datetime.now().strftime("%m-%d-%Y-%H-%M-%S")
-filename = f"{username}_{timestamp}.csv"
-df.to_csv(filename, index=False)
-st.markdown(f'<a href="data:file/csv;base64,{b64encode(df.to_csv(index=False).encode()).decode()}" download="{filename}">Download CSV File</a>', unsafe_allow_html=True)
-
-st.markdown("""
-
-📣 Introducing Emojitrition - the fun and easy way to track your nutrition! 🍔🍟🌮🍕🍩🥗🍣🍾
-👉 Sick of boring nutrition tracking apps? Emojitrition is here to spice things up! 🌶️
-👉 Our app uses food nutrition emojis to make tracking your meals easy and fun. 🍴
-👉 Whether you're making healthy choices with 🥗 or indulging in some 🍩, Emojitrition makes it easy to see how your meals add up.
-👉 Download Emojitrition today and start making more informed choices for your health and well-being! 📲
-👉 It's time to ditch the boring old numbers and words and embrace the world of nutrition emojis! 🙌
-
-""")
\ No newline at end of file
diff --git a/spaces/awacke1/StreamlitMultiplayerTicTacToe/README.md b/spaces/awacke1/StreamlitMultiplayerTicTacToe/README.md
deleted file mode 100644
index 7c7594e084a96d64f0e528f40dad979175e34c8b..0000000000000000000000000000000000000000
--- a/spaces/awacke1/StreamlitMultiplayerTicTacToe/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: StreamlitMultiplayerTicTacToe
-emoji: ⚡
-colorFrom: green
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/badayvedat/AudioSep/pipeline.py b/spaces/badayvedat/AudioSep/pipeline.py
deleted file mode 100644
index ca10a2ba413c13a3fb54214d68e11bdf78dffbd2..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/AudioSep/pipeline.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import yaml
-from typing import Dict, List
-import torch
-import torch.nn as nn
-import numpy as np
-import librosa
-from scipy.io.wavfile import write
-from utils import ignore_warnings; ignore_warnings()
-from utils import parse_yaml, load_ss_model
-from models.clap_encoder import CLAP_Encoder
-
-
-def build_audiosep(config_yaml, checkpoint_path, device):
- configs = parse_yaml(config_yaml)
-
- query_encoder = CLAP_Encoder().eval()
- model = load_ss_model(
- configs=configs,
- checkpoint_path=checkpoint_path,
- query_encoder=query_encoder
- ).eval().to(device)
-
- print(f'Load AudioSep model from [{checkpoint_path}]')
- return model
-
-
-def inference(model, audio_file, text, output_file, device='cuda'):
- print(f'Separate audio from [{audio_file}] with textual query [{text}]')
- mixture, fs = librosa.load(audio_file, sr=32000, mono=True)
- with torch.no_grad():
- text = [text]
-
- conditions = model.query_encoder.get_query_embed(
- modality='text',
- text=text,
- device=device
- )
-
- input_dict = {
- "mixture": torch.Tensor(mixture)[None, None, :].to(device),
- "condition": conditions,
- }
-
- sep_segment = model.ss_model(input_dict)["waveform"]
-
- sep_segment = sep_segment.squeeze(0).squeeze(0).data.cpu().numpy()
-
- write(output_file, 32000, np.round(sep_segment * 32767).astype(np.int16))
- print(f'Write separated audio to [{output_file}]')
-
-
-if __name__ == '__main__':
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- model = build_audiosep(
- config_yaml='config/audiosep_base.yaml',
- checkpoint_path='checkpoint/step=3920000.ckpt',
- device=device)
-
- audio_file = '/mnt/bn/data-xubo/project/AudioShop/YT_audios/Y3VHpLxtd498.wav'
- text = 'pigeons are cooing in the background'
- output_file='separated_audio.wav'
-
- inference(model, audio_file, text, output_file, device)
-
-
-
-
-
-
diff --git a/spaces/bai54188/BingAI3.0/README.md b/spaces/bai54188/BingAI3.0/README.md
deleted file mode 100644
index 3247bbae01467f0bfcce05601eb1f9fac8b394a5..0000000000000000000000000000000000000000
--- a/spaces/bai54188/BingAI3.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: BingAI3.0
-emoji: 📈
-colorFrom: pink
-colorTo: pink
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/math/Box3.js b/spaces/banana-projects/web3d/node_modules/three/src/math/Box3.js
deleted file mode 100644
index 304defa3f7b5cfcc1c0af639e54d259d117f60e0..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/math/Box3.js
+++ /dev/null
@@ -1,615 +0,0 @@
-import { Vector3 } from './Vector3.js';
-
-/**
- * @author bhouston / http://clara.io
- * @author WestLangley / http://github.com/WestLangley
- */
-
-function Box3( min, max ) {
-
- this.min = ( min !== undefined ) ? min : new Vector3( + Infinity, + Infinity, + Infinity );
- this.max = ( max !== undefined ) ? max : new Vector3( - Infinity, - Infinity, - Infinity );
-
-}
-
-Object.assign( Box3.prototype, {
-
- isBox3: true,
-
- set: function ( min, max ) {
-
- this.min.copy( min );
- this.max.copy( max );
-
- return this;
-
- },
-
- setFromArray: function ( array ) {
-
- var minX = + Infinity;
- var minY = + Infinity;
- var minZ = + Infinity;
-
- var maxX = - Infinity;
- var maxY = - Infinity;
- var maxZ = - Infinity;
-
- for ( var i = 0, l = array.length; i < l; i += 3 ) {
-
- var x = array[ i ];
- var y = array[ i + 1 ];
- var z = array[ i + 2 ];
-
- if ( x < minX ) minX = x;
- if ( y < minY ) minY = y;
- if ( z < minZ ) minZ = z;
-
- if ( x > maxX ) maxX = x;
- if ( y > maxY ) maxY = y;
- if ( z > maxZ ) maxZ = z;
-
- }
-
- this.min.set( minX, minY, minZ );
- this.max.set( maxX, maxY, maxZ );
-
- return this;
-
- },
-
- setFromBufferAttribute: function ( attribute ) {
-
- var minX = + Infinity;
- var minY = + Infinity;
- var minZ = + Infinity;
-
- var maxX = - Infinity;
- var maxY = - Infinity;
- var maxZ = - Infinity;
-
- for ( var i = 0, l = attribute.count; i < l; i ++ ) {
-
- var x = attribute.getX( i );
- var y = attribute.getY( i );
- var z = attribute.getZ( i );
-
- if ( x < minX ) minX = x;
- if ( y < minY ) minY = y;
- if ( z < minZ ) minZ = z;
-
- if ( x > maxX ) maxX = x;
- if ( y > maxY ) maxY = y;
- if ( z > maxZ ) maxZ = z;
-
- }
-
- this.min.set( minX, minY, minZ );
- this.max.set( maxX, maxY, maxZ );
-
- return this;
-
- },
-
- setFromPoints: function ( points ) {
-
- this.makeEmpty();
-
- for ( var i = 0, il = points.length; i < il; i ++ ) {
-
- this.expandByPoint( points[ i ] );
-
- }
-
- return this;
-
- },
-
- setFromCenterAndSize: function () {
-
- var v1 = new Vector3();
-
- return function setFromCenterAndSize( center, size ) {
-
- var halfSize = v1.copy( size ).multiplyScalar( 0.5 );
-
- this.min.copy( center ).sub( halfSize );
- this.max.copy( center ).add( halfSize );
-
- return this;
-
- };
-
- }(),
-
- setFromObject: function ( object ) {
-
- this.makeEmpty();
-
- return this.expandByObject( object );
-
- },
-
- clone: function () {
-
- return new this.constructor().copy( this );
-
- },
-
- copy: function ( box ) {
-
- this.min.copy( box.min );
- this.max.copy( box.max );
-
- return this;
-
- },
-
- makeEmpty: function () {
-
- this.min.x = this.min.y = this.min.z = + Infinity;
- this.max.x = this.max.y = this.max.z = - Infinity;
-
- return this;
-
- },
-
- isEmpty: function () {
-
- // this is a more robust check for empty than ( volume <= 0 ) because volume can get positive with two negative axes
-
- return ( this.max.x < this.min.x ) || ( this.max.y < this.min.y ) || ( this.max.z < this.min.z );
-
- },
-
- getCenter: function ( target ) {
-
- if ( target === undefined ) {
-
- console.warn( 'THREE.Box3: .getCenter() target is now required' );
- target = new Vector3();
-
- }
-
- return this.isEmpty() ? target.set( 0, 0, 0 ) : target.addVectors( this.min, this.max ).multiplyScalar( 0.5 );
-
- },
-
- getSize: function ( target ) {
-
- if ( target === undefined ) {
-
- console.warn( 'THREE.Box3: .getSize() target is now required' );
- target = new Vector3();
-
- }
-
- return this.isEmpty() ? target.set( 0, 0, 0 ) : target.subVectors( this.max, this.min );
-
- },
-
- expandByPoint: function ( point ) {
-
- this.min.min( point );
- this.max.max( point );
-
- return this;
-
- },
-
- expandByVector: function ( vector ) {
-
- this.min.sub( vector );
- this.max.add( vector );
-
- return this;
-
- },
-
- expandByScalar: function ( scalar ) {
-
- this.min.addScalar( - scalar );
- this.max.addScalar( scalar );
-
- return this;
-
- },
-
- expandByObject: function () {
-
- // Computes the world-axis-aligned bounding box of an object (including its children),
- // accounting for both the object's, and children's, world transforms
-
- var scope, i, l;
-
- var v1 = new Vector3();
-
- function traverse( node ) {
-
- var geometry = node.geometry;
-
- if ( geometry !== undefined ) {
-
- if ( geometry.isGeometry ) {
-
- var vertices = geometry.vertices;
-
- for ( i = 0, l = vertices.length; i < l; i ++ ) {
-
- v1.copy( vertices[ i ] );
- v1.applyMatrix4( node.matrixWorld );
-
- scope.expandByPoint( v1 );
-
- }
-
- } else if ( geometry.isBufferGeometry ) {
-
- var attribute = geometry.attributes.position;
-
- if ( attribute !== undefined ) {
-
- for ( i = 0, l = attribute.count; i < l; i ++ ) {
-
- v1.fromBufferAttribute( attribute, i ).applyMatrix4( node.matrixWorld );
-
- scope.expandByPoint( v1 );
-
- }
-
- }
-
- }
-
- }
-
- }
-
- return function expandByObject( object ) {
-
- scope = this;
-
- object.updateMatrixWorld( true );
-
- object.traverse( traverse );
-
- return this;
-
- };
-
- }(),
-
- containsPoint: function ( point ) {
-
- return point.x < this.min.x || point.x > this.max.x ||
- point.y < this.min.y || point.y > this.max.y ||
- point.z < this.min.z || point.z > this.max.z ? false : true;
-
- },
-
- containsBox: function ( box ) {
-
- return this.min.x <= box.min.x && box.max.x <= this.max.x &&
- this.min.y <= box.min.y && box.max.y <= this.max.y &&
- this.min.z <= box.min.z && box.max.z <= this.max.z;
-
- },
-
- getParameter: function ( point, target ) {
-
- // This can potentially have a divide by zero if the box
- // has a size dimension of 0.
-
- if ( target === undefined ) {
-
- console.warn( 'THREE.Box3: .getParameter() target is now required' );
- target = new Vector3();
-
- }
-
- return target.set(
- ( point.x - this.min.x ) / ( this.max.x - this.min.x ),
- ( point.y - this.min.y ) / ( this.max.y - this.min.y ),
- ( point.z - this.min.z ) / ( this.max.z - this.min.z )
- );
-
- },
-
- intersectsBox: function ( box ) {
-
- // using 6 splitting planes to rule out intersections.
- return box.max.x < this.min.x || box.min.x > this.max.x ||
- box.max.y < this.min.y || box.min.y > this.max.y ||
- box.max.z < this.min.z || box.min.z > this.max.z ? false : true;
-
- },
-
- intersectsSphere: ( function () {
-
- var closestPoint = new Vector3();
-
- return function intersectsSphere( sphere ) {
-
- // Find the point on the AABB closest to the sphere center.
- this.clampPoint( sphere.center, closestPoint );
-
- // If that point is inside the sphere, the AABB and sphere intersect.
- return closestPoint.distanceToSquared( sphere.center ) <= ( sphere.radius * sphere.radius );
-
- };
-
- } )(),
-
- intersectsPlane: function ( plane ) {
-
- // We compute the minimum and maximum dot product values. If those values
- // are on the same side (back or front) of the plane, then there is no intersection.
-
- var min, max;
-
- if ( plane.normal.x > 0 ) {
-
- min = plane.normal.x * this.min.x;
- max = plane.normal.x * this.max.x;
-
- } else {
-
- min = plane.normal.x * this.max.x;
- max = plane.normal.x * this.min.x;
-
- }
-
- if ( plane.normal.y > 0 ) {
-
- min += plane.normal.y * this.min.y;
- max += plane.normal.y * this.max.y;
-
- } else {
-
- min += plane.normal.y * this.max.y;
- max += plane.normal.y * this.min.y;
-
- }
-
- if ( plane.normal.z > 0 ) {
-
- min += plane.normal.z * this.min.z;
- max += plane.normal.z * this.max.z;
-
- } else {
-
- min += plane.normal.z * this.max.z;
- max += plane.normal.z * this.min.z;
-
- }
-
- return ( min <= - plane.constant && max >= - plane.constant );
-
- },
-
- intersectsTriangle: ( function () {
-
- // triangle centered vertices
- var v0 = new Vector3();
- var v1 = new Vector3();
- var v2 = new Vector3();
-
- // triangle edge vectors
- var f0 = new Vector3();
- var f1 = new Vector3();
- var f2 = new Vector3();
-
- var testAxis = new Vector3();
-
- var center = new Vector3();
- var extents = new Vector3();
-
- var triangleNormal = new Vector3();
-
- function satForAxes( axes ) {
-
- var i, j;
-
- for ( i = 0, j = axes.length - 3; i <= j; i += 3 ) {
-
- testAxis.fromArray( axes, i );
-				// project the aabb onto the separating axis
- var r = extents.x * Math.abs( testAxis.x ) + extents.y * Math.abs( testAxis.y ) + extents.z * Math.abs( testAxis.z );
-				// project all 3 vertices of the triangle onto the separating axis
- var p0 = v0.dot( testAxis );
- var p1 = v1.dot( testAxis );
- var p2 = v2.dot( testAxis );
- // actual test, basically see if either of the most extreme of the triangle points intersects r
- if ( Math.max( - Math.max( p0, p1, p2 ), Math.min( p0, p1, p2 ) ) > r ) {
-
- // points of the projected triangle are outside the projected half-length of the aabb
-					// the axis is separating and we can exit
- return false;
-
- }
-
- }
-
- return true;
-
- }
-
- return function intersectsTriangle( triangle ) {
-
- if ( this.isEmpty() ) {
-
- return false;
-
- }
-
- // compute box center and extents
- this.getCenter( center );
- extents.subVectors( this.max, center );
-
- // translate triangle to aabb origin
- v0.subVectors( triangle.a, center );
- v1.subVectors( triangle.b, center );
- v2.subVectors( triangle.c, center );
-
- // compute edge vectors for triangle
- f0.subVectors( v1, v0 );
- f1.subVectors( v2, v1 );
- f2.subVectors( v0, v2 );
-
- // test against axes that are given by cross product combinations of the edges of the triangle and the edges of the aabb
- // make an axis testing of each of the 3 sides of the aabb against each of the 3 sides of the triangle = 9 axis of separation
- // axis_ij = u_i x f_j (u0, u1, u2 = face normals of aabb = x,y,z axes vectors since aabb is axis aligned)
- var axes = [
- 0, - f0.z, f0.y, 0, - f1.z, f1.y, 0, - f2.z, f2.y,
- f0.z, 0, - f0.x, f1.z, 0, - f1.x, f2.z, 0, - f2.x,
- - f0.y, f0.x, 0, - f1.y, f1.x, 0, - f2.y, f2.x, 0
- ];
- if ( ! satForAxes( axes ) ) {
-
- return false;
-
- }
-
- // test 3 face normals from the aabb
- axes = [ 1, 0, 0, 0, 1, 0, 0, 0, 1 ];
- if ( ! satForAxes( axes ) ) {
-
- return false;
-
- }
-
- // finally testing the face normal of the triangle
- // use already existing triangle edge vectors here
- triangleNormal.crossVectors( f0, f1 );
- axes = [ triangleNormal.x, triangleNormal.y, triangleNormal.z ];
- return satForAxes( axes );
-
- };
-
- } )(),
-
- clampPoint: function ( point, target ) {
-
- if ( target === undefined ) {
-
- console.warn( 'THREE.Box3: .clampPoint() target is now required' );
- target = new Vector3();
-
- }
-
- return target.copy( point ).clamp( this.min, this.max );
-
- },
-
- distanceToPoint: function () {
-
- var v1 = new Vector3();
-
- return function distanceToPoint( point ) {
-
- var clampedPoint = v1.copy( point ).clamp( this.min, this.max );
- return clampedPoint.sub( point ).length();
-
- };
-
- }(),
-
- getBoundingSphere: function () {
-
- var v1 = new Vector3();
-
- return function getBoundingSphere( target ) {
-
- if ( target === undefined ) {
-
- console.error( 'THREE.Box3: .getBoundingSphere() target is now required' );
- //target = new Sphere(); // removed to avoid cyclic dependency
-
- }
-
- this.getCenter( target.center );
-
- target.radius = this.getSize( v1 ).length() * 0.5;
-
- return target;
-
- };
-
- }(),
-
- intersect: function ( box ) {
-
- this.min.max( box.min );
- this.max.min( box.max );
-
-		// ensure that if there is no overlap, the result is fully empty, not slightly empty with non-inf/+inf values that will cause subsequent intersect calls to erroneously return valid values.
- if ( this.isEmpty() ) this.makeEmpty();
-
- return this;
-
- },
-
- union: function ( box ) {
-
- this.min.min( box.min );
- this.max.max( box.max );
-
- return this;
-
- },
-
- applyMatrix4: function () {
-
- var points = [
- new Vector3(),
- new Vector3(),
- new Vector3(),
- new Vector3(),
- new Vector3(),
- new Vector3(),
- new Vector3(),
- new Vector3()
- ];
-
- return function applyMatrix4( matrix ) {
-
- // transform of empty box is an empty box.
- if ( this.isEmpty() ) return this;
-
- // NOTE: I am using a binary pattern to specify all 2^3 combinations below
- points[ 0 ].set( this.min.x, this.min.y, this.min.z ).applyMatrix4( matrix ); // 000
- points[ 1 ].set( this.min.x, this.min.y, this.max.z ).applyMatrix4( matrix ); // 001
- points[ 2 ].set( this.min.x, this.max.y, this.min.z ).applyMatrix4( matrix ); // 010
- points[ 3 ].set( this.min.x, this.max.y, this.max.z ).applyMatrix4( matrix ); // 011
- points[ 4 ].set( this.max.x, this.min.y, this.min.z ).applyMatrix4( matrix ); // 100
- points[ 5 ].set( this.max.x, this.min.y, this.max.z ).applyMatrix4( matrix ); // 101
- points[ 6 ].set( this.max.x, this.max.y, this.min.z ).applyMatrix4( matrix ); // 110
- points[ 7 ].set( this.max.x, this.max.y, this.max.z ).applyMatrix4( matrix ); // 111
-
- this.setFromPoints( points );
-
- return this;
-
- };
-
- }(),
-
- translate: function ( offset ) {
-
- this.min.add( offset );
- this.max.add( offset );
-
- return this;
-
- },
-
- equals: function ( box ) {
-
- return box.min.equals( this.min ) && box.max.equals( this.max );
-
- }
-
-} );
-
-
-export { Box3 };
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/meshmatcap_frag.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/meshmatcap_frag.glsl.js
deleted file mode 100644
index b1e4bbefb655f41970d58a9f00d5ce7c1550c1f2..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderLib/meshmatcap_frag.glsl.js
+++ /dev/null
@@ -1,66 +0,0 @@
-export default /* glsl */`
-#define MATCAP
-
-uniform vec3 diffuse;
-uniform float opacity;
-uniform sampler2D matcap;
-
-varying vec3 vViewPosition;
-
-#ifndef FLAT_SHADED
-
- varying vec3 vNormal;
-
-#endif
-
-#include
-#include
-#include
-#include
-
-#include
-#include
-#include
-#include
-#include
-
-void main() {
-
- #include
-
- vec4 diffuseColor = vec4( diffuse, opacity );
-
- #include
- #include
- #include
- #include
- #include
- #include
-
- vec3 viewDir = normalize( vViewPosition );
- vec3 x = normalize( vec3( viewDir.z, 0.0, - viewDir.x ) );
- vec3 y = cross( viewDir, x );
- vec2 uv = vec2( dot( x, normal ), dot( y, normal ) ) * 0.495 + 0.5; // 0.495 to remove artifacts caused by undersized matcap disks
-
- #ifdef USE_MATCAP
-
- vec4 matcapColor = texture2D( matcap, uv );
- matcapColor = matcapTexelToLinear( matcapColor );
-
- #else
-
- vec4 matcapColor = vec4( 1.0 );
-
- #endif
-
- vec3 outgoingLight = diffuseColor.rgb * matcapColor.rgb;
-
- gl_FragColor = vec4( outgoingLight, diffuseColor.a );
-
- #include
- #include
- #include
- #include
-
-}
-`;
diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/__main__.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/__main__.py
deleted file mode 100644
index c06ecc2273e7b01036114f6277c32852fcaeb377..0000000000000000000000000000000000000000
--- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/__main__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import fire
-from configue import load
-
-
-class CLI:
- """Regroup all the commands of the CLI
- """
-
- @staticmethod
- def run(config_path: str) -> None:
- config = load(config_path)
- command = config["command"]
- command.run()
-
-
-if __name__ == "__main__":
- fire.Fire(CLI)
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001316.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001316.py
deleted file mode 100644
index 0a38d76ce2ad23d2334dcc1d23d9094842aa1493..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001316.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import os
-#os.system("pip install gfpgan")
-
-#os.system("pip freeze")
-#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .")
-import random
-import gradio as gr
-from PIL import Image
-import torch
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg')
-
-
-
-
-import cv2
-import glob
-import numpy as np
-from basicsr.utils import imwrite
-from gfpgan import GFPGANer
-
-import warnings
-warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. '
- 'If you really want to use it, please modify the corresponding codes.')
-bg_upsampler = None
-
-
-
-# set up GFPGAN restorer
-restorer = GFPGANer(
- model_path='experiments/pretrained_models/GFPGANv1.3.pth',
- upscale=2,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=bg_upsampler)
-
-
-def inference(img):
- input_img = cv2.imread(img, cv2.IMREAD_COLOR)
- cropped_faces, restored_faces, restored_img = restorer.enhance(
- input_img, has_aligned=False, only_center_face=False, paste_back=True)
-
- return Image.fromarray(restored_faces[0][:,:,::-1])
-
-title = "GFP-GAN"
-description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once"
-article = "
"
-gr.Interface(
- inference,
- [gr.inputs.Image(type="filepath", label="Input")],
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=[
- ['lincoln.jpg'],
- ['einstein.png'],
- ['edison.jpg'],
- ['Henry.jpg'],
- ['Frida.jpg']
- ]
- ).launch(enable_queue=True,cache_examples=True)
\ No newline at end of file
diff --git a/spaces/bigPear/digitalWDF/tests/convert_comparison.py b/spaces/bigPear/digitalWDF/tests/convert_comparison.py
deleted file mode 100644
index c77e6fbb3e22828b6735590b2fc2a4faec6e9b0b..0000000000000000000000000000000000000000
--- a/spaces/bigPear/digitalWDF/tests/convert_comparison.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# coding=utf-8
-
-import json
-
-
-if __name__ == "__main__":
-
- dataset = []
-
- with open("comparison_data_v2.json", "r", encoding="utf-8") as f:
- data = json.load(f)
-
- for example in data:
- instruction = example["user_input"]
- resp_with_score = [(float(resp["score"]), resp["response"]) for resp in example["responses_and_scores"]]
- resp_with_score.sort()
-
-        while len(resp_with_score) > 0 and len(resp_with_score[0][1]) == 0:
-            resp_with_score.pop(0)
-        if len(resp_with_score) == 0:
-            continue
-
- min_score, max_score = resp_with_score[0][0], resp_with_score[-1][0]
- if min_score < 5.0 and max_score > 5.0:
- dataset.append({
- "instruction": instruction,
- "input": "",
- "output": [resp_with_score[-1][1], resp_with_score[0][1]]
- })
-
- with open("comparison_gpt4_data_en.json", "w", encoding="utf-8", newline="\n") as f:
- json.dump(dataset, f, indent=2, ensure_ascii=False)
diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/ui_tempdir.py b/spaces/bigjoker/stable-diffusion-webui/modules/ui_tempdir.py
deleted file mode 100644
index 126f73a21d71070887fd094beaf0fe6d7e12df9c..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/modules/ui_tempdir.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import os
-import tempfile
-from collections import namedtuple
-from pathlib import Path
-
-import gradio as gr
-
-from PIL import PngImagePlugin
-
-from modules import shared
-
-
-Savedfile = namedtuple("Savedfile", ["name"])
-
-
-def register_tmp_file(gradio, filename):
- if hasattr(gradio, 'temp_file_sets'): # gradio 3.15
- gradio.temp_file_sets[0] = gradio.temp_file_sets[0] | {os.path.abspath(filename)}
-
- if hasattr(gradio, 'temp_dirs'): # gradio 3.9
- gradio.temp_dirs = gradio.temp_dirs | {os.path.abspath(os.path.dirname(filename))}
-
-
-def check_tmp_file(gradio, filename):
- if hasattr(gradio, 'temp_file_sets'):
- return any([filename in fileset for fileset in gradio.temp_file_sets])
-
- if hasattr(gradio, 'temp_dirs'):
- return any(Path(temp_dir).resolve() in Path(filename).resolve().parents for temp_dir in gradio.temp_dirs)
-
- return False
-
-
-def save_pil_to_file(pil_image, dir=None):
- already_saved_as = getattr(pil_image, 'already_saved_as', None)
- if already_saved_as and os.path.isfile(already_saved_as):
- register_tmp_file(shared.demo, already_saved_as)
-
- file_obj = Savedfile(already_saved_as)
- return file_obj
-
- if shared.opts.temp_dir != "":
- dir = shared.opts.temp_dir
-
- use_metadata = False
- metadata = PngImagePlugin.PngInfo()
- for key, value in pil_image.info.items():
- if isinstance(key, str) and isinstance(value, str):
- metadata.add_text(key, value)
- use_metadata = True
-
- file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir=dir)
- pil_image.save(file_obj, pnginfo=(metadata if use_metadata else None))
- return file_obj
-
-
-# override save to file function so that it also writes PNG info
-gr.processing_utils.save_pil_to_file = save_pil_to_file
-
-
-def on_tmpdir_changed():
- if shared.opts.temp_dir == "" or shared.demo is None:
- return
-
- os.makedirs(shared.opts.temp_dir, exist_ok=True)
-
- register_tmp_file(shared.demo, os.path.join(shared.opts.temp_dir, "x"))
-
-
-def cleanup_tmpdr():
- temp_dir = shared.opts.temp_dir
- if temp_dir == "" or not os.path.isdir(temp_dir):
- return
-
- for root, dirs, files in os.walk(temp_dir, topdown=False):
- for name in files:
- _, extension = os.path.splitext(name)
- if extension != ".png":
- continue
-
- filename = os.path.join(root, name)
- os.remove(filename)
diff --git a/spaces/bioriAsaeru/text-to-voice/Dod Sturmbot Download BEST.md b/spaces/bioriAsaeru/text-to-voice/Dod Sturmbot Download BEST.md
deleted file mode 100644
index 24020fa99d34d4935117f9f4958276616be7f1c8..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Dod Sturmbot Download BEST.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
2019 is nearly over and, with the help of Martee and others who sent in the few donations, the target of upgrading this website, dodbits.com and the sturmbot package has almost reached what I set out to do: upgrade most of the important and very broken waypoints to keep Sturmbot alive for at least two more years.
The updated main downloads will also include Rich Nagel's updated sturmbot .dll file SturmBot v1.7 "Stosstruppe" and "Scharfschütze" Bot Class Fix (SturmBot v1.9) that complements the updated Sturmbot package and kills off a lot of annoying bugs.
-
All of this is down to me neglecting this spin-off site of dodbits.com. It started in July 2019, when I decided not to close dodbits.com and this site and instead go after the bugs the site and the downloads in it were holding. Sturmbot.org is one of the last download sites still around, so it deserves the attention.
-
That effort on the "unofficial" manual is also changing the downloads: as I go I am adding single waypoint downloads for each map, fixing the ones that don't work at the end of each review, and making my own 2019 pack. You can see that new download section here. About the only one left to fix is dod_caen. Martee is currently making waypoints and some of his were that good... they beat some of the official waypoints.
-
-
Other items are Sturmbot downloads for older versions. There is also a guide on what version of dod to use with what version of Sturmbot. It isn't 100% finished, I am still getting files for Linux versions.
-
There are many facets to the program, you can download a simple to use installer here on this site ready to play with 600+ maps, (600 waypoint files in the install not maps) and also find some hard to get custom files for Day of Defeat as well.
-
2. INsane's Sturmbot Menus. I have always provided custom menus in these packages; the installer has a menu that helps you play sturmbot. It also has controls for video, audio, netcode, netgraph, chat and more.
-
UPDATE June 2013: For those of us that want a quick installer package that works with SturmBOT and Steam dod 1.3 (SteamPipe update), stop reading and go to that link. Also download a nice top 22 map pack via an installer or zip; these two packages take just minutes to set up, no frigging around with files and reading endless hard-to-get info. There is a special menu in the package, so starting SturmBOT on the latest steam dod 1.3 is the easy way!
-
game "Day of Defeat" url_info " " url_dl "" version "1.3" size "5" svonly "0" cldll "1" secure "1" hlversion "1110" type "multiplayer_only" nomodels "1" nohimodel "1" mpentity "info_player_allies" //gamedll "dlls\dod.dll" //gamedll_linux "dlls/dod_i386.so" gamedll "sturmbot\dlls\STURM_bot.dll" gamedll_linux "sturmbot/dlls/sturmbot-i486.so"
-
Edit March 2012 update. I have recently made an auto install package; download this for dod steam 1.3 using SturmBot 1.7b and skip the manual install section, then go to this section and read from there to play and waypoint.
-
2. Don't get fooled here, the day of defeat folder may sound right but you want the "dod" folder. Put both files of the download in the dod folder...commandmenu.txt and SturmBOT_Menu.cfg
-
I have set some binds in the SturmBOT_Menu.cfg file. You might like to change them? Just open SturmBOT_Menu.cfg up like you did the config.cfg and change the bind keys to your liking. If you downloaded the installer package, Here is what they are set at...
-
A word on the RCBot2 site: I noticed my Malwarebytes real-time monitoring software has it blacklisted as a bad site. No idea why, as no downloads from there are bad and the community is fine. I just made it exempt and it looks OK to me; I'm not sure why it's blacklisted. I think it's because it's related to the bots-united domain, which did also have a problem. I certainly had no issues with them over the last 10 years either.
-
Since I released an installer for RCBot2 (installs for dod:s only) and added support for map downloads on 30 July 2019, there have been 3 new waypoints made that are not yet in the main installer from RCBot2 or my dod:s only installer.
-
It's been a year... I better update. dodbits.com is funded for another year (costs are about $300 per year, by the way), so dodbits.com and sturmbot.org (the bot for GoldSource HL1 dod) will hang around until at least November 2018.
-
There are many facets to the program, you can download a simple to use installer on the site ready to play with 600+ maps, (600 waypoint files in the install not maps) and also find some hard to get custom files for Day of Defeat as well like my map pack for 22 of the best maps ever made.
-
Aghhhh, Damn you SteamPipe! The articles on this site have been hit with errors because of the new SteamPipe formats. Just look for this logo for pages that have been updated ... if it does not have a logo... it could have incorrect info or files in it. I have done some downloads and a page or two already. Going to take a while... contact me if confused about something.
-
There were a few items I was worried about, like all my downloads and places like gamebanana... will it still work? Mostly it does. There seem to be some custom models missing bits here and there; that may be down to some .VMF and .VTF file paths now being incorrect in the files. Basically, all you have to do with older custom stuff is this: where it used to go in the "dod" folder, it now goes in "dod\custom"; make a new folder there and call it any name you like. That folder is now the "dod" folder for custom files.
-
So when downloading older content if the readme says to install the folders in "dod" make sure you visit the new "dod\custom" folder, make a new folder of what that custom item is say... "terminator player skin" and all the folders like 'models' and 'materials' now go in "dod\custom\terminator player skin". It really is as simple as that.
-
I have placed the St Lo map on the shelf... having a rethink. I did however get enough time to finish off another one from the dark depths of my hard drives, office. CSS office was always a favorite but for dod it is a disaster because the axis spawn in the play area and dods does not have hostage mode. I think I have a good enough version to playtest, please see here and here for a look and download.
-
I would not count on any other fixes, just disruption. I cannot say this is a good move, but there is little anyone can do about it; the disruption to sites like mine and Gamebanana is that all of the downloads will be... broken.
-
Version 11 of my HUD will drag on a bit more yet; it is ready, at Beta 12, and here for a download (see first post). Small changes happen a lot, and I am waiting for the file system in orangebox games to change and break installers and manual downloads. The Version 11 beta 12 also has a huge set of manual files; it is in the first post, but here it is anyway... download (warning: installing a HUD this complex is for advanced users ONLY, better use the installer if you are not sure).
-
I have added some pages server operators may find useful: a page that makes a custom server.cfg file ready to download. One of the most annoying things when setting up a Source based server is the server config, what to put in it and what range the commands have. I have pages for DoD:s, TF2 and CSS.
-
Just fill out the input lines for the server name and the passwords, (web page will not remember them) drop down arrow choices that have defaults and recommended values and press a button at the bottom... a server.cfg is ready to download. The page also has information beside it so you can read about each command if you want to know more.
-
I made these because the few of them that are on other sites are in non English, full of commands that are not for a game or have long since been replaced or the default changed. I have tested each command to ensure it is a real command ... after looking at some other config makers and downloadable server.cfg files, I can assure you there are LOTS of authors who do not test to the level I have with my web pages. Every similar page I tried had at least 10% of useless or non working commands.
-
The version 11 HUD/GUI for dods is quite advanced now and sitting on "beta 10". Just keep an eye on the first post in my forum for the latest. It now has a complete guide on the team menu, both in the download and in the dods hud section.
-
Update: there has been a change to the separate site forums. The website now has a new integrated forum that shares the membership with dodbits.com. I will be shutting down the old forums soon, sorry if this has upset anyone, please feel free to join dodbits now, access to the downloads section, (for downloading and uploading your files) is now available, you can also post comments in the downloads and the articles along with forum posts, all under the same membership log in. Register here.
-
dodbits main site: If you want to upload to the downloads section you can now fill out a simple form. You can edit and even delete your download after it is posted. Also members can add comments to the bottom of any article or download.
-
Other things are in the works: a download for the old dod Sturmbot package has been made, an installer that gets Sturmbot up and running in seconds. Just make sure you run the old steam dod ver 1.3 game in steam first, run the installer and use the desktop shortcut... it works, no fooling around with read-only files and such. It has custom crosshairs and sturmbot menus that support waypointing, and may even include a custom gui for the old dod soon too.
-
Beta 3.0 was released in July 2002 and added the Allied Sergeant, who carried a M3 Grease Gun, as well as the para gameplay mode which was similar to Counter-Strike in that players did not respawn until the end of the round. The Germans could now also choose between two models of the powerful and deafeningly loud FG 42 Fallschirmjäger (bipod/scope) and the Gewehr could now be selected as a class, in order to compete with the semi-automatic Garand rifle the Allies used. Valve then made Day of Defeat an official valve mod and released 1.0v in May 2003 which featured a lot of changes. Activision distributed a retail version of the game though it could still be downloaded for free, if the player has Half-Life. Later version 1.1 became the first Steam release. 1.0 included quite a few new features - the pace of the game was increased, which helped to attract new players. Friendly-fire was made non-default, an on-screen map where one's allies and thrown grenades were displayed was added, as was a Battlefield-style flag hanging over the head of friends and foes for identification. Pop-up help messages, spoken by a dog wearing a helmet (in the same vein as Microsoft's Office Assistant), also appeared in v1.0. Bleeding - a key feature of the betas - was removed, as testing found that new players had difficulty understanding the concept of pressing the bandage key when health could not be recovered. Night time battlefields were removed as they tended to be the least-played of the beta maps. Version 1.0 also included auto-reload (which defaulted to "always on"), some new maps and major modifications to some old maps (eg. Anzio). At first old players felt that the Garand had been made weaker, adding an Axis bias to the game. It was later learned that there were issues with hitboxes, which caused a lot of shots to register as hitting different body parts and doing less damage. British Troops were also issued in 1.0, but were only featured in 3 maps and had only 5 weapon classes. The American Bazooka, German Panzershreck and British PIAT became independent classes in 1.2v and Mortar-classes were proposed, but never got released. Para-maps were kept, but the special gameplay was removed and replaced by the traditional Flag-capture or objective gameplay. Version 1.0 also introduced the bipod for the BAR, allowing for it to be deployed in the same locations as the machine guns and FG42s. In September 2005 Day of Defeat: Source was released.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/How to Use ADORAGE EDIUS PLUGINS to Create Spectacular Scenic Transitions in Your Videos.md b/spaces/bioriAsaeru/text-to-voice/How to Use ADORAGE EDIUS PLUGINS to Create Spectacular Scenic Transitions in Your Videos.md
deleted file mode 100644
index 95447ce4fb1c8349b6d5dbba15aacc639191c466..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/How to Use ADORAGE EDIUS PLUGINS to Create Spectacular Scenic Transitions in Your Videos.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Adorage Effects Package 13 offers hundreds of modifiable effects and transitions for themes such as family, celebrations and parties. Complete in HD, with 64-bit support and plugins for the best editing solution: excellent quality of effects, extremely fast analysis, professional results.
-
"If you installed Premiere Pro Cs 5.5, Adorage Plugin work (if you had allready Installed Adorage plugins). Just Copy & Paste C:\Program Files\Adobe\Common\Plug-ins\CS5\MediaCore\ Copy Folder - proDAD to C:\Program Files\Adobe\Common\Plug-ins\CS5.5\MediaCore\ Paste Folder - proDAD"
Answer: You just installed the "basic proDAD Adorage" without any volume (you probably ran adorage-30-service32bit.exe only). Install one of the Adorage Volumes you purchased, and it will 'unlock' your version in Pinnacle Studio (the demo logo will vanish).
aaccfb2cb3
-
-
\ No newline at end of file
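For what it's worth, the manual step quoted in that post boils down to copying the proDAD plug-in folder from the CS5 MediaCore directory into the CS5.5 one. Below is a minimal sketch of that copy, assuming the default Windows paths exactly as quoted in the post (adjust them if Adobe is installed elsewhere); it is not code from this repository.

```python
import shutil
from pathlib import Path

# Paths as quoted in the forum post; both are assumptions about a default install.
src = Path(r"C:\Program Files\Adobe\Common\Plug-ins\CS5\MediaCore\proDAD")
dst = Path(r"C:\Program Files\Adobe\Common\Plug-ins\CS5.5\MediaCore\proDAD")

if src.is_dir() and not dst.exists():
    shutil.copytree(src, dst)  # copy the whole proDAD folder so CS5.5 picks up Adorage
    print(f"Copied {src} -> {dst}")
else:
    print("Source folder missing or destination already present; nothing copied.")
```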
diff --git a/spaces/bodah/RVC-Models-bo/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/bodah/RVC-Models-bo/lib/infer_pack/modules/F0Predictor/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/_explorers.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/_explorers.py
deleted file mode 100644
index eed30d5b8a1c14676503148ddf133c79ed2e33bf..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/compression/_explorers.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import treetable as tt
-
-from .._base_explorers import BaseExplorer
-
-
-class CompressionExplorer(BaseExplorer):
- eval_metrics = ["sisnr", "visqol"]
-
- def stages(self):
- return ["train", "valid", "evaluate"]
-
- def get_grid_meta(self):
- """Returns the list of Meta information to display for each XP/job.
- """
- return [
- tt.leaf("index", align=">"),
- tt.leaf("name", wrap=140),
- tt.leaf("state"),
- tt.leaf("sig", align=">"),
- ]
-
- def get_grid_metrics(self):
- """Return the metrics that should be displayed in the tracking table.
- """
- return [
- tt.group(
- "train",
- [
- tt.leaf("epoch"),
- tt.leaf("bandwidth", ".2f"),
- tt.leaf("adv", ".4f"),
- tt.leaf("d_loss", ".4f"),
- ],
- align=">",
- ),
- tt.group(
- "valid",
- [
- tt.leaf("bandwidth", ".2f"),
- tt.leaf("adv", ".4f"),
- tt.leaf("msspec", ".4f"),
- tt.leaf("sisnr", ".2f"),
- ],
- align=">",
- ),
- tt.group(
- "evaluate", [tt.leaf(name, ".3f") for name in self.eval_metrics], align=">"
- ),
- ]
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/candlend/vits-hoshimi/sovits/add_speaker.py b/spaces/candlend/vits-hoshimi/sovits/add_speaker.py
deleted file mode 100644
index e224f07c892a5fe1837e3cbf1745e0d8992ea283..0000000000000000000000000000000000000000
--- a/spaces/candlend/vits-hoshimi/sovits/add_speaker.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import os
-import argparse
-from tqdm import tqdm
-from random import shuffle
-import json
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list")
- parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list")
- parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list")
- parser.add_argument("--source_dir", type=str, default="./dataset/32k", help="path to source dir")
- args = parser.parse_args()
-
- previous_config = json.load(open("configs/config.json", "rb"))
-
- train = []
- val = []
- test = []
- idx = 0
- spk_dict = previous_config["spk"]
- spk_id = max([i for i in spk_dict.values()]) + 1
- for speaker in tqdm(os.listdir(args.source_dir)):
- if speaker not in spk_dict.keys():
- spk_dict[speaker] = spk_id
- spk_id += 1
- wavs = [os.path.join(args.source_dir, speaker, i)for i in os.listdir(os.path.join(args.source_dir, speaker))]
- wavs = [i for i in wavs if i.endswith("wav")]
- shuffle(wavs)
- train += wavs[2:-10]
- val += wavs[:2]
- test += wavs[-10:]
-
- assert previous_config["model"]["n_speakers"] > len(spk_dict.keys())
- shuffle(train)
- shuffle(val)
- shuffle(test)
-
- print("Writing", args.train_list)
- with open(args.train_list, "w") as f:
- for fname in tqdm(train):
- wavpath = fname
- f.write(wavpath + "\n")
-
- print("Writing", args.val_list)
- with open(args.val_list, "w") as f:
- for fname in tqdm(val):
- wavpath = fname
- f.write(wavpath + "\n")
-
- print("Writing", args.test_list)
- with open(args.test_list, "w") as f:
- for fname in tqdm(test):
- wavpath = fname
- f.write(wavpath + "\n")
-
- previous_config["spk"] = spk_dict
-
- print("Writing configs/config.json")
- with open("configs/config.json", "w") as f:
- json.dump(previous_config, f, indent=2)
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/roi_heads/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/roi_heads/__init__.py
deleted file mode 100644
index d13e9c57235b982f3e0645bc316de2b75755dfda..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/roi_heads/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead
-from .keypoint_head import (
- ROI_KEYPOINT_HEAD_REGISTRY,
- build_keypoint_head,
- BaseKeypointRCNNHead,
- KRCNNConvDeconvUpsampleHead,
-)
-from .mask_head import (
- ROI_MASK_HEAD_REGISTRY,
- build_mask_head,
- BaseMaskRCNNHead,
- MaskRCNNConvUpsampleHead,
-)
-from .roi_heads import (
- ROI_HEADS_REGISTRY,
- ROIHeads,
- Res5ROIHeads,
- StandardROIHeads,
- build_roi_heads,
- select_foreground_proposals,
-)
-from .cascade_rcnn import CascadeROIHeads
-from .rotated_fast_rcnn import RROIHeads
-from .fast_rcnn import FastRCNNOutputLayers
-
-from . import cascade_rcnn # isort:skip
-
-__all__ = list(globals().keys())
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_blocks.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_blocks.py
deleted file mode 100644
index 5a0488adbfcf0c7eca08616f43ebf695acad4b7e..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_blocks.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import unittest
-import torch
-from torch import nn
-
-from detectron2.layers import ASPP, DepthwiseSeparableConv2d, FrozenBatchNorm2d
-from detectron2.modeling.backbone.resnet import BasicStem, ResNet
-
-
-"""
-Test for misc layers.
-"""
-
-
-class TestBlocks(unittest.TestCase):
- def test_separable_conv(self):
- DepthwiseSeparableConv2d(3, 10, norm1="BN", activation1=nn.PReLU())
-
- def test_aspp(self):
- m = ASPP(3, 10, [2, 3, 4], norm="", activation=nn.PReLU())
- self.assertIsNot(m.convs[0].activation.weight, m.convs[1].activation.weight)
- self.assertIsNot(m.convs[0].activation.weight, m.project.activation.weight)
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_frozen_batchnorm_fp16(self):
- from torch.cuda.amp import autocast
-
- C = 10
- input = torch.rand(1, C, 10, 10).cuda()
- m = FrozenBatchNorm2d(C).cuda()
- with autocast():
- output = m(input.half())
- self.assertEqual(output.dtype, torch.float16)
-
- # requires_grad triggers a different codepath
- input.requires_grad_()
- with autocast():
- output = m(input.half())
- self.assertEqual(output.dtype, torch.float16)
-
- def test_resnet_unused_stages(self):
- resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2"])
- self.assertTrue(hasattr(resnet, "res2"))
- self.assertFalse(hasattr(resnet, "res3"))
- self.assertFalse(hasattr(resnet, "res5"))
-
- resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2", "res5"])
- self.assertTrue(hasattr(resnet, "res2"))
- self.assertTrue(hasattr(resnet, "res4"))
- self.assertTrue(hasattr(resnet, "res5"))
diff --git a/spaces/ccolas/TastyPiano/src/pianocktail.py b/spaces/ccolas/TastyPiano/src/pianocktail.py
deleted file mode 100644
index 1d3754e0f2a712d8dba35660f2bae2cad6b6e570..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/pianocktail.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import time
-import os
-import pickle
-from src.music.pipeline.music_pipeline import encode_music
-from src.music2cocktailrep.pipeline.music2cocktailrep import music2cocktailrep, setup_translation_models, debug_translation
-from src.cocktails.pipeline.cocktailrep2recipe import cocktailrep2recipe
-from src.debugger import Debugger
-from datetime import datetime
-from shutil import copy
-
-synestesia_path = '../data/synesthesia/'
-debugger = Debugger()
-
-def pianocktail(record=False, url=None, midi=None, audio=None, processed=None, crop=40, verbose=False, debug=False, level=0):
- assert url is not None or midi is not None or audio is not None or processed is not None
- if verbose: print('------\nNew synesthetic exploration!')
- init_time = time.time()
- music_ai_rep, music_handcoded_rep, all_paths, error = encode_music(record=record, url=url, audio_path=audio, midi_path=midi, nb_aug=0, noise_injection=False,
- augmentation=False, processed_path=processed, crop=crop, apply_filtering=False, verbose=verbose,
- level=level+2)
- if music_ai_rep is not None:
- cocktail_rep, affective_cluster_id, affect = music2cocktailrep(music_ai_rep, music_handcoded_rep, verbose=verbose, level=level+2)
- cocktail_recipes, scores = cocktailrep2recipe(cocktail_rep, target_affective_cluster=affective_cluster_id, verbose=verbose, full_verbose=verbose, level=level+2)
- cocktail_recipe = cocktail_recipes[0]
- recipe_score = scores[0]
- if debug:
- music_reconstruction = debug_translation(music_ai_rep)
- debugger.extract_info(all_paths, affective_cluster_id, affect, cocktail_rep, music_reconstruction, recipe_score, verbose=verbose, level=level+2)
- debug_info = debugger.debug_dict
- else:
- debug_info = None
- if verbose:
- print(cocktail_recipe.replace('Recipe', ' ' * (level + 2) + 'Generated recipe:').replace('None ()', ''))
- debugger.print_debug(level=level+2)
- print(f'\nEnd of synesthetic exploration ({int(time.time() - init_time)} secs).\n------')
-
- else:
- cocktail_recipe = None
- debug_info = None
- return cocktail_recipe, debug_info
-
-def setup_and_run(url=None, midi=None, audio=None, verbose=False, debug=False, extra_code=None):
- assert url is not None or midi is not None or audio is not None
- now = datetime.now()
- folder_name = f'{now.year}-{now.month}-{now.day}_{now.hour}:{now.minute}:{now.second}'
- folder_path = synestesia_path + folder_name
- if extra_code is not None:
- folder_path += '_' + extra_code
- if os.path.exists(folder_path):
- folder_path += '_2'
- folder_path += '/'
- os.makedirs(folder_path, exist_ok=True)
- recipe, debug = pianocktail(url=url, verbose=verbose, debug=debug)
- with open(folder_path + 'debug.pk', 'wb') as f:
- pickle.dump(debug, f)
- with open(folder_path + 'recipe.txt', 'w') as f:
- f.write(recipe)
- paths = debug['all_paths']
- if paths['url'] is not None:
- with open(folder_path + 'url.txt', 'w') as f:
- f.write(paths['url'])
- for k in ['audio_path', 'midi_path']:
- origin = paths[k]
- copy(origin, folder_path + origin.split('/')[-1])
-
-
-if __name__ == '__main__':
- urls = ["https://www.youtube.com/watch?v=PLFVGwGQcB0",
- "https://www.youtube.com/watch?v=VQmuAr93OlI",
- "https://www.youtube.com/watch?v=Nv2GgV34qIg&list=PLO9E3V4rGLD8_iWrCioJRWZXJJE3Fzu_J&index=4",
- "https://www.youtube.com/watch?v=qAEIjWYdoYc&list=PLO9E3V4rGLD8_iWrCioJRWZXJJE3Fzu_J&index=1",
- "https://www.youtube.com/watch?v=M73x3O7dhmg&list=PLO9E3V4rGLD8_iWrCioJRWZXJJE3Fzu_J&index=5"]
- setup_translation_models()
- setup_and_run(url=urls[0], verbose=True, debug=True)
- recipes = []
- for url in urls:
- recipe = pianocktail(url=url, verbose=True, debug=True)[0]
- recipes.append(recipe)
- stop = 1
diff --git a/spaces/ceckenrode/Memory-Chat-Story-Generator-Bloom/README.md b/spaces/ceckenrode/Memory-Chat-Story-Generator-Bloom/README.md
deleted file mode 100644
index 655490e5e51b1b597c7362b1ad91b4548471318f..0000000000000000000000000000000000000000
--- a/spaces/ceckenrode/Memory-Chat-Story-Generator-Bloom/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Memory Chat Story Generator Bloom
-emoji: 📊
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/src/utils.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/src/utils.py
deleted file mode 100644
index 815c70016c33ca9133aba60811a4948e31a2df27..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/open_flamingo/src/utils.py
+++ /dev/null
@@ -1,31 +0,0 @@
-def extend_instance(obj, mixin):
- """Apply mixins to a class instance after creation"""
- base_cls = obj.__class__
- base_cls_name = obj.__class__.__name__
- obj.__class__ = type(
- base_cls_name, (mixin, base_cls), {}
- ) # mixin needs to go first for our forward() logic to work
-
-
-def getattr_recursive(obj, att):
- """
- Return nested attribute of obj
- Example: getattr_recursive(obj, 'a.b.c') is equivalent to obj.a.b.c
- """
- if att == "":
- return obj
- i = att.find(".")
- if i < 0:
- return getattr(obj, att)
- else:
- return getattr_recursive(getattr(obj, att[:i]), att[i + 1 :])
-
-
-def setattr_recursive(obj, att, val):
- """
- Set nested attribute of obj
- Example: setattr_recursive(obj, 'a.b.c', val) is equivalent to obj.a.b.c = val
- """
- if "." in att:
- obj = getattr_recursive(obj, ".".join(att.split(".")[:-1]))
- setattr(obj, att.split(".")[-1], val)
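Editor's note: `extend_instance` above rewrites an instance's class so a mixin's methods win in the MRO (the in-file comment notes this is how the new `forward()` logic is grafted on), while the recursive helpers walk dotted attribute paths. A small hedged illustration follows; `Greeter`/`LoudMixin` and the config namespace are invented for the example, and the import path is inferred from the deleted file's location.

from types import SimpleNamespace

from open_flamingo.src.utils import extend_instance, getattr_recursive, setattr_recursive

class Greeter:
    def greet(self):
        return "hello"

class LoudMixin:
    def greet(self):
        # Runs first after extend_instance, then delegates to the original class via super().
        return super().greet().upper()

g = Greeter()
extend_instance(g, LoudMixin)   # g.__class__ becomes type('Greeter', (LoudMixin, Greeter), {})
print(g.greet())                # -> "HELLO"

cfg = SimpleNamespace(model=SimpleNamespace(hidden_size=768))
print(getattr_recursive(cfg, "model.hidden_size"))   # -> 768
setattr_recursive(cfg, "model.hidden_size", 1024)    # equivalent to cfg.model.hidden_size = 1024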
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py
deleted file mode 100644
index f246ecab0dd01bceda5c612dad9b0679a9691a6a..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py
+++ /dev/null
@@ -1,393 +0,0 @@
-import argparse
-import logging
-import os
-from pathlib import Path
-from typing import Any, Dict
-
-import pytorch_lightning as pl
-from pytorch_lightning.utilities import rank_zero_info
-
-from transformers import (
- AdamW,
- AutoConfig,
- AutoModel,
- AutoModelForPreTraining,
- AutoModelForQuestionAnswering,
- AutoModelForSeq2SeqLM,
- AutoModelForSequenceClassification,
- AutoModelForTokenClassification,
- AutoModelWithLMHead,
- AutoTokenizer,
- PretrainedConfig,
- PreTrainedTokenizer,
-)
-from transformers.optimization import (
- Adafactor,
- get_cosine_schedule_with_warmup,
- get_cosine_with_hard_restarts_schedule_with_warmup,
- get_linear_schedule_with_warmup,
- get_polynomial_decay_schedule_with_warmup,
-)
-from transformers.utils.versions import require_version
-
-
-logger = logging.getLogger(__name__)
-
-require_version("pytorch_lightning>=1.0.4")
-
-MODEL_MODES = {
- "base": AutoModel,
- "sequence-classification": AutoModelForSequenceClassification,
- "question-answering": AutoModelForQuestionAnswering,
- "pretraining": AutoModelForPreTraining,
- "token-classification": AutoModelForTokenClassification,
- "language-modeling": AutoModelWithLMHead,
- "summarization": AutoModelForSeq2SeqLM,
- "translation": AutoModelForSeq2SeqLM,
-}
-
-
-# update this and the import above to support new schedulers from transformers.optimization
-arg_to_scheduler = {
- "linear": get_linear_schedule_with_warmup,
- "cosine": get_cosine_schedule_with_warmup,
- "cosine_w_restarts": get_cosine_with_hard_restarts_schedule_with_warmup,
- "polynomial": get_polynomial_decay_schedule_with_warmup,
- # '': get_constant_schedule, # not supported for now
- # '': get_constant_schedule_with_warmup, # not supported for now
-}
-arg_to_scheduler_choices = sorted(arg_to_scheduler.keys())
-arg_to_scheduler_metavar = "{" + ", ".join(arg_to_scheduler_choices) + "}"
-
-
-class BaseTransformer(pl.LightningModule):
- def __init__(
- self,
- hparams: argparse.Namespace,
- num_labels=None,
- mode="base",
- config=None,
- tokenizer=None,
- model=None,
- **config_kwargs,
- ):
- """Initialize a model, tokenizer and config."""
- super().__init__()
- # TODO: move to self.save_hyperparameters()
- # self.save_hyperparameters()
- # can also expand arguments into trainer signature for easier reading
-
- self.save_hyperparameters(hparams)
- self.step_count = 0
- self.output_dir = Path(self.hparams.output_dir)
- cache_dir = self.hparams.cache_dir if self.hparams.cache_dir else None
- if config is None:
- self.config = AutoConfig.from_pretrained(
- self.hparams.config_name if self.hparams.config_name else self.hparams.model_name_or_path,
- **({"num_labels": num_labels} if num_labels is not None else {}),
- cache_dir=cache_dir,
- **config_kwargs,
- )
- else:
- self.config: PretrainedConfig = config
-
- extra_model_params = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout")
- for p in extra_model_params:
- if getattr(self.hparams, p, None):
- assert hasattr(self.config, p), f"model config doesn't have a `{p}` attribute"
- setattr(self.config, p, getattr(self.hparams, p))
-
- if tokenizer is None:
- self.tokenizer = AutoTokenizer.from_pretrained(
- self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path,
- cache_dir=cache_dir,
- )
- else:
- self.tokenizer: PreTrainedTokenizer = tokenizer
- self.model_type = MODEL_MODES[mode]
- if model is None:
- self.model = self.model_type.from_pretrained(
- self.hparams.model_name_or_path,
- from_tf=bool(".ckpt" in self.hparams.model_name_or_path),
- config=self.config,
- cache_dir=cache_dir,
- )
- else:
- self.model = model
-
- def load_hf_checkpoint(self, *args, **kwargs):
- self.model = self.model_type.from_pretrained(*args, **kwargs)
-
- def get_lr_scheduler(self):
- get_schedule_func = arg_to_scheduler[self.hparams.lr_scheduler]
- scheduler = get_schedule_func(
- self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps()
- )
- scheduler = {"scheduler": scheduler, "interval": "step", "frequency": 1}
- return scheduler
-
- def configure_optimizers(self):
- """Prepare optimizer and schedule (linear warmup and decay)"""
- model = self.model
- no_decay = ["bias", "LayerNorm.weight"]
- optimizer_grouped_parameters = [
- {
- "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
- "weight_decay": self.hparams.weight_decay,
- },
- {
- "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
- "weight_decay": 0.0,
- },
- ]
- if self.hparams.adafactor:
- optimizer = Adafactor(
- optimizer_grouped_parameters, lr=self.hparams.learning_rate, scale_parameter=False, relative_step=False
- )
-
- else:
- optimizer = AdamW(
- optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon
- )
- self.opt = optimizer
-
- scheduler = self.get_lr_scheduler()
-
- return [optimizer], [scheduler]
-
- def test_step(self, batch, batch_nb):
- return self.validation_step(batch, batch_nb)
-
- def test_epoch_end(self, outputs):
- return self.validation_end(outputs)
-
- def total_steps(self) -> int:
- """The number of total training steps that will be run. Used for lr scheduler purposes."""
- num_devices = max(1, self.hparams.gpus) # TODO: consider num_tpu_cores
- effective_batch_size = self.hparams.train_batch_size * self.hparams.accumulate_grad_batches * num_devices
-        return int((self.dataset_size / effective_batch_size) * self.hparams.max_epochs)
-
- def setup(self, mode):
- if mode == "test":
- self.dataset_size = len(self.test_dataloader().dataset)
- else:
- self.train_loader = self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True)
- self.dataset_size = len(self.train_dataloader().dataset)
-
- def get_dataloader(self, type_path: str, batch_size: int, shuffle: bool = False):
- raise NotImplementedError("You must implement this for your task")
-
- def train_dataloader(self):
- return self.train_loader
-
- def val_dataloader(self):
- return self.get_dataloader("dev", self.hparams.eval_batch_size, shuffle=False)
-
- def test_dataloader(self):
- return self.get_dataloader("test", self.hparams.eval_batch_size, shuffle=False)
-
- def _feature_file(self, mode):
- return os.path.join(
- self.hparams.data_dir,
- "cached_{}_{}_{}".format(
- mode,
- list(filter(None, self.hparams.model_name_or_path.split("/"))).pop(),
- str(self.hparams.max_seq_length),
- ),
- )
-
- @pl.utilities.rank_zero_only
- def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
- save_path = self.output_dir.joinpath("best_tfmr")
- self.model.config.save_step = self.step_count
- self.model.save_pretrained(save_path)
- self.tokenizer.save_pretrained(save_path)
-
- @staticmethod
- def add_model_specific_args(parser, root_dir):
- parser.add_argument(
- "--model_name_or_path",
- default=None,
- type=str,
- required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models",
- )
- parser.add_argument(
- "--config_name", default="", type=str, help="Pretrained config name or path if not the same as model_name"
- )
- parser.add_argument(
- "--tokenizer_name",
- default=None,
- type=str,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--cache_dir",
- default="",
- type=str,
- help="Where do you want to store the pre-trained models downloaded from huggingface.co",
- )
- parser.add_argument(
- "--encoder_layerdrop",
- type=float,
- help="Encoder layer dropout probability (Optional). Goes into model.config",
- )
- parser.add_argument(
- "--decoder_layerdrop",
- type=float,
- help="Decoder layer dropout probability (Optional). Goes into model.config",
- )
- parser.add_argument(
- "--dropout",
- type=float,
- help="Dropout probability (Optional). Goes into model.config",
- )
- parser.add_argument(
- "--attention_dropout",
- type=float,
- help="Attention dropout probability (Optional). Goes into model.config",
- )
- parser.add_argument("--learning_rate", default=5e-5, type=float, help="The initial learning rate for Adam.")
- parser.add_argument(
- "--lr_scheduler",
- default="linear",
- choices=arg_to_scheduler_choices,
- metavar=arg_to_scheduler_metavar,
- type=str,
- help="Learning rate scheduler",
- )
- parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.")
- parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.")
- parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.")
- parser.add_argument("--num_workers", default=4, type=int, help="kwarg passed to DataLoader")
- parser.add_argument("--num_train_epochs", dest="max_epochs", default=3, type=int)
- parser.add_argument("--train_batch_size", default=32, type=int)
- parser.add_argument("--eval_batch_size", default=32, type=int)
- parser.add_argument("--adafactor", action="store_true")
-
-
-class LoggingCallback(pl.Callback):
- def on_batch_end(self, trainer, pl_module):
- lr_scheduler = trainer.lr_schedulers[0]["scheduler"]
- lrs = {f"lr_group_{i}": lr for i, lr in enumerate(lr_scheduler.get_lr())}
- pl_module.logger.log_metrics(lrs)
-
- def on_validation_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule):
- rank_zero_info("***** Validation results *****")
- metrics = trainer.callback_metrics
- # Log results
- for key in sorted(metrics):
- if key not in ["log", "progress_bar"]:
- rank_zero_info("{} = {}\n".format(key, str(metrics[key])))
-
- def on_test_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule):
- rank_zero_info("***** Test results *****")
- metrics = trainer.callback_metrics
- # Log and save results to file
- output_test_results_file = os.path.join(pl_module.hparams.output_dir, "test_results.txt")
- with open(output_test_results_file, "w") as writer:
- for key in sorted(metrics):
- if key not in ["log", "progress_bar"]:
- rank_zero_info("{} = {}\n".format(key, str(metrics[key])))
- writer.write("{} = {}\n".format(key, str(metrics[key])))
-
-
-def add_generic_args(parser, root_dir) -> None:
- # To allow all pl args uncomment the following line
- # parser = pl.Trainer.add_argparse_args(parser)
- parser.add_argument(
- "--output_dir",
- default=None,
- type=str,
- required=True,
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument(
- "--fp16",
- action="store_true",
- help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit",
- )
-
- parser.add_argument(
- "--fp16_opt_level",
- type=str,
- default="O2",
- help=(
-            "For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']. "
- "See details at https://nvidia.github.io/apex/amp.html"
- ),
- )
- parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int)
- parser.add_argument("--max_grad_norm", dest="gradient_clip_val", default=1.0, type=float, help="Max gradient norm")
- parser.add_argument("--do_train", action="store_true", help="Whether to run training.")
- parser.add_argument("--do_predict", action="store_true", help="Whether to run predictions on the test set.")
- parser.add_argument(
- "--gradient_accumulation_steps",
- dest="accumulate_grad_batches",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument("--seed", type=int, default=42, help="random seed for initialization")
- parser.add_argument(
- "--data_dir",
- default=None,
- type=str,
- required=True,
- help="The input data dir. Should contain the training files for the CoNLL-2003 NER task.",
- )
-
-
-def generic_train(
- model: BaseTransformer,
- args: argparse.Namespace,
- early_stopping_callback=None,
- logger=True, # can pass WandbLogger() here
- extra_callbacks=[],
- checkpoint_callback=None,
- logging_callback=None,
- **extra_train_kwargs,
-):
- pl.seed_everything(args.seed)
-
- # init model
- odir = Path(model.hparams.output_dir)
- odir.mkdir(exist_ok=True)
-
- # add custom checkpoints
- if checkpoint_callback is None:
- checkpoint_callback = pl.callbacks.ModelCheckpoint(
- filepath=args.output_dir, prefix="checkpoint", monitor="val_loss", mode="min", save_top_k=1
- )
- if early_stopping_callback:
- extra_callbacks.append(early_stopping_callback)
- if logging_callback is None:
- logging_callback = LoggingCallback()
-
- train_params = {}
-
- # TODO: remove with PyTorch 1.6 since pl uses native amp
- if args.fp16:
- train_params["precision"] = 16
- train_params["amp_level"] = args.fp16_opt_level
-
- if args.gpus > 1:
- train_params["distributed_backend"] = "ddp"
-
- train_params["accumulate_grad_batches"] = args.accumulate_grad_batches
- train_params["accelerator"] = extra_train_kwargs.get("accelerator", None)
- train_params["profiler"] = extra_train_kwargs.get("profiler", None)
-
- trainer = pl.Trainer.from_argparse_args(
- args,
- weights_summary=None,
- callbacks=[logging_callback] + extra_callbacks,
- logger=logger,
- checkpoint_callback=checkpoint_callback,
- **train_params,
- )
-
- if args.do_train:
- trainer.fit(model)
-
- return trainer
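Editor's note: the sibling scripts in this example (e.g. the finetune/distillation entry points) hook into this file by subclassing `BaseTransformer` and reusing `add_generic_args` and `generic_train`. A hedged sketch of that wiring follows; `MyTaskModule` and its stubbed dataloader are illustrative stand-ins, not the real dataset code.

import argparse
import os

import pytorch_lightning as pl

from lightning_base import BaseTransformer, add_generic_args, generic_train  # the module above


class MyTaskModule(BaseTransformer):
    """Illustrative subclass; a real task module implements the dataset side."""

    def __init__(self, hparams):
        super().__init__(hparams, mode="summarization")

    def get_dataloader(self, type_path, batch_size, shuffle=False):
        # A real task module builds a torch DataLoader from files in self.hparams.data_dir.
        raise NotImplementedError("wire up the task dataset here")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser = pl.Trainer.add_argparse_args(parser)        # contributes --gpus and other Trainer flags
    add_generic_args(parser, os.getcwd())                # --output_dir, --data_dir, --fp16, ...
    MyTaskModule.add_model_specific_args(parser, os.getcwd())
    args = parser.parse_args()

    model = MyTaskModule(args)
    trainer = generic_train(model, args)                 # calls trainer.fit(model) when --do_train is set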
diff --git a/spaces/chinhon/malay_headlines_writer/app.py b/spaces/chinhon/malay_headlines_writer/app.py
deleted file mode 100644
index dc4c22240d7bd505dce749b23a5d366ffc6a75c1..0000000000000000000000000000000000000000
--- a/spaces/chinhon/malay_headlines_writer/app.py
+++ /dev/null
@@ -1,89 +0,0 @@
-import gradio as gr
-import re
-
-from gradio.mix import Parallel
-from transformers import (
- AutoTokenizer,
- AutoModelForSeq2SeqLM,
-)
-
-def clean_text(text):
- text = text.encode("ascii", errors="ignore").decode(
- "ascii"
- ) # remove non-ascii, Chinese characters
- text = re.sub(r"\n", " ", text)
- text = re.sub(r"\n\n", " ", text)
- text = re.sub(r"\t", " ", text)
- text = text.strip(" ")
- text = re.sub(
- " +", " ", text
- ).strip() # get rid of multiple spaces and replace with a single
- return text
-
-
-modchoice_1 = "chinhon/pegasus-newsroom-malay_headlines"
-
-def headline_writer1(text):
- input_text = clean_text(text)
-
- tokenizer_1 = AutoTokenizer.from_pretrained(modchoice_1)
-
- model_1 = AutoModelForSeq2SeqLM.from_pretrained(modchoice_1)
-
- with tokenizer_1.as_target_tokenizer():
- batch = tokenizer_1(
- input_text, truncation=True, padding="longest", return_tensors="pt"
- )
-
- translated = model_1.generate(**batch)
-
- summary_1 = tokenizer_1.batch_decode(translated, skip_special_tokens=True)
-
- return summary_1[0]
-
-
-headline1 = gr.Interface(
- fn=headline_writer1,
- inputs=gr.inputs.Textbox(),
- outputs=gr.outputs.Textbox(label=""),
-)
-
-
-modchoice_2 = "chinhon/pegasus-multi_news-malay_headlines_02"
-
-def headline_writer2(text):
- input_text = clean_text(text)
-
- tokenizer_2 = AutoTokenizer.from_pretrained(modchoice_2)
-
- model_2 = AutoModelForSeq2SeqLM.from_pretrained(modchoice_2)
-
- with tokenizer_2.as_target_tokenizer():
- batch = tokenizer_2(
- input_text, truncation=True, padding="longest", return_tensors="pt"
- )
-
- translated = model_2.generate(**batch)
-
- summary_2 = tokenizer_2.batch_decode(translated, skip_special_tokens=True)
-
- return summary_2[0]
-
-
-headline2 = gr.Interface(
- fn=headline_writer2,
- inputs=gr.inputs.Textbox(),
- outputs=gr.outputs.Textbox(label=""),
-)
-
-
-Parallel(
- headline1,
- headline2,
- title="Malay News Headlines Generator",
- inputs=gr.inputs.Textbox(
- lines=20,
- label="Paste the first few paragraphs of a Malay language news story here, and choose from 2 suggested headlines",
- ),
- theme="darkdefault",
-).launch(enable_queue=True)
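Editor's note: `headline_writer1`/`headline_writer2` above call `AutoTokenizer.from_pretrained` and `AutoModelForSeq2SeqLM.from_pretrained` inside the request handler, so every Gradio call re-instantiates the model. A hedged sketch of the usual alternative, loading once at import time, follows; the model name is taken from the app, but the caching itself is a suggestion, not part of the original file.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "chinhon/pegasus-newsroom-malay_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)     # loaded once at startup
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def write_headline(text: str) -> str:
    batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]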
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/channels.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/channels.py
deleted file mode 100644
index 07f9f43e8e1387a374e60ae99ee9a92e1549d1e1..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/schema/channels.py
+++ /dev/null
@@ -1,17317 +0,0 @@
-# The contents of this file are automatically written by
-# tools/generate_schema_wrapper.py. Do not modify directly.
-
-import sys
-from . import core
-import pandas as pd
-from altair.utils.schemapi import Undefined, with_property_setters
-from altair.utils import parse_shorthand
-from typing import overload, List
-
-if sys.version_info >= (3, 8):
- from typing import Literal
-else:
- from typing_extensions import Literal
-
-
-class FieldChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- shorthand = self._get('shorthand')
- field = self._get('field')
-
- if shorthand is not Undefined and field is not Undefined:
- raise ValueError("{} specifies both shorthand={} and field={}. "
- "".format(self.__class__.__name__, shorthand, field))
-
- if isinstance(shorthand, (tuple, list)):
- # If given a list of shorthands, then transform it to a list of classes
- kwds = self._kwds.copy()
- kwds.pop('shorthand')
- return [self.__class__(sh, **kwds).to_dict(validate=validate, ignore=ignore, context=context)
- for sh in shorthand]
-
- if shorthand is Undefined:
- parsed = {}
- elif isinstance(shorthand, str):
- parsed = parse_shorthand(shorthand, data=context.get('data', None))
- type_required = 'type' in self._kwds
- type_in_shorthand = 'type' in parsed
- type_defined_explicitly = self._get('type') is not Undefined
- if not type_required:
- # Secondary field names don't require a type argument in VegaLite 3+.
- # We still parse it out of the shorthand, but drop it here.
- parsed.pop('type', None)
- elif not (type_in_shorthand or type_defined_explicitly):
- if isinstance(context.get('data', None), pd.DataFrame):
- raise ValueError(
- 'Unable to determine data type for the field "{}";'
- " verify that the field name is not misspelled."
- " If you are referencing a field from a transform,"
- " also confirm that the data type is specified correctly.".format(shorthand)
- )
- else:
- raise ValueError("{} encoding field is specified without a type; "
- "the type cannot be automatically inferred because "
- "the data is not specified as a pandas.DataFrame."
- "".format(shorthand))
- else:
- # Shorthand is not a string; we pass the definition to field,
- # and do not do any parsing.
- parsed = {'field': shorthand}
- context["parsed_shorthand"] = parsed
-
- return super(FieldChannelMixin, self).to_dict(
- validate=validate,
- ignore=ignore,
- context=context
- )
-
-
-class ValueChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- condition = self._get('condition', Undefined)
- copy = self # don't copy unless we need to
- if condition is not Undefined:
- if isinstance(condition, core.SchemaBase):
- pass
- elif 'field' in condition and 'type' not in condition:
- kwds = parse_shorthand(condition['field'], context.get('data', None))
- copy = self.copy(deep=['condition'])
- copy['condition'].update(kwds)
- return super(ValueChannelMixin, copy).to_dict(validate=validate,
- ignore=ignore,
- context=context)
-
-
-class DatumChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- datum = self._get('datum', Undefined)
- copy = self # don't copy unless we need to
- if datum is not Undefined:
- if isinstance(datum, core.SchemaBase):
- pass
- return super(DatumChannelMixin, copy).to_dict(validate=validate,
- ignore=ignore,
- context=context)
-
-
-@with_property_setters
-class Angle(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber):
- """Angle schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend. If ``null``, the legend for the
- encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A string indicating an encoding channel name to sort by
- `__ (e.g.,
- ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g.,
- ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
- sort-by-encoding definition
- `__. For
- example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
- "descending"}``.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` and sorting by another channel is not supported for ``row`` and
- ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field. or `a temporal field that gets casted as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Angle':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Angle':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Angle':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined,
- sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Angle, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition,
- bin=bin, condition=condition, field=field, legend=legend,
- scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type,
- **kwds)
-
-
-@with_property_setters
-class AngleDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber):
- """AngleDatum schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`)
- A constant value in data domain.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`Type`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- def bandPosition(self, _: float, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'AngleDatum':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'AngleDatum':
- ...
-
-
- def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined,
- type=Undefined, **kwds):
- super(AngleDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition,
- title=title, type=type, **kwds)
-
-
-@with_property_setters
-class AngleValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber):
- """AngleValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(float, :class:`ExprRef`)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(AngleValue, self).__init__(value=value, condition=condition, **kwds)
-
-
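Editor's note: the docstrings in these generated channel classes describe the encoding options (shorthand, aggregate, bin, sort, scale, legend, condition) that users reach through `alt.Angle`, `alt.Color`, and the other channel wrappers. A brief, hedged usage sketch with made-up data, exercising the shorthand plus a few of the documented properties:

import altair as alt
import pandas as pd

df = pd.DataFrame({"category": list("ABCD"), "value": [4, 6, 10, 3]})

chart = alt.Chart(df).mark_bar().encode(
    x=alt.X("category:N", sort="-y"),                  # sort by the encoded y channel, descending
    y=alt.Y("value:Q"),
    color=alt.Color("value:Q",
                    scale=alt.Scale(scheme="blues"),   # 'scale' property as documented above
                    legend=alt.Legend(title="Value")), # 'legend' property as documented above
)
# Display `chart` in a notebook, or persist it with chart.save("bars.html").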
-@with_property_setters
-class Color(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull):
- """Color schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend. If ``null``, the legend for the
- encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A string indicating an encoding channel name to sort by
- `__ (e.g.,
- ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g.,
- ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
- sort-by-encoding definition
- `__. For
- example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
- "descending"}``.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` and sorting by another channel is not supported for ``row`` and
- ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field. or `a temporal field that gets casted as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Color':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Color':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Color':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined,
- sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Color, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition,
- bin=bin, condition=condition, field=field, legend=legend,
- scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type,
- **kwds)
-
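For orientation, a minimal usage sketch of the `Color` field channel defined above (not part of the deleted module); it assumes the public Altair 5 API (`alt.Chart`, `alt.Scale`, `alt.Legend`) and an invented DataFrame:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"category": list("aabbcc"), "value": [1, 2, 3, 4, 5, 6]})

# The shorthand "category:N" and the explicit alt.Color(...) form are equivalent;
# the explicit form exposes the scale/legend/sort setters generated above.
chart = alt.Chart(df).mark_bar().encode(
    x="category:N",
    y="sum(value):Q",
    color=alt.Color("category", type="nominal",
                    scale=alt.Scale(scheme="tableau10"),
                    legend=alt.Legend(title="Category")),
)
```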
-
-@with_property_setters
-class ColorDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefGradientstringnull):
- """ColorDatum schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`)
- A constant value in data domain.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`Type`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- def bandPosition(self, _: float, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'ColorDatum':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'ColorDatum':
- ...
-
-
- def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined,
- type=Undefined, **kwds):
- super(ColorDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition,
- title=title, type=type, **kwds)
-
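A small sketch of the `ColorDatum` channel under the same assumed Altair 5 API; the datum string is a made-up label that is routed through the color scale (and gets a legend entry), unlike `alt.value("red")`, which would bypass the scale:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [0, 1, 2, 3], "y": [1, 3, 2, 4]})

# A constant value in the *data* domain: "observed" goes through the color
# scale rather than being used as a literal CSS color.
line = alt.Chart(df).mark_line().encode(
    x="x:Q",
    y="y:Q",
    color=alt.ColorDatum("observed"),
)
```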
-
-@with_property_setters
-class ColorValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefGradientstringnull):
- """ColorValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(:class:`Gradient`, string, None, :class:`ExprRef`)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(ColorValue, self).__init__(value=value, condition=condition, **kwds)
-
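A hedged example of `ColorValue` reached through `alt.condition`, assuming Altair 5's `selection_point` and `add_params`; the selection behavior and colors are illustrative:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"category": list("abcde"), "value": [3, 7, 2, 8, 5]})

hover = alt.selection_point(on="mouseover", empty=False)

# alt.condition with two alt.value(...) branches produces a ColorValue:
# a constant visual-domain value chosen by the parameter predicate.
chart = alt.Chart(df).mark_bar().encode(
    x="category:N",
    y="value:Q",
    color=alt.condition(hover, alt.value("firebrick"), alt.value("lightgray")),
).add_params(hover)
```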
-
-@with_property_setters
-class Column(FieldChannelMixin, core.RowColumnEncodingFieldDef):
- """Column schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- align : :class:`LayoutAlign`
- The alignment to apply to row/column facet's subplot. The supported string values
- are ``"all"``, ``"each"``, and ``"none"``.
-
-
- * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply
- placed one after the other.
- * For ``"each"``, subviews will be aligned into a clean grid structure, but each row
- or column may be of variable size.
- * For ``"all"``, subviews will be aligned and each row or column will be sized
- identically based on the maximum observed size. String values for this property
- will be applied to both grid rows and columns.
-
- **Default value:** ``"all"``.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- center : boolean
- Boolean flag indicating if facet's subviews should be centered relative to their
- respective rows or columns.
-
- **Default value:** ``false``
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- header : anyOf(:class:`Header`, None)
- An object defining properties of a facet's header.
- sort : anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, None)
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
- spacing : float
- The spacing in pixels between facet's sub-views.
-
- **Default value** : Depends on ``"spacing"`` property of `the view composition
- configuration `__ (
- ``20`` by default)
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "column"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Column':
- ...
-
- def align(self, _: Literal["all", "each", "none"], **kwds) -> 'Column':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Column':
- ...
-
- def center(self, _: bool, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def header(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, labelAnchor=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, orient=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def header(self, _: None, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Column':
- ...
-
- def spacing(self, _: float, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Column':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Column':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, align=Undefined,
- bandPosition=Undefined, bin=Undefined, center=Undefined, field=Undefined,
- header=Undefined, sort=Undefined, spacing=Undefined, timeUnit=Undefined,
- title=Undefined, type=Undefined, **kwds):
- super(Column, self).__init__(shorthand=shorthand, aggregate=aggregate, align=align,
- bandPosition=bandPosition, bin=bin, center=center, field=field,
- header=header, sort=sort, spacing=spacing, timeUnit=timeUnit,
- title=title, type=type, **kwds)
-
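An illustrative `Column` facet sketch (assumed Altair 5 API; data and field names invented), showing the facet-specific `spacing` and `header` properties documented above:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "group": list("xxyyzz"),
    "a": [1, 2, 3, 4, 5, 6],
    "b": [2, 1, 4, 3, 6, 5],
})

# Facet the scatter plot into one column per group.
chart = alt.Chart(df).mark_point().encode(
    x="a:Q",
    y="b:Q",
    column=alt.Column("group:N", spacing=10,
                      header=alt.Header(titleOrient="bottom")),
)
```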
-
-@with_property_setters
-class Description(FieldChannelMixin, core.StringFieldDefWithCondition):
- """Description schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, string, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefstringExprRef`, List(:class:`ConditionalValueDefstringExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- format : anyOf(string, :class:`Dict`)
- When used with the default ``"number"`` and ``"time"`` format type, the text
- formatting pattern for labels of guides (axes, legends, headers) and text marks.
-
-
- * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's
- `number format pattern `__.
- * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time
- format pattern `__.
-
- See the `format documentation `__
- for more examples.
-
- When used with a `custom formatType
- `__, this
- value will be passed as ``format`` alongside ``datum.value`` to the registered
- function.
-
- **Default value:** Derived from `numberFormat
- `__ config for number
- format and from `timeFormat
- `__ config for time
- format.
- formatType : string
- The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom
- format type
- `__.
-
- **Default value:**
-
-
- * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``.
- * ``"number"`` for quantitative fields as well as ordinal and nominal fields without
- ``timeUnit``.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "description"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Description':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefstringExprRef], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def format(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def format(self, _: dict, **kwds) -> 'Description':
- ...
-
- def formatType(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Description':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Description':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, format=Undefined, formatType=Undefined,
- timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Description, self).__init__(shorthand=shorthand, aggregate=aggregate,
- bandPosition=bandPosition, bin=bin, condition=condition,
- field=field, format=format, formatType=formatType,
- timeUnit=timeUnit, title=title, type=type, **kwds)
-
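A short sketch of the `Description` channel, which drives the per-mark ARIA description in the rendered output; the field names and the d3 format pattern are illustrative assumptions:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"category": list("abc"), "value": [4, 1, 7]})

# Field-driven description, formatted with a d3 number pattern.
chart = alt.Chart(df).mark_bar().encode(
    x="category:N",
    y="value:Q",
    description=alt.Description("value:Q", format=".1f"),
)
```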
-
-@with_property_setters
-class DescriptionValue(ValueChannelMixin, core.StringValueDefWithCondition):
- """DescriptionValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(string, None, :class:`ExprRef`)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "description"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'DescriptionValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(DescriptionValue, self).__init__(value=value, condition=condition, **kwds)
-
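And the value form, `DescriptionValue`, reached through `alt.value` (same assumed API; the description text is invented), attaching one constant description to every mark:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"category": list("abc"), "value": [4, 1, 7]})

# Constant visual-domain value: every mark gets the same description.
chart = alt.Chart(df).mark_bar().encode(
    x="category:N",
    y="value:Q",
    description=alt.value("Bar showing the value for each category"),
)
```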
-
-@with_property_setters
-class Detail(FieldChannelMixin, core.FieldDefWithoutScale):
- """Detail schema wrapper
-
- Mapping(required=[shorthand])
- Definition object for a data field, its type and transformation of an encoding channel.
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, string, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "detail"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Detail':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: str, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Detail':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Detail':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Detail':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- field=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Detail, self).__init__(shorthand=shorthand, aggregate=aggregate,
- bandPosition=bandPosition, bin=bin, field=field, timeUnit=timeUnit,
- title=title, type=type, **kwds)
-
-
-@with_property_setters
-class Facet(FieldChannelMixin, core.FacetEncodingFieldDef):
- """Facet schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- align : anyOf(:class:`LayoutAlign`, :class:`RowColLayoutAlign`)
- The alignment to apply to grid rows and columns. The supported string values are
- ``"all"``, ``"each"``, and ``"none"``.
-
-
- * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply
- placed one after the other.
- * For ``"each"``, subviews will be aligned into a clean grid structure, but each row
- or column may be of variable size.
- * For ``"all"``, subviews will be aligned and each row or column will be sized
- identically based on the maximum observed size. String values for this property
- will be applied to both grid rows and columns.
-
- Alternatively, an object value of the form ``{"row": string, "column": string}`` can
- be used to supply different alignments for rows and columns.
-
- **Default value:** ``"all"``.
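-
-        As a minimal illustrative sketch (assuming ``altair`` and ``pandas`` are
-        installed and that ``alt.Facet`` exposes this channel class), ``align`` and
-        ``columns`` can be supplied directly on the facet field definition::
-
-            import altair as alt
-            import pandas as pd
-
-            data = pd.DataFrame({
-                "x": [1, 2, 3, 4],
-                "y": [4, 3, 2, 1],
-                "group": ["a", "a", "b", "b"],
-            })
-            chart = alt.Chart(data).mark_point().encode(
-                x="x:Q",
-                y="y:Q",
-                # one subview per value of "group", laid out in a 2-column grid
-                facet=alt.Facet("group", type="nominal", columns=2, align="each"),
-            )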
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
-