diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md
deleted file mode 100644
index 8f4fba51302f78cb622d13e4a8de6491a76e6227..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
How to Free Download ACDSee for Windows 10
-
If you are looking for powerful and easy-to-use photo editing software, you might want to try ACDSee. ACDSee is a popular program that allows you to organize, edit, and share your photos with ease. It has many features and tools that can help you enhance your images and create stunning results.
But how can you get ACDSee for Windows 10? Is there a way to download it for free? The answer is yes, but you need to be careful. There are many websites that claim to offer free downloads of ACDSee, but some of them might be scams or contain viruses. You don't want to risk your computer's security or waste your time with fake downloads.
-
That's why we recommend using the official ACDSee website. There, you can find the latest version of ACDSee for Windows 10, as well as other products and services from the company. You can also get a free 30-day trial of ACDSee, which lets you test all the features and functions of the software before you decide to buy it.
-
To download ACDSee for Windows 10 for free from the official website, follow these steps:
On the official ACDSee website, select the product you want to download. In this case, choose "ACDSee Photo Studio Ultimate 2023" or "ACDSee Photo Studio Professional 2023", depending on your needs and preferences.
-
Click on the "Free Trial" button and fill in your name and email address. You will receive a confirmation email with a link to download the software.
-
Click on the link in the email and follow the instructions to install ACDSee on your Windows 10 computer.
-
Enjoy your free trial of ACDSee for 30 days. You can use all the features and tools of the software without any limitations or watermarks.
-
-
That's it! You have successfully downloaded ACDSee for Windows 10 for free. Now you can start editing and sharing your photos with this amazing software. If you like it, you can purchase a license from the official website or from an authorized reseller. ACDSee offers different plans and prices to suit your budget and needs.
-
We hope this article was helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!
-
-
-
Why Choose ACDSee for Windows 10?
-
ACDSee is one of the best photo editing software for Windows 10. It has many advantages and benefits that make it stand out from other programs. Here are some of the reasons why you should choose ACDSee for Windows 10:
-
-
ACDSee is fast and efficient. It can handle large and complex files without slowing down your computer. It also has a smooth and intuitive interface that makes it easy to navigate and use.
-
ACDSee is versatile and flexible. It can support various file formats, including RAW, JPEG, PNG, TIFF, GIF, and more. It also has a wide range of tools and features that can help you with different tasks, such as cropping, resizing, rotating, adjusting colors, applying filters, adding text, removing blemishes, and more.
-
ACDSee is powerful and professional. It can perform advanced editing and processing functions, such as HDR, panorama, focus stacking, facial recognition, batch editing, watermarking, and more. It also has a built-in digital asset management system that allows you to organize, sort, tag, rate, and search your photos easily.
-
ACDSee is creative and fun. It can help you unleash your artistic potential and create stunning results. It has a variety of modes and options that can let you experiment and explore different styles and effects. You can also share your photos with your friends and family through email, social media, or cloud services.
-
-
As you can see, ACDSee is a great choice for Windows 10 users who want to edit and manage their photos in a fast, easy, and professional way. If you haven't tried it yet, don't miss this opportunity to free download ACDSee for Windows 10 from the official website. You won't regret it!
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md
deleted file mode 100644
index b404f57a568c080885e91f518a0681b2f49cb166..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md
+++ /dev/null
@@ -1,151 +0,0 @@
-
-
What is AerosoftCrackerV2.exe and why you should avoid it
-
Have you ever heard of AerosoftCrackerV2.exe? If you are a fan of flight simulation games, you may have come across this file online. It claims to be a crack for Aerosoft products, which are popular add-ons for Microsoft Flight Simulator X (FSX) and Prepar3D (P3D). However, don't be fooled by its name. AerosoftCrackerV2.exe is not a legitimate crack, but a malicious program that can harm your computer and compromise your security.
-
In this article, we will explain what AerosoftCrackerV2.exe is, how it works, what the symptoms of infection are, how to remove it from your computer, and how to prevent it from infecting your computer in the future. By reading this article, you will learn how to protect yourself from this dangerous threat and enjoy your flight simulation games safely.
AerosoftCrackerV2.exe is a type of malware that belongs to the Trojan category. A Trojan is a program that pretends to be something else in order to trick users into downloading or running it. Once executed, a Trojan can perform various malicious actions on the infected computer without the user's knowledge or consent.
-
AerosoftCrackerV2.exe works by posing as a crack for Aerosoft products. A crack is a program that modifies or bypasses the security features of a software product in order to use it for free or without restrictions. Some users may be tempted to use cracks for flight simulation add-ons because they are expensive or hard to find. However, using cracks is illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.
-
When you download or run AerosoftCrackerV2.exe on your computer, it will install itself in a hidden location and create several files and registry entries that allow it to run automatically every time you start your computer. It will also try to disable your antivirus software or firewall in order to avoid detection and removal. Then, it will perform various malicious activities on your computer, such as:
-
-
Downloading and installing other malware or viruses on your computer
-
Stealing your personal information, such as passwords, credit card numbers, bank account details, etc.
-
Monitoring your online activities, such as browsing history, keystrokes, etc.
-
Displaying unwanted ads or pop-ups on your screen
-
Redirecting your web browser to malicious websites
-
Slowing down your computer performance or causing crashes or errors
-
-
What are the symptoms of AerosoftCrackerV2.exe infection?
-
If your computer is infected by AerosoftCrackerV2.exe, you may notice some of the following signs:
-
-
Your antivirus software or firewall is disabled or not working properly
-
Your computer runs slower than usual or freezes frequently
-
You see strange files or folders on your computer that you don't recognize
-
You see unwanted ads or pop-ups on your screen that are related to flight simulation products or services
-
Your web browser is redirected to unfamiliar websites that ask you to download or buy something
-
You receive warnings or alerts from unknown sources that claim your computer is infected or needs repair
-
You notice unauthorized charges on your credit card or bank account statements
-
-
How to remove AerosoftCrackerV2.exe from your computer?
-
If you suspect that your computer is infected by AerosoftCrackerV2.exe, you should take immediate action to remove it from your computer. There are two methods that you can use to remove AerosoftCrackerV2.exe: manual removal method and automatic removal method.
-
Manual removal method
-
The manual removal method involves deleting AerosoftCrackerV2.exe and its related files and registry entries from your computer manually. This method requires some technical skills and knowledge of how to access and modify system files and settings. If you are not confident or experienced in doing this, we recommend that you use the automatic removal method instead.
-
To manually remove AerosoftCrackerV2.exe from your computer, follow these steps:
-
-
Restart your computer in Safe Mode with Networking. To do this, press F8 repeatedly while booting up until you see a menu with different options. Choose Safe Mode with Networking and press Enter.
-
Open Task Manager by pressing Ctrl+Shift+Esc (or press Ctrl+Alt+Delete and choose Task Manager). Look for any suspicious processes related to AerosoftCrackerV2.exe and end them.
-
Open File Explorer by pressing Windows+E keys together. Navigate to the following locations and delete any files or folders that are related to AerosoftCrackerV2.exe:
-
-
%profile%\downloads\fsx-p3d-fsx se - aerosoft - airbus a318-a319-a320-a321 v1.31\cracks
-
%sysdrive%\22222\aerosoft
-
%sysdrive%\22222\utilities and tools pack
-
%desktop%\traduçao\mega airport prague
-
%desktop%
-
%sysdrive%\p3d\fsx-p3d-fsx se - aerosoft - airbus a318-a319-a320-a321 v1.31
-
%programfiles%\microsoft games
-
%sysdrive%\torrent\nassau x (fsx-p3d)
-
%sysdrive%\torrent\fsx-p3d-fsx se - aerosoft - airbus a318-a319-a320-a321 v1.31
-
%sysdrive%\salvar\simulador de voo\simulador de voo fsx prepar3d\prepar3d v4.0 academic\cenarios\new version (fsx-p3d)
-
-
Open Registry Editor by pressing Windows+R keys together and typing regedit in the Run box. Click OK. Navigate to the following registry keys and delete any sub-keys or values that are related to AerosoftCrackerV2.exe:
Close Registry Editor and restart your computer normally.
-
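Before deleting anything by hand, you can run a small read-only check to see which of the folders listed above are still present. The following is a minimal sketch, assuming Python 3 on Windows; it only reports paths and deletes nothing. The article's placeholders (%profile%, %sysdrive%, %desktop%) are mapped here to the standard Windows environment variables they most likely correspond to, which is an assumption rather than something the article states.

```python
# Read-only check for leftover folders named in the manual-removal list above.
# Nothing is deleted; existing paths are only reported for manual review.
import os
from pathlib import Path

# Assumed mapping of the article's placeholders to standard Windows variables.
profile = Path(os.environ.get("USERPROFILE", ""))               # %profile%
sysdrive = Path(os.environ.get("SystemDrive", "C:") + os.sep)   # %sysdrive%
desktop = profile / "Desktop"                                   # %desktop%

suspect_paths = [
    profile / "downloads" / "fsx-p3d-fsx se - aerosoft - airbus a318-a319-a320-a321 v1.31" / "cracks",
    sysdrive / "22222" / "aerosoft",
    sysdrive / "22222" / "utilities and tools pack",
    desktop / "traduçao" / "mega airport prague",
]

for path in suspect_paths:
    status = "FOUND - review and remove manually" if path.exists() else "not present"
    print(f"{path} -> {status}")
```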
-
Automatic removal method
-
The automatic removal method involves using a reliable anti-malware tool to scan and remove AerosoftCrackerV2.exe and its related files and registry entries from your computer automatically. This method is easier and safer than the manual removal method, as it does not require any technical skills or knowledge of how to access and modify system files and settings. It also ensures that no traces of AerosoftCrackerV2.exe are left behind on your computer.
-
-
To automatically remove AerosoftCrackerV2.exe from your computer, follow these steps:
-
-
Download and install a reputable anti-malware tool on your computer. You can choose from various options, such as Malwarebytes, SpyHunter, Trend Micro, etc.
-
Launch the anti-malware tool and update its database to the latest version.
-
Perform a full system scan with the anti-malware tool and wait for it to finish.
-
Review the scan results and select all the detected threats related to AerosoftCrackerV2.exe.
-
Click on the Remove or Quarantine button to delete or isolate AerosoftCrackerV2.exe and its related files and registry entries from your computer.
-
Restart your computer if prompted by the anti-malware tool.
-
-
How to prevent AerosoftCrackerV2.exe infection in the future?
-
Now that you have removed AerosoftCrackerV2.exe from your computer, you may wonder how to prevent it from infecting your computer again in the future. Here are some tips that you can follow to avoid downloading or running malicious programs like AerosoftCrackerV2.exe:
-
-
Avoid using cracks for flight simulation add-ons or any other software products. They are illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.
-
Only download flight simulation add-ons or any other software products from official or trusted sources. Do not trust unknown or suspicious websites that offer free or cheap downloads.
-
Always scan any downloaded files with a reliable anti-virus or anti-malware tool before opening or running them. This will help you detect and remove any potential threats before they can harm your computer.
-
Keep your operating system and software products updated with the latest patches and security fixes. This will help you fix any vulnerabilities that may be exploited by malware or hackers.
-
Use a strong password for your online accounts and change it regularly. This will help you prevent unauthorized access to your personal information or data.
-
Backup your important data regularly to an external drive or cloud storage. This will help you recover your data in case of a malware attack or system failure.
-
-
Conclusion
-
AerosoftCrackerV2.exe is a malicious program that claims to be a crack for Aerosoft products, which are popular add-ons for flight simulation games. However, it is not a legitimate crack, but a Trojan that can harm your computer and compromise your security. It can perform various malicious activities on your computer, such as downloading and installing other malware or viruses, stealing your personal information, monitoring your online activities, displaying unwanted ads or pop-ups, redirecting your web browser to malicious websites, slowing down your computer performance or causing crashes or errors.
-
To protect yourself from this dangerous threat, avoid using cracks for flight simulation add-ons or any other software products, and only download add-ons and software from official or trusted sources. Always scan downloaded files with a reliable anti-virus or anti-malware tool before opening or running them, keep your operating system and software updated with the latest patches and security fixes, use strong passwords for your online accounts and change them regularly, and back up your important data regularly to an external drive or cloud storage.
-
If you suspect that your computer is infected by AerosoftCrackerV2.exe, you should take immediate action to remove it from your computer. You can use either the manual removal method or the automatic removal method to do so. The manual removal method involves deleting AerosoftCrackerV2.exe and its related files and registry entries from your computer manually. The automatic removal method involves using a reliable anti-malware tool to scan and remove AerosoftCrackerV2.exe and its related files and registry entries from your computer automatically.
-
We hope this article has helped you understand what AerosoftCrackerV2.exe is, how it works, what the symptoms of infection are, how to remove it from your computer, and how to prevent it from infecting your computer in the future. By following these tips, you will be able to enjoy your flight simulation games safely and securely.
-
FAQs
-
Here are some frequently asked questions and answers about AerosoftCrackerV2.exe:
-
-
What is Aerosoft?
-
Aerosoft is a German company that develops and publishes add-ons for flight simulation games, such as Microsoft Flight Simulator X (FSX) and Prepar3D (P3D). They offer various products that enhance the realism and immersion of flight simulation, such as airports, aircraft, scenery, tools, etc.
-
What is a crack?
-
A crack is a program that modifies or bypasses the security features of a software product in order to use it for free or without restrictions. Some users may be tempted to use cracks for flight simulation add-ons because they are expensive or hard to find. However, using cracks is illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.
-
What is a Trojan?
-
A Trojan is a type of malware that pretends to be something else in order to trick users into downloading or running it. Once executed, a Trojan can perform various malicious actions on the infected computer without the user's knowledge or consent. Trojans are often used by hackers to gain remote access to computers, steal data, install other malware, etc.
-
How can I tell if my computer is infected by AerosoftCrackerV2.exe?
-
If your computer is infected by AerosoftCrackerV2.exe, you may notice some of the following signs: Your antivirus software or firewall is disabled or not working properly; Your computer runs slower than usual or freezes frequently; You see strange files or folders on your computer that you don't recognize; You see unwanted ads or pop-ups on your screen that are related to flight simulation products or services; Your web browser is redirected to unfamiliar websites that ask you to download or buy something; You receive warnings or alerts from unknown sources that claim your computer is infected or needs repair; You notice unauthorized charges on your credit card or bank account statements.
-
How can I protect my computer from malware?
-
You can protect your computer from malware by following some simple tips, such as: Use a firewall and an anti-malware tool and keep them updated; Don't open email messages from unfamiliar senders or email attachments that you don't recognize; Use a pop-up blocker and a modern browser with SmartScreen enabled; Pay attention to Windows SmartScreen notifications and don't run unrecognized apps downloaded from the internet; Keep Windows and other software products updated with the latest patches and security fixes; Use strong passwords and change them regularly; Backup your important data regularly to an external drive or cloud storage.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md
deleted file mode 100644
index 57235a8cc6310ad0e241d3326cdd5d411c01f8bb..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md
deleted file mode 100644
index 60607a7b3a3d6fcdbf775a1739dc1145bb0c4f77..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
Bollettino Postale 896 22.pdf: How to Download and Pay It Online
-
The bollettino postale (Italian postal payment slip) is one of the most widely used methods for making payments to public or private entities that hold a postal current account. There are several types of postal payment slips, depending on their purpose and how they are filled in. In this article we will focus on bollettino postale 896 22.pdf, a pre-filled slip used to pay taxes, contributions, stamp duties, and other charges. We will look at what it is, how to fill it in, where to find it, and how to pay it online.
-
What is bollettino postale 896 22.pdf?
-
Bollettino postale 896 22.pdf is a document that lets you make a payment at any post office in favour of a specific recipient who holds a postal current account. It is a pre-filled slip, meaning that some fields already contain the information needed for the payment. This type of slip is used to pay taxes, contributions, stamp duties, and other charges.
To fill in bollettino postale 896 22.pdf you need to enter the following details:
-
-
the recipient's current account number: the 12-digit number that identifies the party receiving the payment. This field is already pre-filled on the slip;
-
the payment amount: the sum you have to pay the recipient, written both in words and in figures. This field may be pre-filled or left blank on the slip;
-
the payee: the first and last name (or company name) of the party receiving the payment. This field is already pre-filled on the slip;
-
the payment reference (causale): the reason for the payment, given as a short description or an alphanumeric code. This field may be pre-filled or left blank on the slip;
-
the payer's personal details: your own personal information (name, surname, address). You fill in this field yourself on the slip.
-
-
Where to find bollettino postale 896 22.pdf?
-
You can obtain bollettino postale 896 22.pdf in several ways:
-
-
on the website of the party requesting the payment: many public and private bodies publish payment slips pre-filled with their details on their websites. You can download and print the slip from the site and then take it to a post office to pay it;
-
on the Poste Italiane website: you can use the Poste Italiane Bollettini Online service and choose the type of slip you need (pre-filled or blank). You can complete the required fields and print the slip from your computer, or pay online by credit card or bank account;
-
in the Postepay app: you can download the Postepay app on your smartphone and open the Bollettini section. Choose the type of slip you need (pre-filled or blank) and complete the required fields. You can also pay online with your Postepay card or other methods;
-
at Postamat ATMs: go to a Postamat ATM and select the Bollettini Postali option. Enter the required details and print the slip from the machine. You can also pay it with your Postamat card or other enabled cards;
-
at Poste Italiane self-service kiosks: go to a Poste Italiane self-service kiosk and select the Bollettini Postali option. Enter the required details and print the slip from the kiosk. You can also pay it with your Postamat card or other enabled cards.
-
-
How to pay bollettino postale 896 22.pdf online?
-
If you want to avoid queues at the post office, you can pay bollettino postale 896 22.pdf online through the following services:
-
-
Poste Italiane Bollettini Online: access the Bollettini Online service, choose the type of slip you need (pre-filled or blank), complete the required fields, and pay online by credit card or bank account;
-
Postepay app: download the Postepay app on your smartphone and open the Bollettini section. Choose the type of slip you need (pre-filled or blank), complete the required fields, and pay online with your Postepay card or other methods;
-
online banking: log in to your bank's online banking service and look for the option to pay postal slips. Enter the required details and pay online with your bank account or other enabled cards;
-
online payment services: you can use services such as PayPal, Satispay, Nexi Pay, etc. to pay postal slips. Link your bank account or credit card to these services and pay online with ease.
-
-
What are the advantages and disadvantages of bollettino postale 896 22.pdf?
-
Bollettino postale 896 22.pdf is a very widespread payment method in Italy. However, like anything else, it has advantages and disadvantages you should know about before using it:
-
-
Advantages:
-
-
it is a secure and traceable payment method that gives you proof of the payment;
-
it is simple and quick: you only need to fill in a few fields and take the slip to the post office or pay it online;
-
it is universal: you can pay any party that holds a postal current account;
-
it is cheap: it costs only the slip fee (1.50 euros) plus any bank or postal commissions.
-
-
-
Disadvantages:
-
-
it is an outdated payment method with little innovation, poorly suited to the needs of a digital society;
-
it is prone to human error, which can cause delays or problems with the payment;
-
it is limited: it does not let you pay by other means such as bank transfer or cash;
-
it is tied to post office opening hours, which can be inconvenient or inaccessible.
-
-
-
-
How to solve problems with bollettino postale 896 22.pdf?
-
Sometimes you may run into problems with bollettino postale 896 22.pdf. For example, you may have lost it, filled it in incorrectly, torn it, or never received it. In these cases, you need to know how to fix the situation. Here are some tips:
-
-
If you have lost the slip: you can ask the party that sent it to you for a copy, or download it from their website. You can also look up the slip's barcode on the Poste Italiane website and print it again.
-
If you made a mistake on the slip: you can correct the error with a black pen, drawing a line through the wrong entry. You can also cancel the incorrect slip and fill in a new one.
-
If you have torn the slip: you can tape it back together with clear adhesive tape and take it to the post office. You can also print a new copy from the Poste Italiane website or from the sender's website.
-
If you never received the slip: contact the party that was supposed to send it and ask for an explanation. You can also check whether it is available on their website or on the Poste Italiane website.
-
-
What are the alternatives to bollettino postale 896 22.pdf?
-
If you would rather not use bollettino postale 896 22.pdf for your payments, you can choose from several alternatives. Some of these are:
-
-
Bank transfer: a payment method that lets you move money from one bank account to another. You can make a transfer online, through your online banking service, or at a branch of your bank. You need the recipient's IBAN and the payment reference.
-
PagoPA: an electronic payment system that lets you pay taxes and public services simply and securely. You can access PagoPA through the website or app of the body requesting the payment, or through the portal www.pagopa.gov.it, and choose among several payment methods such as credit card, current account, Satispay, etc.
-
Credit card: a payment method that lets you pay with the money available in your current account or with credit granted by your bank. You can use a credit card to pay online, through the website or app of the party requesting the payment, or at enabled POS terminals.
-
Cash: the most traditional and simplest payment method, which consists of exchanging physical money between payer and payee. You can pay in cash at post offices, tobacconists, newsstands, and other affiliated shops.
-
-
Conclusion
-
Bollettino postale 896 22.pdf is one of the most widely used ways to make payments to public or private entities that hold a postal current account. It is a pre-filled slip used to pay taxes, contributions, stamp duties, and other charges. To use it, you fill in a few fields with the required details and take it to the post office or pay it online. Bollettino postale 896 22.pdf has advantages and disadvantages you should know about before choosing it. In addition, there are alternatives to bollettino postale 896 22.pdf that you can consider based on your needs and preferences.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md
deleted file mode 100644
index 5fac705814d0c8627255d0661104e8d82b8bed0f..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
Truck Simulator Ultimate Apk: A Realistic and Fun Truck Driving Game
-
If you are a fan of truck driving games, you might have heard of Truck Simulator Ultimate Apk, a new and exciting game that lets you experience the thrill of driving a truck across different countries and continents. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, and how to get para hilesi (a money cheat) from Android Oyun Club, a popular Turkish website for modded games.
-
What is Truck Simulator Ultimate Apk?
-
Truck Simulator Ultimate Apk is a simulation game developed by Zuuks Games, the same company that created Bus Simulator and Euro Truck Driver. The game was released in September 2021 and has already gained millions of downloads and positive reviews from players around the world. The game aims to provide a realistic and fun truck driving experience, with stunning graphics, realistic physics, and diverse gameplay options.
-
Truck Simulator Ultimate Apk has many features that make it stand out from other truck driving games. Here are some of them:
-
Realistic truck models and physics
-
The game features over 30 different truck models from famous brands such as Mercedes-Benz, Volvo, Scania, MAN, Renault, and more. Each truck has its own specifications, performance, and sound effects. The game also uses advanced physics engine to simulate the weight, speed, braking, steering, suspension, and damage of the trucks.
-
Customizable trucks and trailers
-
You can customize your trucks and trailers with various options such as paint, decals, wheels, lights, horns, exhausts, bumpers, spoilers, and more. You can also upgrade your trucks with different engines, transmissions, chassis, tires, and accessories. You can create your own unique truck style and show it off to other players.
-
Dynamic weather and day-night cycle
-
The game has a dynamic weather system that changes according to the location and time of the day. You can drive in sunny, rainy, snowy, foggy, or stormy conditions. You can also experience the day-night cycle that affects the visibility and traffic on the roads. You have to adapt your driving style to the changing weather and lighting conditions.
-
Various cargo types and delivery missions
-
The game offers a variety of cargo types such as containers, cars, logs, food, chemicals, livestock, and more. You have to load your cargo onto your trailer and deliver it to the destination safely and on time. You have to follow the traffic rules, avoid accidents, pay tolls, refuel your truck, rest when needed, and manage your budget. You can earn money and experience points by completing delivery missions.
-
-
Multiplayer mode and online ranking system
-
The game has a multiplayer mode that allows you to play with other players online. You can join or create a convoy with your friends or other players and drive together on the same map. You can chat with other players using voice or text messages. You can also compete with other players in the online ranking system based on your level, money earned, distance driven, cargo delivered, etc.
-
How to download and install Truck Simulator Ultimate Apk?
-
If you want to download and install Truck Simulator Ultimate Apk on your Android device, you can follow these simple steps:
-
Requirements and compatibility
-
Before you download and install the game, you need to make sure that your device meets the minimum requirements and is compatible with the game. The game requires Android 5.0 or higher, at least 3 GB of RAM, and 1.5 GB of free storage space. The game also supports 64-bit devices and controllers.
-
Download link and installation steps
-
You can download the game from the official Google Play Store by clicking on this link. Alternatively, you can also download the game from other sources such as APKPure or APKMirror, but make sure you download the latest version from a trusted website. After you download the game, you need to follow these steps to install it:
-
-
Go to your device settings and enable the option to install apps from unknown sources.
-
Locate the downloaded APK file and tap on it to start the installation process.
-
Follow the on-screen instructions and grant the necessary permissions to the game.
-
Wait for the installation to finish and launch the game from your app drawer or home screen.
-
-
Congratulations, you have successfully installed Truck Simulator Ultimate Apk on your device. You can now enjoy driving your truck across different countries and continents.
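If you downloaded the APK from a mirror site rather than Google Play, it is worth confirming that the file was not corrupted or tampered with before installing it. Below is a minimal sketch in Python 3 that compares the file's SHA-256 hash against a published checksum; both the file name and the expected hash are hypothetical placeholders, since neither the developer nor the mirrors mentioned in this article are confirmed to publish one.

```python
# Compare a downloaded APK's SHA-256 hash against a published checksum.
# The file name and the expected hash below are hypothetical placeholders.
import hashlib
from pathlib import Path

apk_path = Path("truck_simulator_ultimate.apk")
expected_sha256 = "0000000000000000000000000000000000000000000000000000000000000000"

digest = hashlib.sha256()
with apk_path.open("rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):  # read in 1 MiB chunks
        digest.update(chunk)

if digest.hexdigest() == expected_sha256:
    print("Checksum matches - the file appears intact.")
else:
    print("Checksum mismatch - do not install this file.")
```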
-
What is Android Oyun Club and how to get para hilesi?
-
If you want to enhance your gaming experience and get some extra benefits in Truck Simulator Ultimate Apk, you might be interested in Android Oyun Club and para hilesi. Let's see what they are and how to use them.
-
Android Oyun Club: a popular Turkish website for modded games
-
Android Oyun Club is a website that provides modded versions of various Android games, including Truck Simulator Ultimate Apk. A modded game is a game that has been modified or hacked to provide some advantages or features that are not available in the original game. For example, a modded game might have unlimited money, unlocked items, premium features, etc.
-
Para hilesi: a cheat that gives unlimited money in the game
-
Para hilesi is a Turkish term that means money cheat. It is a cheat that gives you unlimited money in Truck Simulator Ultimate Apk. With unlimited money, you can buy any truck, trailer, upgrade, or customization that you want without worrying about your budget. You can also skip some delivery missions that are too hard or boring for you.
-
How to use para hilesi in Truck Simulator Ultimate Apk?
-
If you want to use para hilesi in Truck Simulator Ultimate Apk, you need to download the modded version of the game from Android Oyun Club. You can find the link to the modded game here. After you download the modded game, you need to follow these steps to use para hilesi:
-
-
Delete or uninstall the original version of the game from your device.
-
Install the modded version of the game following the same steps as above.
-
Launch the modded game and create a new profile or load an existing one.
-
You will see that you have unlimited money in your account. You can use it to buy anything you want in the game.
-
-
Enjoy playing Truck Simulator Ultimate Apk with para hilesi from Android Oyun Club.
-
Conclusion
-
In this article, we have covered everything you need to know about Truck Simulator Ultimate Apk, a realistic and fun truck driving game. We have explained its features, how to download and install it, and how to get para hilesi from Android Oyun Club. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy trucking!
-
Frequently Asked Questions
-
-
Q: Is Truck Simulator Ultimate Apk free?
-
A: Yes, Truck Simulator Ultimate Apk is free to download and play. However, some items and features in the game may require real money purchases.
-
Q: Is Truck Simulator Ultimate Apk safe?
-
A: Yes, Truck Simulator Ultimate Apk is safe as long as you download it from a trusted source such as Google Play Store or APKPure. However, if you download it from other sources such as Android Oyun Club, you should be careful and scan it for viruses or malware before installing it.
-
Q: Is Truck Simulator Ultimate Apk realistic?
-
A: Yes, Truck Simulator Ultimate Apk is realistic in terms of graphics, physics, sound, and gameplay. The game features realistic truck models, weather effects, traffic rules, cargo types, and delivery missions. The game also simulates the challenges and risks of truck driving, such as fuel consumption, damage, fatigue, tolls, etc.
-
Q: How many countries and continents are available in Truck Simulator Ultimate Apk?
-
A: Truck Simulator Ultimate Apk currently offers 12 countries and 3 continents to explore. The countries are Germany, France, Italy, Spain, Turkey, UK, USA, Canada, Brazil, Mexico, Argentina, and Chile. The continents are Europe, North America, and South America. The game developers plan to add more countries and continents in the future updates.
-
Q: How can I play with my friends in Truck Simulator Ultimate Apk?
-
A: You can play with your friends in Truck Simulator Ultimate Apk by using the multiplayer mode. You can join or create a convoy with your friends or other players and drive together on the same map. You can also chat with them using voice or text messages. To use the multiplayer mode, you need to have an internet connection and a Truck Simulator Ultimate account.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md
deleted file mode 100644
index 4f4e42e632e18b544cb02ab393bce29333722c3f..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
What is APK Bolt and How to Use It?
-
Introduction
-
If you are looking for a convenient and cost-effective way to get around your city, you might want to try out APK Bolt. APK Bolt is an Android app that allows you to request a ride from a nearby driver and enjoy a low-cost ride to your destination. But what exactly is an APK file, and what is Bolt? In this article, we will explain what APK Bolt is, how it works, what its benefits are, how to download and install it, and how it compares with other transportation apps.
An APK file is a file format that is used to distribute and install applications on Android devices. APK stands for Android Package Kit, and it contains all the files and code that are needed for an app to run on your device. You can download APK files from various sources, such as the Google Play Store, third-party websites, or directly from the app developers. However, you need to enable the option to install apps from unknown sources in your device settings before you can install an APK file.
-
What is Bolt?
-
Bolt is a transportation app that was formerly known as Taxify. It was founded in 2013 in Estonia, and it operates in 45 countries and 400 cities around the world. Bolt's mission is to provide fast, reliable, and affordable transportation to millions of people, while also helping thousands of drivers support their families. Bolt offers different types of services, such as ride-hailing, car-sharing, scooter-sharing, food delivery, and electric bikes.
-
What is APK Bolt?
-
APK Bolt is the name of the Android app that you can use to access the Bolt services on your device. You can download the APK Bolt file from various sources, such as APKCombo, APKPure, or Uptodown. With APK Bolt, you can tap the button to order a ride, see the price of your ride before you order, use a range of safety features, pay inside the app or with cash, and leave a rating for your driver.
-
Benefits of Using APK Bolt
-
Fast and Affordable Rides
-
One of the main benefits of using APK Bolt is that you can get a comfortable, low-cost ride in minutes. You don't have to wait for a long time for a driver to pick you up, as there are thousands of drivers available 24/7. You also don't have to pay a lot for your ride, as APK Bolt offers competitive prices that are cheaper than other transportation apps. You can also save money by using promo codes, discounts, and offers that are regularly available on the app.
-
-
Safety Features
-
Another benefit of using APK Bolt is that you can use a range of safety features that ensure your security and peace of mind. For example, you can share details of your journey with your friends or family members, so they can track your location and status. You can also contact the customer support team or the emergency services in case you need any assistance or help. Moreover, you can see the ratings and reviews of your driver before you accept the ride, so you can choose the best option for you.
-
Flexible Payment Options
-
A third benefit of using APK Bolt is that you can choose from different payment options that suit your preference and convenience. You can pay inside the app using your credit or debit card, or you can also pay with cash, or use other methods such as PayPal, Google Pay, or Apple Pay. You can also tip your driver if you are satisfied with their service, and rate them after the ride.
-
How to Download and Install APK Bolt
-
Steps to Download APK Bolt
-
If you want to download APK Bolt on your Android device, you can follow these simple steps:
-
-
Go to one of the sources that offer the APK Bolt file, such as APKCombo, APKPure, or Uptodown.
-
Search for APK Bolt in the search bar, or browse the categories to find it.
-
Tap on the APK Bolt icon, and then tap on the download button.
-
Wait for the download to finish, and then locate the file in your device storage.
-
-
Steps to Install APK Bolt
-
Before you can install APK Bolt on your device, you need to enable the option to install apps from unknown sources. To do this, you can follow these steps:
-
-
Go to your device settings, and then tap on security or privacy.
-
Find the option that says "Unknown sources" or "Install unknown apps", and toggle it on.
-
Confirm your choice by tapping on OK or Allow.
-
-
Once you have enabled this option, you can install APK Bolt by following these steps:
-
-
Locate the APK Bolt file in your device storage, and tap on it.
-
Tap on Install, and wait for the installation to complete.
-
Tap on Open, and grant the necessary permissions to the app.
-
-
Steps to Request a Ride with APK Bolt
-
After you have installed APK Bolt on your device, you can start using it to request a ride. To do this, you can follow these steps:
-
-
Open the APK Bolt app, and sign up or log in with your phone number or email address.
-
Select your pickup location and destination by typing them in or using the map.
-
Select the type of ride you want, such as Bolt Lite, Bolt Comfort, or Bolt Green.
-
See the price of your ride before you order, and choose your payment method.
-
Tap on Request a Ride, and wait for a driver to accept your request.
-
See the details of your driver and their vehicle, and contact them if needed.
-
Enjoy your ride, and pay inside the app or with cash.
-
Leave a rating and a tip for your driver if you wish.
-
-
Comparison of APK Bolt with Other Transportation Apps
-
If you are wondering how APK Bolt compares with other transportation apps, such as Uber, Lyft, or Grab, here is a brief overview of their features and prices:
-
Uber
-
Uber is one of the most popular transportation apps in the world, operating in over 80 countries and 900 cities. Uber offers different types of services, such as UberX, UberXL, UberPool, UberBlack, UberEats, and more. Uber's main advantages are its global reach, its variety of options, and its user-friendly interface. However, Uber's main disadvantages are its high prices, its surge pricing during peak hours or high demand, and its controversies over safety and ethics.
-
Lyft
-
Lyft is another popular transportation app in the US and Canada, operating in over 600 cities. Lyft offers different types of services, such as Lyft Line, Lyft Plus, Lyft Premier, Lyft Lux, and more. Lyft's main advantages are its lower prices than Uber, its social and environmental initiatives, and its friendly drivers. However, Lyft's main disadvantages are its limited availability outside the US and Canada, its lack of options in some areas, and its lower quality of service in some cases.
-
Grab
-
Grab is the leading transportation app in Southeast Asia, operating in over 300 cities in 8 countries. Grab offers different types of services, such as GrabCar, GrabTaxi, GrabBike, GrabHitch, GrabExpress, and more. Grab's main advantages are its wide coverage in the region, its local knowledge and expertise, and its integration with other services such as food delivery, payments, and travel. However, Grab's main disadvantages are its high prices in some markets, its frequent cancellations by drivers, and its technical issues and glitches.
-
Table: Features and Prices of Different Transportation Apps
-
-
-
| App | Features | Prices |
| --- | --- | --- |
| APK Bolt | Fast and affordable rides; safety features; flexible payment options; available in 45 countries and 400 cities | Base fare: $1.00; per mile: $0.50; per minute: $0.10; minimum fare: $2.00; cancellation fee: $1.00 |
| Uber | Global reach; variety of options; user-friendly interface; available in over 80 countries and 900 cities | Base fare: $1.50; per mile: $1.00; per minute: $0.20; minimum fare: $5.00; cancellation fee: $5.00 |
| Lyft | Lower prices than Uber; social and environmental initiatives; friendly drivers; available in the US and Canada | Base fare: $1.00; per mile: $0.75; per minute: $0.15; minimum fare: $3.50; cancellation fee: $5.00 |
| Grab | Wide coverage in Southeast Asia; local knowledge and expertise; integration with other services; available in 8 countries and over 300 cities | Base fare: $1.50; per mile: $1.25; per minute: $0.25; minimum fare: $4.00; cancellation fee: $2.00 |
-
-
-
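To make the price columns easier to compare, here is a small illustration that turns the rates in the table into an estimated fare for a sample trip. It assumes the usual ride-hailing formula of base fare plus per-mile and per-minute charges, with the minimum fare applied when the total falls below it; the article lists the rates but does not state the formula, so treat this as a rough sketch rather than the apps' actual pricing, which also involves surge pricing, tolls, and other fees.

```python
# Estimate a fare for a sample trip using the rates from the table above.
# Assumes fare = base + per_mile * miles + per_minute * minutes, floored at
# the minimum fare; real apps add surge pricing, tolls, and service fees.
RATES = {
    "APK Bolt": {"base": 1.00, "per_mile": 0.50, "per_minute": 0.10, "minimum": 2.00},
    "Uber":     {"base": 1.50, "per_mile": 1.00, "per_minute": 0.20, "minimum": 5.00},
    "Lyft":     {"base": 1.00, "per_mile": 0.75, "per_minute": 0.15, "minimum": 3.50},
    "Grab":     {"base": 1.50, "per_mile": 1.25, "per_minute": 0.25, "minimum": 4.00},
}

def estimate_fare(app: str, miles: float, minutes: float) -> float:
    r = RATES[app]
    fare = r["base"] + r["per_mile"] * miles + r["per_minute"] * minutes
    return max(fare, r["minimum"])

# Example: a 5-mile, 15-minute ride with each service.
for app in RATES:
    print(f"{app}: ${estimate_fare(app, 5, 15):.2f}")
```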
Conclusion
-
In conclusion, APK Bolt is a great app that you can use to get a fast, reliable, and affordable ride to your destination. You can download the APK Bolt file from various sources, install it on your device, and start using it to request a ride from a nearby driver. You can also enjoy the benefits of using APK Bolt, such as safety features, flexible payment options, and competitive prices. You can also compare APK Bolt with other transportation apps, such as Uber, Lyft, or Grab, and see which one suits your needs better.
-
FAQs
-
Here are some frequently asked questions about APK Bolt:
-
-
Is APK Bolt safe? Yes, APK Bolt is safe to use, as it has a range of safety features that ensure your security and peace of mind. You can share details of your journey with your friends or family members, contact the customer support team or the emergency services if needed, and see the ratings and reviews of your driver before you accept the ride.
-
Is APK Bolt legal? Yes, APK Bolt is legal to use in most countries where it operates. However, you should check the local laws and regulations before you use APK Bolt in a new location, as some places may have restrictions or bans on ride-hailing services.
-
Is APK Bolt free? The app itself is free to download, but the rides are not: you pay for each ride according to its distance, time, and traffic. However, APK Bolt offers competitive prices that are cheaper than other transportation apps, and you can also save money by using promo codes, discounts, and offers that are regularly available on the app.
-
How can I contact APK Bolt? You can contact APK Bolt by using the in-app chat feature, or by sending an email to support@bolt.eu. You can also visit their website at https://bolt.eu/ or follow them on social media platforms such as Facebook, Twitter, Instagram, or YouTube.
-
How can I update APK Bolt? You can update APK Bolt by downloading the latest version of the APK file from the same source that you used to download it initially, and then installing it over the existing app. You can also check for updates within the app by tapping on the menu icon, and then tapping on Settings and About.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md
deleted file mode 100644
index 90e5daa9d7745470be5b9449157364c9ea1cfd47..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-
Download Car Traffic Racing Game: A Guide for Beginners
-
Do you love racing games? Do you want to experience the thrill of driving through busy traffic? Do you want to customize your own car and compete with other players online? If you answered yes to any of these questions, then you should download Car Traffic Racing Game, one of the best car racing games available on Google Play. In this article, we will tell you everything you need to know about this game, including its features, benefits, how to download and install it, how to play it, how to upgrade and customize your car, and how to join online multiplayer races. By the end of this article, you will be ready to hit the road and enjoy the ultimate car racing experience.
-
What is Car Traffic Racing Game?
-
Car Traffic Racing Game is a milestone in the genre of endless arcade racing games. It is developed by TOJ Games, a company that specializes in creating fun and addictive games for mobile devices. Car Traffic Racing Game lets you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can also participate in online races with other players from around the world. You can choose from over 40 different cars and five detailed environments, such as suburb, desert, snowy, rainy, and city night. You can also choose from five game modes, such as Endless, Two-Way, Time Trial, Police Chase, and Free Ride. You can enjoy stunning 3D graphics, smooth and realistic car handling, rich types of NPC traffic, basic customization through paint and wheels, online leaderboards and achievements, and more.
Car Traffic Racing Game has many features that make it stand out from other racing games. Some of these features are:
-
-
Stunning 3D graphics: The game has amazing 3D graphics that make you feel like you are driving in real life. You can see the details of the cars, the environments, the traffic, the weather effects, and more.
-
Smooth and realistic car handling: The game has a simple and intuitive control system that lets you steer your car with ease. You can tilt or touch your device to steer, touch the gas button to accelerate, and touch the brake button to slow down. You can also adjust the sensitivity of the steering and the camera angle.
-
40+ different cars to choose from: The game has a wide variety of cars that you can unlock and buy with the cash you earn from racing. You can choose from sports cars, muscle cars, trucks, buses, SUVs, and more. Each car has its own speed, acceleration, handling, braking, and price.
-
5 detailed environments: The game has five different environments that you can race in. Each environment has its own scenery, traffic density, weather condition, time of day, and difficulty level. You can race in suburb, desert, snowy, rainy, or city night.
-
5 game modes: The game has five different game modes that you can play. Each mode has its own objective, challenge, and reward. You can play Endless mode where you drive as long as you can without crashing; Two-Way mode where you drive in the opposite direction of the traffic; Time Trial mode where you race against the clock; Police Chase mode where you evade the police cars; and Free Ride mode where you explore the environment at your own pace.
-
Rich types of NPC traffic: The game has a realistic and diverse traffic system that makes the racing more challenging and fun. You can encounter cars, trucks, buses, motorcycles, vans, and more on the road. You can also see traffic lights, road signs, speed cameras, and more.
-
Basic customization through paint and wheels: The game allows you to customize your car with different colors and wheels. You can change the paint of your car body, roof, hood, spoiler, and rims. You can also choose from different types of wheels, such as alloy, chrome, or steel.
-
Online leaderboards and achievements: The game has a global ranking system that lets you compare your scores and achievements with other players. You can see your rank, your best score, your best distance, your best speed, and more. You can also unlock various achievements by completing different tasks in the game.
-
-
The benefits of playing Car Traffic Racing Game
-
Playing Car Traffic Racing Game is not only fun but also beneficial for you. Some of the benefits are:
-
-
It improves your concentration and reflexes: Playing Car Traffic Racing Game requires you to pay attention to the road, the traffic, the obstacles, and the other cars. You also need to react quickly to avoid collisions and accidents. This helps you improve your concentration and reflexes, which are useful skills in real life.
-
It boosts your mood and reduces stress: Playing Car Traffic Racing Game gives you a sense of excitement and satisfaction. You can enjoy the adrenaline rush of driving fast, the thrill of overtaking other cars, the joy of earning cash and rewards, and the pride of unlocking new cars and achievements. This helps you boost your mood and reduce stress, which are important for your mental health.
-
It enhances your creativity and imagination: Playing Car Traffic Racing Game allows you to express your personality and style through your car. You can customize your car with different colors and wheels, and make it look unique and cool. You can also imagine yourself as a professional racer or a fugitive on the run, and create your own stories and scenarios in the game. This helps you enhance your creativity and imagination, which are valuable for your personal growth.
-
-
How to download and install Car Traffic Racing Game?
-
Downloading and installing Car Traffic Racing Game is easy and fast. Here are the requirements and steps for doing so:
-
Download Traffic Racer game for Android
-How to play Traffic Tour online for free
-Best car racing games with traffic on PC
-Download Traffic Games from CrazyGames website
-Traffic Racer tips and tricks to earn cash and upgrade cars
-Traffic Tour review and gameplay features
-Car traffic racing game with realistic graphics and physics
-Download Traffic Racer mod apk with unlimited money
-How to install Traffic Tour on Windows 10
-Car racing games with traffic and police chase mode
-Download Traffic Games for iOS devices
-Traffic Racer vs Traffic Tour: which one is better?
-Car traffic racing game with different environments and weather
-Download Traffic Racer for Chromebook
-How to play Traffic Tour with friends online
-Car racing games with traffic and customization options
-Download Traffic Games for Mac OS
-Traffic Racer cheats and hacks to unlock all cars
-How to stream Traffic Tour on Twitch or YouTube
-Car traffic racing game with leaderboards and achievements
-Download Traffic Racer for Kindle Fire
-How to play Traffic Tour offline without internet connection
-Car racing games with traffic and time trial mode
-Download Traffic Games for Linux
-Traffic Racer updates and new features
-How to play Traffic Tour with a controller or a steering wheel
-Car traffic racing game with different camera angles and views
-Download Traffic Racer for Samsung Galaxy devices
-How to play Traffic Tour on a big screen TV or a projector
-Car racing games with traffic and free ride mode
-Download Traffic Games for Nokia phones
-Traffic Racer ratings and reviews from users and critics
-How to play Traffic Tour on a VR headset or a 3D monitor
-Car traffic racing game with different game modes and challenges
-Download Traffic Racer for Huawei devices
-How to play Traffic Tour on a laptop or a desktop computer
-Car racing games with traffic and sound effects and music
-Download Traffic Games for Sony Xperia devices
-Traffic Racer FAQs and troubleshooting tips
-How to play Traffic Tour on a tablet or a smartphone
-Car traffic racing game with different car types and models
-Download Traffic Racer for LG devices
-How to play Traffic Tour on a browser or a web app
-Car racing games with traffic and realistic car handling and controls
-Download Traffic Games for Motorola devices
-Traffic Racer system requirements and compatibility issues
-How to play Traffic Tour on a smartwatch or a wearable device
-Car traffic racing game with different languages and subtitles
-
The requirements for downloading Car Traffic Racing Game
-
To download and install Car Traffic Racing Game, you need to have a compatible device and a stable internet connection. The game is compatible with Android devices that have Android 4.4 or higher as their operating system. The game size is about 100 MB, so make sure you have enough storage space on your device.
-
The steps for downloading and installing Car Traffic Racing Game
-
To download and install Car Traffic Racing Game, follow these steps:
Open the Google Play Store on your device, search for "Car Traffic Racing Game" by TOJ Games, and tap on the "Install" button to start downloading the game.
-
Wait for the download to finish and then tap on the "Open" button to launch the game.
-
Enjoy playing Car Traffic Racing Game!
-
-
How to play Car Traffic Racing Game?
-
Playing Car Traffic Racing Game is simple and fun. Here are some tips on how to play it:
-
The modes of Car Traffic Racing Game
-
The game has five modes that you can choose from: Endless, Two-Way, Time Trial, Police Chase, and Free Ride. Each mode has its own objective, challenge, and reward.
-
-
Endless mode: In this mode, you drive as long as you can without crashing or running out of fuel. The longer you drive, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.
-
Two-Way mode: In this mode, you drive in the opposite direction of the traffic. The more cars you overtake, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.
-
Time Trial mode: In this mode, you race against the clock. You have a limited amount of time to reach the checkpoints and extend your time. The faster you drive, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.
-
Police Chase mode: In this mode, you evade the police cars that are chasing you. The more police cars you escape, the more cash you earn. You can also collect coins and power-ups on the road to boost your score and performance.
-
Free Ride mode: In this mode, you explore the environment at your own pace. You can drive anywhere you want, without any traffic or police. You can also collect coins and power-ups on the road to boost your score and performance.
-
-
The controls of Car Traffic Racing Game
-
The game has a simple and intuitive control system that lets you steer your car with ease. You can choose from two options: tilt or touch. You can also adjust the sensitivity of the steering and the camera angle in the settings menu.
-
-
Tilt: In this option, you tilt your device left or right to steer your car. You touch the gas button on the right side of the screen to accelerate, and touch the brake button on the left side of the screen to slow down.
-
Touch: In this option, you touch the left or right side of the screen to steer your car. You touch the gas button on the right side of the screen to accelerate, and touch the brake button on the left side of the screen to slow down.
-
-
The tips and tricks for Car Traffic Racing Game
-
The game is easy to play but hard to master. Here are some tips and tricks that can help you improve your skills and enjoy the game more:
-
-
Drive faster to earn more cash: The faster you drive, the more cash you earn. You can use the cash to buy new cars, upgrade your car, or customize your car. However, driving faster also means more risk of crashing, so be careful and avoid collisions.
-
Overtake other cars closely to get bonus cash: The closer you overtake other cars, the more bonus cash you get. You can see a yellow bar on top of your screen that shows how much bonus cash you are earning. However, overtaking closely also means more risk of crashing, so be careful and avoid collisions.
-
Collect coins and power-ups on the road: The game has various coins and power-ups that you can collect on the road. Coins can increase your score and cash, while power-ups can give you different effects, such as speed boost, nitro boost, magnet, shield, or fuel refill. However, some coins and power-ups may be hard to reach or hidden behind obstacles, so be careful and avoid collisions.
-
Use nitro boost wisely: The game has a nitro boost feature that lets you drive faster for a short period of time. You can activate it by touching the nitro button on the bottom right corner of the screen. However, nitro boost is limited and needs time to recharge, so use it wisely and save it for when you need it most.
-
Change lanes frequently: The game has multiple lanes that you can switch between by steering your car left or right. Changing lanes frequently can help you avoid traffic, find coins and power-ups, overtake other cars, or escape police cars. However, changing lanes frequently also means more risk of crashing, so be careful and avoid collisions.
-
-
How to upgrade and customize your car in Car Traffic Racing Game?
-
The game allows you to upgrade and customize your car with different options. Here are some details on how to do so:
-
The currency and rewards in Car Traffic Racing Game
-
The game has two types of currency: cash and diamonds. Cash is earned by playing the game modes, while diamonds are earned by watching ads or buying them with real money. You can use cash to buy new cars or upgrade your car's speed, acceleration, handling, or braking. You can use diamonds to buy premium cars or customize your car's paint or wheels.
-
The game also has various rewards that you can get by playing the game modes or completing achievements. Rewards include coins, power-ups, fuel refills, nitro refills, or free cars.
-
The options for upgrading and customizing your car in Car Traffic Racing Game
-
The game has a garage menu where you can upgrade and customize your car. To start an online race, tap on the "Start" button. The game will show you the countdown and then the race will start.
-
Drive your car as fast and as far as you can, while avoiding traffic, obstacles, and other players. You can see your rank, distance, speed, and overtakes on the top of the screen. You can also see the other players' names, cars, and positions on the map on the bottom right corner of the screen.
-
When the race is over, the game will show you the results and the rewards. You can see your rank, score, cash, diamonds, and achievements. You can also see the other players' ranks, scores, and cars.
-
Tap on the "Continue" button to return to the online menu. You can choose to play another race or exit the online mode.
-
-
Conclusion
-
Car Traffic Racing Game is a fun and addictive game that lets you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can also join online races with other players from around the world. The game has stunning 3D graphics, smooth and realistic car handling, 40+ different cars to choose from, 5 detailed environments, 5 game modes, rich types of NPC traffic, basic customization through paint and wheels, online leaderboards and achievements, and more. If you are looking for a game that can challenge your skills, boost your mood, and enhance your creativity, then you should download Car Traffic Racing Game today. You will not regret it!
-
FAQs
-
Here are some frequently asked questions about Car Traffic Racing Game:
-
-
Q: How can I get more diamonds in Car Traffic Racing Game?
-
A: You can get more diamonds in Car Traffic Racing Game by watching ads or buying them with real money. You can also get diamonds by completing daily and weekly challenges or unlocking achievements in online mode.
-
Q: How can I unlock new cars in Car Traffic Racing Game?
-
A: You can unlock new cars in Car Traffic Racing Game by reaching certain ranks or completing certain challenges in online mode. You can also buy new cars with cash or diamonds in the garage menu.
-
Q: How can I change the camera angle in Car Traffic Racing Game?
-
A: You can change the camera angle in Car Traffic Racing Game by tapping on the camera icon on the top right corner of the screen. You can choose from four different camera angles: behind, top-down, hood, or cockpit.
-
Q: How can I pause or exit the game in Car Traffic Racing Game?
-
A: You can pause or exit the game in Car Traffic Racing Game by tapping on the pause icon on the top left corner of the screen. You can resume or restart the game by tapping on the resume or restart buttons. You can also exit the game by tapping on the exit button.
-
Q: How can I contact the developer of Car Traffic Racing Game?
-
A: You can contact the developer of Car Traffic Racing Game by sending an email to tojgames@gmail.com or visiting their website at www.tojgames.com. You can also follow them on Facebook or Twitter for updates and news.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md
deleted file mode 100644
index 8dc68f303ff40971d5ae77ab0e8f4331c77ca81e..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
-
-
-
Cat Simulator: Annual Life Kitty Pet Mod APK
-
Have you ever wondered what it would be like to live as a cat? To explore a vast world full of adventures, mysteries, and fun? To interact with other animals and make friends or enemies? To customize your kitty with different outfits and accessories? If you answered yes to any of these questions, then you should try Cat Simulator: Annual Life Kitty Pet Mod APK, a game that lets you experience all that and more!
-
What is Cat Simulator: Annual Life Kitty Pet Mod APK?
-
Cat Simulator: Annual Life Kitty Pet Mod APK is a modified version of Cat Simulator : Kitties Family, a game developed by Avelog Games. In this game, you can choose your kitty from different breeds and colors, and then explore a beautiful 3D world full of different locations, such as a farm, a forest, a lake, and more. You can interact with other animals, such as dogs, cows, chickens, and even other cats. You can also complete various quests and challenges, such as catching mice, stealing food, destroying objects, and more. You can earn coins and rewards for your achievements, and use them to buy new items and accessories for your kitty. You can also unlock new breeds and colors as you progress in the game.
Cat Simulator: Annual Life Kitty Pet Mod APK is different from the original game in that it gives you access to unlimited coins, unlocked items, and other features that are not available in the original version. This means that you can enjoy the game without any limitations or restrictions. You can customize your kitty however you want, explore the world without any boundaries, and have more fun and excitement.
-
How to download and install Cat Simulator: Annual Life Kitty Pet Mod APK?
-
Downloading and installing Cat Simulator: Annual Life Kitty Pet Mod APK is very easy and simple. Just follow these steps:
-
-
Click on the download button below to get the APK file of the modded version of the game.
-
Once the download is complete, locate the file on your device and tap on it to start the installation process.
-
Allow the installation of unknown sources if prompted by your device.
-
Wait for the installation to finish and then launch the game from your app drawer or home screen.
-
Enjoy playing Cat Simulator: Annual Life Kitty Pet Mod APK with unlimited coins and unlocked items!
What are the benefits of Cat Simulator: Annual Life Kitty Pet Mod APK?
-
Cat Simulator: Annual Life Kitty Pet Mod APK has many benefits that make it better than the original game. Here are some of them:
-
-
You get unlimited coins that you can use to buy anything you want in the game.
-
You get all the items and accessories unlocked from the start, so you can customize your kitty with different outfits, hats, glasses, collars, etc.
-
You get all the breeds and colors unlocked from the start, so you can choose your kitty from a variety of options.
-
You get to play the game without any ads or interruptions.
-
You get to play the game without any bugs or glitches.
-
-
What are the drawbacks of Cat Simulator: Annual Life Kitty Pet Mod APK?
-
Cat Simulator: Annual Life Kitty Pet Mod APK also has some drawbacks that you should be aware of before downloading it. Here are some of them:
-
-
You may face compatibility issues with some devices or Android versions.
-
You may face security risks from downloading an unofficial version of the game from unknown sources.
-
You may lose your progress or data if you uninstall the game or switch to another device.
-
You may not be able to play online or with other players who have the original version of the game.
-
You may not be able to receive updates or new features from the developers of the game.
-
-
How to play Cat Simulator: Annual Life Kitty Pet Mod APK?
-
Playing Cat Simulator: Annual Life Kitty Pet Mod APK is very easy and fun. You just need to follow these steps:
-
Choose your kitty
-
The first thing you need to do is choose your kitty from different breeds and colors. You can swipe left or right on the screen to see the available options. You can also tap on the customize button to change your kitty's appearance, such as its eyes, nose, ears, tail, etc. You can also tap on the dress up button to put on different items and accessories on your kitty, such as hats, glasses, collars, etc. You can save your kitty's look by tapping on the save button.
-
cat simulator 2023: live as a kitty in this pet game mod apk
-cat simulator: family life - adopt and raise kitties mod apk
-cat simulator: farm adventure - explore the kitty world mod apk
-cat simulator: online - play with other kitties and pets mod apk
-cat simulator: realistic 3D - experience the kitty life mod apk
-cat simulator: ultimate - create your own kitty family mod apk
-cat simulator: wild life - survive as a feral kitty mod apk
-cat simulator: winter edition - enjoy the snowy kitty fun mod apk
-cute kitty cat simulator: pet care and dress up mod apk
-fluffy cat simulator: cuddle and play with your kitty mod apk
-funny cat simulator: make your kitty do hilarious things mod apk
-happy cat simulator: feed and pamper your kitty mod apk
-kawaii cat simulator: decorate your kitty's home mod apk
-lazy cat simulator: relax and nap with your kitty mod apk
-magic cat simulator: cast spells and explore the kitty world mod apk
-my cat simulator: virtual pet - adopt and love your kitty mod apk
-my talking kitty cat simulator: chat and play with your pet mod apk
-naughty cat simulator: prank and annoy your owner mod apk
-neon cat simulator: glow in the dark with your kitty mod apk
-pocket cat simulator: carry your kitty everywhere mod apk
-pregnant cat simulator: take care of your expecting kitty mod apk
-rainbow cat simulator: enjoy the colorful kitty fun mod apk
-robot cat simulator: transform and fight with your kitty mod apk
-scary cat simulator: spook and haunt with your kitty mod apk
-space cat simulator: travel the galaxy with your kitty mod apk
-super cat simulator: be a hero with your kitty mod apk
-talking tom cat simulator: mimic and repeat with your pet mod apk
-tiny cat simulator: shrink and explore the kitty world mod apk
-unicorn cat simulator: fly and sparkle with your kitty mod apk
-warrior cat simulator: battle and hunt with your clan mod apk
-
Explore the world
-
The next thing you need to do is explore the world around you. You can move your kitty by using the joystick on the left side of the screen. You can also jump by tapping on the jump button on the right side of the screen. You can see your health bar and coin counter at the top of the screen. You can also see your map and quest list at the bottom of the screen. You can tap on them to see more details. You can explore different locations in the game, such as a farm, a forest, a lake, and more. You can find various objects and items in each location that you can interact with by tapping on them.
-
Interact with other animals
-
Another thing you can do is interact with other animals in the game. You can find different animals in each location, such as dogs, cows, chickens, and even other cats. You can tap on them to see their names and moods. You can also tap on the interact button to do various actions with them, such as play, fight, cuddle, etc. You can also see their health bars and relationship bars at the top of the screen. You can make friends or enemies with other animals depending on your actions. You can also join a cat family or clan by finding a mate and having kittens.
-
Complete quests and challenges
-
One more thing you can do is complete quests and challenges in the game. You can see your quest list at the bottom of the screen. You can tap on it to see the details of each quest. You can also see the rewards for completing each quest, such as coins, stars, items, etc. You can complete various quests and challenges in the game, such as catching mice, stealing food, destroying objects, and more. You can also see your progress and achievements in the game by tapping on the menu button at the top left corner of the screen.
-
Upgrade your kitty
-
The last thing you can do is upgrade your kitty in the game. You can use your coins to buy new items and accessories for your kitty in the shop. You can also use your stars to unlock new breeds and colors for your kitty in the gallery. You can also use your coins to upgrade your kitty's skills and abilities, such as speed, stealth, strength, etc. You can also use your coins to buy new homes and furniture for your kitty in the home menu.
-
Tips and tricks for Cat Simulator: Annual Life Kitty Pet Mod APK
-
Here are some tips and tricks that will help you play Cat Simulator: Annual Life Kitty Pet Mod APK better:
-
Use stealth mode
-
One tip is to use stealth mode to sneak up on other animals and avoid detection. You can activate stealth mode by tapping on the stealth button on the right side of the screen. When you are in stealth mode, you will become invisible and silent to other animals. You can use this mode to surprise attack other animals or to escape from danger. However, be careful not to bump into other animals or objects while in stealth mode, as this will break your stealth and alert other animals.
-
Collect all the stars
-
Another tip is to collect all the stars that are hidden in each location. You can find these stars by looking around carefully or by using your map. These stars are very valuable, as they can be used to unlock new items and breeds for your kitty. There are 20 stars in each location, so try to find them all and collect them.
-
Watch ads for extra coins
-
A final tip is to watch ads for extra coins if you need more money in the game. You can watch ads by tapping on the watch ad button at the top right corner of the screen. You will get 100 coins for each ad you watch. This is a good way to get more coins for free without spending any real money.
-
Conclusion
-
Cat Simulator: Annual Life Kitty Pet Mod APK is a fun and exciting game that lets you live as a cat in a 3D world full of adventures and interactions. You can choose your kitty from different breeds and colors, explore different locations, interact with other animals, complete quests and challenges, upgrade your kitty, and more. You can also enjoy unlimited coins and unlocked items with this modded version of the game.
-
If you love cats and want to experience their life in a realistic and immersive way, then you should download Cat Simulator: Annual Life Kitty Pet Mod APK today and start playing!
-
FAQs
-
-
Q: Is Cat Simulator: Annual Life Kitty Pet Mod APK safe to download?
-
A: Yes, Cat Simulator: Annual Life Kitty Pet Mod APK is safe to download as long as you get it from a trusted source. However, you should always be careful when downloading any modded or unofficial version of a game from unknown sources, as they may contain viruses or malware that could harm your device.
-
Q: How do I update Cat Simulator: Annual Life Kitty Pet Mod APK?
-
A: Unfortunately, you cannot update Cat Simulator: Annual Life Kitty Pet Mod APK from the Google Play Store or from the developers of the game. You will have to download a new version of the modded game from another source whenever there is an update available.
-
Q: Can I play Cat Simulator: Annual Life Kitty Pet Mod APK online or with other players?
-
A: No, you cannot play Cat Simulator: Annual Life Kitty Pet Mod APK online or with other players who have the original version of the game. You can only play the modded game offline and by yourself.
-
Q: What are the best breeds and colors for my kitty in Cat Simulator: Annual Life Kitty Pet Mod APK?
-
A: The best breeds and colors for your kitty in Cat Simulator: Annual Life Kitty Pet Mod APK depend on your personal preference and style. You can choose from a variety of options, such as Persian, Siamese, Bengal, Maine Coon, etc. You can also choose from different colors, such as black, white, orange, gray, etc. You can mix and match different breeds and colors to create your unique kitty.
-
Q: How do I save my progress and data in Cat Simulator: Annual Life Kitty Pet Mod APK?
-
A: You can save your progress and data in Cat Simulator: Annual Life Kitty Pet Mod APK by tapping on the menu button at the top left corner of the screen and then tapping on the save button. You can also load your saved data by tapping on the load button. However, be careful not to uninstall the game or switch to another device, as this may cause you to lose your progress and data.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md b/spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md
deleted file mode 100644
index 8456a893bfca360f3de155488d9452cf45ee5a7b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md
+++ /dev/null
@@ -1,200 +0,0 @@
-
-
Download ibis Paint X Mod APK: A Versatile Drawing App for Android
-
If you are looking for a drawing app that provides a smooth and comfortable drawing experience with over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, and various ruler and clipping mask features, then you should try ibis Paint X. And if you want to enjoy all the premium features of this app for free, then you should download ibis Paint X Mod APK. In this article, we will tell you what is ibis Paint X, what is ibis Paint X Mod APK, how to download and install it, and what are some alternatives to it.
-
What is ibis Paint X?
-
ibis Paint X is a popular and versatile drawing app downloaded more than 280 million times in total as a series, which provides over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, recording drawing processes, stroke stabilization feature, various ruler features such as radial line rulers or symmetry rulers, and clipping mask features. It is an app that allows you to create stunning digital art and comics on your Android device. You can also share your drawing process as a video and learn from other users' drawing techniques on the community site "ibispaint.com".
Brushes: You can choose from over 15000 kinds of brushes including dip pens, felt tip pens, digital pens, air brushes, fan brushes, flat brushes, pencils, oil brushes, charcoal brushes, crayons and stamps. You can also adjust various brush parameters such as starting/ending thickness, starting/ending opacity, and initial/final brush angle. You can also use quick sliders to quickly adjust brush thickness and opacity. You can also see real time brush previews.
-
Layers: You can add as many layers as you need with no limit. You can also set layer parameters such as layer opacity, alpha blending, adding, subtracting, and multiplying. You can also use a handy clipping feature for clipping images. You can also use various layer commands such as layer duplication, import from the photo library, horizontal inversion, vertical inversion, layer rotation, layer moving, and zooming in/out. You can also set layer names to distinguish different layers.
-
Materials: You can access over 15000 materials in both color and monotone, including traditional Japanese backdrops, patterns, background tones, speech bubbles, line effects, and more.
-
Fonts: You can use over 1000 fonts for adding text to your drawings. You can also adjust font size, color, alignment, spacing, rotation, and more.
-
Filters: You can apply over 80 filters to your drawings such as blurring, color balance, gradation or ones generating anime-like or manga-like backgrounds from imported images.
-
Screentones: You can use over 46 screentones for creating manga-style drawings. You can also adjust screentone size, angle, density, and more.
-
Blending modes: You can use over 27 blending modes for creating various effects on your drawings, such as multiply, screen, overlay, darken, lighten, color dodge, color burn, hard light, soft light, difference, exclusion, hue, saturation, color, and luminosity (a minimal sketch of how two of these modes combine pixel values follows this list).
-
Rulers: You can use various ruler features such as radial line rulers or symmetry rulers to assist your drawing. You can also draw a line that follows the direction of the line drawn by you beforehand by using a forced entry/exit ruler.
-
Clipping mask features: You can clip multiple layers with a single layer. You can also invert the clipping mask and exclude the clipped area.
-
Recording drawing processes: You can record your drawing process and save it as a video. You can also export your video in high resolution and share it on social media or the community site "ibispaint.com".
-
Stroke stabilization feature: You can stabilize your strokes by using a stabilization slider. The larger the value, the smoother the stroke will be.
-
Dark mode: You can switch to dark mode to reduce eye strain and save battery life.
-
Prime membership: You can become a prime member by paying a monthly fee and enjoy the following benefits: no ads in the app, access to prime materials, access to prime fonts, tone curve filter, gradation map filter, cloud filter, and more.
-
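For readers curious about what a blending mode actually does under the hood, here is a minimal, generic sketch of two of the modes named above, multiply and screen. These are the standard formulas for those modes applied to normalized pixel values, not ibis Paint X's own source code.

```python
import numpy as np

def multiply(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    # Multiply darkens: blending with white (1.0) leaves the base unchanged, black (0.0) gives black.
    return base * blend

def screen(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    # Screen lightens: it is multiply applied to the inverted channels, then inverted back.
    return 1.0 - (1.0 - base) * (1.0 - blend)

base_layer = np.array([0.8, 0.5, 0.2])   # an orange-ish pixel, RGB in [0, 1]
blend_layer = np.array([0.5, 0.5, 0.5])  # 50% grey

print(multiply(base_layer, blend_layer))  # [0.4  0.25 0.1 ] -- darker than the base
print(screen(base_layer, blend_layer))    # [0.9  0.75 0.6 ] -- lighter than the base
```

The other modes in the list (overlay, color dodge, difference, and so on) follow the same pattern: a per-channel formula that decides how the upper layer's values modify the lower layer's.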
-
Benefits of ibis Paint X
-
Some of the benefits of ibis Paint X are:
-
-
Easy to use: ibis Paint X has a user-friendly interface that allows you to easily access all the features and tools. You can also customize your toolbar and shortcut settings according to your preference.
-
Creative and fun: ibis Paint X lets you unleash your creativity and have fun with drawing. You can create various kinds of art and comics with different styles and effects. You can also learn from other users' drawing techniques by watching their videos or browsing their artworks on the community site "ibispaint.com".
-
Affordable and reliable: ibis Paint X is free to download and use. You can also enjoy most of the features without paying anything. If you want to support the developers and get more features, you can become a prime member for a reasonable price. ibis Paint X is also regularly updated and improved to provide you with the best drawing experience.
-
-
What is ibis Paint X Mod APK?
-
ibis Paint X Mod APK is a modified version of ibis Paint X that allows you to enjoy all the premium features of the app for free. You don't need to pay for the prime membership or watch ads to access the prime materials, fonts, filters, and more. You can also remove the watermark from your videos and export them in high resolution. With ibis Paint X Mod APK, you can have unlimited fun and creativity with drawing.
-
Features of ibis Paint X Mod APK
-
Some of the features of ibis Paint X Mod APK are:
-
-
All premium features unlocked: You can access all the premium features of ibis Paint X without paying anything. You can use over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, tone curve filter, gradation map filter, cloud filter, and more.
-
No ads: You don't need to watch ads to use the app or access the prime materials and fonts. You can enjoy a smooth and uninterrupted drawing experience.
-
No watermark: You don't need to worry about the watermark on your videos. You can export your videos without any watermark and share them with your friends or followers.
-
High resolution export: You can export your videos in high resolution up to 4K. You can also adjust the frame rate and quality of your videos according to your preference.
-
-
Benefits of ibis Paint X Mod APK
-
Some of the benefits of ibis Paint X Mod APK are:
-
-
Saves money: You don't need to spend money on the prime membership or buy any in-app purchases. You can get all the premium features for free with ibis Paint X Mod APK.
-
Saves time: You don't need to waste time on watching ads or waiting for them to finish. You can use the app without any interruption or delay.
-
Saves storage space: You don't need to download any additional files or updates to use ibis Paint X Mod APK. You can download the app once and enjoy it forever.
-
Enhances creativity: You can use all the features and tools of ibis Paint X without any limitation or restriction. You can experiment with different brushes, materials, fonts, filters, screentones, blending modes, and more. You can create amazing digital art and comics with ibis Paint X Mod APK.
-
-
How to Download and Install ibis Paint X Mod APK?
-
If you want to download and install ibis Paint X Mod APK on your Android device, you need to follow these simple steps:
-
Steps to Download and Install ibis Paint X Mod APK
-
-
Download the APK file: You need to download the APK file of ibis Paint X Mod APK from a trusted source. You can use the link below to download the latest version of ibis Paint X Mod APK.
-
Enable unknown sources: You need to enable unknown sources on your device to install the APK file. You can do this by going to Settings > Security > Unknown Sources and turning it on.
-
Install the APK file: You need to locate the downloaded APK file on your device and tap on it to install it. You may need to grant some permissions to the app during the installation process.
-
Launch the app: You need to launch the app by tapping on its icon on your home screen or app drawer. You can now enjoy all the premium features of ibis Paint X for free.
-
-
Tips to Use ibis Paint X Mod APK
-
Some of the tips to use ibis Paint X Mod APK are:
-
-
Watch tutorials: If you are new to ibis Paint X or want to learn more about its features and tools, you can watch tutorials on the app or on YouTube. You can also visit the official website of ibis Paint X for more information and support.
-
Join the community: If you want to share your artworks, get feedback, or learn from other users, you can join the community site "ibispaint.com". You can also follow ibis Paint X on social media platforms such as Facebook, Twitter, Instagram, and TikTok.
-
Back up your data: If you want to save your drawings, videos, materials, fonts, and settings, you can back up your data to the cloud or on your device. You can do this by going to Settings > Backup/Restore > Backup Data or Restore Data.
-
-
Alternatives to ibis Paint X Mod APK
-
If you are looking for some alternatives to ibis Paint X Mod APK, you can try these apps:
-
download ibis paint x mod apk premium unlocked
-download ibis paint x mod apk latest version
-download ibis paint x mod apk for android
-download ibis paint x mod apk free
-download ibis paint x mod apk no ads
-download ibis paint x mod apk happymod
-download ibis paint x mod apk 10.1.3
-download ibis paint x mod apk unlimited brushes
-download ibis paint x mod apk pro
-download ibis paint x mod apk full version
-download ibis paint x mod apk with prime membership
-download ibis paint x mod apk 2023
-download ibis paint x mod apk for pc
-download ibis paint x mod apk revdl
-download ibis paint x mod apk rexdl
-download ibis paint x mod apk 10.0.10
-download ibis paint x mod apk without watermark
-download ibis paint x mod apk for ios
-download ibis paint x mod apk with all features
-download ibis paint x mod apk 9.1.0
-download ibis paint x mod apk 8.1.1
-download ibis paint x mod apk 7.1.0
-download ibis paint x mod apk 6.4.0
-download ibis paint x mod apk 5.6.1
-download ibis paint x mod apk 4.3.2
-how to download ibis paint x mod apk
-where to download ibis paint x mod apk
-best site to download ibis paint x mod apk
-safe way to download ibis paint x mod apk
-easy steps to download ibis paint x mod apk
-benefits of downloading ibis paint x mod apk
-features of downloading ibis paint x mod apk
-tips and tricks for downloading ibis paint x mod apk
-reviews of downloading ibis paint x mod apk
-alternatives to downloading ibis paint x mod apk
-problems with downloading ibis paint x mod apk
-solutions for downloading ibis paint x mod apk
-guide for downloading ibis paint x mod apk
-tutorial for downloading ibis paint x mod apk
-video for downloading ibis paint x mod apk
Adobe Photoshop Sketch: A simple and expressive drawing app that lets you create realistic sketches and paintings with various brushes, pencils, pens, markers, and more.
-
- Layer support - Custom brushes - Adobe Creative Cloud integration - Perspective grids - Shape stencils - No ads or in-app purchases
-
-
-
Comparison of Alternatives to ibis Paint X Mod APK
-
Here is a comparison of the alternatives to ibis Paint X Mod APK based on some criteria:
-
-
-
| Criteria | MediBang Paint | Procreate Pocket | SketchBook | Clip Studio Paint | Adobe Photoshop Sketch |
| --- | --- | --- | --- | --- | --- |
| Price | Free | $4.99 | Free | $0.99/month or $9.49/year | Free |
| Rating | 4.5/5.0 | 4.7/5.0 | 4.3/5.0 | 4.6/5.0 | 4.2/5.0 |
| Downloads | 10M+ | 1M+ | 10M+ | 10M+ | 10M+ |
| User reviews | "Great app for beginners and professionals alike. It has a lot of features and tools that are easy to use and customize." | "Best drawing app ever. It has everything you need to create amazing artworks on your phone." | "Very smooth and responsive app. It has a lot of brushes and options to choose from. It also works well with a stylus." | "The best app for manga and comic creation. It has a lot of features and functions that are very useful and convenient." | "A simple and fun app to sketch and paint. It has a nice interface and a good selection of brushes." |
-
-
Conclusion
-
In conclusion, ibis Paint X is a versatile drawing app that provides a smooth and comfortable drawing experience with over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, and various ruler and clipping mask features. It is an app that allows you to create stunning digital art and comics on your Android device. You can also share your drawing process as a video and learn from other users' drawing techniques on the community site "ibispaint.com".
-
If you want to enjoy all the premium features of ibis Paint X for free, you can download ibis Paint X Mod APK. It is a modified version of ibis Paint X that allows you to access all the prime materials, fonts, filters, and more without paying anything. You can also remove the watermark from your videos and export them in high resolution. With ibis Paint X Mod APK, you can have unlimited fun and creativity with drawing.
-
If you are looking for some alternatives to ibis Paint X Mod APK, you can try MediBang Paint, Procreate Pocket, SketchBook, Clip Studio Paint, or Adobe Photoshop Sketch. They are all great drawing and painting apps that offer different features and tools for creating amazing artworks on your device.
-
We hope this article has helped you to learn more about ibis Paint X, ibis Paint X Mod APK, and some alternatives to it. If you have any questions or feedback, please feel free to leave a comment below. Happy drawing!
-
FAQs
-
Here are some frequently asked questions about ibis Paint X and ibis Paint X Mod APK:
-
Is ibis Paint X safe to use?
-
Yes, ibis Paint X is safe to use. It is a legitimate app that is developed by ibis mobile inc., a Japanese company that specializes in developing apps for digital art and comics. It is also available on the Google Play Store and the App Store. However, you should be careful when downloading ibis Paint X Mod APK from third-party sources, as they may contain viruses or malware that can harm your device.
-
Is ibis Paint X free to use?
-
Yes, ibis Paint X is free to use. You can download and use the app without paying anything. However, if you want to access the prime materials, fonts, filters, and more, you need to watch ads or pay for the prime membership. Alternatively, you can download ibis Paint X Mod APK and enjoy all the premium features for free.
-
How do I update ibis Paint X Mod APK?
-
If you want to update ibis Paint X Mod APK, you need to download the latest version of the APK file from a trusted source and install it on your device. You may need to uninstall the previous version of the app before installing the new one. You should also backup your data before updating the app.
-
Can I use ibis Paint X on PC?
-
No, ibis Paint X is not available for PC. It is only compatible with Android and iOS devices. However, you can use an Android emulator such as BlueStacks or Nox Player to run ibis Paint X on your PC. You can also use a drawing tablet or a stylus to draw on your PC with ibis Paint X.
-
Can I use ibis Paint X offline?
-
Yes, you can use ibis Paint X offline. You don't need an internet connection to draw or save your artworks on your device. However, you need an internet connection to access the prime materials and fonts, share your videos or artworks on social media or the community site "ibispaint.com", or update the app.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md b/spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md
deleted file mode 100644
index 88ea3762750426a9fa37acc28f683dae17f0ec31..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
How to Download FIFA 09 APK for Android
-
If you are looking for a fun and realistic football game to play on your Android device, you should try FIFA 09. This is one of the best games in the FIFA series, developed by EA Sports. It has amazing graphics, smooth controls, and diverse content that will keep you entertained for hours. In this article, we will tell you what FIFA 09 is, what are its features and benefits, and how to download FIFA 09 APK for Android.
-
What is FIFA 09 and why you should play it
-
FIFA 09 is a football simulation game developed by EA Sports. It was released in October 2008 for various platforms, including PC, consoles, and mobile devices. It has over 250 gameplay improvements and enhancements that make it more realistic and responsive. It has a variety of game modes, such as Be a Pro, Manager Mode, Ultimate Team, and Online Multiplayer.
FIFA 09 is a football simulation game developed by EA Sports
-
EA Sports is a division of Electronic Arts that specializes in sports video games. It is one of the most popular and successful game developers in the industry. EA Sports has produced many acclaimed titles, such as Madden NFL, NBA Live, NHL, and FIFA. FIFA is the flagship franchise of EA Sports, and it has been running since 1993. FIFA 09 is the 16th installment in the series, and it is considered one of the best by critics and fans alike.
-
FIFA 09 is a fun and exciting game for football fans and gamers alike
-
If you love football, you will love FIFA 09. This game lets you play as your favorite teams and players from around the world. You can choose from over 500 licensed teams and more than 30 leagues, including the Premier League, La Liga, Bundesliga, Serie A, and more. You can also create your own custom teams and players with the Ultimate Team mode. This mode allows you to collect cards of players, kits, stadiums, and other items, and use them to build your dream team.
-
But playing FIFA 09 is not just about choosing teams and players. It is also about competing with other players online in 10 vs. 10 matches or tournaments. You can join or create your own club with your friends or other players, and play against other clubs from around the world. You can also chat with your teammates and opponents using the voice or text chat feature. Playing online is a great way to test your skills and have fun with other football enthusiasts.
What are the features and benefits of FIFA 09
-
FIFA 09 is not just a game, it is an experience. It has stunning graphics and animations that bring the game to life. It has smooth and intuitive controls that make it easy to play. It has a rich and diverse content that keeps you entertained for hours. Here are some of the features and benefits of FIFA 09 that you should know.
-
FIFA 09 has stunning graphics and animations that bring the game to life
-
One of the things that make FIFA 09 stand out is its visual quality. It uses leading-edge visuals that exploit the power of high-spec gaming devices. It features photorealistic likenesses of star players and stadiums. It has a revamped collision system that calculates speed, weight, and power when players collide. It has subtle animations that enable you to take first-time shots, volleys, and headers. It also has a dynamic weather system that affects the gameplay and atmosphere. You will feel like you are watching a real match on TV or playing on the pitch yourself.
-
FIFA 09 has smooth and intuitive controls that make it easy to play
-
Another thing that makes FIFA 09 enjoyable is its control scheme. It has a customizable control scheme that suits your preferences and device. You can choose from different options, such as buttons, gestures, or tilt. You can also adjust the sensitivity and responsiveness of the controls. You can also use a new jostle system that allows you to control the ball with more precision and skill. You can use the right analog stick to shield the ball, push off defenders, or perform tricks. You can also use the left trigger to sprint, the right trigger to slow down, or the shoulder buttons to switch players or tactics.
-
FIFA 09 has a rich and diverse content that keeps you entertained for hours
-
The last thing that makes FIFA 09 amazing is its content. It has over 500 licensed teams and more than 30 leagues from around the world. You can play as any team or player you want, from Manchester United to Barcelona, from Cristiano Ronaldo to Lionel Messi. You can also play in different game modes, such as Be a Pro, Manager Mode, Ultimate Team, and Online Multiplayer. Each mode has its own challenges and rewards. You can also play in different minigames and challenges that test your skills and knowledge. You can play in penalty shootouts, free kicks, dribbling courses, trivia quizzes, and more.
How to download FIFA 09 APK for Android
-
Now that you know what FIFA 09 is and what it offers, you might be wondering how to download it on your Android device. Well, you can't find it on the Google Play Store, because it is an old game that is not compatible with the latest Android versions. But don't worry, there is a way to play it on your device. You just need to download FIFA 09 APK for Android.
-
How to download fifa 09 apk for android free
-Download fifa 09 apk for android offline mode
-Download fifa 09 apk for android with obb file
-Download fifa 09 apk for android full version
-Download fifa 09 apk for android modded
-Download fifa 09 apk for android no verification
-Download fifa 09 apk for android latest update
-Download fifa 09 apk for android highly compressed
-Download fifa 09 apk for android unlimited coins
-Download fifa 09 apk for android from google play
-Best site to download fifa 09 apk for android
-Download fifa 09 apk for android without root
-Download fifa 09 apk for android on pc
-Download fifa 09 apk for android emulator
-Download fifa 09 apk for android cracked
-Download fifa 09 apk for android hack
-Download fifa 09 apk for android cheats
-Download fifa 09 apk for android gameplay
-Download fifa 09 apk for android review
-Download fifa 09 apk for android tips and tricks
-Download fifa 09 apk for android requirements
-Download fifa 09 apk for android size
-Download fifa 09 apk for android features
-Download fifa 09 apk for android graphics
-Download fifa 09 apk for android soundtracks
-Download fifa 09 apk for android teams and players
-Download fifa 09 apk for android modes and tournaments
-Download fifa 09 apk for android controls and settings
-Download fifa 09 apk for android bugs and fixes
-Download fifa 09 apk for android comparison with other versions
-Benefits of downloading fifa 09 apk for android
-Risks of downloading fifa 09 apk for android
-Alternatives to download fifa 09 apk for android
-How to install and run fifa 09 apk for android
-How to update and uninstall fifa 09 apk for android
-How to backup and restore fifa 09 apk for android data
-How to transfer and share fifa 09 apk for android files
-How to customize and optimize fifa 09 apk for android performance
-How to troubleshoot and solve fifa 09 apk for android problems
-How to contact and get support for fifa 09 apk for android issues
-
FIFA 09 APK is a file that allows you to install the game on your Android device without using the Google Play Store
-
APK stands for Android Package Kit, and it is a file format that contains all the necessary components of an Android app. It is useful if you have a device that is not compatible with the official version or if you want to save storage space. It is also useful if you want to play the game offline or with mods and cheats.
-
To download FIFA 09 APK for Android, you need to follow these steps:
-
Downloading FIFA 09 APK for Android is not difficult, but you need to be careful and take a few precautions along the way:
-
-
Find a reliable source that offers the APK file for free. You can use one of these links: . Make sure you scan the file for viruses and malware before downloading it.
-
Download the APK file to your device, or transfer it from your PC using a USB cable or Bluetooth connection (a command-line alternative using adb is sketched after these steps).
-
Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
-
Locate the APK file on your device using a file manager app or your browser's downloads folder. Tap on it to start the installation process.
-
Follow the instructions on the screen to complete the installation. You may need to grant some permissions or accept some terms and conditions.
-
Launch the game from your app drawer or home screen and enjoy playing FIFA 09 on your Android device.
-
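The steps above install the APK directly on the phone. As an alternative, if you copied the file to a PC, you can sideload it over USB with adb (Android Debug Bridge). The sketch below is only an illustration under a few assumptions: adb is installed on the PC, USB debugging is enabled on the phone, and fifa09.apk is a placeholder for whatever the downloaded file is actually named.

```python
import subprocess
import sys

APK_PATH = "fifa09.apk"  # placeholder name; point this at the APK file you downloaded

def sideload(apk_path: str) -> None:
    # Show connected devices first, so a missing or unauthorized phone is obvious.
    devices = subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True)
    print(devices.stdout)
    # Install the APK; -r reinstalls over an existing copy while keeping its data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)
    print("Installed", apk_path)

if __name__ == "__main__":
    sideload(sys.argv[1] if len(sys.argv) > 1 else APK_PATH)
```

Note that the "unknown sources" setting mentioned in the steps above only applies to on-device installs; adb installs are instead gated by the USB debugging authorization prompt on the phone.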
-
Conclusion
-
FIFA 09 is one of the best football games ever made, and you can play it on your Android device with FIFA 09 APK. It has amazing graphics, smooth controls, and diverse content that will keep you entertained for hours. You can play as your favorite teams and players, create your own custom teams and players, compete with other players online, and more. You just need to follow some simple steps to download and install the game on your device.
-
Here are some tips or recommendations for playing FIFA 09 on Android:
-
-
Make sure you have enough storage space and battery life on your device before playing the game.
-
Adjust the graphics settings and sound options according to your device's performance and preferences.
-
Use a Wi-Fi connection or a data plan with enough bandwidth when playing online.
-
Keep your device updated with the latest software and security patches.
-
Have fun and enjoy the game!
-
-
We hope you found this article helpful and informative. If you have any feedback or questions, please feel free to leave them in the comments section below. We would love to hear from you!
-
Frequently Asked Questions
-
Here are some of the most common questions that people ask about FIFA 09 APK for Android:
-
Q: Is FIFA 09 APK for Android safe to download and install?
-
A: Yes, as long as you download it from a reliable source and scan it for viruses and malware before installing it. However, we cannot guarantee that it will work perfectly on every device or that it will not cause any issues or damage to your device. Use it at your own risk and discretion.
-
Q: Is FIFA 09 APK for Android legal to use?
-
A: That depends on where you live and what laws apply there. In some countries, downloading and using APK files from unknown sources may be considered illegal or infringing on intellectual property rights. In other countries, it may be legal or tolerated as long as you own a copy of the original game or app. We advise you to check your local laws and regulations before downloading and using FIFA 09 APK for Android.
-
Q: Is FIFA 09 APK for Android compatible with my device?
-
A: FIFA 09 APK for Android is designed to work on most Android devices that run Android 4.0 or higher. However, some devices may not be compatible due to hardware limitations, software conflicts, or other reasons. If you encounter any problems or errors when playing the game, you may try to uninstall and reinstall the game, clear the cache and data, or contact the developer for support.
-
Q: How can I update FIFA 09 APK for Android?
-
A: FIFA 09 APK for Android is not an official version of the game, so it does not receive regular updates from EA Sports. However, some sources may offer updated versions of the APK file with new features or bug fixes. You can check the source where you downloaded the APK file for any updates or look for other sources that offer newer versions. To update the game, you need to download and install the new APK file over the old one.
-
Q: Can I play FIFA 09 APK for Android with a controller or a keyboard?
-
A: Yes, you can play FIFA 09 APK for Android with a controller or a keyboard if your device supports them. You can connect your controller or keyboard to your device via Bluetooth, USB, or OTG cable. You can also use an app like Octopus or Panda Gamepad Pro to map the buttons and keys to the game controls. However, some controllers or keyboards may not work well with the game or may cause some issues or errors.
-
-
This is the end of the article. Thank you for reading, and I hope you learned something new and useful. If you have any questions or comments, please leave them below and I will try to answer them as soon as possible. Have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/src/components/voice.tsx b/spaces/2023Liu2023/bingo/src/components/voice.tsx
deleted file mode 100644
index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/components/voice.tsx
+++ /dev/null
@@ -1,52 +0,0 @@
-import React, { useEffect } from 'react'
-import { useSetAtom } from 'jotai'
-import { useBing } from '@/lib/hooks/use-bing'
-import Image from 'next/image'
-import VoiceIcon from '@/assets/images/voice.svg'
-import VoiceButton from './ui/voice'
-import { SR } from '@/lib/bots/bing/sr'
-import { voiceListenAtom } from '@/state'
-
-const sr = new SR(['发送', '清空', '退出'])
-
-const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick<ReturnType<typeof useBing>, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => {
- const setListen = useSetAtom(voiceListenAtom)
- useEffect(() => {
- if (sr.listening) return
- sr.transcript = !isSpeaking
- }, [isSpeaking])
-
- useEffect(() => {
- sr.onchange = (msg: string, command?: string) => {
- switch (command) {
- case '退出':
- sr.stop()
- break;
- case '发送':
- sendMessage(input)
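- // no break here: after '发送' (send) dispatches the message, execution falls through to '清空' and clears the input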
- case '清空':
- setInput('')
- break;
- default:
- setInput(input + msg)
- }
- }
- }, [input])
-
- const switchSR = (enable: boolean = false) => {
- setListen(enable)
- if (enable) {
- sr.start()
- } else {
- sr.stop()
- }
- }
-
- return sr.listening ? (
- <VoiceButton onClick={() => switchSR(false)} />
- ) : (
- <Image src={VoiceIcon} alt="start voice input" onClick={() => switchSR(true)} />
- )
-};
-
-export default Voice;
diff --git a/spaces/801artistry/RVC801/infer/lib/audio.py b/spaces/801artistry/RVC801/infer/lib/audio.py
deleted file mode 100644
index 9ad4ff74218957cf18782fa71add40a734b47e78..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/lib/audio.py
+++ /dev/null
@@ -1,197 +0,0 @@
-import librosa
-import numpy as np
-import av
-from io import BytesIO
-import ffmpeg
-import os
-import sys
-
-import random
-from infer.lib.csvutil import CSVutil
-#import csv
-
-platform_stft_mapping = {
- 'linux': 'stftpitchshift',
- 'darwin': 'stftpitchshift',
- 'win32': 'stftpitchshift.exe',
-}
-
-stft = platform_stft_mapping.get(sys.platform)
-
-def wav2(i, o, format):
- inp = av.open(i, 'rb')
- if format == "m4a": format = "mp4"
- out = av.open(o, 'wb', format=format)
- if format == "ogg": format = "libvorbis"
- if format == "mp4": format = "aac"
-
- ostream = out.add_stream(format)
-
- for frame in inp.decode(audio=0):
- for p in ostream.encode(frame): out.mux(p)
-
- for p in ostream.encode(None): out.mux(p)
-
- out.close()
- inp.close()
-
-def audio2(i, o, format, sr):
- inp = av.open(i, 'rb')
- out = av.open(o, 'wb', format=format)
- if format == "ogg": format = "libvorbis"
- if format == "f32le": format = "pcm_f32le"
-
- ostream = out.add_stream(format, channels=1)
- ostream.sample_rate = sr
-
- for frame in inp.decode(audio=0):
- for p in ostream.encode(frame): out.mux(p)
-
- out.close()
- inp.close()
-
-def load_audion(file, sr):
- try:
- file = (
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- ) # guard against users pasting a path with stray spaces, quotes, or newlines at either end
- with open(file, "rb") as f:
- with BytesIO() as out:
- audio2(f, out, "f32le", sr)
- return np.frombuffer(out.getvalue(), np.float32).flatten()
-
- except AttributeError:
- audio = file[1] / 32768.0
- if len(audio.shape) == 2:
- audio = np.mean(audio, -1)
- return librosa.resample(audio, orig_sr=file[0], target_sr=16000)
-
- except Exception as e:
- raise RuntimeError(f"Failed to load audio: {e}")
-
-
-
-
-def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0):
- converted = False
- DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting")
- try:
- # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- file = (
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- ) # guard against users pasting a path with stray spaces, quotes, or newlines at either end
- file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-
- # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n")
-
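- # DoFormant comes back from the CSV as the string "true"/"false"; the inline lambda below coerces it to a bool before branching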
- if (
- lambda DoFormant: True
- if DoFormant.lower() == "true"
- else (False if DoFormant.lower() == "false" else DoFormant)
- )(DoFormant):
- numerator = round(random.uniform(1, 4), 4)
- # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}")
- # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted))
-
- if not file.endswith(".wav"):
- if not os.path.isfile(f"{file_formanted}.wav"):
- converted = True
- # print(f"\nfile = {file}\n")
- # print(f"\nfile_formanted = {file_formanted}\n")
- converting = (
- ffmpeg.input(file_formanted, threads=0)
- .output(f"{file_formanted}.wav")
- .run(
- cmd=["ffmpeg", "-nostdin"],
- capture_stdout=True,
- capture_stderr=True,
- )
- )
- else:
- pass
-
- file_formanted = (
- f"{file_formanted}.wav"
- if not file_formanted.endswith(".wav")
- else file_formanted
- )
-
- print(f" · Formanting {file_formanted}...\n")
-
- os.system(
- '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"'
- % (
- stft,
- file_formanted,
- Quefrency,
- Timbre,
- file_formanted,
- str(numerator),
- )
- )
-
- print(f" · Formanted {file_formanted}!\n")
-
- # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\')
- # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\')
- # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
-
- out, _ = (
- ffmpeg.input(
- "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0
- )
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
- .run(
- cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
- )
- )
-
- try:
- os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
- except Exception:
- pass
- print("couldn't remove formanted type of file")
-
- else:
- out, _ = (
- ffmpeg.input(file, threads=0)
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
- .run(
- cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
- )
- )
- except Exception as e:
- raise RuntimeError(f"Failed to load audio: {e}")
-
- if converted:
- try:
- os.remove(file_formanted)
- except Exception:
- pass
- print("couldn't remove converted type of file")
- converted = False
-
- return np.frombuffer(out, np.float32).flatten()
-
-
-def check_audio_duration(file):
- try:
- file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-
- probe = ffmpeg.probe(file)
-
- duration = float(probe['streams'][0]['duration'])
-
- if duration < 0.76:
- print(
- f"\n------------\n"
- f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results."
- f"\n------------\n\n"
- )
- return False
-
- return True
- except Exception as e:
- raise RuntimeError(f"Failed to check audio duration: {e}")
\ No newline at end of file
diff --git a/spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py b/spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
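- # per-sample valid lengths along the target (t_t) and source (t_s) axes, read from the mask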
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py
deleted file mode 100644
index c76c5cfc896308d9a84c6254a7ca00b8235e7516..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import argparse
-import os
-import yaml
-
-global_print_hparams = True
-hparams = {}
-
-
-class Args:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- self.__setattr__(k, v)
-
-
-def override_config(old_config: dict, new_config: dict):
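- # recursively merge new_config into old_config: nested dicts are merged key by key, other values are overwritten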
- for k, v in new_config.items():
- if isinstance(v, dict) and k in old_config:
- override_config(old_config[k], new_config[k])
- else:
- old_config[k] = v
-
-
-def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True):
- if config == '' and exp_name == '':
- parser = argparse.ArgumentParser(description='')
- parser.add_argument('--config', type=str, default='',
- help='location of the data corpus')
- parser.add_argument('--exp_name', type=str, default='', help='exp_name')
- parser.add_argument('-hp', '--hparams', type=str, default='',
- help='location of the data corpus')
- parser.add_argument('--infer', action='store_true', help='infer')
- parser.add_argument('--validate', action='store_true', help='validate')
- parser.add_argument('--reset', action='store_true', help='reset hparams')
- parser.add_argument('--remove', action='store_true', help='remove old ckpt')
- parser.add_argument('--debug', action='store_true', help='debug')
- args, unknown = parser.parse_known_args()
- print("| Unknow hparams: ", unknown)
- else:
- args = Args(config=config, exp_name=exp_name, hparams=hparams_str,
- infer=False, validate=False, reset=False, debug=False, remove=False)
- global hparams
- assert args.config != '' or args.exp_name != ''
- if args.config != '':
- assert os.path.exists(args.config)
-
- config_chains = []
- loaded_config = set()
-
- def load_config(config_fn):
- # deep first inheritance and avoid the second visit of one node
- if not os.path.exists(config_fn):
- return {}
- with open(config_fn) as f:
- hparams_ = yaml.safe_load(f)
- loaded_config.add(config_fn)
- if 'base_config' in hparams_:
- ret_hparams = {}
- if not isinstance(hparams_['base_config'], list):
- hparams_['base_config'] = [hparams_['base_config']]
- for c in hparams_['base_config']:
- if c.startswith('.'):
- c = f'{os.path.dirname(config_fn)}/{c}'
- c = os.path.normpath(c)
- if c not in loaded_config:
- override_config(ret_hparams, load_config(c))
- override_config(ret_hparams, hparams_)
- else:
- ret_hparams = hparams_
- config_chains.append(config_fn)
- return ret_hparams
-
- saved_hparams = {}
- args_work_dir = ''
- if args.exp_name != '':
- args_work_dir = f'{args.exp_name}' # modified
- ckpt_config_path = f'{args_work_dir}/config.yaml'
- if os.path.exists(ckpt_config_path):
- with open(ckpt_config_path) as f:
- saved_hparams_ = yaml.safe_load(f)
- if saved_hparams_ is not None:
- saved_hparams.update(saved_hparams_)
- hparams_ = {}
- if args.config != '':
- hparams_.update(load_config(args.config))
- if not args.reset:
- hparams_.update(saved_hparams)
- hparams_['work_dir'] = args_work_dir
-
- # Support config overriding in command line. Support list type config overriding.
- # Examples: --hparams="a=1,b.c=2,d=[1 1 1]"
- if args.hparams != "":
- for new_hparam in args.hparams.split(","):
- k, v = new_hparam.split("=")
- v = v.strip("\'\" ")
- config_node = hparams_
- for k_ in k.split(".")[:-1]:
- config_node = config_node[k_]
- k = k.split(".")[-1]
- if v in ['True', 'False'] or type(config_node[k]) in [bool, list, dict]:
- if type(config_node[k]) == list:
- v = v.replace(" ", ",")
- config_node[k] = eval(v)
- else:
- config_node[k] = type(config_node[k])(v)
- if args_work_dir != '' and args.remove:
- answer = input("REMOVE old checkpoint? Y/N [Default: N]: ")
- if answer.lower() == "y":
- remove_file(args_work_dir)
- if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer:
- os.makedirs(hparams_['work_dir'], exist_ok=True)
- with open(ckpt_config_path, 'w') as f:
- yaml.safe_dump(hparams_, f)
-
- hparams_['infer'] = args.infer
- hparams_['debug'] = args.debug
- hparams_['validate'] = args.validate
- hparams_['exp_name'] = args.exp_name
- global global_print_hparams
- if global_hparams:
- hparams.clear()
- hparams.update(hparams_)
- if print_hparams and global_print_hparams and global_hparams:
- print('| Hparams chains: ', config_chains)
- print('| Hparams: ')
- for i, (k, v) in enumerate(sorted(hparams_.items())):
- print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "")
- print("")
- global_print_hparams = False
- return hparams_
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py
deleted file mode 100644
index 0d6d8e87e0ed07abc04f6e79b0fa08cd102398a0..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py
+++ /dev/null
@@ -1,686 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import math
-import copy
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchaudio import transforms
-from torchlibrosa.augmentation import SpecAugmentation
-
-from .utils import mean_with_lens, max_with_lens, \
- init, pack_wrapper, generate_length_mask, PositionalEncoding
-
-
-def init_layer(layer):
- """Initialize a Linear or Convolutional layer. """
- nn.init.xavier_uniform_(layer.weight)
-
- if hasattr(layer, 'bias'):
- if layer.bias is not None:
- layer.bias.data.fill_(0.)
-
-
-def init_bn(bn):
- """Initialize a Batchnorm layer. """
- bn.bias.data.fill_(0.)
- bn.weight.data.fill_(1.)
-
-
-class BaseEncoder(nn.Module):
-
- """
- Encode the given audio into embedding
- Base encoder class, cannot be called directly
- All encoders should inherit from this class
- """
-
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim):
- super(BaseEncoder, self).__init__()
- self.spec_dim = spec_dim
- self.fc_feat_dim = fc_feat_dim
- self.attn_feat_dim = attn_feat_dim
-
-
- def forward(self, x):
- #########################
- # an encoder first encodes audio feature into embedding, obtaining
- # `encoded`: {
- # fc_embs: [N, fc_emb_dim],
- # attn_embs: [N, attn_max_len, attn_emb_dim],
- # attn_emb_lens: [N,]
- # }
- #########################
- raise NotImplementedError
-
-
-class Block2D(nn.Module):
-
- def __init__(self, cin, cout, kernel_size=3, padding=1):
- super().__init__()
- self.block = nn.Sequential(
- nn.BatchNorm2d(cin),
- nn.Conv2d(cin,
- cout,
- kernel_size=kernel_size,
- padding=padding,
- bias=False),
- nn.LeakyReLU(inplace=True, negative_slope=0.1))
-
- def forward(self, x):
- return self.block(x)
-
-
-class LinearSoftPool(nn.Module):
- """LinearSoftPool
- Linear softmax, takes logits and returns a probability, near to the actual maximum value.
- Taken from the paper:
- A Comparison of Five Multiple Instance Learning Pooling Functions for Sound Event Detection with Weak Labeling
- https://arxiv.org/abs/1810.09050
- """
- def __init__(self, pooldim=1):
- super().__init__()
- self.pooldim = pooldim
-
- def forward(self, logits, time_decision):
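- # self-weighted average: (p ** 2).sum / p.sum, so frames with higher activation contribute more to the clip-level decision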
- return (time_decision**2).sum(self.pooldim) / time_decision.sum(
- self.pooldim)
-
-
-class MeanPool(nn.Module):
-
- def __init__(self, pooldim=1):
- super().__init__()
- self.pooldim = pooldim
-
- def forward(self, logits, decision):
- return torch.mean(decision, dim=self.pooldim)
-
-
-class AttentionPool(nn.Module):
- """docstring for AttentionPool"""
- def __init__(self, inputdim, outputdim=10, pooldim=1, **kwargs):
- super().__init__()
- self.inputdim = inputdim
- self.outputdim = outputdim
- self.pooldim = pooldim
- self.transform = nn.Linear(inputdim, outputdim)
- self.activ = nn.Softmax(dim=self.pooldim)
- self.eps = 1e-7
-
- def forward(self, logits, decision):
- # Input is (B, T, D)
- # B, T, D
- w = self.activ(torch.clamp(self.transform(logits), -15, 15))
- detect = (decision * w).sum(
- self.pooldim) / (w.sum(self.pooldim) + self.eps)
- # B, T, D
- return detect
-
-
-class MMPool(nn.Module):
-
- def __init__(self, dims):
- super().__init__()
- self.avgpool = nn.AvgPool2d(dims)
- self.maxpool = nn.MaxPool2d(dims)
-
- def forward(self, x):
- return self.avgpool(x) + self.maxpool(x)
-
-
-def parse_poolingfunction(poolingfunction_name='mean', **kwargs):
- """parse_poolingfunction
- A helper function to parse any temporal pooling
- Pooling is done on dimension 1
- :param poolingfunction_name:
- :param **kwargs:
- """
- poolingfunction_name = poolingfunction_name.lower()
- if poolingfunction_name == 'mean':
- return MeanPool(pooldim=1)
- elif poolingfunction_name == 'linear':
- return LinearSoftPool(pooldim=1)
- elif poolingfunction_name == 'attention':
- return AttentionPool(inputdim=kwargs['inputdim'],
- outputdim=kwargs['outputdim'])
-
-
-def embedding_pooling(x, lens, pooling="mean"):
- if pooling == "max":
- fc_embs = max_with_lens(x, lens)
- elif pooling == "mean":
- fc_embs = mean_with_lens(x, lens)
- elif pooling == "mean+max":
- x_mean = mean_with_lens(x, lens)
- x_max = max_with_lens(x, lens)
- fc_embs = x_mean + x_max
- elif pooling == "last":
- indices = (lens - 1).reshape(-1, 1, 1).repeat(1, 1, x.size(-1))
- # indices: [N, 1, hidden]
- fc_embs = torch.gather(x, 1, indices).squeeze(1)
- else:
- raise Exception(f"pooling method {pooling} not support")
- return fc_embs
-
-
-class Cdur5Encoder(BaseEncoder):
-
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, pooling="mean"):
- super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
- self.pooling = pooling
- self.features = nn.Sequential(
- Block2D(1, 32),
- nn.LPPool2d(4, (2, 4)),
- Block2D(32, 128),
- Block2D(128, 128),
- nn.LPPool2d(4, (2, 4)),
- Block2D(128, 128),
- Block2D(128, 128),
- nn.LPPool2d(4, (1, 4)),
- nn.Dropout(0.3),
- )
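- # probe the conv stack with a dummy spectrogram to infer the flattened feature size fed to the GRU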
- with torch.no_grad():
- rnn_input_dim = self.features(
- torch.randn(1, 1, 500, spec_dim)).shape
- rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1]
-
- self.gru = nn.GRU(rnn_input_dim,
- 128,
- bidirectional=True,
- batch_first=True)
- self.apply(init)
-
- def forward(self, input_dict):
- x = input_dict["spec"]
- lens = input_dict["spec_len"]
- if "upsample" not in input_dict:
- input_dict["upsample"] = False
- lens = torch.as_tensor(copy.deepcopy(lens))
- N, T, _ = x.shape
- x = x.unsqueeze(1)
- x = self.features(x)
- x = x.transpose(1, 2).contiguous().flatten(-2)
- x, _ = self.gru(x)
- if input_dict["upsample"]:
- x = nn.functional.interpolate(
- x.transpose(1, 2),
- T,
- mode='linear',
- align_corners=False).transpose(1, 2)
- else:
- lens //= 4
- attn_emb = x
- fc_emb = embedding_pooling(x, lens, self.pooling)
- return {
- "attn_emb": attn_emb,
- "fc_emb": fc_emb,
- "attn_emb_len": lens
- }
-
-
-def conv_conv_block(in_channel, out_channel):
- return nn.Sequential(
- nn.Conv2d(in_channel,
- out_channel,
- kernel_size=3,
- bias=False,
- padding=1),
- nn.BatchNorm2d(out_channel),
- nn.ReLU(True),
- nn.Conv2d(out_channel,
- out_channel,
- kernel_size=3,
- bias=False,
- padding=1),
- nn.BatchNorm2d(out_channel),
- nn.ReLU(True)
- )
-
-
-class Cdur8Encoder(BaseEncoder):
-
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, pooling="mean"):
- super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
- self.pooling = pooling
- self.features = nn.Sequential(
- conv_conv_block(1, 64),
- MMPool((2, 2)),
- nn.Dropout(0.2, True),
- conv_conv_block(64, 128),
- MMPool((2, 2)),
- nn.Dropout(0.2, True),
- conv_conv_block(128, 256),
- MMPool((1, 2)),
- nn.Dropout(0.2, True),
- conv_conv_block(256, 512),
- MMPool((1, 2)),
- nn.Dropout(0.2, True),
- nn.AdaptiveAvgPool2d((None, 1)),
- )
- self.init_bn = nn.BatchNorm2d(spec_dim)
- self.embedding = nn.Linear(512, 512)
- self.gru = nn.GRU(512, 256, bidirectional=True, batch_first=True)
- self.apply(init)
-
- def forward(self, input_dict):
- x = input_dict["spec"]
- lens = input_dict["spec_len"]
- lens = torch.as_tensor(copy.deepcopy(lens))
- x = x.unsqueeze(1) # B x 1 x T x D
- x = x.transpose(1, 3)
- x = self.init_bn(x)
- x = x.transpose(1, 3)
- x = self.features(x)
- x = x.transpose(1, 2).contiguous().flatten(-2)
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.embedding(x))
- x, _ = self.gru(x)
- attn_emb = x
- lens //= 4
- fc_emb = embedding_pooling(x, lens, self.pooling)
- return {
- "attn_emb": attn_emb,
- "fc_emb": fc_emb,
- "attn_emb_len": lens
- }
-
-
-class Cnn10Encoder(BaseEncoder):
-
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim):
- super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
- self.features = nn.Sequential(
- conv_conv_block(1, 64),
- nn.AvgPool2d((2, 2)),
- nn.Dropout(0.2, True),
- conv_conv_block(64, 128),
- nn.AvgPool2d((2, 2)),
- nn.Dropout(0.2, True),
- conv_conv_block(128, 256),
- nn.AvgPool2d((2, 2)),
- nn.Dropout(0.2, True),
- conv_conv_block(256, 512),
- nn.AvgPool2d((2, 2)),
- nn.Dropout(0.2, True),
- nn.AdaptiveAvgPool2d((None, 1)),
- )
- self.init_bn = nn.BatchNorm2d(spec_dim)
- self.embedding = nn.Linear(512, 512)
- self.apply(init)
-
- def forward(self, input_dict):
- x = input_dict["spec"]
- lens = input_dict["spec_len"]
- lens = torch.as_tensor(copy.deepcopy(lens))
- x = x.unsqueeze(1) # [N, 1, T, D]
- x = x.transpose(1, 3)
- x = self.init_bn(x)
- x = x.transpose(1, 3)
- x = self.features(x) # [N, 512, T/16, 1]
- x = x.transpose(1, 2).contiguous().flatten(-2) # [N, T/16, 512]
- attn_emb = x
- lens //= 16
- fc_emb = embedding_pooling(x, lens, "mean+max")
- fc_emb = F.dropout(fc_emb, p=0.5, training=self.training)
- fc_emb = self.embedding(fc_emb)
- fc_emb = F.relu_(fc_emb)
- return {
- "attn_emb": attn_emb,
- "fc_emb": fc_emb,
- "attn_emb_len": lens
- }
-
-
-class ConvBlock(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.conv2 = nn.Conv2d(in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.bn1 = nn.BatchNorm2d(out_channels)
- self.bn2 = nn.BatchNorm2d(out_channels)
-
- self.init_weight()
-
- def init_weight(self):
- init_layer(self.conv1)
- init_layer(self.conv2)
- init_bn(self.bn1)
- init_bn(self.bn2)
-
-
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- x = F.relu_(self.bn2(self.conv2(x)))
- if pool_type == 'max':
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg':
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg+max':
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
- raise Exception('Incorrect argument!')
-
- return x
-
-
-class Cnn14Encoder(nn.Module):
- def __init__(self, sample_rate=32000):
- super().__init__()
- sr_to_fmax = {
- 32000: 14000,
- 16000: 8000
- }
- # Logmel spectrogram extractor
- self.melspec_extractor = transforms.MelSpectrogram(
- sample_rate=sample_rate,
- n_fft=32 * sample_rate // 1000,
- win_length=32 * sample_rate // 1000,
- hop_length=10 * sample_rate // 1000,
- f_min=50,
- f_max=sr_to_fmax[sample_rate],
- n_mels=64,
- norm="slaney",
- mel_scale="slaney"
- )
- self.hop_length = 10 * sample_rate // 1000
- self.db_transform = transforms.AmplitudeToDB()
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(time_drop_width=64,
- time_stripes_num=2, freq_drop_width=8, freq_stripes_num=2)
-
- self.bn0 = nn.BatchNorm2d(64)
-
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
-
- self.downsample_ratio = 32
-
- self.fc1 = nn.Linear(2048, 2048, bias=True)
-
- self.init_weight()
-
- def init_weight(self):
- init_bn(self.bn0)
- init_layer(self.fc1)
-
- def load_pretrained(self, pretrained):
- checkpoint = torch.load(pretrained, map_location="cpu")
-
- if "model" in checkpoint:
- state_keys = checkpoint["model"].keys()
- backbone = False
- for key in state_keys:
- if key.startswith("backbone."):
- backbone = True
- break
-
- if backbone: # COLA
- state_dict = {}
- for key, value in checkpoint["model"].items():
- if key.startswith("backbone."):
- model_key = key.replace("backbone.", "")
- state_dict[model_key] = value
- else: # PANNs
- state_dict = checkpoint["model"]
- elif "state_dict" in checkpoint: # CLAP
- state_dict = checkpoint["state_dict"]
- state_dict_keys = list(filter(
- lambda x: "audio_encoder" in x, state_dict.keys()))
- state_dict = {
- key.replace('audio_encoder.', ''): state_dict[key]
- for key in state_dict_keys
- }
- else:
- raise Exception("Unkown checkpoint format")
-
- model_dict = self.state_dict()
- pretrained_dict = {
- k: v for k, v in state_dict.items() if (k in model_dict) and (
- model_dict[k].shape == v.shape)
- }
- model_dict.update(pretrained_dict)
- self.load_state_dict(model_dict, strict=True)
-
- def forward(self, input_dict):
- """
- Input: (batch_size, n_samples)"""
- waveform = input_dict["wav"]
- wave_length = input_dict["wav_len"]
- specaug = input_dict["specaug"]
- x = self.melspec_extractor(waveform)
- x = self.db_transform(x) # (batch_size, mel_bins, time_steps)
- x = x.transpose(1, 2)
- x = x.unsqueeze(1) # (batch_size, 1, time_steps, mel_bins)
-
- # SpecAugment
- if self.training and specaug:
- x = self.spec_augmenter(x)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
- attn_emb = x.transpose(1, 2)
-
- wave_length = torch.as_tensor(wave_length)
- feat_length = torch.div(wave_length, self.hop_length,
- rounding_mode="floor") + 1
- feat_length = torch.div(feat_length, self.downsample_ratio,
- rounding_mode="floor")
- x_max = max_with_lens(attn_emb, feat_length)
- x_mean = mean_with_lens(attn_emb, feat_length)
- x = x_max + x_mean
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- fc_emb = F.dropout(x, p=0.5, training=self.training)
-
- output_dict = {
- 'fc_emb': fc_emb,
- 'attn_emb': attn_emb,
- 'attn_emb_len': feat_length
- }
-
- return output_dict
-
-
-class RnnEncoder(BaseEncoder):
-
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim,
- pooling="mean", **kwargs):
- super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
- self.pooling = pooling
- self.hidden_size = kwargs.get('hidden_size', 512)
- self.bidirectional = kwargs.get('bidirectional', False)
- self.num_layers = kwargs.get('num_layers', 1)
- self.dropout = kwargs.get('dropout', 0.2)
- self.rnn_type = kwargs.get('rnn_type', "GRU")
- self.in_bn = kwargs.get('in_bn', False)
- self.embed_dim = self.hidden_size * (self.bidirectional + 1)
- self.network = getattr(nn, self.rnn_type)(
- attn_feat_dim,
- self.hidden_size,
- num_layers=self.num_layers,
- bidirectional=self.bidirectional,
- dropout=self.dropout,
- batch_first=True)
- if self.in_bn:
- self.bn = nn.BatchNorm1d(self.embed_dim)
- self.apply(init)
-
- def forward(self, input_dict):
- x = input_dict["attn"]
- lens = input_dict["attn_len"]
- lens = torch.as_tensor(lens)
- # x: [N, T, E]
- if self.in_bn:
- x = pack_wrapper(self.bn, x, lens)
- out = pack_wrapper(self.network, x, lens)
- # out: [N, T, hidden]
- attn_emb = out
- fc_emb = embedding_pooling(out, lens, self.pooling)
- return {
- "attn_emb": attn_emb,
- "fc_emb": fc_emb,
- "attn_emb_len": lens
- }
-
-
-class Cnn14RnnEncoder(nn.Module):
- def __init__(self, sample_rate=32000, pretrained=None,
- freeze_cnn=False, freeze_cnn_bn=False,
- pooling="mean", **kwargs):
- super().__init__()
- self.cnn = Cnn14Encoder(sample_rate)
- self.rnn = RnnEncoder(64, 2048, 2048, pooling, **kwargs)
- if pretrained is not None:
- self.cnn.load_pretrained(pretrained)
- if freeze_cnn:
- assert pretrained is not None, "cnn is not pretrained but frozen"
- for param in self.cnn.parameters():
- param.requires_grad = False
- self.freeze_cnn_bn = freeze_cnn_bn
-
- def train(self, mode):
- super().train(mode=mode)
- if self.freeze_cnn_bn:
- def bn_eval(module):
- class_name = module.__class__.__name__
- if class_name.find("BatchNorm") != -1:
- module.eval()
- self.cnn.apply(bn_eval)
- return self
-
- def forward(self, input_dict):
- output_dict = self.cnn(input_dict)
- output_dict["attn"] = output_dict["attn_emb"]
- output_dict["attn_len"] = output_dict["attn_emb_len"]
- del output_dict["attn_emb"], output_dict["attn_emb_len"]
- output_dict = self.rnn(output_dict)
- return output_dict
-
-
-class TransformerEncoder(BaseEncoder):
-
- def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, d_model, **kwargs):
- super().__init__(spec_dim, fc_feat_dim, attn_feat_dim)
- self.d_model = d_model
- dropout = kwargs.get("dropout", 0.2)
- self.nhead = kwargs.get("nhead", self.d_model // 64)
- self.nlayers = kwargs.get("nlayers", 2)
- self.dim_feedforward = kwargs.get("dim_feedforward", self.d_model * 4)
-
- self.attn_proj = nn.Sequential(
- nn.Linear(attn_feat_dim, self.d_model),
- nn.ReLU(),
- nn.Dropout(dropout),
- nn.LayerNorm(self.d_model)
- )
- layer = nn.TransformerEncoderLayer(d_model=self.d_model,
- nhead=self.nhead,
- dim_feedforward=self.dim_feedforward,
- dropout=dropout)
- self.model = nn.TransformerEncoder(layer, self.nlayers)
- self.cls_token = nn.Parameter(torch.zeros(d_model))
- self.init_params()
-
- def init_params(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward(self, input_dict):
- attn_feat = input_dict["attn"]
- attn_feat_len = input_dict["attn_len"]
- attn_feat_len = torch.as_tensor(attn_feat_len)
-
- attn_feat = self.attn_proj(attn_feat) # [bs, T, d_model]
-
- cls_emb = self.cls_token.reshape(1, 1, self.d_model).repeat(
- attn_feat.size(0), 1, 1)
- attn_feat = torch.cat((cls_emb, attn_feat), dim=1)
- attn_feat = attn_feat.transpose(0, 1)
-
- attn_feat_len += 1
- src_key_padding_mask = ~generate_length_mask(
- attn_feat_len, attn_feat.size(0)).to(attn_feat.device)
- output = self.model(attn_feat, src_key_padding_mask=src_key_padding_mask)
-
- attn_emb = output.transpose(0, 1)
- fc_emb = attn_emb[:, 0]
- return {
- "attn_emb": attn_emb,
- "fc_emb": fc_emb,
- "attn_emb_len": attn_feat_len
- }
-
-
-class Cnn14TransformerEncoder(nn.Module):
- def __init__(self, sample_rate=32000, pretrained=None,
- freeze_cnn=False, freeze_cnn_bn=False,
- d_model="mean", **kwargs):
- super().__init__()
- self.cnn = Cnn14Encoder(sample_rate)
- self.trm = TransformerEncoder(64, 2048, 2048, d_model, **kwargs)
- if pretrained is not None:
- self.cnn.load_pretrained(pretrained)
- if freeze_cnn:
- assert pretrained is not None, "cnn is not pretrained but frozen"
- for param in self.cnn.parameters():
- param.requires_grad = False
- self.freeze_cnn_bn = freeze_cnn_bn
-
- def train(self, mode):
- super().train(mode=mode)
- if self.freeze_cnn_bn:
- def bn_eval(module):
- class_name = module.__class__.__name__
- if class_name.find("BatchNorm") != -1:
- module.eval()
- self.cnn.apply(bn_eval)
- return self
-
- def forward(self, input_dict):
- output_dict = self.cnn(input_dict)
- output_dict["attn"] = output_dict["attn_emb"]
- output_dict["attn_len"] = output_dict["attn_emb_len"]
- del output_dict["attn_emb"], output_dict["attn_emb_len"]
- output_dict = self.trm(output_dict)
- return output_dict
-
-
-
-
-
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py
deleted file mode 100644
index 7edf126a080767f760dc7d19a349fb9a44afeb46..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from text_to_speech.modules.commons.layers import LayerNorm, Embedding
-
-
-class LambdaLayer(nn.Module):
- def __init__(self, lambd):
- super(LambdaLayer, self).__init__()
- self.lambd = lambd
-
- def forward(self, x):
- return self.lambd(x)
-
-
-def init_weights_func(m):
- classname = m.__class__.__name__
- if classname.find("Conv1d") != -1:
- torch.nn.init.xavier_uniform_(m.weight)
-
-
-class ResidualBlock(nn.Module):
- """Implements conv->PReLU->norm n-times"""
-
- def __init__(self, channels, kernel_size, dilation, n=2, norm_type='bn', dropout=0.0,
- c_multiple=2, ln_eps=1e-12):
- super(ResidualBlock, self).__init__()
-
- if norm_type == 'bn':
- norm_builder = lambda: nn.BatchNorm1d(channels)
- elif norm_type == 'in':
- norm_builder = lambda: nn.InstanceNorm1d(channels, affine=True)
- elif norm_type == 'gn':
- norm_builder = lambda: nn.GroupNorm(8, channels)
- elif norm_type == 'ln':
- norm_builder = lambda: LayerNorm(channels, dim=1, eps=ln_eps)
- else:
- norm_builder = lambda: nn.Identity()
-
- self.blocks = [
- nn.Sequential(
- norm_builder(),
- nn.Conv1d(channels, c_multiple * channels, kernel_size, dilation=dilation,
- padding=(dilation * (kernel_size - 1)) // 2),
- LambdaLayer(lambda x: x * kernel_size ** -0.5),
- nn.GELU(),
- nn.Conv1d(c_multiple * channels, channels, 1, dilation=dilation),
- )
- for i in range(n)
- ]
-
- self.blocks = nn.ModuleList(self.blocks)
- self.dropout = dropout
-
- def forward(self, x):
- nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
- for b in self.blocks:
- x_ = b(x)
- if self.dropout > 0 and self.training:
- x_ = F.dropout(x_, self.dropout, training=self.training)
- x = x + x_
- x = x * nonpadding
- return x
-
-
-class ConvBlocks(nn.Module):
- """Decodes the expanded phoneme encoding into spectrograms"""
-
- def __init__(self, hidden_size, out_dims, dilations, kernel_size,
- norm_type='ln', layers_in_block=2, c_multiple=2,
- dropout=0.0, ln_eps=1e-5,
- init_weights=True, is_BTC=True, num_layers=None, post_net_kernel=3):
- super(ConvBlocks, self).__init__()
- self.is_BTC = is_BTC
- if num_layers is not None:
- dilations = [1] * num_layers
- self.res_blocks = nn.Sequential(
- *[ResidualBlock(hidden_size, kernel_size, d,
- n=layers_in_block, norm_type=norm_type, c_multiple=c_multiple,
- dropout=dropout, ln_eps=ln_eps)
- for d in dilations],
- )
- if norm_type == 'bn':
- norm = nn.BatchNorm1d(hidden_size)
- elif norm_type == 'in':
- norm = nn.InstanceNorm1d(hidden_size, affine=True)
- elif norm_type == 'gn':
- norm = nn.GroupNorm(8, hidden_size)
- elif norm_type == 'ln':
- norm = LayerNorm(hidden_size, dim=1, eps=ln_eps)
- self.last_norm = norm
- self.post_net1 = nn.Conv1d(hidden_size, out_dims, kernel_size=post_net_kernel,
- padding=post_net_kernel // 2)
- if init_weights:
- self.apply(init_weights_func)
-
- def forward(self, x, nonpadding=None):
- """
-
- :param x: [B, T, H]
- :return: [B, T, H]
- """
- if self.is_BTC:
- x = x.transpose(1, 2)
- if nonpadding is None:
- nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
- elif self.is_BTC:
- nonpadding = nonpadding.transpose(1, 2)
- x = self.res_blocks(x) * nonpadding
- x = self.last_norm(x) * nonpadding
- x = self.post_net1(x) * nonpadding
- if self.is_BTC:
- x = x.transpose(1, 2)
- return x
-
-
-class TextConvEncoder(ConvBlocks):
- def __init__(self, dict_size, hidden_size, out_dims, dilations, kernel_size,
- norm_type='ln', layers_in_block=2, c_multiple=2,
- dropout=0.0, ln_eps=1e-5, init_weights=True, num_layers=None, post_net_kernel=3):
- super().__init__(hidden_size, out_dims, dilations, kernel_size,
- norm_type, layers_in_block, c_multiple,
- dropout, ln_eps, init_weights, num_layers=num_layers,
- post_net_kernel=post_net_kernel)
- self.embed_tokens = Embedding(dict_size, hidden_size, 0)
- self.embed_scale = math.sqrt(hidden_size)
-
- def forward(self, txt_tokens):
- """
-
- :param txt_tokens: [B, T]
- :return: {
- 'encoder_out': [B x T x C]
- }
- """
- x = self.embed_scale * self.embed_tokens(txt_tokens)
- return super().forward(x)
-
-
-class ConditionalConvBlocks(ConvBlocks):
- def __init__(self, hidden_size, c_cond, c_out, dilations, kernel_size,
- norm_type='ln', layers_in_block=2, c_multiple=2,
- dropout=0.0, ln_eps=1e-5, init_weights=True, is_BTC=True, num_layers=None):
- super().__init__(hidden_size, c_out, dilations, kernel_size,
- norm_type, layers_in_block, c_multiple,
- dropout, ln_eps, init_weights, is_BTC=False, num_layers=num_layers)
- self.g_prenet = nn.Conv1d(c_cond, hidden_size, 3, padding=1)
- self.is_BTC_ = is_BTC
- if init_weights:
- self.g_prenet.apply(init_weights_func)
-
- def forward(self, x, cond, nonpadding=None):
- if self.is_BTC_:
- x = x.transpose(1, 2)
- cond = cond.transpose(1, 2)
- if nonpadding is not None:
- nonpadding = nonpadding.transpose(1, 2)
- if nonpadding is None:
- nonpadding = x.abs().sum(1)[:, None]
- x = x + self.g_prenet(cond)
- x = x * nonpadding
- x = super(ConditionalConvBlocks, self).forward(x) # input needs to be BTC
- if self.is_BTC_:
- x = x.transpose(1, 2)
- return x
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py
deleted file mode 100644
index bb2841dd4e28201db8b5bd4a215e1b8b9a60d25a..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import numpy as np
-import torch.nn.functional as F
-from torch import nn
-from .model import MLPLayers
-
-
-class LinearProbe(nn.Module):
- def __init__(self, model, mlp, freeze, in_ch, out_ch, act=None):
- """
- Args:
- model: nn.Module
- mlp: bool, if True, then use the MLP layer as the linear probe module
- freeze: bool, if True, then freeze all the CLAP model's layers when training the linear probe
- in_ch: int, the output channel from CLAP model
- out_ch: int, the output channel from linear probe (class_num)
- act: torch.nn.functional, the activation function before the loss function
- """
- super().__init__()
- in_ch = 512
- self.clap_model = model
- self.clap_model.text_branch = None # to save memory
- self.freeze = freeze
- if mlp:
- self.lp_layer = MLPLayers(units=[in_ch, in_ch * 2, out_ch])
- else:
- self.lp_layer = nn.Linear(in_ch, out_ch)
-
- if self.freeze:
- for param in self.clap_model.parameters():
- param.requires_grad = False
-
- if act == 'None':
- self.act = None
- elif act == 'relu':
- self.act = nn.ReLU()
- elif act == 'elu':
- self.act = nn.ELU()
- elif act == 'prelu':
- self.act = nn.PReLU(num_parameters=in_ch)
- elif act == 'softmax':
- self.act = nn.Softmax(dim=-1)
- elif act == 'sigmoid':
- self.act = nn.Sigmoid()
-
- def forward(self, x, mix_lambda=None, device=None):
- """
- Args:
- x: waveform, torch.tensor [batch, t_samples] / batch of mel_spec and longer list
- mix_lambda: torch.tensor [batch], the mixup lambda
- Returns:
- class_prob: torch.tensor [batch, class_num]
-
- """
- # keep the frozen CLAP model in eval mode so batchnorm layers do not update their running statistics
- if self.freeze:
- self.clap_model.eval()
-
- x = self.clap_model.audio_projection(
- self.clap_model.audio_branch(x, mixup_lambda=mix_lambda, device=device)["embedding"])
- out = self.lp_layer(x)
- if self.act is not None:
- out = self.act(out)
- return out
diff --git a/spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py b/spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py
deleted file mode 100644
index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py
+++ /dev/null
@@ -1,189 +0,0 @@
-"""Utils for monoDepth."""
-import sys
-import re
-import numpy as np
-import cv2
-import torch
-
-
-def read_pfm(path):
- """Read pfm file.
-
- Args:
- path (str): path to file
-
- Returns:
- tuple: (data, scale)
- """
- with open(path, "rb") as file:
-
- color = None
- width = None
- height = None
- scale = None
- endian = None
-
- header = file.readline().rstrip()
- if header.decode("ascii") == "PF":
- color = True
- elif header.decode("ascii") == "Pf":
- color = False
- else:
- raise Exception("Not a PFM file: " + path)
-
- dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii"))
- if dim_match:
- width, height = list(map(int, dim_match.groups()))
- else:
- raise Exception("Malformed PFM header.")
-
- scale = float(file.readline().decode("ascii").rstrip())
- if scale < 0:
- # little-endian
- endian = "<"
- scale = -scale
- else:
- # big-endian
- endian = ">"
-
- data = np.fromfile(file, endian + "f")
- shape = (height, width, 3) if color else (height, width)
-
- data = np.reshape(data, shape)
- data = np.flipud(data)
-
- return data, scale
-
-
-def write_pfm(path, image, scale=1):
- """Write pfm file.
-
- Args:
- path (str): path to file
- image (array): data
- scale (int, optional): Scale. Defaults to 1.
- """
-
- with open(path, "wb") as file:
- color = None
-
- if image.dtype.name != "float32":
- raise Exception("Image dtype must be float32.")
-
- image = np.flipud(image)
-
- if len(image.shape) == 3 and image.shape[2] == 3: # color image
- color = True
- elif (
- len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1
- ): # greyscale
- color = False
- else:
- raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
-
- file.write("PF\n" if color else "Pf\n".encode())
- file.write("%d %d\n".encode() % (image.shape[1], image.shape[0]))
-
- endian = image.dtype.byteorder
-
- if endian == "<" or endian == "=" and sys.byteorder == "little":
- scale = -scale
-
- file.write("%f\n".encode() % scale)
-
- image.tofile(file)
-
-
-def read_image(path):
- """Read image and output RGB image (0-1).
-
- Args:
- path (str): path to file
-
- Returns:
- array: RGB image (0-1)
- """
- img = cv2.imread(path)
-
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
-
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
-
- return img
-
-
-def resize_image(img):
- """Resize image and make it fit for network.
-
- Args:
- img (array): image
-
- Returns:
- tensor: data ready for network
- """
- height_orig = img.shape[0]
- width_orig = img.shape[1]
-
- if width_orig > height_orig:
- scale = width_orig / 384
- else:
- scale = height_orig / 384
-
- height = (np.ceil(height_orig / scale / 32) * 32).astype(int)
- width = (np.ceil(width_orig / scale / 32) * 32).astype(int)
-
- img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
-
- img_resized = (
- torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float()
- )
- img_resized = img_resized.unsqueeze(0)
-
- return img_resized
-
-
-def resize_depth(depth, width, height):
- """Resize depth map and bring to CPU (numpy).
-
- Args:
- depth (tensor): depth
- width (int): image width
- height (int): image height
-
- Returns:
- array: processed depth
- """
- depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
-
- depth_resized = cv2.resize(
- depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC
- )
-
- return depth_resized
-
-def write_depth(path, depth, bits=1):
- """Write depth map to pfm and png file.
-
- Args:
- path (str): filepath without extension
- depth (array): depth
- """
- write_pfm(path + ".pfm", depth.astype(np.float32))
-
- depth_min = depth.min()
- depth_max = depth.max()
-
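- # linearly rescale depth to [0, max_val] for the chosen bit depth; write zeros if the map is (near-)constant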
- max_val = (2**(8*bits))-1
-
- if depth_max - depth_min > np.finfo("float").eps:
- out = max_val * (depth - depth_min) / (depth_max - depth_min)
- else:
- out = np.zeros(depth.shape, dtype=depth.dtype)
-
- if bits == 1:
- cv2.imwrite(path + ".png", out.astype("uint8"))
- elif bits == 2:
- cv2.imwrite(path + ".png", out.astype("uint16"))
-
- return
diff --git a/spaces/AIatUIUC/CodeLATS/generators/parse.py b/spaces/AIatUIUC/CodeLATS/generators/parse.py
deleted file mode 100644
index c4e925f38f5cb2cf5afdbe804bf9c075b5f4782b..0000000000000000000000000000000000000000
--- a/spaces/AIatUIUC/CodeLATS/generators/parse.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import re
-from typing import Optional
-
-
-def parse_code_block(string: str, lang: str) -> Optional[str]:
- code_pattern = fr"```{lang}\n(.*?)\n```"
- match = re.search(code_pattern, string, re.DOTALL)
-
- if match:
- return match.group(1)
-
- generic_code_pattern = r"```\n(.*?)\n```"
- match = re.search(generic_code_pattern, string, re.DOTALL)
-
- if match:
- return match.group(1)
-
- return parse_first_func(string, lang)
-
-
-def parse_first_func(code: str, lang: str) -> Optional[str]:
- assert lang == "python", "Only python is supported for now. TODO: Rust"
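- # heuristic: keep everything from the first top-level "def " up to the first blank line that follows a return statement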
- code_lines = code.split("\n")
- def_i = -1
- last_i = 0
- got_return = False
- for i, line in enumerate(code_lines):
- if line.startswith("def "):
- if def_i == -1:
- def_i = i
- else:
- break
- elif "return" in line and def_i != -1:
- got_return = True
- if line == "" and def_i != -1 and got_return:
- last_i = i
- break
-
- if last_i == 0:
- last_i = len(code_lines) - 1
-
- if def_i == -1:
- return None
-
- return "\n".join(code_lines[def_i:last_i+1]).rstrip("[/PYTHON]")
-
-
-def add_code_block(string: str, lang: str) -> str:
- return f"```{lang}\n{string}\n```"
diff --git a/spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py b/spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py
deleted file mode 100644
index 1b72e4b1226992226dfdad4200a9b9973e658929..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py
+++ /dev/null
@@ -1,293 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-
-from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \
- extract_into_tensor
-
-
-class DDIMSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- if attr.device != torch.device("cuda"):
- attr = attr.to(torch.device("cuda"))
- setattr(self, name, attr)
-
- def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
- self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps, verbose=verbose)
- alphas_cumprod = self.model.alphas_cumprod
- assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer('betas', to_torch(self.model.betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta, verbose=verbose)
- self.register_buffer('ddim_sigmas', ddim_sigmas)
- self.register_buffer('ddim_alphas', ddim_alphas)
- self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
- self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
- 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
- self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
- @torch.no_grad()
- def sample(self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.,
- mask=None,
- x0=None,
- temperature=1.,
- noise_dropout=0.,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.,
- unconditional_conditioning=None,
- features_adapter=None,
- append_to_context=None,
- cond_tau=0.4,
- style_cond_tau=1.0,
- # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- **kwargs
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- cbs = conditioning[list(conditioning.keys())[0]].shape[0]
- if cbs != batch_size:
- print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
- else:
- if conditioning.shape[0] != batch_size:
- print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- print(f'Data shape for DDIM sampling is {size}, eta {eta}')
-
- samples, intermediates = self.ddim_sampling(conditioning, size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask, x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- features_adapter=features_adapter,
- append_to_context=append_to_context,
- cond_tau=cond_tau,
- style_cond_tau=style_cond_tau,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(self, cond, shape,
- x_T=None, ddim_use_original_steps=False,
- callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, log_every_t=100,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None, features_adapter=None,
- append_to_context=None, cond_tau=0.4, style_cond_tau=1.0):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
- time_range = reversed(range(0, timesteps)) if ddim_use_original_steps else np.flip(timesteps)
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
-
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
- img = img_orig * mask + (1. - mask) * img
-
- outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised, temperature=temperature,
- noise_dropout=noise_dropout, score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- features_adapter=None if index < int(
- (1 - cond_tau) * total_steps) else features_adapter,
- append_to_context=None if index < int(
- (1 - style_cond_tau) * total_steps) else append_to_context,
- )
- img, pred_x0 = outs
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates['x_inter'].append(img)
- intermediates['pred_x0'].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None, features_adapter=None,
- append_to_context=None):
- b, *_, device = *x.shape, x.device
-
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
- if append_to_context is not None:
- model_output = self.model.apply_model(x, t, torch.cat([c, append_to_context], dim=1),
- features_adapter=features_adapter)
- else:
- model_output = self.model.apply_model(x, t, c, features_adapter=features_adapter)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- if isinstance(c, dict):
- assert isinstance(unconditional_conditioning, dict)
- c_in = dict()
- for k in c:
- if isinstance(c[k], list):
- c_in[k] = [torch.cat([
- unconditional_conditioning[k][i],
- c[k][i]]) for i in range(len(c[k]))]
- else:
- c_in[k] = torch.cat([
- unconditional_conditioning[k],
- c[k]])
- elif isinstance(c, list):
- c_in = list()
- assert isinstance(unconditional_conditioning, list)
- for i in range(len(c)):
- c_in.append(torch.cat([unconditional_conditioning[i], c[i]]))
- else:
- if append_to_context is not None:
- pad_len = append_to_context.size(1)
- new_unconditional_conditioning = torch.cat(
- [unconditional_conditioning, unconditional_conditioning[:, -pad_len:, :]], dim=1)
- new_c = torch.cat([c, append_to_context], dim=1)
- c_in = torch.cat([new_unconditional_conditioning, new_c])
- else:
- c_in = torch.cat([unconditional_conditioning, c])
- model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in, features_adapter=features_adapter).chunk(2)
- model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond)
-
- if self.model.parameterization == "v":
- e_t = self.model.predict_eps_from_z_and_v(x, t, model_output)
- else:
- e_t = model_output
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps", 'not implemented'
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device)
-
- # current prediction for x_0
- if self.model.parameterization != "v":
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- else:
- pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output)
-
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
- # direction pointing to x_t
- dir_xt = (1. - a_prev - sigma_t ** 2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
- return x_prev, pred_x0
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
- return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
- extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)
-
- @torch.no_grad()
- def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- use_original_steps=False):
-
- timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
- x_dec = x_latent
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
- x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- return x_dec
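The sampler above ends with the `stochastic_encode`/`decode` pair, which together give the usual DDIM img2img flow: noise a clean latent up to an intermediate step in closed form, then run the reverse DDIM loop from that step. A minimal usage sketch of that pair — `sampler`, `x0` (a batch of latents), `cond` and `uncond` are assumed to already exist, and the DDIM schedule is assumed to have been prepared beforehand:

```python
import torch

# Hypothetical inputs: `sampler` is an instance of the DDIM sampler above with its
# schedule already set up; `x0` is a (B, C, H, W) latent batch; `cond`/`uncond`
# are the conditional and unconditional conditioning tensors.
t_enc = 25  # number of DDIM steps' worth of noise to add (must be < total DDIM steps)
t = torch.full((x0.shape[0],), t_enc, device=x0.device, dtype=torch.long)

with torch.no_grad():
    z_noisy = sampler.stochastic_encode(x0, t)      # jump straight to q(x_t | x_0)
    z_out = sampler.decode(
        z_noisy, cond, t_start=t_enc,
        unconditional_guidance_scale=7.5,
        unconditional_conditioning=uncond,
    )                                               # reverse DDIM loop back toward x_0
```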
diff --git a/spaces/AdithyaSNair/Dog_breed_predictor/README.md b/spaces/AdithyaSNair/Dog_breed_predictor/README.md
deleted file mode 100644
index 96a9afb4a20da7e5e1d2495240e3209336a4336d..0000000000000000000000000000000000000000
--- a/spaces/AdithyaSNair/Dog_breed_predictor/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Dog Breed Predictor
-emoji: 🏆
-colorFrom: indigo
-colorTo: green
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js
deleted file mode 100644
index 0aba4335bee173e28c40e7fca4b14ea8529f3ac5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js
+++ /dev/null
@@ -1,20 +0,0 @@
-import TouchCursor from './touchcursor.js';
-
-class TouchCursorPlugin extends Phaser.Plugins.BasePlugin {
-
- constructor(pluginManager) {
- super(pluginManager);
- }
-
- start() {
- var eventEmitter = this.game.events;
- eventEmitter.on('destroy', this.destroy, this);
- }
-
- add(gameObject, config) {
- return new TouchCursor(gameObject, config);
- }
-
-}
-
-export default TouchCursorPlugin;
\ No newline at end of file
diff --git a/spaces/Alesteba/NeRF_ficus-pxl/config.py b/spaces/Alesteba/NeRF_ficus-pxl/config.py
deleted file mode 100644
index 9f062ebe3532b740155f5f86b93659dcec49d565..0000000000000000000000000000000000000000
--- a/spaces/Alesteba/NeRF_ficus-pxl/config.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import streamlit as st
-import tensorflow as tf
-import numpy as np
-
-# Setting random seed to obtain reproducible results.
-tf.random.set_seed(42)
-
-# Initialize global variables.
-AUTO = tf.data.AUTOTUNE
-BATCH_SIZE = 1
-NUM_SAMPLES = 32
-POS_ENCODE_DIMS = 16
-EPOCHS = 30
-H = 50
-W = 50
-focal = 0.6911112070083618
\ No newline at end of file
diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h
deleted file mode 100644
index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000
--- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h
+++ /dev/null
@@ -1,172 +0,0 @@
-
-// jpge.h - C++ class for JPEG compression.
-// Public domain, Rich Geldreich
-// Alex Evans: Added RGBA support, linear memory allocator.
-#ifndef JPEG_ENCODER_H
-#define JPEG_ENCODER_H
-
-#include <stdint.h>
-
-namespace jpge
-{
- typedef unsigned char uint8;
- typedef signed short int16;
- typedef signed int int32;
- typedef unsigned short uint16;
- typedef unsigned int uint32;
- typedef unsigned int uint;
-
- // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
- enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
-
- // JPEG compression parameters structure.
- struct params
- {
- inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
-
- inline bool check_valid() const
- {
- if ((m_quality < 1) || (m_quality > 100)) return false;
- if ((uint)m_subsampling > (uint)H2V2) return false;
- return true;
- }
-
- // Quality: 1-100, higher is better. Typical values are around 50-95.
- int m_quality;
-
- // m_subsampling:
- // 0 = Y (grayscale) only
- // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
- // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
- // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
- subsampling_t m_subsampling;
-
- // Disables CbCr discrimination - only intended for testing.
- // If true, the Y quantization table is also used for the CbCr channels.
- bool m_no_chroma_discrim_flag;
-
- bool m_two_pass_flag;
- };
-
- // Writes JPEG image to a file.
- // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
- bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Writes JPEG image to memory buffer.
- // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
- // If return value is true, buf_size will be set to the size of the compressed data.
- bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
- // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
- class output_stream
- {
- public:
- virtual ~output_stream() { };
- virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
- template <class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
- };
-
- // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
- class jpeg_encoder
- {
- public:
- jpeg_encoder();
- ~jpeg_encoder();
-
- // Initializes the compressor.
- // pStream: The stream object to use for writing compressed data.
- // params - Compression parameters structure, defined above.
- // width, height - Image dimensions.
- // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
- // Returns false on out of memory or if a stream write fails.
- bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
-
- const params &get_params() const { return m_params; }
-
- // Deinitializes the compressor, freeing any allocated memory. May be called at any time.
- void deinit();
-
- uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
- inline uint get_cur_pass() { return m_pass_num; }
-
- // Call this method with each source scanline.
- // width * src_channels bytes per scanline is expected (RGB or Y format).
- // You must call with NULL after all scanlines are processed to finish compression.
- // Returns false on out of memory or if a stream write fails.
- bool process_scanline(const void* pScanline);
-
- private:
- jpeg_encoder(const jpeg_encoder &);
- jpeg_encoder &operator =(const jpeg_encoder &);
-
- typedef int32 sample_array_t;
-
- output_stream *m_pStream;
- params m_params;
- uint8 m_num_components;
- uint8 m_comp_h_samp[3], m_comp_v_samp[3];
- int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
- int m_image_x_mcu, m_image_y_mcu;
- int m_image_bpl_xlt, m_image_bpl_mcu;
- int m_mcus_per_row;
- int m_mcu_x, m_mcu_y;
- uint8 *m_mcu_lines[16];
- uint8 m_mcu_y_ofs;
- sample_array_t m_sample_array[64];
- int16 m_coefficient_array[64];
- int32 m_quantization_tables[2][64];
- uint m_huff_codes[4][256];
- uint8 m_huff_code_sizes[4][256];
- uint8 m_huff_bits[4][17];
- uint8 m_huff_val[4][256];
- uint32 m_huff_count[4][256];
- int m_last_dc_val[3];
- enum { JPGE_OUT_BUF_SIZE = 2048 };
- uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
- uint8 *m_pOut_buf;
- uint m_out_buf_left;
- uint32 m_bit_buffer;
- uint m_bits_in;
- uint8 m_pass_num;
- bool m_all_stream_writes_succeeded;
-
- void optimize_huffman_table(int table_num, int table_len);
- void emit_byte(uint8 i);
- void emit_word(uint i);
- void emit_marker(int marker);
- void emit_jfif_app0();
- void emit_dqt();
- void emit_sof();
- void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
- void emit_dhts();
- void emit_sos();
- void emit_markers();
- void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
- void compute_quant_table(int32 *dst, int16 *src);
- void adjust_quant_table(int32 *dst, int32 *src);
- void first_pass_init();
- bool second_pass_init();
- bool jpg_open(int p_x_res, int p_y_res, int src_channels);
- void load_block_8_8_grey(int x);
- void load_block_8_8(int x, int y, int c);
- void load_block_16_8(int x, int c);
- void load_block_16_8_8(int x, int c);
- void load_quantized_coefficients(int component_num);
- void flush_output_buffer();
- void put_bits(uint bits, uint len);
- void code_coefficients_pass_one(int component_num);
- void code_coefficients_pass_two(int component_num);
- void code_block(int component_num);
- void process_mcu_row();
- bool terminate_pass_one();
- bool terminate_pass_two();
- bool process_end_of_image();
- void load_mcu(const void* src);
- void clear();
- void init();
- };
-
-} // namespace jpge
-
-#endif // JPEG_ENCODER
\ No newline at end of file
diff --git "a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py"
deleted file mode 100644
index a117fb3c0668457b30e63373c6ab8d85281ee044..0000000000000000000000000000000000000000
--- "a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py"
+++ /dev/null
@@ -1,127 +0,0 @@
-from predict import predict_no_ui
-from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
-fast_debug = False
-
-
-def 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
- import time, os
- # pip install python-docx   (used for .docx files, cross-platform)
- # pip install pywin32       (used for .doc files, Windows only)
-
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- if fp.split(".")[-1] == "docx":
- from docx import Document
- doc = Document(fp)
- file_content = "\n".join([para.text for para in doc.paragraphs])
- else:
- import win32com.client
- word = win32com.client.Dispatch("Word.Application")
- word.visible = False
- # Open the file
- print('fp', os.getcwd())
- doc = word.Documents.Open(os.getcwd() + '/' + fp)
- # file_content = doc.Content.Text
- doc = word.ActiveDocument
- file_content = doc.Range().Text
- doc.Close()
- word.Quit()
-
- print(file_content)
-
- prefix = "接下来请你逐文件分析下面的论文文件," if index == 0 else ""
- # Filenames under private_upload are often garbled after unzipping (rar/7z are fine), so only the article content is analyzed and the filename is not passed in
- i_say = prefix + f'请对下面的文章片段用中英文做概述,文件名是{os.path.relpath(fp, project_folder)},' \
- f'文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 假设你是论文审稿专家,请对下面的文章片段做概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
- yield chatbot, history, '正常'
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature,
- history=[])  # with timeout countdown
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user);
- history.append(gpt_say)
- yield chatbot, history, msg
- if not fast_debug: time.sleep(2)
-
- """
- # Enable as needed
- i_say = f'根据你上述的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一篇英文的。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield chatbot, history, '正常'
-
-
- i_say = f'我想让你做一个论文写作导师。您的任务是使用人工智能工具(例如自然语言处理)提供有关如何改进其上述文章的反馈。' \
- f'您还应该利用您在有效写作技巧方面的修辞知识和经验来建议作者可以更好地以书面形式表达他们的想法和想法的方法。' \
- f'根据你之前的分析,提出建议'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield chatbot, history, '正常'
-
- """
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature,
- history=history)  # with timeout countdown
-
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say)
- history.append(gpt_say)
- yield chatbot, history, msg
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
- yield chatbot, history, msg
-
-
-@CatchException
-def 总结word文档(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
- import glob, os
-
- # Basic info: purpose and contributors
- chatbot.append([
- "函数插件功能?",
- "批量总结Word文档。函数插件贡献者: JasonGuo1"])
- yield chatbot, history, '正常'
-
- # Try to import dependencies; if any are missing, suggest how to install them
- try:
- from docx import Document
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
- yield chatbot, history, '正常'
- return
-
- # Clear the history to avoid overflowing the input
- history = []
-
- # Validate the input argument; exit immediately if none is given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield chatbot, history, '正常'
- return
-
- # Collect the list of files to process
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)]
- # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-
- # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}")
- yield chatbot, history, '正常'
- return
-
- # Start the actual task
- yield from 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
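Stripped of the chatbot plumbing, the core of the plugin above is: glob the Word files, pull the plain text out of each one, and send it to the model chunk by chunk. A condensed sketch of just the cross-platform `.docx` path (the folder name and prompt wording here are illustrative, not from the original file):

```python
import glob
from docx import Document  # pip install python-docx

project_folder = "my_papers"  # hypothetical input folder
file_manifest = glob.glob(f"{project_folder}/**/*.docx", recursive=True)

for fp in file_manifest:
    doc = Document(fp)
    file_content = "\n".join(para.text for para in doc.paragraphs)
    prompt = f"Please summarize the following article fragment:\n{file_content}"
    # `prompt` would then be handed to the GPT call (predict_no_ui_but_counting_down above)
```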
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py
deleted file mode 100644
index 6727f2bf0857c1f4e0d50de363de75e7b8d4de50..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import os
-
-import torch
-from torch.nn import functional as F
-
-
-module_path = os.path.dirname(__file__)
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- out = upfirdn2d_native(
- input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]
- )
-
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, channel, in_h, in_w = input.shape
- input = input.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0),
- max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
-
- return out.view(-1, channel, out_h, out_w)
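`upfirdn2d_native` above upsamples by zero insertion, pads, applies the flipped FIR kernel with an ordinary convolution, and then downsamples by striding, so the output side length is `(in * up + pad0 + pad1 - k) // down + 1`. A quick shape check of this pure-PyTorch fallback, using example values of my own:

```python
import torch

x = torch.randn(1, 3, 4, 4)      # (batch, channels, height, width)
k = torch.ones(3, 3) / 9.0       # simple 3x3 box filter
y = upfirdn2d(x, k, up=2, down=2, pad=(1, 1))

# (4 * 2 + 1 + 1 - 3) // 2 + 1 = 4, so the spatial size is preserved in this case
assert y.shape == (1, 3, 4, 4)
```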
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
deleted file mode 100644
index a95015a2b850dcbd1f69b68856cdb2d79e40d767..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
+++ /dev/null
@@ -1,1020 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import math
-import warnings
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-from torch.nn import functional as F
-from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
-
-from ...image_processor import VaeImageProcessor
-from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...models.attention_processor import Attention
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import logging, randn_tensor, replace_example_docstring
-from ..pipeline_utils import DiffusionPipeline
-from . import StableDiffusionPipelineOutput
-from .safety_checker import StableDiffusionSafetyChecker
-
-
-logger = logging.get_logger(__name__)
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import torch
- >>> from diffusers import StableDiffusionAttendAndExcitePipeline
-
- >>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
- ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
- ... ).to("cuda")
-
-
- >>> prompt = "a cat and a frog"
-
- >>> # use get_indices function to find out indices of the tokens you want to alter
- >>> pipe.get_indices(prompt)
- {0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'}
-
- >>> token_indices = [2, 5]
- >>> seed = 6141
- >>> generator = torch.Generator("cuda").manual_seed(seed)
-
- >>> images = pipe(
- ... prompt=prompt,
- ... token_indices=token_indices,
- ... guidance_scale=7.5,
- ... generator=generator,
- ... num_inference_steps=50,
- ... max_iter_to_alter=25,
- ... ).images
-
- >>> image = images[0]
- >>> image.save(f"../images/{prompt}_{seed}.png")
- ```
-"""
-
-
-class AttentionStore:
- @staticmethod
- def get_empty_store():
- return {"down": [], "mid": [], "up": []}
-
- def __call__(self, attn, is_cross: bool, place_in_unet: str):
- if self.cur_att_layer >= 0 and is_cross:
- if attn.shape[1] == np.prod(self.attn_res):
- self.step_store[place_in_unet].append(attn)
-
- self.cur_att_layer += 1
- if self.cur_att_layer == self.num_att_layers:
- self.cur_att_layer = 0
- self.between_steps()
-
- def between_steps(self):
- self.attention_store = self.step_store
- self.step_store = self.get_empty_store()
-
- def get_average_attention(self):
- average_attention = self.attention_store
- return average_attention
-
- def aggregate_attention(self, from_where: List[str]) -> torch.Tensor:
- """Aggregates the attention across the different layers and heads at the specified resolution."""
- out = []
- attention_maps = self.get_average_attention()
- for location in from_where:
- for item in attention_maps[location]:
- cross_maps = item.reshape(-1, self.attn_res[0], self.attn_res[1], item.shape[-1])
- out.append(cross_maps)
- out = torch.cat(out, dim=0)
- out = out.sum(0) / out.shape[0]
- return out
-
- def reset(self):
- self.cur_att_layer = 0
- self.step_store = self.get_empty_store()
- self.attention_store = {}
-
- def __init__(self, attn_res):
- """
- Initialize an empty AttentionStore :param step_index: used to visualize only a specific step in the diffusion
- process
- """
- self.num_att_layers = -1
- self.cur_att_layer = 0
- self.step_store = self.get_empty_store()
- self.attention_store = {}
- self.curr_step_index = 0
- self.attn_res = attn_res
-
-
-class AttendExciteAttnProcessor:
- def __init__(self, attnstore, place_in_unet):
- super().__init__()
- self.attnstore = attnstore
- self.place_in_unet = place_in_unet
-
- def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None):
- batch_size, sequence_length, _ = hidden_states.shape
- attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
-
- query = attn.to_q(hidden_states)
-
- is_cross = encoder_hidden_states is not None
- encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
- key = attn.to_k(encoder_hidden_states)
- value = attn.to_v(encoder_hidden_states)
-
- query = attn.head_to_batch_dim(query)
- key = attn.head_to_batch_dim(key)
- value = attn.head_to_batch_dim(value)
-
- attention_probs = attn.get_attention_scores(query, key, attention_mask)
-
- # only need to store attention maps during the Attend and Excite process
- if attention_probs.requires_grad:
- self.attnstore(attention_probs, is_cross, self.place_in_unet)
-
- hidden_states = torch.bmm(attention_probs, value)
- hidden_states = attn.batch_to_head_dim(hidden_states)
-
- # linear proj
- hidden_states = attn.to_out[0](hidden_states)
- # dropout
- hidden_states = attn.to_out[1](hidden_states)
-
- return hidden_states
-
-
-class StableDiffusionAttendAndExcitePipeline(DiffusionPipeline, TextualInversionLoaderMixin):
- r"""
- Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder ([`~transformers.CLIPTextModel`]):
- Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- tokenizer ([`~transformers.CLIPTokenizer`]):
- A `CLIPTokenizer` to tokenize text.
- unet ([`UNet2DConditionModel`]):
- A `UNet2DConditionModel` to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
- about a model's potential harms.
- feature_extractor ([`~transformers.CLIPImageProcessor`]):
- A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
- """
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
-
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt=None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
- # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = text_inputs.attention_mask.to(device)
- else:
- attention_mask = None
-
- prompt_embeds = self.text_encoder(
- text_input_ids.to(device),
- attention_mask=attention_mask,
- )
- prompt_embeds = prompt_embeds[0]
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = uncond_input.attention_mask.to(device)
- else:
- attention_mask = None
-
- negative_prompt_embeds = self.text_encoder(
- uncond_input.input_ids.to(device),
- attention_mask=attention_mask,
- )
- negative_prompt_embeds = negative_prompt_embeds[0]
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
-
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
- def run_safety_checker(self, image, device, dtype):
- if self.safety_checker is None:
- has_nsfw_concept = None
- else:
- if torch.is_tensor(image):
- feature_extractor_input = self.image_processor.postprocess(image, output_type="pil")
- else:
- feature_extractor_input = self.image_processor.numpy_to_pil(image)
- safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device)
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
- )
- return image, has_nsfw_concept
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- warnings.warn(
- "The decode_latents method is deprecated and will be removed in a future version. Please"
- " use VaeImageProcessor instead",
- FutureWarning,
- )
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents, return_dict=False)[0]
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- def check_inputs(
- self,
- prompt,
- indices,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- indices_is_list_ints = isinstance(indices, list) and isinstance(indices[0], int)
- indices_is_list_list_ints = (
- isinstance(indices, list) and isinstance(indices[0], list) and isinstance(indices[0][0], int)
- )
-
- if not indices_is_list_ints and not indices_is_list_list_ints:
- raise TypeError("`indices` must be a list of ints or a list of a list of ints")
-
- if indices_is_list_ints:
- indices_batch_size = 1
- elif indices_is_list_list_ints:
- indices_batch_size = len(indices)
-
- if prompt is not None and isinstance(prompt, str):
- prompt_batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- prompt_batch_size = len(prompt)
- elif prompt_embeds is not None:
- prompt_batch_size = prompt_embeds.shape[0]
-
- if indices_batch_size != prompt_batch_size:
- raise ValueError(
- f"indices batch size must be same as prompt batch size. indices batch size: {indices_batch_size}, prompt batch size: {prompt_batch_size}"
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- @staticmethod
- def _compute_max_attention_per_index(
- attention_maps: torch.Tensor,
- indices: List[int],
- ) -> List[torch.Tensor]:
- """Computes the maximum attention value for each of the tokens we wish to alter."""
- attention_for_text = attention_maps[:, :, 1:-1]
- attention_for_text *= 100
- attention_for_text = torch.nn.functional.softmax(attention_for_text, dim=-1)
-
- # Shift indices since we removed the first token
- indices = [index - 1 for index in indices]
-
- # Extract the maximum values
- max_indices_list = []
- for i in indices:
- image = attention_for_text[:, :, i]
- smoothing = GaussianSmoothing().to(attention_maps.device)
- input = F.pad(image.unsqueeze(0).unsqueeze(0), (1, 1, 1, 1), mode="reflect")
- image = smoothing(input).squeeze(0).squeeze(0)
- max_indices_list.append(image.max())
- return max_indices_list
-
- def _aggregate_and_get_max_attention_per_token(
- self,
- indices: List[int],
- ):
- """Aggregates the attention for each token and computes the max activation value for each token to alter."""
- attention_maps = self.attention_store.aggregate_attention(
- from_where=("up", "down", "mid"),
- )
- max_attention_per_index = self._compute_max_attention_per_index(
- attention_maps=attention_maps,
- indices=indices,
- )
- return max_attention_per_index
-
- @staticmethod
- def _compute_loss(max_attention_per_index: List[torch.Tensor]) -> torch.Tensor:
- """Computes the attend-and-excite loss using the maximum attention value for each token."""
- losses = [max(0, 1.0 - curr_max) for curr_max in max_attention_per_index]
- loss = max(losses)
- return loss
-
- @staticmethod
- def _update_latent(latents: torch.Tensor, loss: torch.Tensor, step_size: float) -> torch.Tensor:
- """Update the latent according to the computed loss."""
- grad_cond = torch.autograd.grad(loss.requires_grad_(True), [latents], retain_graph=True)[0]
- latents = latents - step_size * grad_cond
- return latents
-
- def _perform_iterative_refinement_step(
- self,
- latents: torch.Tensor,
- indices: List[int],
- loss: torch.Tensor,
- threshold: float,
- text_embeddings: torch.Tensor,
- step_size: float,
- t: int,
- max_refinement_steps: int = 20,
- ):
- """
- Performs the iterative latent refinement introduced in the paper. Here, we continuously update the latent code
- according to our loss objective until the given threshold is reached for all tokens.
- """
- iteration = 0
- target_loss = max(0, 1.0 - threshold)
- while loss > target_loss:
- iteration += 1
-
- latents = latents.clone().detach().requires_grad_(True)
- self.unet(latents, t, encoder_hidden_states=text_embeddings).sample
- self.unet.zero_grad()
-
- # Get max activation value for each subject token
- max_attention_per_index = self._aggregate_and_get_max_attention_per_token(
- indices=indices,
- )
-
- loss = self._compute_loss(max_attention_per_index)
-
- if loss != 0:
- latents = self._update_latent(latents, loss, step_size)
-
- logger.info(f"\t Try {iteration}. loss: {loss}")
-
- if iteration >= max_refinement_steps:
- logger.info(f"\t Exceeded max number of iterations ({max_refinement_steps})! ")
- break
-
- # Run one more time but don't compute gradients and update the latents.
- # We just need to compute the new loss - the grad update will occur below
- latents = latents.clone().detach().requires_grad_(True)
- _ = self.unet(latents, t, encoder_hidden_states=text_embeddings).sample
- self.unet.zero_grad()
-
- # Get max activation value for each subject token
- max_attention_per_index = self._aggregate_and_get_max_attention_per_token(
- indices=indices,
- )
- loss = self._compute_loss(max_attention_per_index)
- logger.info(f"\t Finished with loss of: {loss}")
- return loss, latents, max_attention_per_index
-
- def register_attention_control(self):
- attn_procs = {}
- cross_att_count = 0
- for name in self.unet.attn_processors.keys():
- if name.startswith("mid_block"):
- place_in_unet = "mid"
- elif name.startswith("up_blocks"):
- place_in_unet = "up"
- elif name.startswith("down_blocks"):
- place_in_unet = "down"
- else:
- continue
-
- cross_att_count += 1
- attn_procs[name] = AttendExciteAttnProcessor(attnstore=self.attention_store, place_in_unet=place_in_unet)
-
- self.unet.set_attn_processor(attn_procs)
- self.attention_store.num_att_layers = cross_att_count
-
- def get_indices(self, prompt: str) -> Dict[str, int]:
- """Utility function to list the indices of the tokens you wish to alte"""
- ids = self.tokenizer(prompt).input_ids
- indices = {i: tok for tok, i in zip(self.tokenizer.convert_ids_to_tokens(ids), range(len(ids)))}
- return indices
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]],
- token_indices: Union[List[int], List[List[int]]],
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: int = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- max_iter_to_alter: int = 25,
- thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8},
- scale_factor: int = 20,
- attn_res: Optional[Tuple[int]] = (16, 16),
- ):
- r"""
- The call function to the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- token_indices (`List[int]`):
- The token indices to alter with attend-and-excite.
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- A higher guidance scale value encourages the model to generate images closely linked to the text
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide what to not include in image generation. If not defined, you need to
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor is generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
- provided, text embeddings are generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
- not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that calls every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
- [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
- max_iter_to_alter (`int`, *optional*, defaults to `25`):
- Number of denoising steps to apply attend-and-excite. The `max_iter_to_alter` denoising steps are when
- attend-and-excite is applied. For example, if `max_iter_to_alter` is `25` and there are a total of `30`
- denoising steps, the first `25` denoising steps apply attend-and-excite and the last `5` will not.
- thresholds (`dict`, *optional*, defaults to `{0: 0.05, 10: 0.5, 20: 0.8}`):
- Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in.
- scale_factor (`int`, *optional*, default to 20):
- Scale factor to control the step size of each attend-and-excite update.
- attn_res (`tuple`, *optional*, default computed from width and height):
- The 2D resolution of the semantic attention map.
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
- otherwise a `tuple` is returned where the first element is a list with the generated images and the
- second element is a list of `bool`s indicating whether the corresponding generated image contains
- "not-safe-for-work" (nsfw) content.
- """
-
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- token_indices,
- height,
- width,
- callback_steps,
- negative_prompt,
- prompt_embeds,
- negative_prompt_embeds,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
- # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- prompt_embeds = self._encode_prompt(
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- )
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- if attn_res is None:
- attn_res = int(np.ceil(width / 32)), int(np.ceil(height / 32))
- self.attention_store = AttentionStore(attn_res)
- self.register_attention_control()
-
- # default config for step size from original repo
- scale_range = np.linspace(1.0, 0.5, len(self.scheduler.timesteps))
- step_size = scale_factor * np.sqrt(scale_range)
-
- text_embeddings = (
- prompt_embeds[batch_size * num_images_per_prompt :] if do_classifier_free_guidance else prompt_embeds
- )
-
- if isinstance(token_indices[0], int):
- token_indices = [token_indices]
-
- indices = []
-
- for ind in token_indices:
- indices = indices + [ind] * num_images_per_prompt
-
- # 7. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # Attend and excite process
- with torch.enable_grad():
- latents = latents.clone().detach().requires_grad_(True)
- updated_latents = []
- for latent, index, text_embedding in zip(latents, indices, text_embeddings):
- # Forward pass of denoising with text conditioning
- latent = latent.unsqueeze(0)
- text_embedding = text_embedding.unsqueeze(0)
-
- self.unet(
- latent,
- t,
- encoder_hidden_states=text_embedding,
- cross_attention_kwargs=cross_attention_kwargs,
- ).sample
- self.unet.zero_grad()
-
- # Get max activation value for each subject token
- max_attention_per_index = self._aggregate_and_get_max_attention_per_token(
- indices=index,
- )
-
- loss = self._compute_loss(max_attention_per_index=max_attention_per_index)
-
- # If this is an iterative refinement step, verify we have reached the desired threshold for all
- if i in thresholds.keys() and loss > 1.0 - thresholds[i]:
- loss, latent, max_attention_per_index = self._perform_iterative_refinement_step(
- latents=latent,
- indices=index,
- loss=loss,
- threshold=thresholds[i],
- text_embeddings=text_embedding,
- step_size=step_size[i],
- t=t,
- )
-
- # Perform gradient update
- if i < max_iter_to_alter:
- if loss != 0:
- latent = self._update_latent(
- latents=latent,
- loss=loss,
- step_size=step_size[i],
- )
- logger.info(f"Iteration {i} | Loss: {loss:0.4f}")
-
- updated_latents.append(latent)
-
- latents = torch.cat(updated_latents, dim=0)
-
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- ).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 8. Post-processing
- if not output_type == "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
- else:
- image = latents
- has_nsfw_concept = None
-
- if has_nsfw_concept is None:
- do_denormalize = [True] * image.shape[0]
- else:
- do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
-
- image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
-
-
-class GaussianSmoothing(torch.nn.Module):
- """
- Arguments:
- Apply gaussian smoothing on a 1d, 2d or 3d tensor. Filtering is performed seperately for each channel in the input
- using a depthwise convolution.
- channels (int, sequence): Number of channels of the input tensors. Output will
- have this number of channels as well.
- kernel_size (int, sequence): Size of the gaussian kernel. sigma (float, sequence): Standard deviation of the
- gaussian kernel. dim (int, optional): The number of dimensions of the data.
- Default value is 2 (spatial).
- """
-
- # channels=1, kernel_size=kernel_size, sigma=sigma, dim=2
- def __init__(
- self,
- channels: int = 1,
- kernel_size: int = 3,
- sigma: float = 0.5,
- dim: int = 2,
- ):
- super().__init__()
-
- if isinstance(kernel_size, int):
- kernel_size = [kernel_size] * dim
- if isinstance(sigma, float):
- sigma = [sigma] * dim
-
- # The gaussian kernel is the product of the
- # gaussian function of each dimension.
- kernel = 1
- meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size])
- for size, std, mgrid in zip(kernel_size, sigma, meshgrids):
- mean = (size - 1) / 2
- kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2))
-
- # Make sure sum of values in gaussian kernel equals 1.
- kernel = kernel / torch.sum(kernel)
-
- # Reshape to depthwise convolutional weight
- kernel = kernel.view(1, 1, *kernel.size())
- kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1))
-
- self.register_buffer("weight", kernel)
- self.groups = channels
-
- if dim == 1:
- self.conv = F.conv1d
- elif dim == 2:
- self.conv = F.conv2d
- elif dim == 3:
- self.conv = F.conv3d
- else:
- raise RuntimeError("Only 1, 2 and 3 dimensions are supported. Received {}.".format(dim))
-
- def forward(self, input):
- """
- Arguments:
- Apply gaussian filter to input.
- input (torch.Tensor): Input to apply gaussian filter on.
- Returns:
- filtered (torch.Tensor): Filtered output.
- """
- return self.conv(input, weight=self.weight.to(input.dtype), groups=self.groups)
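For reference, the Attend-and-Excite step in the deleted pipeline above smooths each token's cross-attention map with this depthwise Gaussian module before taking its maximum activation. The sketch below reproduces that smoothing step in a self-contained way; the function names are illustrative, and it uses a standard normalized Gaussian rather than the exact exponent in the deleted class.

```python
import torch
import torch.nn.functional as F


def gaussian_kernel_2d(kernel_size: int = 3, sigma: float = 0.5) -> torch.Tensor:
    # Normalized 2D Gaussian built as the outer product of two 1D Gaussians.
    coords = torch.arange(kernel_size, dtype=torch.float32)
    mean = (kernel_size - 1) / 2
    gauss_1d = torch.exp(-0.5 * ((coords - mean) / sigma) ** 2)
    kernel = torch.outer(gauss_1d, gauss_1d)
    return kernel / kernel.sum()


def smooth_attention_map(attn: torch.Tensor, kernel_size: int = 3, sigma: float = 0.5) -> torch.Tensor:
    # Reflection-pad the (H, W) map and convolve it with the Gaussian kernel,
    # mirroring how the pipeline smooths each token's attention before taking the max.
    pad = kernel_size // 2
    kernel = gaussian_kernel_2d(kernel_size, sigma).view(1, 1, kernel_size, kernel_size)
    padded = F.pad(attn.view(1, 1, *attn.shape), (pad, pad, pad, pad), mode="reflect")
    return F.conv2d(padded, kernel.to(attn.dtype)).squeeze()


if __name__ == "__main__":
    attn_map = torch.rand(16, 16)  # stand-in for one token's 16x16 cross-attention map
    print(smooth_attention_map(attn_map).shape)  # torch.Size([16, 16])
```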
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_model.py b/spaces/Awiny/Image2Paragraph/models/grit_model.py
deleted file mode 100644
index a0a55a56277c0ad8c4829bb5e522871f4c211e9b..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_model.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os
-from models.grit_src.image_dense_captions import image_caption_api
-
-class DenseCaptioning():
- def __init__(self, device):
- self.device = device
-
-
- def initialize_model(self):
- pass
-
- def image_dense_caption_debug(self, image_src):
- dense_caption = """
- 1. the broccoli is green, [0, 0, 333, 325];
- 2. a piece of broccoli, [0, 147, 143, 324];
- 3. silver fork on plate, [4, 547, 252, 612];
- """
- return dense_caption
-
- def image_dense_caption(self, image_src):
- dense_caption = image_caption_api(image_src, self.device)
- print('\033[1;35m' + '*' * 100 + '\033[0m')
- print("Step2, Dense Caption:\n")
- print(dense_caption)
- print('\033[1;35m' + '*' * 100 + '\033[0m')
- return dense_caption
-
\ No newline at end of file
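A short, hypothetical usage sketch for the deleted wrapper above; the import path mirrors the file's location in the Space, and the device, image path, and availability of the GRiT weights behind `image_caption_api` are assumptions.

```python
from models.grit_model import DenseCaptioning

captioner = DenseCaptioning(device="cuda")  # or "cpu"
captioner.initialize_model()                # a no-op in the deleted class
caption = captioner.image_dense_caption("example.jpg")  # placeholder image path
print(caption)  # numbered region captions with [x1, y1, x2, y2] boxes
```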
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py
deleted file mode 100644
index a44bedc15e5f0e762fc4d77efd6f1b07c6ff77d0..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .coco import load_coco_json, load_sem_seg, register_coco_instances, convert_to_coco_json
-from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated
-from .lvis import load_lvis_json, register_lvis_instances, get_lvis_instances_meta
-from .pascal_voc import load_voc_instances, register_pascal_voc
-from . import builtin as _builtin # ensure the builtin datasets are registered
-
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
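Since this `__init__.py` re-exports the COCO registration helpers, typical downstream usage looks like the sketch below; the dataset name and paths are placeholders, and detectron2 must be installed.

```python
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "my_dataset_train",                      # name later referenced in cfg.DATASETS.TRAIN
    {},                                      # extra metadata
    "datasets/my_dataset/annotations.json",  # placeholder COCO-format annotation file
    "datasets/my_dataset/images",            # placeholder image root
)
```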
diff --git a/spaces/Bart92/RVC_HF/go-applio-manager-recode.bat b/spaces/Bart92/RVC_HF/go-applio-manager-recode.bat
deleted file mode 100644
index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/go-applio-manager-recode.bat
+++ /dev/null
@@ -1,322 +0,0 @@
-@echo off
-title Applio Installer
-
-::: _ _ _____ _
-::: /\ | (_) | __ \ | |
-::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___
-::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \
-::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/
-::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___|
-::: | | | |
-::: |_| |_|
-:::
-:::
-
-setlocal
-set "branch=applio-recode"
-set "runtime=runtime-recode"
-set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip"
-set "fixesFolder=fixes"
-set "localFixesPy=local_fixes.py"
-set "principal=%cd%"
-set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main"
-set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main"
-
-:menu
-for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
-
-echo [1] Reinstall Applio
-echo [2] Update Applio
-echo [3] Update Applio + Runtime
-echo.
-
-set /p choice=Select an option:
-set choice=%choice: =%
-
-if "%choice%"=="1" (
- cls
- echo Starting Applio Reinstaller...
- echo.
- goto reinstaller
- pause
- cls
- goto menu
-
-)
-
-if "%choice%"=="2" (
- cls
- echo Starting Applio Updater...
- echo.
- goto updater
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="3" (
- cls
- echo Updating Applio + Runtime...
- echo.
- goto updaterRuntime
- pause
- cls
- goto menu
-
-)
-
-cls
-echo Invalid option. Please enter a number from 1 to 3.
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-:reinstaller
-
-echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing.
-echo.
-echo Step-by-step guide: https://rentry.org/appliolocal
-echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe
-echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe
-echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe
-echo Python: Add this path to the user PATH environment variable in Windows: %principal%\runtime\Scripts
-echo.
-pause
-cls
-
-echo Downloading ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Proceeding to download the models...
-echo.
-
-echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
-pause
-cls
-
-echo Downloading models in the assets folder...
-cd "assets"
-echo.
-echo Downloading the "pretrained" folder...
-cd "pretrained"
-curl -LJO "%URL_BASE%/pretrained/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the "pretrained_v2" folder...
-cd "pretrained_v2"
-curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the hubert_base.pt file...
-cd "hubert"
-curl -LJO "%URL_BASE%/hubert_base.pt"
-cd ".."
-echo.
-cls
-
-
-echo Downloading the rmvpe.pt file...
-cd "rmvpe"
-curl -LJO "%URL_BASE%/rmvpe.pt"
-echo.
-cls
-
-echo Downloading the rmvpe.onnx file...
-curl -LJO "%URL_BASE%/rmvpe.onnx"
-cd ".."
-cd ".."
-echo.
-cls
-
-echo Downloading the rest of the large files
-
-echo Downloading the "uvr5_weights" folder...
-cd "uvr5_weights"
-curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the ffmpeg.exe file...
-curl -LJO "%URL_BASE%/ffmpeg.exe"
-echo.
-cls
-
-echo Downloading the ffprobe.exe file...
-curl -LJO "%URL_BASE%/ffprobe.exe"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Downloads completed!
-echo.
-
-echo Checking if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The "%localFixesPy%" file was not found in the "Fixes" folder.
-)
-echo.
-
-echo Fixes Applied!
-echo.
-
-echo Applio has been reinstalled!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updater
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updaterRuntime
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
diff --git a/spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py b/spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py
deleted file mode 100644
index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000
--- a/spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py
+++ /dev/null
@@ -1,443 +0,0 @@
-import numpy as np, parselmouth, torch, pdb, sys, os
-from time import time as ttime
-import torch.nn.functional as F
-import scipy.signal as signal
-import pyworld, os, traceback, faiss, librosa, torchcrepe
-from scipy import signal
-from functools import lru_cache
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav = {}
-
-
-@lru_cache
-def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
- audio = input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-
-def change_rms(data1, sr1, data2, sr2, rate):  # data1 is the input audio, data2 is the output audio, rate is the weight given to data2
-    # print(data1.max(),data2.max())
-    rms1 = librosa.feature.rms(
-        y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
-    )  # one RMS value every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
- rms1 = torch.from_numpy(rms1)
- rms1 = F.interpolate(
- rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.from_numpy(rms2)
- rms2 = F.interpolate(
- rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
- data2 *= (
- torch.pow(rms1, torch.tensor(1 - rate))
- * torch.pow(rms2, torch.tensor(rate - 1))
- ).numpy()
- return data2
-
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
-        self.sr = 16000  # hubert input sample rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * self.x_pad  # padding added before and after each segment
-        self.t_pad_tgt = tgt_sr * self.x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * self.x_query  # search window around each candidate cut point
-        self.t_center = self.sr * self.x_center  # spacing between candidate cut points
-        self.t_max = self.sr * self.x_max  # duration threshold above which the audio is split at cut points
- self.device = config.device
-
- def get_f0(
- self,
- input_audio_path,
- x,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- inp_f0=None,
- ):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
- if filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "crepe":
- model = "full"
- # Pick a batch size that doesn't cause memory errors on your gpu
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- elif f0_method == "rmvpe":
- if hasattr(self, "model_rmvpe") == False:
- from rmvpe import RMVPE
-
- print("loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "rmvpe.pt", is_half=self.is_half, device=self.device
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(int)  # np.int was removed in NumPy 1.24; use the builtin int
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0]) if version == "v1" else logits[0]
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = feats.clone()
- if (
- isinstance(index, type(None)) == False
- and isinstance(big_npy, type(None)) == False
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute(
- 0, 2, 1
- )
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch != None and pitchf != None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
-
- if protect < 0.5 and pitch != None and pitchf != None:
- pitchff = pitchf.clone()
- pitchff[pitchf > 0] = 1
- pitchff[pitchf < 1] = protect
- pitchff = pitchff.unsqueeze(-1)
- feats = feats * pitchff + feats0 * (1 - pitchff)
- feats = feats.to(feats0.dtype)
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch != None and pitchf != None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy()
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=None,
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index) == True
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name") == True:
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(
- input_audio_path,
- audio_pad,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- inp_f0,
- )
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps":
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- if rms_mix_rate != 1:
- audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- audio_opt = librosa.resample(
- audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
- )
- audio_max = np.abs(audio_opt).max() / 0.99
- max_int16 = 32768
- if audio_max > 1:
- max_int16 /= audio_max
- audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
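The `change_rms` helper above rescales the converted audio so its loudness envelope becomes rms1^(1 - rate) * rms2^(rate - 1) times its own: rate = 1 keeps the converted loudness, rate = 0 imposes the source loudness. A simplified NumPy-only re-sketch is shown below; it assumes both signals share one sample rate, and the helper name is hypothetical (the deleted file uses librosa and torch instead).

```python
import numpy as np


def blend_rms_envelope(src: np.ndarray, out: np.ndarray, rate: float, hop: int = 8000) -> np.ndarray:
    # Frame-wise RMS over windows of 2*hop samples, one frame every hop samples.
    def rms(x: np.ndarray) -> np.ndarray:
        frames = np.lib.stride_tricks.sliding_window_view(x, 2 * hop)[::hop]
        return np.sqrt((frames ** 2).mean(axis=1))

    # Interpolate both envelopes to per-sample resolution, then apply rms1^(1-rate) * rms2^(rate-1).
    grid = np.arange(len(out))
    rms_src = np.interp(grid, np.linspace(0, len(out), num=len(rms(src))), rms(src))
    rms_out = np.maximum(np.interp(grid, np.linspace(0, len(out), num=len(rms(out))), rms(out)), 1e-6)
    return out * (rms_src ** (1 - rate)) * (rms_out ** (rate - 1))


if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0, 3, 3 * sr, endpoint=False)
    source = 0.5 * np.sin(2 * np.pi * 220 * t)     # stand-in for the loud input audio
    converted = 0.1 * np.sin(2 * np.pi * 220 * t)  # stand-in for the quieter converted audio
    mixed = blend_rms_envelope(source, converted, rate=0.25)
    print(np.abs(converted).max(), np.abs(mixed).max())  # the mix is pulled toward the source loudness
```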
diff --git a/spaces/BetterAPI/BetterChat_new/src/hooks.server.ts b/spaces/BetterAPI/BetterChat_new/src/hooks.server.ts
deleted file mode 100644
index 04cc75cac042fda3cabd7244584ae9aa5bf2a46f..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/hooks.server.ts
+++ /dev/null
@@ -1,37 +0,0 @@
-import { dev } from "$app/environment";
-import { COOKIE_NAME } from "$env/static/private";
-import type { Handle } from "@sveltejs/kit";
-import { PUBLIC_GOOGLE_ANALYTICS_ID } from "$env/static/public";
-import { addYears } from "date-fns";
-
-export const handle: Handle = async ({ event, resolve }) => {
- const token = event.cookies.get(COOKIE_NAME);
-
- event.locals.sessionId = token || crypto.randomUUID();
-
- // Refresh cookie expiration date
- event.cookies.set(COOKIE_NAME, event.locals.sessionId, {
- path: "/",
- // So that it works inside the space's iframe
- sameSite: dev ? "lax" : "none",
- secure: !dev,
- httpOnly: true,
- expires: addYears(new Date(), 1),
- });
-
- let replaced = false;
-
- const response = await resolve(event, {
- transformPageChunk: (chunk) => {
- // For some reason, Sveltekit doesn't let us load env variables from .env in the app.html template
- if (replaced || !chunk.html.includes("%gaId%")) {
- return chunk.html;
- }
- replaced = true;
-
- return chunk.html.replace("%gaId%", PUBLIC_GOOGLE_ANALYTICS_ID);
- },
- });
-
- return response;
-};
diff --git a/spaces/BillBojangeles2000/WikiGPT/README.md b/spaces/BillBojangeles2000/WikiGPT/README.md
deleted file mode 100644
index d7b841426bcaffad9d751fd2134aef2fc02d1812..0000000000000000000000000000000000000000
--- a/spaces/BillBojangeles2000/WikiGPT/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Karki TEST
-emoji: 🐠
-colorFrom: blue
-colorTo: red
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py
deleted file mode 100644
index e7eef0005151406d7b74433f49075a8bb5a213f9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from .boxes import Boxes, BoxMode, pairwise_iou
-from .image_list import ImageList
-from .instances import Instances
-from .keypoints import Keypoints, heatmaps_to_keypoints
-from .masks import BitMasks, PolygonMasks, rasterize_polygons_within_box, polygons_to_bitmask
-from .rotated_boxes import RotatedBoxes
-from .rotated_boxes import pairwise_iou as pairwise_iou_rotated
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
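A quick sketch of how the structures re-exported above are typically used (detectron2 must be installed; the boxes are toy values).

```python
import torch
from detectron2.structures import Boxes, pairwise_iou

boxes_a = Boxes(torch.tensor([[0.0, 0.0, 10.0, 10.0]]))
boxes_b = Boxes(torch.tensor([[5.0, 5.0, 15.0, 15.0], [20.0, 20.0, 30.0, 30.0]]))
print(pairwise_iou(boxes_a, boxes_b))  # (1, 2) IoU matrix; roughly [[0.143, 0.0]]
```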
diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h b/spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h
deleted file mode 100644
index 9cf8640cab158b87bc806976b6f10d1ec0a6e7c0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h
+++ /dev/null
@@ -1,116 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file sync_pool.h
- * \brief A mutex-synchronized version of \p unsynchronized_pool_resource.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-#if THRUST_CPP_DIALECT >= 2011
-
-#include <mutex>
-
-#include <thrust/mr/pool.h>
-
-namespace thrust
-{
-namespace mr
-{
-
-/*! \addtogroup memory_management Memory Management
- * \addtogroup memory_management_classes Memory Management Classes
- * \addtogroup memory_resources Memory Resources
- * \ingroup memory_resources
- * \{
- */
-
-/*! A mutex-synchronized version of \p unsynchronized_pool_resource. Uses \p std::mutex, and therefore requires C++11.
- *
- * \tparam Upstream the type of memory resources that will be used for allocating memory
- */
-template<typename Upstream>
-struct synchronized_pool_resource : public memory_resource<typename Upstream::pointer>
-{
-    typedef unsynchronized_pool_resource<Upstream> unsync_pool;
-    typedef std::lock_guard<std::mutex> lock_t;
-
- typedef typename Upstream::pointer void_ptr;
-
-public:
- /*! Get the default options for a pool. These are meant to be a sensible set of values for many use cases,
- * and as such, may be tuned in the future. This function is exposed so that creating a set of options that are
- * just a slight departure from the defaults is easy.
- */
- static pool_options get_default_options()
- {
- return unsync_pool::get_default_options();
- }
-
- /*! Constructor.
- *
- * \param upstream the upstream memory resource for allocations
- * \param options pool options to use
- */
- synchronized_pool_resource(Upstream * upstream, pool_options options = get_default_options())
- : upstream_pool(upstream, options)
- {
- }
-
- /*! Constructor. The upstream resource is obtained by calling \p get_global_resource.
- *
- * \param options pool options to use
- */
- synchronized_pool_resource(pool_options options = get_default_options())
-        : upstream_pool(get_global_resource<Upstream>(), options)
- {
- }
-
- /*! Releases all held memory to upstream.
- */
- void release()
- {
- lock_t lock(mtx);
- upstream_pool.release();
- }
-
- THRUST_NODISCARD virtual void_ptr do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
- {
- lock_t lock(mtx);
- return upstream_pool.do_allocate(bytes, alignment);
- }
-
- virtual void do_deallocate(void_ptr p, std::size_t n, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
- {
- lock_t lock(mtx);
- upstream_pool.do_deallocate(p, n, alignment);
- }
-
-private:
- std::mutex mtx;
- unsync_pool upstream_pool;
-};
-
-/*! \}
- */
-
-} // end mr
-} // end thrust
-
-#endif // THRUST_CPP_DIALECT >= 2011
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h
deleted file mode 100644
index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special version of this algorithm
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h
deleted file mode 100644
index 50e9f678b1ff6a85c2d32e5ab45aed88a1c7224b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2>
-__host__ __device__
-  thrust::pair<InputIterator1, InputIterator2>
-    mismatch(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename BinaryPredicate>
-__host__ __device__
-  thrust::pair<InputIterator1, InputIterator2>
-    mismatch(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first1,
- InputIterator1 last1,
- InputIterator2 first2,
- BinaryPredicate pred);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/mismatch.inl>
-
diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py
deleted file mode 100644
index 5ce737b8c3a5e9f6865a002d44393d6fc1dfae8a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import ast
-import torch
-
-import pandas as pd
-import torch.utils.data as torch_data
-
-from random import randrange
-from augmentations import *
-from normalization.body_normalization import BODY_IDENTIFIERS
-from normalization.hand_normalization import HAND_IDENTIFIERS
-from normalization.body_normalization import normalize_single_dict as normalize_single_body_dict
-from normalization.hand_normalization import normalize_single_dict as normalize_single_hand_dict
-
-HAND_IDENTIFIERS = [id + "_0" for id in HAND_IDENTIFIERS] + [id + "_1" for id in HAND_IDENTIFIERS]
-
-DEFAULT_AUGMENTATIONS_CONFIG = {
- "rotate-angle": 13,
- "perspective-transform-ratio": 0.1,
- "squeeze-ratio": 0.15,
- "arm-joint-rotate-angle": 4,
- "arm-joint-rotate-probability": 0.3
-}
-
-
-def load_dataset(file_location: str):
-
-    # Load the dataset CSV file
- df = pd.read_csv(file_location, encoding="utf-8")
-
- # TO BE DELETED
- df.columns = [item.replace("_Left_", "_0_").replace("_Right_", "_1_") for item in list(df.columns)]
- if "neck_X" not in df.columns:
- df["neck_X"] = [0 for _ in range(df.shape[0])]
- df["neck_Y"] = [0 for _ in range(df.shape[0])]
-
- # TEMP
- labels = df["labels"].to_list()
- labels = [label + 1 for label in df["labels"].to_list()]
- data = []
-
- for row_index, row in df.iterrows():
- current_row = np.empty(shape=(len(ast.literal_eval(row["leftEar_X"])), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2))
- for index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
- current_row[:, index, 0] = ast.literal_eval(row[identifier + "_X"])
- current_row[:, index, 1] = ast.literal_eval(row[identifier + "_Y"])
-
- data.append(current_row)
-
- return data, labels
-
-
-def tensor_to_dictionary(landmarks_tensor: torch.Tensor) -> dict:
-
- data_array = landmarks_tensor.numpy()
- output = {}
-
- for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
- output[identifier] = data_array[:, landmark_index]
-
- return output
-
-
-def dictionary_to_tensor(landmarks_dict: dict) -> torch.Tensor:
-
- output = np.empty(shape=(len(landmarks_dict["leftEar"]), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2))
-
- for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
- output[:, landmark_index, 0] = [frame[0] for frame in landmarks_dict[identifier]]
- output[:, landmark_index, 1] = [frame[1] for frame in landmarks_dict[identifier]]
-
- return torch.from_numpy(output)
-
-
-class CzechSLRDataset(torch_data.Dataset):
- """Advanced object representation of the HPOES dataset for loading hand joints landmarks utilizing the Torch's
- built-in Dataset properties"""
-
- data: [np.ndarray]
- labels: [np.ndarray]
-
- def __init__(self, dataset_filename: str, num_labels=5, transform=None, augmentations=False,
- augmentations_prob=0.5, normalize=True, augmentations_config: dict = DEFAULT_AUGMENTATIONS_CONFIG):
- """
-        Initiates the CzechSLRDataset with data loaded from the landmarks CSV file.
-
-        :param dataset_filename: Path to the CSV file
- :param transform: Any data transformation to be applied (default: None)
- """
-
- loaded_data = load_dataset(dataset_filename)
- data, labels = loaded_data[0], loaded_data[1]
-
- self.data = data
- self.labels = labels
- self.targets = list(labels)
- self.num_labels = num_labels
- self.transform = transform
-
- self.augmentations = augmentations
- self.augmentations_prob = augmentations_prob
- self.augmentations_config = augmentations_config
- self.normalize = normalize
-
- def __getitem__(self, idx):
- """
- Allocates, potentially transforms and returns the item at the desired index.
-
- :param idx: Index of the item
- :return: Tuple containing both the depth map and the label
- """
-
- depth_map = torch.from_numpy(np.copy(self.data[idx]))
- label = torch.Tensor([self.labels[idx] - 1])
-
- depth_map = tensor_to_dictionary(depth_map)
-
- # Apply potential augmentations
- if self.augmentations and random.random() < self.augmentations_prob:
-
- selected_aug = randrange(4)
-
- if selected_aug == 0:
- depth_map = augment_rotate(depth_map, (-self.augmentations_config["rotate-angle"], self.augmentations_config["rotate-angle"]))
-
- if selected_aug == 1:
- depth_map = augment_shear(depth_map, "perspective", (0, self.augmentations_config["perspective-transform-ratio"]))
-
- if selected_aug == 2:
- depth_map = augment_shear(depth_map, "squeeze", (0, self.augmentations_config["squeeze-ratio"]))
-
- if selected_aug == 3:
- depth_map = augment_arm_joint_rotate(depth_map, self.augmentations_config["arm-joint-rotate-probability"], (-self.augmentations_config["arm-joint-rotate-angle"], self.augmentations_config["arm-joint-rotate-angle"]))
-
- if self.normalize:
- depth_map = normalize_single_body_dict(depth_map)
- depth_map = normalize_single_hand_dict(depth_map)
-
- depth_map = dictionary_to_tensor(depth_map)
-
- # Move the landmark position interval to improve performance
- depth_map = depth_map - 0.5
-
- if self.transform:
- depth_map = self.transform(depth_map)
-
- return depth_map, label
-
- def __len__(self):
- return len(self.labels)
-
-
-if __name__ == "__main__":
- pass
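A usage sketch for the deleted dataset class above; the CSV path and label count are placeholders, and it assumes the SPOTER repository (with its `augmentations` and `normalization` modules) is on the Python path.

```python
import torch.utils.data as torch_data
from datasets.czech_slr_dataset import CzechSLRDataset

train_set = CzechSLRDataset(
    "WLASL100_train_landmarks.csv",  # placeholder path to a landmarks CSV
    num_labels=100,
    augmentations=True,
    augmentations_prob=0.5,
    normalize=True,
)
# batch_size=1 because clips have a variable number of frames and are not padded here
train_loader = torch_data.DataLoader(train_set, batch_size=1, shuffle=True)

depth_map, label = next(iter(train_loader))
print(depth_map.shape, label.shape)  # (1, frames, joints, 2) landmarks in [-0.5, 0.5], (1, 1) class index
```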
diff --git a/spaces/CVPR/WALT/configs/walt/walt_people.py b/spaces/CVPR/WALT/configs/walt/walt_people.py
deleted file mode 100644
index 2dc45cd270a2cdb64f33a3a47b32eadd15a98c57..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/configs/walt/walt_people.py
+++ /dev/null
@@ -1,80 +0,0 @@
-_base_ = [
- '../_base_/models/occ_mask_rcnn_swin_fpn.py',
- '../_base_/datasets/walt_people.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- backbone=dict(
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- ape=False,
- drop_path_rate=0.1,
- patch_norm=True,
- use_checkpoint=False
- ),
- neck=dict(in_channels=[96, 192, 384, 768]))
-
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-# augmentation strategy originates from DETR / Sparse RCNN
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='AutoAugment',
- policies=[
- [
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
- (736, 1333), (768, 1333), (800, 1333)],
- multiscale_mode='value',
- keep_ratio=True)
- ],
- [
- dict(type='Resize',
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomCrop',
- crop_type='absolute_range',
- crop_size=(384, 600),
- allow_negative_crop=True),
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
- (576, 1333), (608, 1333), (640, 1333),
- (672, 1333), (704, 1333), (736, 1333),
- (768, 1333), (800, 1333)],
- multiscale_mode='value',
- override=True,
- keep_ratio=True)
- ]
- ]),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-
-optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-lr_config = dict(step=[8, 11])
-runner = dict(type='EpochBasedRunnerAmp', max_epochs=12)
-
-# do not use mmdet version fp16
-fp16 = None
-optimizer_config = dict(
- type="DistOptimizerHook",
- update_interval=1,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- use_fp16=True,
-)
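A hedged sketch of how a config like this is typically loaded with mmcv's Config API (the relative path is a placeholder and assumes the working directory is the repository root).

```python
from mmcv import Config

cfg = Config.fromfile("configs/walt/walt_people.py")  # resolves the _base_ files as well
print(cfg.model.backbone.depths)             # [2, 2, 6, 2]
print(cfg.optimizer.type, cfg.optimizer.lr)  # AdamW 0.0001
```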
diff --git a/spaces/CVPR/WALT/mmdet/datasets/builder.py b/spaces/CVPR/WALT/mmdet/datasets/builder.py
deleted file mode 100644
index c9466a517dee746a6677b27a19713f2e89ed7194..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/datasets/builder.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import copy
-import platform
-import random
-from functools import partial
-
-import numpy as np
-from mmcv.parallel import collate
-from mmcv.runner import get_dist_info
-from mmcv.utils import Registry, build_from_cfg
-from torch.utils.data import DataLoader
-
-from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler
-
-if platform.system() != 'Windows':
- # https://github.com/pytorch/pytorch/issues/973
- import resource
- rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
- hard_limit = rlimit[1]
- soft_limit = min(4096, hard_limit)
- resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
-
-DATASETS = Registry('dataset')
-PIPELINES = Registry('pipeline')
-
-
-def _concat_dataset(cfg, default_args=None):
- from .dataset_wrappers import ConcatDataset
- ann_files = cfg['ann_file']
- img_prefixes = cfg.get('img_prefix', None)
- seg_prefixes = cfg.get('seg_prefix', None)
- proposal_files = cfg.get('proposal_file', None)
- separate_eval = cfg.get('separate_eval', True)
-
- datasets = []
- num_dset = len(ann_files)
- for i in range(num_dset):
- data_cfg = copy.deepcopy(cfg)
- # pop 'separate_eval' since it is not a valid key for common datasets.
- if 'separate_eval' in data_cfg:
- data_cfg.pop('separate_eval')
- data_cfg['ann_file'] = ann_files[i]
- if isinstance(img_prefixes, (list, tuple)):
- data_cfg['img_prefix'] = img_prefixes[i]
- if isinstance(seg_prefixes, (list, tuple)):
- data_cfg['seg_prefix'] = seg_prefixes[i]
- if isinstance(proposal_files, (list, tuple)):
- data_cfg['proposal_file'] = proposal_files[i]
- datasets.append(build_dataset(data_cfg, default_args))
-
- return ConcatDataset(datasets, separate_eval)
-
-
-def build_dataset(cfg, default_args=None):
- from .dataset_wrappers import (ConcatDataset, RepeatDataset,
- ClassBalancedDataset)
- if isinstance(cfg, (list, tuple)):
- dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg])
- elif cfg['type'] == 'ConcatDataset':
- dataset = ConcatDataset(
- [build_dataset(c, default_args) for c in cfg['datasets']],
- cfg.get('separate_eval', True))
- elif cfg['type'] == 'RepeatDataset':
- dataset = RepeatDataset(
- build_dataset(cfg['dataset'], default_args), cfg['times'])
- elif cfg['type'] == 'ClassBalancedDataset':
- dataset = ClassBalancedDataset(
- build_dataset(cfg['dataset'], default_args), cfg['oversample_thr'])
- elif isinstance(cfg.get('ann_file'), (list, tuple)):
- dataset = _concat_dataset(cfg, default_args)
- else:
- dataset = build_from_cfg(cfg, DATASETS, default_args)
-
- return dataset
-
-
-def build_dataloader(dataset,
- samples_per_gpu,
- workers_per_gpu,
- num_gpus=1,
- dist=True,
- shuffle=True,
- seed=None,
- **kwargs):
- """Build PyTorch DataLoader.
-
- In distributed training, each GPU/process has a dataloader.
- In non-distributed training, there is only one dataloader for all GPUs.
-
- Args:
- dataset (Dataset): A PyTorch dataset.
- samples_per_gpu (int): Number of training samples on each GPU, i.e.,
- batch size of each GPU.
- workers_per_gpu (int): How many subprocesses to use for data loading
- for each GPU.
- num_gpus (int): Number of GPUs. Only used in non-distributed training.
- dist (bool): Distributed training/test or not. Default: True.
- shuffle (bool): Whether to shuffle the data at every epoch.
- Default: True.
- kwargs: any keyword argument to be used to initialize DataLoader
-
- Returns:
- DataLoader: A PyTorch dataloader.
- """
- rank, world_size = get_dist_info()
- if dist:
- # DistributedGroupSampler will definitely shuffle the data to satisfy
- # that images on each GPU are in the same group
- if shuffle:
- sampler = DistributedGroupSampler(
- dataset, samples_per_gpu, world_size, rank, seed=seed)
- else:
- sampler = DistributedSampler(
- dataset, world_size, rank, shuffle=False, seed=seed)
- batch_size = samples_per_gpu
- num_workers = workers_per_gpu
- else:
- sampler = GroupSampler(dataset, samples_per_gpu) if shuffle else None
- batch_size = num_gpus * samples_per_gpu
- num_workers = num_gpus * workers_per_gpu
-
- init_fn = partial(
- worker_init_fn, num_workers=num_workers, rank=rank,
- seed=seed) if seed is not None else None
-
- data_loader = DataLoader(
- dataset,
- batch_size=batch_size,
- sampler=sampler,
- num_workers=num_workers,
- collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),
- pin_memory=False,
- worker_init_fn=init_fn,
- **kwargs)
-
- return data_loader
-
-
-def worker_init_fn(worker_id, num_workers, rank, seed):
- # The seed of each worker equals to
- # num_worker * rank + worker_id + user_seed
- worker_seed = num_workers * rank + worker_id + seed
- np.random.seed(worker_seed)
- random.seed(worker_seed)
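For context, a minimal sketch of how `build_dataset` and `build_dataloader` are usually driven from a config dict; the dataset type, annotation file, and image prefix are placeholders, and the pipeline is truncated to a single step.

```python
from mmdet.datasets import build_dataset, build_dataloader

dataset_cfg = dict(
    type="CocoDataset",
    ann_file="data/coco/annotations/instances_train2017.json",  # placeholder annotation path
    img_prefix="data/coco/train2017/",                          # placeholder image root
    pipeline=[dict(type="LoadImageFromFile")],                  # truncated pipeline for illustration
)

dataset = build_dataset(dataset_cfg)
data_loader = build_dataloader(
    dataset,
    samples_per_gpu=2,   # per-GPU batch size
    workers_per_gpu=2,   # dataloader workers per GPU
    num_gpus=1,
    dist=False,          # single-process, non-distributed loading
    shuffle=True,
    seed=42,
)
```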
diff --git a/spaces/Chujinze/Res2Net/README.md b/spaces/Chujinze/Res2Net/README.md
deleted file mode 100644
index 08136cd740a9589de8235927d5293a3e09c5bbeb..0000000000000000000000000000000000000000
--- a/spaces/Chujinze/Res2Net/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Res2Net
-emoji: 👁
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.0.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js
deleted file mode 100644
index 78cc78330088d80b49d4afa30c940f4086029480..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js
+++ /dev/null
@@ -1,842 +0,0 @@
-import { randomUUID } from "crypto"
-import path from "node:path"
-import fs from "node:fs"
-
-Bot.adapter.push(new class gocqhttpAdapter {
- constructor() {
- this.id = "QQ"
- this.name = "go-cqhttp"
- this.path = this.name
- }
-
- toStr(data) {
- switch (typeof data) {
- case "string":
- return data
- case "number":
- return String(data)
- case "object":
- if (Buffer.isBuffer(data))
- return Buffer.from(data, "utf8").toString()
- else
- return JSON.stringify(data)
- }
- return data
- }
-
- makeLog(msg) {
- return this.toStr(msg).replace(/base64:\/\/.*?(,|]|")/g, "base64://...$1")
- }
-
- sendApi(ws, action, params) {
- const echo = randomUUID()
- const msg = { action, params, echo }
- ws.sendMsg(msg)
- return new Promise(resolve =>
- Bot.once(echo, data =>
- resolve({ ...data, ...data.data })))
- }
-
- setProfile(data, profile) {
-    logger.info(`${logger.blue(`[${data.self_id}]`)} Set profile: ${JSON.stringify(profile)}`)
- return data.bot.sendApi("set_qq_profile", profile)
- }
-
- makeMsg(msg) {
- if (!Array.isArray(msg))
- msg = [msg]
- const msgs = []
- for (const i of msg)
- if (typeof i == "object") {
- if (i.data)
- msgs.push(i)
- else
- msgs.push({ type: i.type, data: { ...i, type: undefined }})
- } else {
- msgs.push({ type: "text", data: { text: i }})
- }
- return msgs
- }
-
- sendFriendMsg(data, msg) {
- if (msg?.type == "node")
- return this.sendFriendForwardMsg(data, msg.data)
-
-    logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} Send friend message: ${this.makeLog(msg)}`)
- return data.bot.sendApi("send_msg", {
- user_id: data.user_id,
- message: this.makeMsg(msg),
- })
- }
-
- sendGroupMsg(data, msg) {
- if (msg?.type == "node")
- return this.sendGroupForwardMsg(data, msg.data)
-
-    logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} Send group message: ${this.makeLog(msg)}`)
- return data.bot.sendApi("send_msg", {
- group_id: data.group_id,
- message: this.makeMsg(msg),
- })
- }
-
- sendGuildMsg(data, msg) {
- if (msg?.type == "node")
- return Bot.sendForwardMsg(msg => this.sendGuildMsg(data, msg), msg)
-
-    logger.info(`${logger.blue(`[${data.self_id}] => ${data.guild_id}-${data.channel_id}`)} Send guild channel message: ${this.makeLog(msg)}`)
- return data.bot.sendApi("send_guild_channel_msg", {
- guild_id: data.guild_id,
- channel_id: data.channel_id,
- message: this.makeMsg(msg),
- })
- }
-
- async getMsg(data, message_id) {
- const msg = (await data.bot.sendApi("get_msg", { message_id })).data
-
- if (msg?.message) {
- const message = []
- for (const i of msg.message)
- message.push({ ...i.data, type: i.type })
- msg.message = message
- }
-
- return msg
- }
-
- recallMsg(data, message_id) {
-    logger.info(`${logger.blue(`[${data.self_id}]`)} Recall message: ${message_id}`)
- return data.bot.sendApi("delete_msg", { message_id })
- }
-
- getForwardMsg(data, message_id) {
- return data.bot.sendApi("get_forward_msg", { message_id })
- }
-
- makeForwardMsg(msg) {
- const messages = []
- for (const i of msg)
- messages.push({
- type: "node",
- data: {
-          name: i.nickname || "Anonymous message",
- uin: Number(i.user_id) || 80000000,
- content: this.makeMsg(i.message),
- time: i.time,
- },
- })
- return messages
- }
-
- async sendFriendForwardMsg(data, msg) {
-    logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} Send friend forward message: ${this.makeLog(msg)}`)
- msg = await data.bot.sendApi("send_private_forward_msg", {
- user_id: data.user_id,
- messages: this.makeForwardMsg(msg),
- })
- return msg
- }
-
- async sendGroupForwardMsg(data, msg) {
-    logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} Send group forward message: ${this.makeLog(msg)}`)
- msg = await data.bot.sendApi("send_group_forward_msg", {
- group_id: data.group_id,
- messages: this.makeForwardMsg(msg),
- })
- return msg
- }
-
- async getFriendArray(data) {
- return (await data.bot.sendApi("get_friend_list")).data
- }
-
- async getFriendList(data) {
- const array = []
- for (const { user_id } of (await this.getFriendArray(data)))
- array.push(user_id)
- return array
- }
-
- async getFriendMap(data) {
- for (const i of (await this.getFriendArray(data)))
- data.bot.fl.set(i.user_id, i)
- return data.bot.fl
- }
-
- getFriendInfo(data) {
- return data.bot.sendApi("get_stranger_info", {
- user_id: data.user_id,
- })
- }
-
- async getGroupArray(data) {
- const array = (await data.bot.sendApi("get_group_list")).data
- for (const guild of (await this.getGuildArray(data)))
- for (const channel of (await this.getGuildChannelArray({
- ...data,
- guild_id: guild.guild_id,
- })))
- array.push({
- guild,
- channel,
- group_id: `${guild.guild_id}-${channel.channel_id}`,
- group_name: `${guild.guild_name}-${channel.channel_name}`,
- })
- return array
- }
-
- async getGroupList(data) {
- const array = []
- for (const { group_id } of (await this.getGroupArray(data)))
- array.push(group_id)
- return array
- }
-
- async getGroupMap(data) {
- for (const i of (await this.getGroupArray(data)))
- data.bot.gl.set(i.group_id, i)
- return data.bot.gl
- }
-
- getGroupInfo(data) {
- return data.bot.sendApi("get_group_info", {
- group_id: data.group_id,
- })
- }
-
- async getMemberArray(data) {
- return (await data.bot.sendApi("get_group_member_list", {
- group_id: data.group_id,
- })).data
- }
-
- async getMemberList(data) {
- const array = []
- for (const { user_id } of (await this.getMemberArray(data)))
- array.push(user_id)
- return array
- }
-
- async getMemberMap(data) {
- const map = new Map
- for (const i of (await this.getMemberArray(data)))
- map.set(i.user_id, i)
- return map
- }
-
- getMemberInfo(data) {
- return data.bot.sendApi("get_group_member_info", {
- group_id: data.group_id,
- user_id: data.user_id,
- })
- }
-
- async getGuildArray(data) {
- return (await data.bot.sendApi("get_guild_list")).data
- }
-
- getGuildInfo(data) {
- return data.bot.sendApi("get_guild_meta_by_guest", {
- guild_id: data.guild_id,
- })
- }
-
- async getGuildChannelArray(data) {
- return (await data.bot.sendApi("get_guild_channel_list", {
- guild_id: data.guild_id,
- })).data
- }
-
- async getGuildChannelMap(data) {
- const map = new Map
- for (const i of (await this.getGuildChannelArray(data)))
- map.set(i.channel_id, i)
- return map
- }
-
- async getGuildMemberArray(data) {
- const array = []
- let next_token = ""
- while (true) {
- const list = (await data.bot.sendApi("get_guild_member_list", {
- guild_id: data.guild_id,
- next_token,
- })).data
-
- for (const i of list.members)
- array.push({
- ...i,
- user_id: i.tiny_id,
- })
- if (list.finished) break
- next_token = list.next_token
- }
- return array
- }
-
- async getGuildMemberList(data) {
- const array = []
- for (const { user_id } of (await this.getGuildMemberArray(data)))
- array.push(user_id)
-    return array
- }
-
- async getGuildMemberMap(data) {
- const map = new Map
- for (const i of (await this.getGuildMemberArray(data)))
- map.set(i.user_id, i)
- return map
- }
-
- getGuildMemberInfo(data) {
- return data.bot.sendApi("get_guild_member_profile", {
- guild_id: data.guild_id,
- user_id: data.user_id,
- })
- }
-
- setGroupName(data, group_name) {
-    logger.info(`${logger.blue(`[${data.self_id}]`)} Set group name: [${data.group_id}] ${group_name}`)
- return data.bot.sendApi("set_group_name", {
- group_id: data.group_id,
- group_name,
- })
- }
-
- setGroupAvatar(data, file) {
-    logger.info(`${logger.blue(`[${data.self_id}]`)} Set group avatar: [${data.group_id}] ${file}`)
- return data.bot.sendApi("set_group_portrait", {
- group_id: data.group_id,
- file: segment.image(file).file,
- })
- }
-
- setGroupAdmin(data, user_id, enable) {
-    logger.info(`${logger.blue(`[${data.self_id}]`)} ${enable ? "Set" : "Unset"} group admin: [${data.group_id}] ${user_id}`)
- return data.bot.sendApi("set_group_admin", {
- group_id: data.group_id,
- user_id,
- enable,
- })
- }
-
- setGroupCard(data, user_id, card) {
-    logger.info(`${logger.blue(`[${data.self_id}]`)} Set group card: [${data.group_id}] ${user_id} ${card}`)
- return data.bot.sendApi("set_group_card", {
- group_id: data.group_id,
- user_id,
- card,
- })
- }
-
- setGroupTitle(data, user_id, special_title, duration) {
-    logger.info(`${logger.blue(`[${data.self_id}]`)} Set group special title: [${data.group_id}] ${user_id} ${special_title} ${duration}`)
- return data.bot.sendApi("set_group_special_title", {
- group_id: data.group_id,
- user_id,
- special_title,
- duration,
- })
- }
-
- downloadFile(data, url, thread_count, headers) {
- return data.bot.sendApi("download_file", {
- url,
- thread_count,
- headers,
- })
- }
-
- async makeFile(data, file, name = path.basename(file)) {
- if (file.match(/^https?:\/\//))
- file = (await this.downloadFile(data, file)).file
- else if (fs.existsSync(file))
- file = path.resolve(file)
- return { file, name }
- }
-
- async sendFriendFile(data, file, name) {
- logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 发送好友文件:${name}(${file})`)
- return data.bot.sendApi("upload_private_file", {
- user_id: data.user_id,
- ...await this.makeFile(data, file, name),
- })
- }
-
- async sendGroupFile(data, file, folder, name) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 发送群文件:[${data.group_id}] ${folder||""}/${name}(${file})`)
- return data.bot.sendApi("upload_group_file", {
- group_id: data.group_id,
- folder,
- ...await this.makeFile(data, file, name),
- })
- }
-
- deleteGroupFile(data, file_id, busid) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 删除群文件:[${data.group_id}] ${file_id}(${busid})`)
- return data.bot.sendApi("delete_group_file", {
- group_id: data.group_id,
- file_id,
- busid,
- })
- }
-
- createGroupFileFolder(data, name) {
- logger.info(`${logger.blue(`[${data.self_id}]`)} 创建群文件夹:[${data.group_id}] ${name}`)
- return data.bot.sendApi("create_group_file_folder", {
- group_id: data.group_id,
- name,
- })
- }
-
- getGroupFileSystemInfo(data) {
- return data.bot.sendApi("get_group_file_system_info", {
- group_id: data.group_id,
- })
- }
-
- getGroupFiles(data, folder_id) {
- if (folder_id)
- return data.bot.sendApi("get_group_files_by_folder", {
- group_id: data.group_id,
- folder_id,
- })
- return data.bot.sendApi("get_group_root_files", {
- group_id: data.group_id,
- })
- }
-
- getGroupFileUrl(data, file_id, busid) {
- return data.bot.sendApi("get_group_file_url", {
- group_id: data.group_id,
- file_id,
- busid,
- })
- }
-
- getGroupFs(data) {
- return {
- upload: (file, folder, name) => this.sendGroupFile(data, file, folder, name),
- rm: (file_id, busid) => this.deleteGroupFile(data, file_id, busid),
- mkdir: name => this.createGroupFileFolder(data, name),
- df: () => this.getGroupFileSystemInfo(data),
- ls: folder_id => this.getGroupFiles(data, folder_id),
- download: (file_id, busid) => this.getGroupFileUrl(data, file_id, busid),
- }
- }
-
- setFriendAddRequest(data, flag, approve, remark) {
- return data.bot.sendApi("set_friend_add_request", {
- flag,
- approve,
- remark,
- })
- }
-
- setGroupAddRequest(data, flag, sub_type, approve, reason) {
- return data.bot.sendApi("set_group_add_request", {
- flag,
- sub_type,
- approve,
- reason,
- })
- }
-
- pickFriend(data, user_id) {
- const i = {
- ...data.bot.fl.get(user_id),
- ...data,
- user_id,
- }
- return {
- ...i,
- sendMsg: msg => this.sendFriendMsg(i, msg),
- getMsg: message_id => this.getMsg(i, message_id),
- recallMsg: message_id => this.recallMsg(i, message_id),
- getForwardMsg: message_id => this.getForwardMsg(i, message_id),
- sendForwardMsg: msg => this.sendFriendForwardMsg(i, msg),
- sendFile: (file, name) => this.sendFriendFile(i, file, name),
- getInfo: () => this.getFriendInfo(i),
- getAvatarUrl: () => `https://q1.qlogo.cn/g?b=qq&s=0&nk=${user_id}`,
- }
- }
-
- pickMember(data, group_id, user_id) {
- if (typeof group_id == "string" && group_id.match("-")) {
- const guild_id = group_id.split("-")
- const i = {
- ...data,
- guild_id: guild_id[0],
- channel_id: guild_id[1],
- user_id,
- }
- return {
- ...this.pickGroup(i, group_id),
- ...i,
- getInfo: () => this.getGuildMemberInfo(i),
- getAvatarUrl: async () => (await this.getGuildMemberInfo(i)).avatar_url,
- }
- }
-
- const i = {
- ...data.bot.fl.get(user_id),
- ...data,
- group_id,
- user_id,
- }
- return {
- ...this.pickFriend(i, user_id),
- ...i,
- getInfo: () => this.getMemberInfo(i),
- poke: () => this.sendGroupMsg(i, segment.poke(user_id)),
- }
- }
-
- pickGroup(data, group_id) {
- if (typeof group_id == "string" && group_id.match("-")) {
- const guild_id = group_id.split("-")
- const i = {
- ...data.bot.gl.get(group_id),
- ...data,
- guild_id: guild_id[0],
- channel_id: guild_id[1],
- }
- return {
- ...i,
- sendMsg: msg => this.sendGuildMsg(i, msg),
- getMsg: message_id => this.getMsg(i, message_id),
- recallMsg: message_id => this.recallMsg(i, message_id),
- getForwardMsg: message_id => this.getForwardMsg(i, message_id),
- getInfo: () => this.getGuildInfo(i),
- getChannelArray: () => this.getGuildChannelArray(i),
- getChannelList: () => this.getGuildChannelList(i),
- getChannelMap: () => this.getGuildChannelMap(i),
- getMemberArray: () => this.getGuildMemberArray(i),
- getMemberList: () => this.getGuildMemberList(i),
- getMemberMap: () => this.getGuildMemberMap(i),
- pickMember: user_id => this.pickMember(i, group_id, user_id),
- }
- }
-
- const i = {
- ...data.bot.gl.get(group_id),
- ...data,
- group_id,
- }
- return {
- ...i,
- sendMsg: msg => this.sendGroupMsg(i, msg),
- getMsg: message_id => this.getMsg(i, message_id),
- recallMsg: message_id => this.recallMsg(i, message_id),
- getForwardMsg: message_id => this.getForwardMsg(i, message_id),
- sendForwardMsg: msg => this.sendGroupForwardMsg(i, msg),
- sendFile: (file, name) => this.sendGroupFile(i, file, undefined, name),
- getInfo: () => this.getGroupInfo(i),
- getAvatarUrl: () => `https://p.qlogo.cn/gh/${group_id}/${group_id}/0`,
- getMemberArray: () => this.getMemberArray(i),
- getMemberList: () => this.getMemberList(i),
- getMemberMap: () => this.getMemberMap(i),
- pickMember: user_id => this.pickMember(i, group_id, user_id),
- pokeMember: user_id => this.sendGroupMsg(i, segment.poke(user_id)),
- setName: group_name => this.setGroupName(i, group_name),
- setAvatar: file => this.setGroupAvatar(i, file),
- setAdmin: (user_id, enable) => this.setGroupAdmin(i, user_id, enable),
- setCard: (user_id, card) => this.setGroupCard(i, user_id, card),
- setTitle: (user_id, special_title, duration) => this.setGroupTitle(i, user_id, special_title, duration),
- fs: this.getGroupFs(i),
- }
- }
-
- async connect(data, ws) {
- Bot[data.self_id] = {
- adapter: this,
- ws: ws,
- sendApi: (action, params) => this.sendApi(ws, action, params),
- stat: { start_time: data.time },
- model: "TRSS Yunzai ",
-
- info: {},
- get uin() { return this.info.user_id },
- get nickname() { return this.info.nickname },
- get avatar() { return `https://q1.qlogo.cn/g?b=qq&s=0&nk=${this.uin}` },
-
- setProfile: profile => this.setProfile(data, profile),
- setNickname: nickname => this.setProfile(data, { nickname }),
-
- pickFriend: user_id => this.pickFriend(data, user_id),
- get pickUser() { return this.pickFriend },
- getFriendArray: () => this.getFriendArray(data),
- getFriendList: () => this.getFriendList(data),
- getFriendMap: () => this.getFriendMap(data),
- fl: new Map,
-
- pickMember: (group_id, user_id) => this.pickMember(data, group_id, user_id),
- pickGroup: group_id => this.pickGroup(data, group_id),
- getGroupArray: () => this.getGroupArray(data),
- getGroupList: () => this.getGroupList(data),
- getGroupMap: () => this.getGroupMap(data),
- gl: new Map,
- gml: new Map,
-
- request_list: [],
- getSystemMsg: () => data.bot.request_list,
- setFriendAddRequest: (flag, approve, remark) => this.setFriendAddRequest(data, flag, approve, remark),
- setGroupAddRequest: (flag, sub_type, approve, reason) => this.setGroupAddRequest(data, flag, sub_type, approve, reason),
- }
- data.bot = Bot[data.self_id]
-
- if (!Bot.uin.includes(data.self_id))
- Bot.uin.push(data.self_id)
-
- data.bot.sendApi("_set_model_show", {
- model: data.bot.model,
- model_show: data.bot.model,
- })
-
- data.bot.info = (await data.bot.sendApi("get_login_info")).data
- data.bot.guild_info = (await data.bot.sendApi("get_guild_service_profile")).data
- data.bot.clients = (await data.bot.sendApi("get_online_clients")).clients
- data.bot.version = {
- ...(await data.bot.sendApi("get_version_info")).data,
- id: this.id,
- name: this.name,
- }
-
- data.bot.getFriendMap()
- data.bot.getGroupMap()
-
- logger.mark(`${logger.blue(`[${data.self_id}]`)} ${this.name}(${this.id}) ${data.bot.version.app_full_name} 已连接`)
- Bot.em(`connect.${data.self_id}`, data)
- }
-
- makeMessage(data) {
- const message = []
- for (const i of data.message)
- message.push({ ...i.data, type: i.type })
- data.message = message
-
- switch (data.message_type) {
- case "private":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友消息:[${data.sender.nickname}(${data.user_id})] ${data.raw_message}`)
- break
- case "group":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群消息:[${data.group_id}, ${data.sender.card||data.sender.nickname}(${data.user_id})] ${data.raw_message}`)
- break
- case "guild":
- data.message_type = "group"
- data.group_id = `${data.guild_id}-${data.channel_id}`
- logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息:[${data.group_id}, ${data.sender.nickname}(${data.user_id})] ${JSON.stringify(data.message)}`)
- Object.defineProperty(data, "friend", { get() { return this.member || {}}})
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
-
- Bot.em(`${data.post_type}.${data.message_type}.${data.sub_type}`, data)
- }
-
- async makeNotice(data) {
- switch (data.notice_type) {
- case "friend_recall":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友消息撤回:[${data.user_id}] ${data.message_id}`)
- break
- case "group_recall":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群消息撤回:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.message_id}`)
- break
- case "group_increase":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群成员增加:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type}`)
- if (data.user_id == data.self_id)
- data.bot.getGroupMap()
- break
- case "group_decrease":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群成员减少:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type}`)
- if (data.user_id == data.self_id)
- data.bot.getGroupMap()
- break
- case "group_admin":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群管理员变动:[${data.group_id}, ${data.user_id}] ${data.sub_type}`)
- data.set = data.sub_type == "set"
- break
- case "group_upload":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群文件上传:[${data.group_id}, ${data.user_id}] ${JSON.stringify(data.file)}`)
- break
- case "group_ban":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群禁言:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type} ${data.duration}秒`)
- break
- case "friend_add":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友添加:[${data.user_id}]`)
- data.bot.getFriendMap()
- break
- case "notify":
- if (data.group_id)
- data.notice_type = "group"
- else
- data.notice_type = "friend"
- switch (data.sub_type) {
- case "poke":
- data.operator_id = data.user_id
- if (data.group_id)
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群戳一戳:[${data.group_id}, ${data.operator_id}=>${data.target_id}]`)
- else
- logger.info(`${logger.blue(`[${data.self_id}]`)} 好友戳一戳:[${data.operator_id}=>${data.target_id}]`)
- break
- case "honor":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群荣誉:[${data.group_id}, ${data.user_id}] ${data.honor_type}`)
- break
- case "title":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群头衔:[${data.group_id}, ${data.user_id}] ${data.title}`)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知通知:${logger.magenta(JSON.stringify(data))}`)
- }
- break
- case "group_card":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群名片更新:[${data.group_id}, ${data.user_id}] ${data.card_old}=>${data.card_new}`)
- break
- case "offline_file":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 离线文件:[${data.user_id}] ${JSON.stringify(data.file)}`)
- break
- case "client_status":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 客户端${data.online ? "上线" : "下线"}:${JSON.stringify(data.client)}`)
- data.clients = (await data.bot.sendApi("get_online_clients")).clients
- data.bot.clients = data.clients
- break
- case "essence":
- data.notice_type = "group_essence"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 群精华消息:[${data.group_id}, ${data.operator_id}=>${data.sender_id}] ${data.sub_type} ${data.message_id}`)
- break
- case "guild_channel_recall":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息撤回:[${data.guild_id}-${data.channel_id}, ${data.operator_id}=>${data.user_id}] ${data.message_id}`)
- break
- case "message_reactions_updated":
- data.notice_type = "guild_message_reactions_updated"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息表情贴:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${data.message_id} ${JSON.stringify(data.current_reactions)}`)
- break
- case "channel_updated":
- data.notice_type = "guild_channel_updated"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道更新:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.old_info)}=>${JSON.stringify(data.new_info)}`)
- break
- case "channel_created":
- data.notice_type = "guild_channel_created"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道创建:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.channel_info)}`)
- data.bot.getGroupMap()
- break
- case "channel_destroyed":
- data.notice_type = "guild_channel_destroyed"
- logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道删除:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.channel_info)}`)
- data.bot.getGroupMap()
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知通知:${logger.magenta(JSON.stringify(data))}`)
- }
-
- let notice = data.notice_type.split("_")
- data.notice_type = notice.shift()
- notice = notice.join("_")
- if (notice)
- data.sub_type = notice
-
- if (data.guild_id && data.channel_id) {
- data.group_id = `${data.guild_id}-${data.channel_id}`
- Object.defineProperty(data, "friend", { get() { return this.member || {}}})
- }
-
- Bot.em(`${data.post_type}.${data.notice_type}.${data.sub_type}`, data)
- }
-
- makeRequest(data) {
- switch (data.request_type) {
- case "friend":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 加好友请求:[${data.user_id}] ${data.comment}(${data.flag})`)
- data.sub_type = "add"
- data.approve = approve => data.bot.setFriendAddRequest(data.flag, approve)
- break
- case "group":
- logger.info(`${logger.blue(`[${data.self_id}]`)} 加群请求:[${data.group_id}, ${data.user_id}] ${data.sub_type} ${data.comment}(${data.flag})`)
- data.approve = approve => data.bot.setGroupAddRequest(data.flag, data.sub_type, approve)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知请求:${logger.magenta(JSON.stringify(data))}`)
- }
-
- data.bot.request_list.push(data)
- Bot.em(`${data.post_type}.${data.request_type}.${data.sub_type}`, data)
- }
-
- heartbeat(data) {
- if (data.status?.stat)
- data.bot.stat = {
- ...data.status,
- lost_pkt_cnt: data.status.stat.packet_lost,
- lost_times: data.status.stat.lost_times,
- recv_msg_cnt: data.status.stat.message_received,
- recv_pkt_cnt: data.status.stat.packet_received,
- sent_msg_cnt: data.status.stat.message_sent,
- sent_pkt_cnt: data.status.stat.packet_sent,
- start_time: data.bot.stat.start_time,
- }
- }
-
- makeMeta(data, ws) {
- switch (data.meta_event_type) {
- case "heartbeat":
- this.heartbeat(data)
- break
- case "lifecycle":
- this.connect(data, ws)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
- }
-
- message(data, ws) {
- try {
- data = JSON.parse(data)
- } catch (err) {
- return logger.error(`解码数据失败:${logger.red(err)}`)
- }
-
- if (data.post_type) {
- if (data.meta_event_type != "lifecycle" && !Bot.uin.includes(data.self_id)) {
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 找不到对应Bot,忽略消息:${logger.magenta(JSON.stringify(data))}`)
- return false
- }
- data.bot = Bot[data.self_id]
-
- switch (data.post_type) {
- case "meta_event":
- this.makeMeta(data, ws)
- break
- case "message":
- this.makeMessage(data)
- break
- case "notice":
- this.makeNotice(data)
- break
- case "request":
- this.makeRequest(data)
- break
- case "message_sent":
- data.post_type = "message"
- this.makeMessage(data)
- break
- default:
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
- } else if (data.echo) {
- Bot.emit(data.echo, data)
- } else {
- logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`)
- }
- }
-
- load() {
- if (!Array.isArray(Bot.wsf[this.path]))
- Bot.wsf[this.path] = []
- Bot.wsf[this.path].push((ws, ...args) =>
- ws.on("message", data => this.message(data, ws, ...args))
- )
- }
-})
\ No newline at end of file
diff --git a/spaces/CobaltZvc/Hyper_Bot/style.css b/spaces/CobaltZvc/Hyper_Bot/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/CobaltZvc/Hyper_Bot/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/CofAI/netlist/index.html b/spaces/CofAI/netlist/index.html
deleted file mode 100644
index 15388ffe25e26693f2232ba80adc6f0d2caa5700..0000000000000000000000000000000000000000
--- a/spaces/CofAI/netlist/index.html
+++ /dev/null
@@ -1,12 +0,0 @@
-
- NetList
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h b/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h
deleted file mode 100644
index a32187e1fb7e3bae509d4eceaf900866866875a4..0000000000000000000000000000000000000000
--- a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h
+++ /dev/null
@@ -1,38 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct bias_act_kernel_params
-{
- const void* x; // [sizeX]
- const void* b; // [sizeB] or NULL
- const void* xref; // [sizeX] or NULL
- const void* yref; // [sizeX] or NULL
- const void* dy; // [sizeX] or NULL
- void* y; // [sizeX]
-
- int grad;
- int act;
- float alpha;
- float gain;
- float clamp;
-
- int sizeX;
- int sizeB;
- int stepB;
- int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template <class T> void* choose_bias_act_kernel(const bias_act_kernel_params& p);
-
-//------------------------------------------------------------------------
diff --git a/spaces/Curranj/GPT-SQL/README.md b/spaces/Curranj/GPT-SQL/README.md
deleted file mode 100644
index ae8932ce98d6665219909798f8bc8e59707cda81..0000000000000000000000000000000000000000
--- a/spaces/Curranj/GPT-SQL/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GPT SQL
-emoji: 💻
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py
deleted file mode 100644
index aa35ac474b5d42a99361d1ac5ba2d8e164ae0a2c..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py
+++ /dev/null
@@ -1,471 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import os
-
-from yacs.config import CfgNode as CN
-
-
-# -----------------------------------------------------------------------------
-# Convention about Training / Test specific parameters
-# -----------------------------------------------------------------------------
-# Whenever an argument can be either used for training or for testing, the
-# corresponding name will be post-fixed by a _TRAIN for a training parameter,
-# or _TEST for a test-specific parameter.
-# For example, the number of images during training will be
-# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be
-# IMAGES_PER_BATCH_TEST
-
-# -----------------------------------------------------------------------------
-# Config definition
-# -----------------------------------------------------------------------------
-
-_C = CN()
-
-_C.MODEL = CN()
-_C.MODEL.RPN_ONLY = False
-_C.MODEL.MASK_ON = False
-_C.MODEL.FCOS_ON = False
-_C.MODEL.KE_ON = False
-_C.MODEL.BOUNDARY_ON = False
-_C.MODEL.MSR_ON = False
-_C.MODEL.RETINANET_ON = False
-_C.MODEL.KEYPOINT_ON = False
-_C.MODEL.DEVICE = "cuda"
-_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN"
-_C.MODEL.CLS_AGNOSTIC_BBOX_REG = False
-
-# If the WEIGHT starts with a catalog://, like :R-50, the code will look for
-# the path in paths_catalog. Else, it will use it as the specified absolute
-# path
-_C.MODEL.WEIGHT = ""
-
-
-# -----------------------------------------------------------------------------
-# INPUT
-# -----------------------------------------------------------------------------
-_C.INPUT = CN()
-# Size of the smallest side of the image during training
-_C.INPUT.MIN_SIZE_TRAIN = (800,) # (800,)
-# The range of the smallest side for multi-scale training
-_C.INPUT.MIN_SIZE_RANGE_TRAIN = (-1, -1) # -1 means disabled and it will use MIN_SIZE_TRAIN
-# Maximum size of the side of the image during training
-_C.INPUT.MAX_SIZE_TRAIN = 1333
-# Size of the smallest side of the image during testing
-_C.INPUT.MIN_SIZE_TEST = 1000
-# Maximum size of the side of the image during testing
-_C.INPUT.MAX_SIZE_TEST = 1333
-# Values to be used for image normalization
-_C.INPUT.PIXEL_MEAN = [102.9801, 115.9465, 122.7717]
-# Values to be used for image normalization
-_C.INPUT.PIXEL_STD = [1., 1., 1.]
-# Convert image to BGR format (for Caffe2 models), in range 0-255
-_C.INPUT.TO_BGR255 = True
-_C.INPUT.CROP_PROB_TRAIN = 1.0
-_C.INPUT.ROTATE_PROB_TRAIN = 0.3
-_C.INPUT.ROTATE_DEGREE = (0,15,-15,45,-45,90,-90)
-# _C.INPUT.ROTATE_DEGREE = 15
-
-
-
-
-# -----------------------------------------------------------------------------
-# Dataset
-# -----------------------------------------------------------------------------
-_C.DATASETS = CN()
-# List of the dataset names for training, as present in paths_catalog.py
-_C.DATASETS.TRAIN = ()
-# List of the dataset names for testing, as present in paths_catalog.py
-_C.DATASETS.TEST = ()
-_C.DATASETS.Test_Visual = False
-# -----------------------------------------------------------------------------
-# DataLoader
-# -----------------------------------------------------------------------------
-_C.DATALOADER = CN()
-# Number of data loading threads
-_C.DATALOADER.NUM_WORKERS = 4
-# If > 0, this enforces that each collated batch should have a size divisible
-# by SIZE_DIVISIBILITY
-_C.DATALOADER.SIZE_DIVISIBILITY = 0
-# If True, each batch should contain only images for which the aspect ratio
-# is compatible. This groups portrait images together, and landscape images
-# are not batched with portrait images.
-_C.DATALOADER.ASPECT_RATIO_GROUPING = True
-
-
-# ---------------------------------------------------------------------------- #
-# Backbone options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.BACKBONE = CN()
-
-# The backbone conv body to use
-# The string must match a function that is imported in modeling.model_builder
-# (e.g., 'FPN.add_fpn_ResNet101_conv5_body' to specify a ResNet-101-FPN
-# backbone)
-_C.MODEL.BACKBONE.CONV_BODY = "R-50-C4"
-
-# Add StopGrad at a specified stage so the bottom layers are frozen
-_C.MODEL.BACKBONE.FREEZE_CONV_BODY_AT = 2
-# GN for backbone
-
-##123123123
-_C.MODEL.BACKBONE.USE_GN = False
-
-
-# ---------------------------------------------------------------------------- #
-# FPN options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.FPN = CN()
-
-# 123123123
-_C.MODEL.FPN.USE_GN = False
-_C.MODEL.FPN.USE_RELU = False
-
-#############123123123
-_C.MODEL.FPN.USE_DEFORMABLE = False
-
-
-# ---------------------------------------------------------------------------- #
-# Group Norm options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.GROUP_NORM = CN()
-# Number of dimensions per group in GroupNorm (-1 if using NUM_GROUPS)
-_C.MODEL.GROUP_NORM.DIM_PER_GP = -1
-# Number of groups in GroupNorm (-1 if using DIM_PER_GP)
-_C.MODEL.GROUP_NORM.NUM_GROUPS = 32
-# GroupNorm's small constant in the denominator
-_C.MODEL.GROUP_NORM.EPSILON = 1e-5
-
-
-# ---------------------------------------------------------------------------- #
-# RPN options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RPN = CN()
-_C.MODEL.RPN.USE_FPN = False
-# Base RPN anchor sizes given in absolute pixels w.r.t. the scaled network input
-_C.MODEL.RPN.ANCHOR_SIZES = (32, 64, 128, 256, 512)
-# Stride of the feature map that RPN is attached.
-# For FPN, number of strides should match number of scales
-_C.MODEL.RPN.ANCHOR_STRIDE = (16,)
-# RPN anchor aspect ratios
-_C.MODEL.RPN.ASPECT_RATIOS = (0.5, 1.0, 2.0)
-# Remove RPN anchors that go outside the image by RPN_STRADDLE_THRESH pixels
-# Set to -1 or a large value, e.g. 100000, to disable pruning anchors
-_C.MODEL.RPN.STRADDLE_THRESH = 0
-# Minimum overlap required between an anchor and ground-truth box for the
-# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD
-# ==> positive RPN example)
-_C.MODEL.RPN.FG_IOU_THRESHOLD = 0.7
-# Maximum overlap allowed between an anchor and ground-truth box for the
-# (anchor, gt box) pair to be a negative example (IoU < BG_IOU_THRESHOLD
-# ==> negative RPN example)
-_C.MODEL.RPN.BG_IOU_THRESHOLD = 0.3
-# Total number of RPN examples per image
-_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256
-# Target fraction of foreground (positive) examples per RPN minibatch
-_C.MODEL.RPN.POSITIVE_FRACTION = 0.5
-# Number of top scoring RPN proposals to keep before applying NMS
-# When FPN is used, this is *per FPN level* (not total)
-_C.MODEL.RPN.PRE_NMS_TOP_N_TRAIN = 12000
-
-_C.MODEL.RPN.PRE_NMS_TOP_N_TEST = 6000
-# Number of top scoring RPN proposals to keep after applying NMS
-_C.MODEL.RPN.POST_NMS_TOP_N_TRAIN = 2000
-_C.MODEL.RPN.POST_NMS_TOP_N_TEST = 1000
-# NMS threshold used on RPN proposals
-_C.MODEL.RPN.NMS_THRESH = 0.7
-# Proposal height and width both need to be greater than RPN_MIN_SIZE
-# (at the scale used during training or inference)
-_C.MODEL.RPN.MIN_SIZE = 0
-# Number of top scoring RPN proposals to keep after combining proposals from
-# all FPN levels
-_C.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN = 2000
-_C.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST = 2000
-# Custom rpn head, empty to use default conv or separable conv
-_C.MODEL.RPN.RPN_HEAD = "SingleConvRPNHead_1"
-
-
-# ---------------------------------------------------------------------------- #
-# ROI HEADS options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.ROI_HEADS = CN()
-_C.MODEL.ROI_HEADS.USE_FPN = False
-# Overlap threshold for an RoI to be considered foreground (if >= FG_IOU_THRESHOLD)
-_C.MODEL.ROI_HEADS.FG_IOU_THRESHOLD = 0.5
-# Overlap threshold for an RoI to be considered background
-# (class = 0 if overlap in [0, BG_IOU_THRESHOLD))
-_C.MODEL.ROI_HEADS.BG_IOU_THRESHOLD = 0.5
-# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets
-# These are empirically chosen to approximately lead to unit variance targets
-_C.MODEL.ROI_HEADS.BBOX_REG_WEIGHTS = (10., 10., 5., 5.)
-# RoI minibatch size *per image* (number of regions of interest [ROIs])
-# Total number of RoIs per training minibatch =
-# TRAIN.BATCH_SIZE_PER_IM * TRAIN.IMS_PER_BATCH
-# E.g., a common configuration is: 512 * 2 * 8 = 8192
-_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
-# Target fraction of RoI minibatch that is labeled foreground (i.e. class > 0)
-_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25
-
-# Only used on test mode
-
-# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to
-# balance obtaining high recall with not having too many low precision
-# detections that will slow down inference post processing steps (like NMS)
-_C.MODEL.ROI_HEADS.SCORE_THRESH = 0.05
-# Overlap threshold used for non-maximum suppression (suppress boxes with
-# IoU >= this threshold)
-_C.MODEL.ROI_HEADS.NMS = 0.5
-# Maximum number of detections to return per image (100 is based on the limit established for the COCO dataset)
-_C.MODEL.ROI_HEADS.DETECTIONS_PER_IMG = 100
-
-
-_C.MODEL.ROI_BOX_HEAD = CN()
-_C.MODEL.ROI_BOX_HEAD.FEATURE_EXTRACTOR = "ResNet50Conv5ROIFeatureExtractor"
-_C.MODEL.ROI_BOX_HEAD.PREDICTOR = "FastRCNNPredictor"
-_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0
-_C.MODEL.ROI_BOX_HEAD.POOLER_SCALES = (1.0 / 16,)
-_C.MODEL.ROI_BOX_HEAD.NUM_CLASSES = 81
-# Hidden layer dimension when using an MLP for the RoI box head
-_C.MODEL.ROI_BOX_HEAD.MLP_HEAD_DIM = 1024
-# GN
-#####123123123
-_C.MODEL.ROI_BOX_HEAD.USE_GN = False
-# Dilation
-_C.MODEL.ROI_BOX_HEAD.DILATION = 1
-_C.MODEL.ROI_BOX_HEAD.CONV_HEAD_DIM = 256
-
-#### 123123
-_C.MODEL.ROI_BOX_HEAD.NUM_STACKED_CONVS = 4
-_C.MODEL.ROI_BOX_HEAD.CLASS_WEIGHT = 0.1
-_C.MODEL.ROI_BOX_HEAD.DEFORMABLE_POOLING = False
-
-_C.MODEL.ROI_MASK_HEAD = CN()
-# Whether or not to resize and translate masks to the input image.
-_C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS = False
-_C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS_THRESHOLD = 0.5
-_C.MODEL.ROI_MASK_HEAD.DILATION = 1
-_C.MODEL.ROI_MASK_HEAD.USE_GN = False
-
-# Boundary edge
-_C.MODEL.ROI_BOUNDARY_HEAD = CN()
-_C.MODEL.ROI_BOUNDARY_HEAD.DEFORMABLE_POOLING = False
-
-_C.MODEL.ROI_BOUNDARY_HEAD.FEATURE_EXTRACTOR = "ResNet50Conv5ROIFeatureExtractor"
-_C.MODEL.ROI_BOUNDARY_HEAD.POOLER_RESOLUTION = 14
-_C.MODEL.ROI_BOUNDARY_HEAD.POOLER_SCALES = (1.0 / 16,)
-_C.MODEL.ROI_BOUNDARY_HEAD.POOLER_SAMPLING_RATIO = 0
-_C.MODEL.ROI_BOUNDARY_HEAD.CONV_LAYERS = (256, 256, 256, 256)
-
-_C.MODEL.ROI_BOUNDARY_HEAD.PREDICTOR = "KERCNNC4Predictor"
-_C.MODEL.ROI_BOUNDARY_HEAD.RESOLUTION = 14
-_C.MODEL.ROI_BOUNDARY_HEAD.SHARE_BOX_FEATURE_EXTRACTOR = True
-_C.MODEL.ROI_BOUNDARY_HEAD.BO_WEIGHT = 1.0
-_C.MODEL.ROI_BOUNDARY_HEAD.Loss_balance = 1.2
-
-# ---------------------------------------------------------------------------- #
-# ResNe[X]t options (ResNets = {ResNet, ResNeXt})
-# Note that parts of a resnet may be used for both the backbone and the head
-# These options apply to both
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RESNETS = CN()
-
-# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt
-_C.MODEL.RESNETS.NUM_GROUPS = 1
-
-# Baseline width of each group
-_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64
-
-# Place the stride 2 conv on the 1x1 filter
-# Use True only for the original MSRA ResNet; use False for C2 and Torch models
-_C.MODEL.RESNETS.STRIDE_IN_1X1 = True
-
-# Residual transformation function
-_C.MODEL.RESNETS.TRANS_FUNC = "BottleneckWithFixedBatchNorm"
-_C.MODEL.RESNETS.DEF_FUNC = "DeformableConvWithFixedBatchNorm"
-# ResNet's stem function (conv1 and pool1)
-_C.MODEL.RESNETS.STEM_FUNC = "StemWithFixedBatchNorm"
-_C.MODEL.RESNETS.DEF_START_MODULE = "NA"
-
-#########123123123
-_C.MODEL.RESNETS.DEFORM_POOLING = False
-
-# Apply dilation in stage "res5"
-_C.MODEL.RESNETS.RES5_DILATION = 1
-
-_C.MODEL.RESNETS.BACKBONE_OUT_CHANNELS = 256 * 4
-_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256
-_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64
-
-# ---------------------------------------------------------------------------- #
-# FCOS Options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.FCOS = CN()
-_C.MODEL.FCOS.NUM_CLASSES = 81 # the number of classes including background
-_C.MODEL.FCOS.FPN_STRIDES = [8, 16, 32, 64, 128]
-_C.MODEL.FCOS.PRIOR_PROB = 0.01
-_C.MODEL.FCOS.INFERENCE_TH = 0.05
-_C.MODEL.FCOS.NMS_TH = 0.4
-_C.MODEL.FCOS.PRE_NMS_TOP_N = 1000
-
-# Focal loss parameter: alpha
-_C.MODEL.FCOS.LOSS_ALPHA = 0.25
-# Focal loss parameter: gamma
-_C.MODEL.FCOS.LOSS_GAMMA = 2.0
-_C.MODEL.FCOS.SIZES_OF_INTEREST = [64, 128, 256, 512]
-
-# the number of convolutions used in the cls and bbox tower
-_C.MODEL.FCOS.NUM_CONVS = 4
-
-# ---------------------------------------------------------------------------- #
-# RetinaNet Options (Follow the Detectron version)
-# ---------------------------------------------------------------------------- #
-_C.MODEL.RETINANET = CN()
-
-# This is the number of foreground classes and background.
-_C.MODEL.RETINANET.NUM_CLASSES = 81
-
-# Anchor aspect ratios to use
-_C.MODEL.RETINANET.ANCHOR_SIZES = (32, 64, 128, 256, 512)
-_C.MODEL.RETINANET.ASPECT_RATIOS = (0.5, 1.0, 2.0)
-_C.MODEL.RETINANET.ANCHOR_STRIDES = (8, 16, 32, 64, 128)
-_C.MODEL.RETINANET.STRADDLE_THRESH = 0
-
-# Anchor scales per octave
-_C.MODEL.RETINANET.OCTAVE = 2.0
-_C.MODEL.RETINANET.SCALES_PER_OCTAVE = 3
-
-# Use C5 or P5 to generate P6
-_C.MODEL.RETINANET.USE_C5 = True
-
-# Convolutions to use in the cls and bbox tower
-# NOTE: this doesn't include the last conv for logits
-_C.MODEL.RETINANET.NUM_CONVS = 4
-
-# Weight for bbox_regression loss
-_C.MODEL.RETINANET.BBOX_REG_WEIGHT = 4.0
-
-# Smooth L1 loss beta for bbox regression
-_C.MODEL.RETINANET.BBOX_REG_BETA = 0.11
-
-# During inference, #locs to select based on cls score before NMS is performed
-# per FPN level
-_C.MODEL.RETINANET.PRE_NMS_TOP_N = 1000
-
-# IoU overlap ratio for labeling an anchor as positive
-# Anchors with >= iou overlap are labeled positive
-_C.MODEL.RETINANET.FG_IOU_THRESHOLD = 0.5
-
-# IoU overlap ratio for labeling an anchor as negative
-# Anchors with < iou overlap are labeled negative
-_C.MODEL.RETINANET.BG_IOU_THRESHOLD = 0.4
-
-# Focal loss parameter: alpha
-_C.MODEL.RETINANET.LOSS_ALPHA = 0.25
-
-# Focal loss parameter: gamma
-_C.MODEL.RETINANET.LOSS_GAMMA = 2.0
-
-# Prior prob for the positives at the beginning of training. This is used to set
-# the bias init for the logits layer
-_C.MODEL.RETINANET.PRIOR_PROB = 0.01
-
-# Inference cls score threshold, anchors with score > INFERENCE_TH are
-# considered for inference
-_C.MODEL.RETINANET.INFERENCE_TH = 0.05
-
-# NMS threshold used in RetinaNet
-_C.MODEL.RETINANET.NMS_TH = 0.4
-
-
-# ---------------------------------------------------------------------------- #
-# FBNet options
-# ---------------------------------------------------------------------------- #
-_C.MODEL.FBNET = CN()
-_C.MODEL.FBNET.ARCH = "default"
-# custom arch
-_C.MODEL.FBNET.ARCH_DEF = ""
-_C.MODEL.FBNET.BN_TYPE = "bn"
-_C.MODEL.FBNET.SCALE_FACTOR = 1.0
-# the output channels will be divisible by WIDTH_DIVISOR
-_C.MODEL.FBNET.WIDTH_DIVISOR = 1
-_C.MODEL.FBNET.DW_CONV_SKIP_BN = True
-_C.MODEL.FBNET.DW_CONV_SKIP_RELU = True
-
-# > 0 scale, == 0 skip, < 0 same dimension
-_C.MODEL.FBNET.DET_HEAD_LAST_SCALE = 1.0
-_C.MODEL.FBNET.DET_HEAD_BLOCKS = []
-# overwrite the stride for the head, 0 to use original value
-_C.MODEL.FBNET.DET_HEAD_STRIDE = 0
-
-# > 0 scale, == 0 skip, < 0 same dimension
-_C.MODEL.FBNET.KPTS_HEAD_LAST_SCALE = 0.0
-_C.MODEL.FBNET.KPTS_HEAD_BLOCKS = []
-# overwrite the stride for the head, 0 to use original value
-_C.MODEL.FBNET.KPTS_HEAD_STRIDE = 0
-
-# > 0 scale, == 0 skip, < 0 same dimension
-_C.MODEL.FBNET.MASK_HEAD_LAST_SCALE = 0.0
-_C.MODEL.FBNET.MASK_HEAD_BLOCKS = []
-# overwrite the stride for the head, 0 to use original value
-_C.MODEL.FBNET.MASK_HEAD_STRIDE = 0
-
-# 0 to use all blocks defined in arch_def
-_C.MODEL.FBNET.RPN_HEAD_BLOCKS = 0
-_C.MODEL.FBNET.RPN_BN_TYPE = ""
-
-
-# ---------------------------------------------------------------------------- #
-# Solver
-# ---------------------------------------------------------------------------- #
-_C.SOLVER = CN()
-_C.SOLVER.MAX_ITER = 40000
-
-_C.SOLVER.BASE_LR = 0.001
-_C.SOLVER.BIAS_LR_FACTOR = 2
-
-_C.SOLVER.MOMENTUM = 0.9
-
-_C.SOLVER.WEIGHT_DECAY = 0.0005
-_C.SOLVER.WEIGHT_DECAY_BIAS = 0
-
-_C.SOLVER.GAMMA = 0.1
-_C.SOLVER.STEPS = (30000,)
-
-_C.SOLVER.WARMUP_FACTOR = 1.0 / 3
-_C.SOLVER.WARMUP_ITERS = 500
-_C.SOLVER.WARMUP_METHOD = "linear"
-
-_C.SOLVER.CHECKPOINT_PERIOD = 2500
-
-# Number of images per batch
-# This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will
-# see 2 images per batch
-_C.SOLVER.IMS_PER_BATCH = 4
-
-# ---------------------------------------------------------------------------- #
-# Specific test options
-# ---------------------------------------------------------------------------- #
-_C.TEST = CN()
-_C.TEST.EXPECTED_RESULTS = []
-_C.TEST.EXPECTED_RESULTS_SIGMA_TOL = 4
-# Number of images per batch
-# This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will
-# see 2 images per batch
-_C.TEST.IMS_PER_BATCH = 16
-# Number of detections per image
-_C.TEST.DETECTIONS_PER_IMG = 100
-
-
-# ---------------------------------------------------------------------------- #
-# Misc options
-# ---------------------------------------------------------------------------- #
-_C.OUTPUT_DIR = "./1"
-_C.IS_LOAD_OPTIMIZER = True
-_C.IS_LOAD_SCHEDULER = True
-_C.PROCESS = CN()
-
-#####123123123
-_C.PROCESS.PNMS = False
-_C.PROCESS.NMS_THRESH = 0.4
-
-_C.PATHS_CATALOG = os.path.join(os.path.dirname(__file__), "paths_catalog.py")
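
For reference, a minimal sketch of how these defaults are typically consumed (assuming the standard maskrcnn_benchmark layout, where the `_C` node above is re-exported as `cfg`; the YAML path and catalog key below are illustrative):

    from maskrcnn_benchmark.config import cfg  # the `_C` node above is exported as `cfg`

    cfg.merge_from_file("configs/my_experiment.yaml")  # override defaults from a YAML file (path is an example)
    cfg.merge_from_list(["SOLVER.IMS_PER_BATCH", 8,
                         "MODEL.WEIGHT", "catalog://ImageNetPretrained/MSRA/R-50"])  # inline overrides
    cfg.freeze()  # lock the config before building the model
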
diff --git a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py b/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py
deleted file mode 100644
index 6fab4572bdf1e1bfb56c47f17093e9f3a2d087e9..0000000000000000000000000000000000000000
--- a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import json
-import numpy as np
-import httpx
-
-from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN
-
-
-def get_mubert_tags_embeddings(w2v_model):
- return w2v_model.encode(MUBERT_TAGS)
-
-
-def get_pat(email: str):
- r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess',
- json={
- "method": "GetServiceAccess",
- "params": {
- "email": email,
- "license": MUBERT_LICENSE,
- "token": MUBERT_TOKEN,
- "mode": MUBERT_MODE,
- }
- })
-
- rdata = json.loads(r.text)
- assert rdata['status'] == 1, "probably incorrect e-mail"
- pat = rdata['data']['pat']
- return pat
-
-
-def find_similar(em, embeddings, method='cosine'):
- scores = []
- for ref in embeddings:
- if method == 'cosine':
- scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em)))
- if method == 'norm':
- scores.append(np.linalg.norm(ref - em))
- return np.array(scores), np.argsort(scores)
-
-
-def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False):
- prompts_embeddings = w2v_model.encode(prompts)
- ret = []
- for i, pe in enumerate(prompts_embeddings):
- scores, idxs = find_similar(pe, mubert_tags_embeddings)
- top_tags = MUBERT_TAGS[idxs[:top_n]]
- top_prob = 1 - scores[idxs[:top_n]]
- if debug:
- print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n")
- ret.append((prompts[i], list(top_tags)))
- return ret
\ No newline at end of file
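
A small self-contained check of the cosine branch in `find_similar` above; the toy vectors stand in for `w2v_model.encode()` output and are illustrative only:

    import numpy as np

    tag_embeddings = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # stand-ins for MUBERT tag embeddings
    prompt_embedding = np.array([0.9, 0.1])

    # Same cosine-distance formula as find_similar(): smaller score = more similar.
    scores = np.array([1 - np.dot(ref, prompt_embedding) /
                       (np.linalg.norm(ref) * np.linalg.norm(prompt_embedding))
                       for ref in tag_embeddings])
    print(np.argsort(scores))  # [0 2 1]: the first tag is closest, as expected
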
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py
deleted file mode 100644
index 5ec0a6632e3182382467688662ebc5e6c324da91..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# base class for raster font file parsers
-#
-# history:
-# 1997-06-05 fl created
-# 1997-08-19 fl restrict image width
-#
-# Copyright (c) 1997-1998 by Secret Labs AB
-# Copyright (c) 1997-1998 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import os
-
-from . import Image, _binary
-
-WIDTH = 800
-
-
-def puti16(fp, values):
- """Write network order (big-endian) 16-bit sequence"""
- for v in values:
- if v < 0:
- v += 65536
- fp.write(_binary.o16be(v))
-
-
-class FontFile:
- """Base class for raster font file handlers."""
-
- bitmap = None
-
- def __init__(self):
- self.info = {}
- self.glyph = [None] * 256
-
- def __getitem__(self, ix):
- return self.glyph[ix]
-
- def compile(self):
- """Create metrics and bitmap"""
-
- if self.bitmap:
- return
-
- # create bitmap large enough to hold all data
- h = w = maxwidth = 0
- lines = 1
- for glyph in self:
- if glyph:
- d, dst, src, im = glyph
- h = max(h, src[3] - src[1])
- w = w + (src[2] - src[0])
- if w > WIDTH:
- lines += 1
- w = src[2] - src[0]
- maxwidth = max(maxwidth, w)
-
- xsize = maxwidth
- ysize = lines * h
-
- if xsize == 0 and ysize == 0:
- return ""
-
- self.ysize = h
-
- # paste glyphs into bitmap
- self.bitmap = Image.new("1", (xsize, ysize))
- self.metrics = [None] * 256
- x = y = 0
- for i in range(256):
- glyph = self[i]
- if glyph:
- d, dst, src, im = glyph
- xx = src[2] - src[0]
- # yy = src[3] - src[1]
- x0, y0 = x, y
- x = x + xx
- if x > WIDTH:
- x, y = 0, y + h
- x0, y0 = x, y
- x = xx
- s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0
- self.bitmap.paste(im.crop(src), s)
- self.metrics[i] = d, dst, s
-
- def save(self, filename):
- """Save font"""
-
- self.compile()
-
- # font data
- self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG")
-
- # font metrics
- with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp:
- fp.write(b"PILfont\n")
- fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!!
- fp.write(b"DATA\n")
- for id in range(256):
- m = self.metrics[id]
- if not m:
- puti16(fp, [0] * 10)
- else:
- puti16(fp, m[0] + m[1] + m[2])
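
A quick illustration of the `puti16` helper above, which wraps negative values into unsigned 16-bit before writing them big-endian; `struct.pack` stands in here for the `_binary.o16be` call used in the module:

    import io
    import struct

    def puti16(fp, values):
        # mirrors the helper above: negatives wrap to unsigned 16-bit, written big-endian
        for v in values:
            if v < 0:
                v += 65536
            fp.write(struct.pack(">H", v))

    buf = io.BytesIO()
    puti16(buf, [1, -1])
    print(buf.getvalue().hex())  # 0001ffff
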
diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py
deleted file mode 100644
index 583ea1423fdc9a649cd7044d74d554bf0ac2bf51..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from argparse import ArgumentParser
-from configs.paths_config import model_paths
-
-
-class TrainOptions:
-
- def __init__(self):
- self.parser = ArgumentParser()
- self.initialize()
-
- def initialize(self):
- self.parser.add_argument('--exp_dir', type=str, help='Path to experiment output directory')
- self.parser.add_argument('--dataset_type', default='ffhq_encode', type=str,
- help='Type of dataset/experiment to run')
- self.parser.add_argument('--encoder_type', default='Encoder4Editing', type=str, help='Which encoder to use')
-
- self.parser.add_argument('--batch_size', default=4, type=int, help='Batch size for training')
- self.parser.add_argument('--test_batch_size', default=2, type=int, help='Batch size for testing and inference')
- self.parser.add_argument('--workers', default=4, type=int, help='Number of train dataloader workers')
- self.parser.add_argument('--test_workers', default=2, type=int,
- help='Number of test/inference dataloader workers')
-
- self.parser.add_argument('--learning_rate', default=0.0001, type=float, help='Optimizer learning rate')
- self.parser.add_argument('--optim_name', default='ranger', type=str, help='Which optimizer to use')
- self.parser.add_argument('--train_decoder', default=False, type=bool, help='Whether to train the decoder model')
- self.parser.add_argument('--start_from_latent_avg', action='store_true',
- help='Whether to add average latent vector to generate codes from encoder.')
- self.parser.add_argument('--lpips_type', default='alex', type=str, help='LPIPS backbone')
-
- self.parser.add_argument('--lpips_lambda', default=0.8, type=float, help='LPIPS loss multiplier factor')
- self.parser.add_argument('--id_lambda', default=0.1, type=float, help='ID loss multiplier factor')
- self.parser.add_argument('--l2_lambda', default=1.0, type=float, help='L2 loss multiplier factor')
-
- self.parser.add_argument('--stylegan_weights', default=model_paths['stylegan_ffhq'], type=str,
- help='Path to StyleGAN model weights')
- self.parser.add_argument('--stylegan_size', default=1024, type=int,
- help='size of pretrained StyleGAN Generator')
- self.parser.add_argument('--checkpoint_path', default=None, type=str, help='Path to pSp model checkpoint')
-
- self.parser.add_argument('--max_steps', default=500000, type=int, help='Maximum number of training steps')
- self.parser.add_argument('--image_interval', default=100, type=int,
- help='Interval for logging train images during training')
- self.parser.add_argument('--board_interval', default=50, type=int,
- help='Interval for logging metrics to tensorboard')
- self.parser.add_argument('--val_interval', default=1000, type=int, help='Validation interval')
- self.parser.add_argument('--save_interval', default=None, type=int, help='Model checkpoint interval')
-
- # Discriminator flags
- self.parser.add_argument('--w_discriminator_lambda', default=0, type=float, help='Dw loss multiplier')
- self.parser.add_argument('--w_discriminator_lr', default=2e-5, type=float, help='Dw learning rate')
- self.parser.add_argument("--r1", type=float, default=10, help="weight of the r1 regularization")
- self.parser.add_argument("--d_reg_every", type=int, default=16,
- help="interval for applying r1 regularization")
- self.parser.add_argument('--use_w_pool', action='store_true',
-                                 help='Whether to store a latent codes pool for the discriminator\'s training')
- self.parser.add_argument("--w_pool_size", type=int, default=50,
- help="W\'s pool size, depends on --use_w_pool")
-
- # e4e specific
- self.parser.add_argument('--delta_norm', type=int, default=2, help="norm type of the deltas")
- self.parser.add_argument('--delta_norm_lambda', type=float, default=2e-4, help="lambda for delta norm loss")
-
- # Progressive training
- self.parser.add_argument('--progressive_steps', nargs='+', type=int, default=None,
- help="The training steps of training new deltas. steps[i] starts the delta_i training")
- self.parser.add_argument('--progressive_start', type=int, default=None,
- help="The training step to start training the deltas, overrides progressive_steps")
- self.parser.add_argument('--progressive_step_every', type=int, default=2_000,
- help="Amount of training steps for each progressive step")
-
- # Save additional training info to enable future training continuation from produced checkpoints
- self.parser.add_argument('--save_training_data', action='store_true',
- help='Save intermediate training data to resume training from the checkpoint')
- self.parser.add_argument('--sub_exp_dir', default=None, type=str, help='Name of sub experiment directory')
- self.parser.add_argument('--keep_optimizer', action='store_true',
- help='Whether to continue from the checkpoint\'s optimizer')
- self.parser.add_argument('--resume_training_from_ckpt', default=None, type=str,
- help='Path to training checkpoint, works when --save_training_data was set to True')
- self.parser.add_argument('--update_param_list', nargs='+', type=str, default=None,
- help="Name of training parameters to update the loaded training checkpoint")
-
- def parse(self):
- opts = self.parser.parse_args()
- return opts
diff --git a/spaces/Devika/Briefly/README.md b/spaces/Devika/Briefly/README.md
deleted file mode 100644
index eae276712932515b8895ed5b5212c364e7af2dcb..0000000000000000000000000000000000000000
--- a/spaces/Devika/Briefly/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Briefly
-emoji: 🎯
-colorFrom: gray
-colorTo: yellow
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Briefly
-Read trending news in less than 60 words.
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/tflib/tfutil.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/tflib/tfutil.py
deleted file mode 100644
index a431a4d4d18a32c9cd44a14ce89f35e038dc312c..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/tflib/tfutil.py
+++ /dev/null
@@ -1,240 +0,0 @@
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
-#
-# This work is licensed under the Creative Commons Attribution-NonCommercial
-# 4.0 International License. To view a copy of this license, visit
-# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to
-# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
-
-"""Miscellaneous helper utils for Tensorflow."""
-
-import os
-import numpy as np
-import tensorflow as tf
-
-from typing import Any, Iterable, List, Union
-
-TfExpression = Union[tf.Tensor, tf.Variable, tf.Operation]
-"""A type that represents a valid Tensorflow expression."""
-
-TfExpressionEx = Union[TfExpression, int, float, np.ndarray]
-"""A type that can be converted to a valid Tensorflow expression."""
-
-
-def run(*args, **kwargs) -> Any:
- """Run the specified ops in the default session."""
- assert_tf_initialized()
- return tf.get_default_session().run(*args, **kwargs)
-
-
-def is_tf_expression(x: Any) -> bool:
- """Check whether the input is a valid Tensorflow expression, i.e., Tensorflow Tensor, Variable, or Operation."""
- return isinstance(x, (tf.Tensor, tf.Variable, tf.Operation))
-
-
-def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]:
- """Convert a Tensorflow shape to a list of ints."""
- return [dim.value for dim in shape]
-
-
-def flatten(x: TfExpressionEx) -> TfExpression:
- """Shortcut function for flattening a tensor."""
- with tf.name_scope("Flatten"):
- return tf.reshape(x, [-1])
-
-
-def log2(x: TfExpressionEx) -> TfExpression:
- """Logarithm in base 2."""
- with tf.name_scope("Log2"):
- return tf.log(x) * np.float32(1.0 / np.log(2.0))
-
-
-def exp2(x: TfExpressionEx) -> TfExpression:
- """Exponent in base 2."""
- with tf.name_scope("Exp2"):
- return tf.exp(x * np.float32(np.log(2.0)))
-
-
-def lerp(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpressionEx:
- """Linear interpolation."""
- with tf.name_scope("Lerp"):
- return a + (b - a) * t
-
-
-def lerp_clip(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpression:
- """Linear interpolation with clip."""
- with tf.name_scope("LerpClip"):
- return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0)
-
-
-def absolute_name_scope(scope: str) -> tf.name_scope:
- """Forcefully enter the specified name scope, ignoring any surrounding scopes."""
- return tf.name_scope(scope + "/")
-
-
-def absolute_variable_scope(scope: str, **kwargs) -> tf.variable_scope:
- """Forcefully enter the specified variable scope, ignoring any surrounding scopes."""
- return tf.variable_scope(tf.VariableScope(name=scope, **kwargs), auxiliary_name_scope=False)
-
-
-def _sanitize_tf_config(config_dict: dict = None) -> dict:
- # Defaults.
- cfg = dict()
- cfg["rnd.np_random_seed"] = None # Random seed for NumPy. None = keep as is.
- cfg["rnd.tf_random_seed"] = "auto" # Random seed for TensorFlow. 'auto' = derive from NumPy random state. None = keep as is.
- cfg["env.TF_CPP_MIN_LOG_LEVEL"] = "1" # 0 = Print all available debug info from TensorFlow. 1 = Print warnings and errors, but disable debug info.
- cfg["graph_options.place_pruned_graph"] = True # False = Check that all ops are available on the designated device. True = Skip the check for ops that are not used.
- cfg["gpu_options.allow_growth"] = True # False = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed.
-
- # User overrides.
- if config_dict is not None:
- cfg.update(config_dict)
- return cfg
-
-
-def init_tf(config_dict: dict = None) -> None:
- """Initialize TensorFlow session using good default settings."""
- # Skip if already initialized.
- if tf.get_default_session() is not None:
- return
-
- # Setup config dict and random seeds.
- cfg = _sanitize_tf_config(config_dict)
- np_random_seed = cfg["rnd.np_random_seed"]
- if np_random_seed is not None:
- np.random.seed(np_random_seed)
- tf_random_seed = cfg["rnd.tf_random_seed"]
- if tf_random_seed == "auto":
- tf_random_seed = np.random.randint(1 << 31)
- if tf_random_seed is not None:
- tf.set_random_seed(tf_random_seed)
-
- # Setup environment variables.
- for key, value in list(cfg.items()):
- fields = key.split(".")
- if fields[0] == "env":
- assert len(fields) == 2
- os.environ[fields[1]] = str(value)
-
- # Create default TensorFlow session.
- create_session(cfg, force_as_default=True)
-
-
-def assert_tf_initialized():
- """Check that TensorFlow session has been initialized."""
- if tf.get_default_session() is None:
- raise RuntimeError("No default TensorFlow session found. Please call dnnlib.tflib.init_tf().")
-
-
-def create_session(config_dict: dict = None, force_as_default: bool = False) -> tf.Session:
- """Create tf.Session based on config dict."""
- # Setup TensorFlow config proto.
- cfg = _sanitize_tf_config(config_dict)
- config_proto = tf.ConfigProto()
- for key, value in cfg.items():
- fields = key.split(".")
- if fields[0] not in ["rnd", "env"]:
- obj = config_proto
- for field in fields[:-1]:
- obj = getattr(obj, field)
- setattr(obj, fields[-1], value)
-
- # Create session.
- session = tf.Session(config=config_proto)
- if force_as_default:
- # pylint: disable=protected-access
- session._default_session = session.as_default()
- session._default_session.enforce_nesting = False
- session._default_session.__enter__() # pylint: disable=no-member
-
- return session
-
-
-def init_uninitialized_vars(target_vars: List[tf.Variable] = None) -> None:
- """Initialize all tf.Variables that have not already been initialized.
-
- Equivalent to the following, but more efficient and does not bloat the tf graph:
- tf.variables_initializer(tf.report_uninitialized_variables()).run()
- """
- assert_tf_initialized()
- if target_vars is None:
- target_vars = tf.global_variables()
-
- test_vars = []
- test_ops = []
-
- with tf.control_dependencies(None): # ignore surrounding control_dependencies
- for var in target_vars:
- assert is_tf_expression(var)
-
- try:
- tf.get_default_graph().get_tensor_by_name(var.name.replace(":0", "/IsVariableInitialized:0"))
- except KeyError:
- # Op does not exist => variable may be uninitialized.
- test_vars.append(var)
-
- with absolute_name_scope(var.name.split(":")[0]):
- test_ops.append(tf.is_variable_initialized(var))
-
- init_vars = [var for var, inited in zip(test_vars, run(test_ops)) if not inited]
- run([var.initializer for var in init_vars])
-
-
-def set_vars(var_to_value_dict: dict) -> None:
- """Set the values of given tf.Variables.
-
- Equivalent to the following, but more efficient and does not bloat the tf graph:
-    tflib.run([tf.assign(var, value) for var, value in var_to_value_dict.items()])
- """
- assert_tf_initialized()
- ops = []
- feed_dict = {}
-
- for var, value in var_to_value_dict.items():
- assert is_tf_expression(var)
-
- try:
- setter = tf.get_default_graph().get_tensor_by_name(var.name.replace(":0", "/setter:0")) # look for existing op
- except KeyError:
- with absolute_name_scope(var.name.split(":")[0]):
- with tf.control_dependencies(None): # ignore surrounding control_dependencies
- setter = tf.assign(var, tf.placeholder(var.dtype, var.shape, "new_value"), name="setter") # create new setter
-
- ops.append(setter)
- feed_dict[setter.op.inputs[1]] = value
-
- run(ops, feed_dict)
-
-
-def create_var_with_large_initial_value(initial_value: np.ndarray, *args, **kwargs):
- """Create tf.Variable with large initial value without bloating the tf graph."""
- assert_tf_initialized()
- assert isinstance(initial_value, np.ndarray)
- zeros = tf.zeros(initial_value.shape, initial_value.dtype)
- var = tf.Variable(zeros, *args, **kwargs)
- set_vars({var: initial_value})
- return var
-
-
-def convert_images_from_uint8(images, drange=[-1,1], nhwc_to_nchw=False):
- """Convert a minibatch of images from uint8 to float32 with configurable dynamic range.
- Can be used as an input transformation for Network.run().
- """
- images = tf.cast(images, tf.float32)
- if nhwc_to_nchw:
- images = tf.transpose(images, [0, 3, 1, 2])
- return (images - drange[0]) * ((drange[1] - drange[0]) / 255)
-
-
-def convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False, shrink=1):
- """Convert a minibatch of images from float32 to uint8 with configurable dynamic range.
- Can be used as an output transformation for Network.run().
- """
- images = tf.cast(images, tf.float32)
- if shrink > 1:
- ksize = [1, 1, shrink, shrink]
- images = tf.nn.avg_pool(images, ksize=ksize, strides=ksize, padding="VALID", data_format="NCHW")
- if nchw_to_nhwc:
- images = tf.transpose(images, [0, 2, 3, 1])
- scale = 255 / (drange[1] - drange[0])
- images = images * scale + (0.5 - drange[0] * scale)
- return tf.saturate_cast(images, tf.uint8)
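A worked sketch of the mapping these two conversion helpers implement, written with plain NumPy so it can be checked without TensorFlow (values are illustrative):

    import numpy as np

    drange = (-1.0, 1.0)
    u8 = np.array([0, 128, 255], dtype=np.uint8)

    # uint8 -> float32: scale [0, 255] into [drange[0], drange[1]]
    f32 = u8.astype(np.float32) * ((drange[1] - drange[0]) / 255) + drange[0]
    # -> [-1.0, ~0.004, 1.0]

    # float32 -> uint8: the inverse mapping; the extra 0.5 makes the final cast round to nearest
    scale = 255 / (drange[1] - drange[0])
    back = np.clip(f32 * scale + (0.5 - drange[0] * scale), 0, 255).astype(np.uint8)
    # -> [0, 128, 255]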
diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/model.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/model.py
deleted file mode 100644
index 4e3c9687a3f4f7301cf053bee95c1e288b1c939b..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/model.py
+++ /dev/null
@@ -1,703 +0,0 @@
-import math
-import random
-import functools
-import operator
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.autograd import Function
-
-from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
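A standalone sketch of the modulate/demodulate step at the heart of ModulatedConv2d above, with hypothetical shapes and the equalized-learning-rate scale factor left out for brevity:

    import torch
    import torch.nn.functional as F

    batch, out_ch, in_ch, k = 2, 8, 4, 3
    weight = torch.randn(1, out_ch, in_ch, k, k)         # shared base weight
    style = torch.randn(batch, 1, in_ch, 1, 1)           # per-sample style, already affine-transformed

    w = weight * style                                    # modulate: scale input channels per sample
    demod = torch.rsqrt(w.pow(2).sum([2, 3, 4]) + 1e-8)  # per-(sample, out-channel) norm
    w = w * demod.view(batch, out_ch, 1, 1, 1)            # demodulate: restore unit output variance

    # Grouped-convolution trick: fold the batch into the group dimension.
    w = w.view(batch * out_ch, in_ch, k, k)
    x = torch.randn(batch, in_ch, 16, 16).view(1, batch * in_ch, 16, 16)
    out = F.conv2d(x, w, padding=k // 2, groups=batch).view(batch, out_ch, 16, 16)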
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-# Wrapper that gives a name to a tensor
-class NamedTensor(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, x):
- return x
-
-# Give each style a unique name
-class StridedStyle(nn.ModuleList):
- def __init__(self, n_latents):
- super().__init__([NamedTensor() for _ in range(n_latents)])
- self.n_latents = n_latents
-
- def forward(self, x):
- # x already strided
- styles = [self[i](x[:, i, :]) for i in range(self.n_latents)]
- return torch.stack(styles, dim=1)
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape))
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
- self.strided_style = StridedStyle(self.n_latent)
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_w=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_w:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) == 1:
- # One global latent
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- elif len(styles) == 2:
- # Latent mixing with two latents
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = self.strided_style(torch.cat([latent, latent2], 1))
- else:
- # One latent per layer
- assert len(styles) == self.n_latent, f'Expected {self.n_latent} latents, got {len(styles)}'
- styles = torch.stack(styles, dim=1) # [N, 18, 512]
- latent = self.strided_style(styles)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
-
- else:
- return image, None
-
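A usage sketch for the Generator above, with hypothetical sizes; it assumes the custom `op` extensions imported at the top of this file build and load:

    import torch

    g = Generator(size=256, style_dim=512, n_mlp=8)   # 256x256 synthesis, g.n_latent == 14
    z1 = torch.randn(4, 512)
    z2 = torch.randn(4, 512)

    # Single latent, broadcast to every layer.
    img, _ = g([z1])                                  # img: [4, 3, 256, 256]

    # Style mixing: z1 drives layers [0, inject_index), z2 drives the rest.
    img, w_plus = g([z1, z2], inject_index=4, return_latents=True)
    # w_plus: [4, g.n_latent, 512], one w vector per layer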
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input):
- out = self.convs(input)
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdim=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
-
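The minibatch standard-deviation block inside Discriminator.forward is the least obvious part of this file; the same computation in isolation, with hypothetical sizes:

    import torch

    batch, channel, height, width = 8, 512, 4, 4
    stddev_group, stddev_feat = 4, 1
    out = torch.randn(batch, channel, height, width)

    group = min(batch, stddev_group)
    stddev = out.view(group, -1, stddev_feat, channel // stddev_feat, height, width)
    stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)  # std across the group dimension
    stddev = stddev.mean([2, 3, 4], keepdim=True).squeeze(2)   # one scalar per sub-batch
    stddev = stddev.repeat(group, 1, height, width)            # broadcast back to every sample
    out = torch.cat([out, stddev], 1)                          # out: [8, 513, 4, 4]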
diff --git a/spaces/DragGan/DragGan-Inversion/gui_utils/gl_utils.py b/spaces/DragGan/DragGan-Inversion/gui_utils/gl_utils.py
deleted file mode 100644
index 922db6ff7c8643352334c36b83039b8d2dad8a0f..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/gui_utils/gl_utils.py
+++ /dev/null
@@ -1,455 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import math
-import os
-import functools
-import contextlib
-import numpy as np
-import OpenGL.GL as gl
-import OpenGL.GL.ARB.texture_float
-import dnnlib
-
-# ----------------------------------------------------------------------------
-
-
-def init_egl():
- # Must be set before importing OpenGL.
- assert os.environ['PYOPENGL_PLATFORM'] == 'egl'
- import OpenGL.EGL as egl
- import ctypes
-
- # Initialize EGL.
- display = egl.eglGetDisplay(egl.EGL_DEFAULT_DISPLAY)
- assert display != egl.EGL_NO_DISPLAY
- major = ctypes.c_int32()
- minor = ctypes.c_int32()
- ok = egl.eglInitialize(display, major, minor)
- assert ok
- assert major.value * 10 + minor.value >= 14
-
- # Choose config.
- config_attribs = [
- egl.EGL_RENDERABLE_TYPE, egl.EGL_OPENGL_BIT,
- egl.EGL_SURFACE_TYPE, egl.EGL_PBUFFER_BIT,
- egl.EGL_NONE
- ]
- configs = (ctypes.c_int32 * 1)()
- num_configs = ctypes.c_int32()
- ok = egl.eglChooseConfig(display, config_attribs, configs, 1, num_configs)
- assert ok
- assert num_configs.value == 1
- config = configs[0]
-
- # Create dummy pbuffer surface.
- surface_attribs = [
- egl.EGL_WIDTH, 1,
- egl.EGL_HEIGHT, 1,
- egl.EGL_NONE
- ]
- surface = egl.eglCreatePbufferSurface(display, config, surface_attribs)
- assert surface != egl.EGL_NO_SURFACE
-
- # Setup GL context.
- ok = egl.eglBindAPI(egl.EGL_OPENGL_API)
- assert ok
- context = egl.eglCreateContext(display, config, egl.EGL_NO_CONTEXT, None)
- assert context != egl.EGL_NO_CONTEXT
- ok = egl.eglMakeCurrent(display, surface, surface, context)
- assert ok
-
-# ----------------------------------------------------------------------------
-
-
-_texture_formats = {
- ('uint8', 1): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_LUMINANCE, internalformat=gl.GL_LUMINANCE8),
- ('uint8', 2): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_LUMINANCE_ALPHA, internalformat=gl.GL_LUMINANCE8_ALPHA8),
- ('uint8', 3): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_RGB, internalformat=gl.GL_RGB8),
- ('uint8', 4): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_RGBA, internalformat=gl.GL_RGBA8),
- ('float32', 1): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_LUMINANCE, internalformat=OpenGL.GL.ARB.texture_float.GL_LUMINANCE32F_ARB),
- ('float32', 2): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_LUMINANCE_ALPHA, internalformat=OpenGL.GL.ARB.texture_float.GL_LUMINANCE_ALPHA32F_ARB),
- ('float32', 3): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_RGB, internalformat=gl.GL_RGB32F),
- ('float32', 4): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_RGBA, internalformat=gl.GL_RGBA32F),
-}
-
-
-def get_texture_format(dtype, channels):
- return _texture_formats[(np.dtype(dtype).name, int(channels))]
-
-# ----------------------------------------------------------------------------
-
-
-def prepare_texture_data(image):
- image = np.asarray(image)
- if image.ndim == 2:
- image = image[:, :, np.newaxis]
- if image.dtype.name == 'float64':
- image = image.astype('float32')
- return image
-
-# ----------------------------------------------------------------------------
-
-
-def draw_pixels(image, *, pos=0, zoom=1, align=0, rint=True):
- pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2])
- zoom = np.broadcast_to(np.asarray(zoom, dtype='float32'), [2])
- align = np.broadcast_to(np.asarray(align, dtype='float32'), [2])
- image = prepare_texture_data(image)
- height, width, channels = image.shape
- size = zoom * [width, height]
- pos = pos - size * align
- if rint:
- pos = np.rint(pos)
- fmt = get_texture_format(image.dtype, channels)
-
- gl.glPushAttrib(gl.GL_CURRENT_BIT | gl.GL_PIXEL_MODE_BIT)
- gl.glPushClientAttrib(gl.GL_CLIENT_PIXEL_STORE_BIT)
- gl.glRasterPos2f(pos[0], pos[1])
- gl.glPixelZoom(zoom[0], -zoom[1])
- gl.glPixelStorei(gl.GL_UNPACK_ALIGNMENT, 1)
- gl.glDrawPixels(width, height, fmt.format, fmt.type, image)
- gl.glPopClientAttrib()
- gl.glPopAttrib()
-
-# ----------------------------------------------------------------------------
-
-
-def read_pixels(width, height, *, pos=0, dtype='uint8', channels=3):
- pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2])
- dtype = np.dtype(dtype)
- fmt = get_texture_format(dtype, channels)
- image = np.empty([height, width, channels], dtype=dtype)
-
- gl.glPushClientAttrib(gl.GL_CLIENT_PIXEL_STORE_BIT)
- gl.glPixelStorei(gl.GL_PACK_ALIGNMENT, 1)
- gl.glReadPixels(int(np.round(pos[0])), int(np.round(pos[1])), width, height, fmt.format, fmt.type, image)
- gl.glPopClientAttrib()
- return np.flipud(image)
-
-# ----------------------------------------------------------------------------
-
-
-class Texture:
- def __init__(self, *, image=None, width=None, height=None, channels=None, dtype=None, bilinear=True, mipmap=True):
- self.gl_id = None
- self.bilinear = bilinear
- self.mipmap = mipmap
-
- # Determine size and dtype.
- if image is not None:
- image = prepare_texture_data(image)
- self.height, self.width, self.channels = image.shape
- self.dtype = image.dtype
- else:
- assert width is not None and height is not None
- self.width = width
- self.height = height
- self.channels = channels if channels is not None else 3
- self.dtype = np.dtype(dtype) if dtype is not None else np.uint8
-
- # Validate size and dtype.
- assert isinstance(self.width, int) and self.width >= 0
- assert isinstance(self.height, int) and self.height >= 0
- assert isinstance(self.channels, int) and self.channels >= 1
- assert self.is_compatible(
- width=width, height=height, channels=channels, dtype=dtype)
-
- # Create texture object.
- self.gl_id = gl.glGenTextures(1)
- with self.bind():
- gl.glTexParameterf(
- gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_CLAMP_TO_EDGE)
- gl.glTexParameterf(
- gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_CLAMP_TO_EDGE)
- gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER,
- gl.GL_LINEAR if self.bilinear else gl.GL_NEAREST)
- gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER,
- gl.GL_LINEAR_MIPMAP_LINEAR if self.mipmap else gl.GL_NEAREST)
- self.update(image)
-
- def delete(self):
- if self.gl_id is not None:
- gl.glDeleteTextures([self.gl_id])
- self.gl_id = None
-
- def __del__(self):
- try:
- self.delete()
- except:
- pass
-
- @contextlib.contextmanager
- def bind(self):
- prev_id = gl.glGetInteger(gl.GL_TEXTURE_BINDING_2D)
- gl.glBindTexture(gl.GL_TEXTURE_2D, self.gl_id)
- yield
- gl.glBindTexture(gl.GL_TEXTURE_2D, prev_id)
-
- def update(self, image):
- if image is not None:
- image = prepare_texture_data(image)
- assert self.is_compatible(image=image)
- with self.bind():
- fmt = get_texture_format(self.dtype, self.channels)
- gl.glPushClientAttrib(gl.GL_CLIENT_PIXEL_STORE_BIT)
- gl.glPixelStorei(gl.GL_UNPACK_ALIGNMENT, 1)
- gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, fmt.internalformat,
- self.width, self.height, 0, fmt.format, fmt.type, image)
- if self.mipmap:
- gl.glGenerateMipmap(gl.GL_TEXTURE_2D)
- gl.glPopClientAttrib()
-
- def draw(self, *, pos=0, zoom=1, align=0, rint=False, color=1, alpha=1, rounding=0):
- zoom = np.broadcast_to(np.asarray(zoom, dtype='float32'), [2])
- size = zoom * [self.width, self.height]
- with self.bind():
- gl.glPushAttrib(gl.GL_ENABLE_BIT)
- gl.glEnable(gl.GL_TEXTURE_2D)
- draw_rect(pos=pos, size=size, align=align, rint=rint,
- color=color, alpha=alpha, rounding=rounding)
- gl.glPopAttrib()
-
- def is_compatible(self, *, image=None, width=None, height=None, channels=None, dtype=None): # pylint: disable=too-many-return-statements
- if image is not None:
- if image.ndim != 3:
- return False
- ih, iw, ic = image.shape
- if not self.is_compatible(width=iw, height=ih, channels=ic, dtype=image.dtype):
- return False
- if width is not None and self.width != width:
- return False
- if height is not None and self.height != height:
- return False
- if channels is not None and self.channels != channels:
- return False
- if dtype is not None and self.dtype != dtype:
- return False
- return True
-
-# ----------------------------------------------------------------------------
-
-
-class Framebuffer:
- def __init__(self, *, texture=None, width=None, height=None, channels=None, dtype=None, msaa=0):
- self.texture = texture
- self.gl_id = None
- self.gl_color = None
- self.gl_depth_stencil = None
- self.msaa = msaa
-
- # Determine size and dtype.
- if texture is not None:
- assert isinstance(self.texture, Texture)
- self.width = texture.width
- self.height = texture.height
- self.channels = texture.channels
- self.dtype = texture.dtype
- else:
- assert width is not None and height is not None
- self.width = width
- self.height = height
- self.channels = channels if channels is not None else 4
- self.dtype = np.dtype(dtype) if dtype is not None else np.float32
-
- # Validate size and dtype.
- assert isinstance(self.width, int) and self.width >= 0
- assert isinstance(self.height, int) and self.height >= 0
- assert isinstance(self.channels, int) and self.channels >= 1
- assert width is None or width == self.width
- assert height is None or height == self.height
- assert channels is None or channels == self.channels
- assert dtype is None or dtype == self.dtype
-
- # Create framebuffer object.
- self.gl_id = gl.glGenFramebuffers(1)
- with self.bind():
-
- # Setup color buffer.
- if self.texture is not None:
- assert self.msaa == 0
- gl.glFramebufferTexture2D(
- gl.GL_FRAMEBUFFER, gl.GL_COLOR_ATTACHMENT0, gl.GL_TEXTURE_2D, self.texture.gl_id, 0)
- else:
- fmt = get_texture_format(self.dtype, self.channels)
- self.gl_color = gl.glGenRenderbuffers(1)
- gl.glBindRenderbuffer(gl.GL_RENDERBUFFER, self.gl_color)
- gl.glRenderbufferStorageMultisample(
- gl.GL_RENDERBUFFER, self.msaa, fmt.internalformat, self.width, self.height)
- gl.glFramebufferRenderbuffer(
- gl.GL_FRAMEBUFFER, gl.GL_COLOR_ATTACHMENT0, gl.GL_RENDERBUFFER, self.gl_color)
-
- # Setup depth/stencil buffer.
- self.gl_depth_stencil = gl.glGenRenderbuffers(1)
- gl.glBindRenderbuffer(gl.GL_RENDERBUFFER, self.gl_depth_stencil)
- gl.glRenderbufferStorageMultisample(
- gl.GL_RENDERBUFFER, self.msaa, gl.GL_DEPTH24_STENCIL8, self.width, self.height)
- gl.glFramebufferRenderbuffer(
- gl.GL_FRAMEBUFFER, gl.GL_DEPTH_STENCIL_ATTACHMENT, gl.GL_RENDERBUFFER, self.gl_depth_stencil)
-
- def delete(self):
- if self.gl_id is not None:
- gl.glDeleteFramebuffers([self.gl_id])
- self.gl_id = None
- if self.gl_color is not None:
- gl.glDeleteRenderbuffers(1, [self.gl_color])
- self.gl_color = None
- if self.gl_depth_stencil is not None:
- gl.glDeleteRenderbuffers(1, [self.gl_depth_stencil])
- self.gl_depth_stencil = None
-
- def __del__(self):
- try:
- self.delete()
- except:
- pass
-
- @contextlib.contextmanager
- def bind(self):
- prev_fbo = gl.glGetInteger(gl.GL_FRAMEBUFFER_BINDING)
- prev_rbo = gl.glGetInteger(gl.GL_RENDERBUFFER_BINDING)
- gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.gl_id)
- if self.width is not None and self.height is not None:
- gl.glViewport(0, 0, self.width, self.height)
- yield
- gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, prev_fbo)
- gl.glBindRenderbuffer(gl.GL_RENDERBUFFER, prev_rbo)
-
- def blit(self, dst=None):
- assert dst is None or isinstance(dst, Framebuffer)
- with self.bind():
- gl.glBindFramebuffer(gl.GL_DRAW_FRAMEBUFFER, 0 if dst is None else dst.gl_id)  # Framebuffer stores its handle in gl_id
- gl.glBlitFramebuffer(0, 0, self.width, self.height, 0, 0,
- self.width, self.height, gl.GL_COLOR_BUFFER_BIT, gl.GL_NEAREST)
-
-# ----------------------------------------------------------------------------
-
-
-def draw_shape(vertices, *, mode=gl.GL_TRIANGLE_FAN, pos=0, size=1, color=1, alpha=1):
- assert vertices.ndim == 2 and vertices.shape[1] == 2
- pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2])
- size = np.broadcast_to(np.asarray(size, dtype='float32'), [2])
- color = np.broadcast_to(np.asarray(color, dtype='float32'), [3])
- alpha = np.clip(np.broadcast_to(
- np.asarray(alpha, dtype='float32'), []), 0, 1)
-
- gl.glPushClientAttrib(gl.GL_CLIENT_VERTEX_ARRAY_BIT)
- gl.glPushAttrib(gl.GL_CURRENT_BIT | gl.GL_TRANSFORM_BIT)
- gl.glMatrixMode(gl.GL_MODELVIEW)
- gl.glPushMatrix()
-
- gl.glEnableClientState(gl.GL_VERTEX_ARRAY)
- gl.glEnableClientState(gl.GL_TEXTURE_COORD_ARRAY)
- gl.glVertexPointer(2, gl.GL_FLOAT, 0, vertices)
- gl.glTexCoordPointer(2, gl.GL_FLOAT, 0, vertices)
- gl.glTranslate(pos[0], pos[1], 0)
- gl.glScale(size[0], size[1], 1)
- gl.glColor4f(color[0] * alpha, color[1] * alpha, color[2] * alpha, alpha)
- gl.glDrawArrays(mode, 0, vertices.shape[0])
-
- gl.glPopMatrix()
- gl.glPopAttrib()
- gl.glPopClientAttrib()
-
-# ----------------------------------------------------------------------------
-
-
-def draw_arrow(x1, y1, x2, y2, l=10, width=1.0):
- # Compute the length and angle of the arrow
- dx = x2 - x1
- dy = y2 - y1
- length = math.sqrt(dx**2 + dy**2)
- if length < l:
- return
- angle = math.atan2(dy, dx)
-
- # Save the current modelview matrix
- gl.glPushMatrix()
-
- # Translate and rotate the coordinate system
- gl.glTranslatef(x1, y1, 0.0)
- gl.glRotatef(angle * 180.0 / math.pi, 0.0, 0.0, 1.0)
-
- # Set the line width
- gl.glLineWidth(width)
- # gl.glColor3f(0.75, 0.75, 0.75)
-
- # Begin drawing lines
- gl.glBegin(gl.GL_LINES)
-
- # Draw the shaft of the arrow
- gl.glVertex2f(0.0, 0.0)
- gl.glVertex2f(length, 0.0)
-
- # Draw the head of the arrow
- gl.glVertex2f(length, 0.0)
- gl.glVertex2f(length - 2 * l, l)
- gl.glVertex2f(length, 0.0)
- gl.glVertex2f(length - 2 * l, -l)
-
- # End drawing lines
- gl.glEnd()
-
- # Restore the modelview matrix
- gl.glPopMatrix()
-
-# ----------------------------------------------------------------------------
-
-
-def draw_rect(*, pos=0, pos2=None, size=None, align=0, rint=False, color=1, alpha=1, rounding=0):
- assert pos2 is None or size is None
- pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2])
- pos2 = np.broadcast_to(np.asarray(pos2, dtype='float32'), [2]) if pos2 is not None else None
- size = np.broadcast_to(np.asarray(size, dtype='float32'), [2]) if size is not None else None
- size = size if size is not None else (pos2 - pos if pos2 is not None else np.array([1, 1], dtype='float32'))
- pos = pos - size * align
- if rint:
- pos = np.rint(pos)
- rounding = np.broadcast_to(np.asarray(rounding, dtype='float32'), [2])
- rounding = np.minimum(
- np.abs(rounding) / np.maximum(np.abs(size), 1e-8), 0.5)
- if np.min(rounding) == 0:
- rounding *= 0
- vertices = _setup_rect(float(rounding[0]), float(rounding[1]))
- draw_shape(vertices, mode=gl.GL_TRIANGLE_FAN, pos=pos,
- size=size, color=color, alpha=alpha)
-
-
-@functools.lru_cache(maxsize=10000)
-def _setup_rect(rx, ry):
- t = np.linspace(0, np.pi / 2, 1 if max(rx, ry) == 0 else 64)
- s = 1 - np.sin(t)
- c = 1 - np.cos(t)
- x = [c * rx, 1 - s * rx, 1 - c * rx, s * rx]
- y = [s * ry, c * ry, 1 - s * ry, 1 - c * ry]
- v = np.stack([x, y], axis=-1).reshape(-1, 2)
- return v.astype('float32')
-
-# ----------------------------------------------------------------------------
-
-
-def draw_circle(*, center=0, radius=100, hole=0, color=1, alpha=1):
- hole = np.broadcast_to(np.asarray(hole, dtype='float32'), [])
- vertices = _setup_circle(float(hole))
- draw_shape(vertices, mode=gl.GL_TRIANGLE_STRIP, pos=center,
- size=radius, color=color, alpha=alpha)
-
-
-@functools.lru_cache(maxsize=10000)
-def _setup_circle(hole):
- t = np.linspace(0, np.pi * 2, 128)
- s = np.sin(t)
- c = np.cos(t)
- v = np.stack([c, s, c * hole, s * hole], axis=-1).reshape(-1, 2)
- return v.astype('float32')
-
-# ----------------------------------------------------------------------------
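A rough usage sketch of the offscreen path in this module. It is environment-dependent: it assumes an EGL-capable driver, and projection/viewport setup (handled by GlfwWindow.begin_frame in the windowed path) is the caller's responsibility:

    import os
    os.environ['PYOPENGL_PLATFORM'] = 'egl'   # must be set before OpenGL is imported

    from gui_utils import gl_utils            # assumed import path within the repo

    gl_utils.init_egl()                       # headless GL context via EGL

    # Render into a texture-backed framebuffer and read the pixels back.
    tex = gl_utils.Texture(width=256, height=256, channels=3, dtype='uint8')
    fb = gl_utils.Framebuffer(texture=tex)
    with fb.bind():
        gl_utils.draw_circle(center=[128, 128], radius=64, color=[1, 0, 0])
        image = gl_utils.read_pixels(256, 256)  # uint8 array of shape [256, 256, 3]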
diff --git a/spaces/DragGan/DragGan/gui_utils/glfw_window.py b/spaces/DragGan/DragGan/gui_utils/glfw_window.py
deleted file mode 100644
index 83264eb89a855ec5038cf255994ee2b4b3ddb5ee..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/gui_utils/glfw_window.py
+++ /dev/null
@@ -1,229 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import time
-import glfw
-import OpenGL.GL as gl
-from . import gl_utils
-
-#----------------------------------------------------------------------------
-
-class GlfwWindow: # pylint: disable=too-many-public-methods
- def __init__(self, *, title='GlfwWindow', window_width=1920, window_height=1080, deferred_show=True, close_on_esc=True):
- self._glfw_window = None
- self._drawing_frame = False
- self._frame_start_time = None
- self._frame_delta = 0
- self._fps_limit = None
- self._vsync = None
- self._skip_frames = 0
- self._deferred_show = deferred_show
- self._close_on_esc = close_on_esc
- self._esc_pressed = False
- self._drag_and_drop_paths = None
- self._capture_next_frame = False
- self._captured_frame = None
-
- # Create window.
- glfw.init()
- glfw.window_hint(glfw.VISIBLE, False)
- self._glfw_window = glfw.create_window(width=window_width, height=window_height, title=title, monitor=None, share=None)
- self._attach_glfw_callbacks()
- self.make_context_current()
-
- # Adjust window.
- self.set_vsync(False)
- self.set_window_size(window_width, window_height)
- if not self._deferred_show:
- glfw.show_window(self._glfw_window)
-
- def close(self):
- if self._drawing_frame:
- self.end_frame()
- if self._glfw_window is not None:
- glfw.destroy_window(self._glfw_window)
- self._glfw_window = None
- #glfw.terminate() # Commented out to play it nice with other glfw clients.
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- @property
- def window_width(self):
- return self.content_width
-
- @property
- def window_height(self):
- return self.content_height + self.title_bar_height
-
- @property
- def content_width(self):
- width, _height = glfw.get_window_size(self._glfw_window)
- return width
-
- @property
- def content_height(self):
- _width, height = glfw.get_window_size(self._glfw_window)
- return height
-
- @property
- def title_bar_height(self):
- _left, top, _right, _bottom = glfw.get_window_frame_size(self._glfw_window)
- return top
-
- @property
- def monitor_width(self):
- _, _, width, _height = glfw.get_monitor_workarea(glfw.get_primary_monitor())
- return width
-
- @property
- def monitor_height(self):
- _, _, _width, height = glfw.get_monitor_workarea(glfw.get_primary_monitor())
- return height
-
- @property
- def frame_delta(self):
- return self._frame_delta
-
- def set_title(self, title):
- glfw.set_window_title(self._glfw_window, title)
-
- def set_window_size(self, width, height):
- width = min(width, self.monitor_width)
- height = min(height, self.monitor_height)
- glfw.set_window_size(self._glfw_window, width, max(height - self.title_bar_height, 0))
- if width == self.monitor_width and height == self.monitor_height:
- self.maximize()
-
- def set_content_size(self, width, height):
- self.set_window_size(width, height + self.title_bar_height)
-
- def maximize(self):
- glfw.maximize_window(self._glfw_window)
-
- def set_position(self, x, y):
- glfw.set_window_pos(self._glfw_window, x, y + self.title_bar_height)
-
- def center(self):
- self.set_position((self.monitor_width - self.window_width) // 2, (self.monitor_height - self.window_height) // 2)
-
- def set_vsync(self, vsync):
- vsync = bool(vsync)
- if vsync != self._vsync:
- glfw.swap_interval(1 if vsync else 0)
- self._vsync = vsync
-
- def set_fps_limit(self, fps_limit):
- self._fps_limit = int(fps_limit)
-
- def should_close(self):
- return glfw.window_should_close(self._glfw_window) or (self._close_on_esc and self._esc_pressed)
-
- def skip_frame(self):
- self.skip_frames(1)
-
- def skip_frames(self, num): # Do not update window for the next N frames.
- self._skip_frames = max(self._skip_frames, int(num))
-
- def is_skipping_frames(self):
- return self._skip_frames > 0
-
- def capture_next_frame(self):
- self._capture_next_frame = True
-
- def pop_captured_frame(self):
- frame = self._captured_frame
- self._captured_frame = None
- return frame
-
- def pop_drag_and_drop_paths(self):
- paths = self._drag_and_drop_paths
- self._drag_and_drop_paths = None
- return paths
-
- def draw_frame(self): # To be overridden by subclass.
- self.begin_frame()
- # Rendering code goes here.
- self.end_frame()
-
- def make_context_current(self):
- if self._glfw_window is not None:
- glfw.make_context_current(self._glfw_window)
-
- def begin_frame(self):
- # End previous frame.
- if self._drawing_frame:
- self.end_frame()
-
- # Apply FPS limit.
- if self._frame_start_time is not None and self._fps_limit is not None:
- delay = self._frame_start_time - time.perf_counter() + 1 / self._fps_limit
- if delay > 0:
- time.sleep(delay)
- cur_time = time.perf_counter()
- if self._frame_start_time is not None:
- self._frame_delta = cur_time - self._frame_start_time
- self._frame_start_time = cur_time
-
- # Process events.
- glfw.poll_events()
-
- # Begin frame.
- self._drawing_frame = True
- self.make_context_current()
-
- # Initialize GL state.
- gl.glViewport(0, 0, self.content_width, self.content_height)
- gl.glMatrixMode(gl.GL_PROJECTION)
- gl.glLoadIdentity()
- gl.glTranslate(-1, 1, 0)
- gl.glScale(2 / max(self.content_width, 1), -2 / max(self.content_height, 1), 1)
- gl.glMatrixMode(gl.GL_MODELVIEW)
- gl.glLoadIdentity()
- gl.glEnable(gl.GL_BLEND)
- gl.glBlendFunc(gl.GL_ONE, gl.GL_ONE_MINUS_SRC_ALPHA) # Pre-multiplied alpha.
-
- # Clear.
- gl.glClearColor(0, 0, 0, 1)
- gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
-
- def end_frame(self):
- assert self._drawing_frame
- self._drawing_frame = False
-
- # Skip frames if requested.
- if self._skip_frames > 0:
- self._skip_frames -= 1
- return
-
- # Capture frame if requested.
- if self._capture_next_frame:
- self._captured_frame = gl_utils.read_pixels(self.content_width, self.content_height)
- self._capture_next_frame = False
-
- # Update window.
- if self._deferred_show:
- glfw.show_window(self._glfw_window)
- self._deferred_show = False
- glfw.swap_buffers(self._glfw_window)
-
- def _attach_glfw_callbacks(self):
- glfw.set_key_callback(self._glfw_window, self._glfw_key_callback)
- glfw.set_drop_callback(self._glfw_window, self._glfw_drop_callback)
-
- def _glfw_key_callback(self, _window, key, _scancode, action, _mods):
- if action == glfw.PRESS and key == glfw.KEY_ESCAPE:
- self._esc_pressed = True
-
- def _glfw_drop_callback(self, _window, paths):
- self._drag_and_drop_paths = paths
-
-#----------------------------------------------------------------------------
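A minimal subclass sketch showing the intended render-loop usage, assuming glfw and a desktop OpenGL context are available (import paths are the repo's, and the drawn rectangle is only illustrative):

    from gui_utils.glfw_window import GlfwWindow
    from gui_utils import gl_utils

    class DemoWindow(GlfwWindow):
        def draw_frame(self):
            self.begin_frame()   # polls events, sets up GL state, clears the frame
            gl_utils.draw_rect(pos=[100, 100], size=[200, 150], color=[0.2, 0.6, 1.0])
            self.end_frame()     # swaps buffers and shows the deferred window

    win = DemoWindow(title='Demo', window_width=800, window_height=600)
    win.set_fps_limit(60)
    while not win.should_close():
        win.draw_frame()
    win.close()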
diff --git a/spaces/DragGan/DragGan/stylegan_human/training/training_loop.py b/spaces/DragGan/DragGan/stylegan_human/training/training_loop.py
deleted file mode 100644
index ddd0c15e226b0436048fee4469341e3fb653c71b..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/training/training_loop.py
+++ /dev/null
@@ -1,427 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Main training loop."""
-
-import os
-import time
-import copy
-import json
-import pickle
-import psutil
-import PIL.Image
-import numpy as np
-import torch
-import dnnlib
-from torch_utils import misc
-from torch_utils import training_stats
-from torch_utils.ops import conv2d_gradfix
-from torch_utils.ops import grid_sample_gradfix
-
-import legacy
-from metrics import metric_main
-
-#----------------------------------------------------------------------------
-
-def setup_snapshot_image_grid(training_set, random_seed=0):
- rnd = np.random.RandomState(random_seed)
- gw = np.clip(7680 // training_set.image_shape[2], 7, 32)
- gh = np.clip(4320 // training_set.image_shape[1], 4, 32)
-
- # No labels => show random subset of training samples.
- if not training_set.has_labels:
- all_indices = list(range(len(training_set)))
- rnd.shuffle(all_indices)
- grid_indices = [all_indices[i % len(all_indices)] for i in range(gw * gh)]
-
- else:
- # Group training samples by label.
- label_groups = dict() # label => [idx, ...]
- for idx in range(len(training_set)):
- label = tuple(training_set.get_details(idx).raw_label.flat[::-1])
- if label not in label_groups:
- label_groups[label] = []
- label_groups[label].append(idx)
-
- # Reorder.
- label_order = sorted(label_groups.keys())
- for label in label_order:
- rnd.shuffle(label_groups[label])
-
- # Organize into grid.
- grid_indices = []
- for y in range(gh):
- label = label_order[y % len(label_order)]
- indices = label_groups[label]
- grid_indices += [indices[x % len(indices)] for x in range(gw)]
- label_groups[label] = [indices[(i + gw) % len(indices)] for i in range(len(indices))]
-
- # Load data.
- images, labels = zip(*[training_set[i] for i in grid_indices])
- return (gw, gh), np.stack(images), np.stack(labels)
-
-#----------------------------------------------------------------------------
-
-def save_image_grid(img, fname, drange, grid_size):
- lo, hi = drange
- img = np.asarray(img, dtype=np.float32)
- img = (img - lo) * (255 / (hi - lo))
- img = np.rint(img).clip(0, 255).astype(np.uint8)
-
- gw, gh = grid_size
- _N, C, H, W = img.shape
- img = img.reshape([gh, gw, C, H, W])
- img = img.transpose(0, 3, 1, 4, 2)
- img = img.reshape([gh * H, gw * W, C])
-
- assert C in [1, 3]
- if C == 1:
- PIL.Image.fromarray(img[:, :, 0], 'L').save(fname)
- if C == 3:
- PIL.Image.fromarray(img, 'RGB').save(fname)
-
-#----------------------------------------------------------------------------
-
-def training_loop(
- run_dir = '.', # Output directory.
- training_set_kwargs = {}, # Options for training set.
- data_loader_kwargs = {}, # Options for torch.utils.data.DataLoader.
- G_kwargs = {}, # Options for generator network.
- D_kwargs = {}, # Options for discriminator network.
- G_opt_kwargs = {}, # Options for generator optimizer.
- D_opt_kwargs = {}, # Options for discriminator optimizer.
- augment_kwargs = None, # Options for augmentation pipeline. None = disable.
- loss_kwargs = {}, # Options for loss function.
- metrics = [], # Metrics to evaluate during training.
- random_seed = 0, # Global random seed.
- num_gpus = 1, # Number of GPUs participating in the training.
- rank = 0, # Rank of the current process in [0, num_gpus).
- batch_size = 4, # Total batch size for one training iteration. Can be larger than batch_gpu * num_gpus.
- batch_gpu = 4, # Number of samples processed at a time by one GPU.
- ema_kimg = 10, # Half-life of the exponential moving average (EMA) of generator weights.
- ema_rampup = 0.05, # EMA ramp-up coefficient. None = no rampup.
- G_reg_interval = None, # How often to perform regularization for G? None = disable lazy regularization.
- D_reg_interval = 16, # How often to perform regularization for D? None = disable lazy regularization.
- augment_p = 0, # Initial value of augmentation probability.
- ada_target = None, # ADA target value. None = fixed p.
- ada_interval = 4, # How often to perform ADA adjustment?
- ada_kimg = 500, # ADA adjustment speed, measured in how many kimg it takes for p to increase/decrease by one unit.
- total_kimg = 25000, # Total length of the training, measured in thousands of real images.
- kimg_per_tick = 4, # Progress snapshot interval.
- image_snapshot_ticks = 50, # How often to save image snapshots? None = disable.
- network_snapshot_ticks = 50, # How often to save network snapshots? None = disable.
- resume_pkl = None, # Network pickle to resume training from.
- resume_kimg = 0, # First kimg to report when resuming training.
- cudnn_benchmark = True, # Enable torch.backends.cudnn.benchmark?
- abort_fn = None, # Callback function for determining whether to abort training. Must return consistent results across ranks.
- progress_fn = None, # Callback function for updating training progress. Called for all ranks.
-):
- # Initialize.
- start_time = time.time()
- device = torch.device('cuda', rank)
- np.random.seed(random_seed * num_gpus + rank)
- torch.manual_seed(random_seed * num_gpus + rank)
- torch.backends.cudnn.benchmark = cudnn_benchmark # Improves training speed.
- torch.backends.cuda.matmul.allow_tf32 = False # Improves numerical accuracy.
- torch.backends.cudnn.allow_tf32 = False # Improves numerical accuracy.
- conv2d_gradfix.enabled = True # Improves training speed.
- grid_sample_gradfix.enabled = True # Avoids errors with the augmentation pipe.
-
- # Load training set.
- if rank == 0:
- print('Loading training set...')
- training_set = dnnlib.util.construct_class_by_name(**training_set_kwargs) # subclass of training.dataset.Dataset
- training_set_sampler = misc.InfiniteSampler(dataset=training_set, rank=rank, num_replicas=num_gpus, seed=random_seed)
- training_set_iterator = iter(torch.utils.data.DataLoader(dataset=training_set, sampler=training_set_sampler, batch_size=batch_size//num_gpus, **data_loader_kwargs))
- if rank == 0:
- print()
- print('Num images: ', len(training_set))
- print('Image shape:', training_set.image_shape)
- print('Label shape:', training_set.label_shape)
- print()
-
- # Construct networks.
- if rank == 0:
- print('Constructing networks...')
- common_kwargs = dict(c_dim=training_set.label_dim, img_resolution=training_set.resolution, img_channels=training_set.num_channels)
- G = dnnlib.util.construct_class_by_name(**G_kwargs, **common_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module
- D = dnnlib.util.construct_class_by_name(**D_kwargs, **common_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module
- G_ema = copy.deepcopy(G).eval()
-
- # Resume from existing pickle.
- if (resume_pkl is not None) and (rank == 0):
- print(f'Resuming from "{resume_pkl}"')
- with dnnlib.util.open_url(resume_pkl) as f:
- resume_data = legacy.load_network_pkl(f)
- for name, module in [('G', G), ('D', D), ('G_ema', G_ema)]:
- misc.copy_params_and_buffers(resume_data[name], module, require_all=False)
-
- # Print network summary tables.
- if rank == 0:
- z = torch.empty([batch_gpu, G.z_dim], device=device)
- c = torch.empty([batch_gpu, G.c_dim], device=device)
- img = misc.print_module_summary(G, [z, c])
- misc.print_module_summary(D, [img, c])
-
- # Setup augmentation.
- if rank == 0:
- print('Setting up augmentation...')
- augment_pipe = None
- ada_stats = None
- if (augment_kwargs is not None) and (augment_p > 0 or ada_target is not None):
- augment_pipe = dnnlib.util.construct_class_by_name(**augment_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module
- augment_pipe.p.copy_(torch.as_tensor(augment_p))
- if ada_target is not None:
- ada_stats = training_stats.Collector(regex='Loss/signs/real')
-
- # Distribute across GPUs.
- if rank == 0:
- print(f'Distributing across {num_gpus} GPUs...')
- for module in [G, D, G_ema, augment_pipe]:
- if module is not None and num_gpus > 1:
- for param in misc.params_and_buffers(module):
- torch.distributed.broadcast(param, src=0)
-
- # Setup training phases.
- if rank == 0:
- print('Setting up training phases...')
- loss = dnnlib.util.construct_class_by_name(device=device, G=G, D=D, augment_pipe=augment_pipe, **loss_kwargs) # subclass of training.loss.Loss
- phases = []
- for name, module, opt_kwargs, reg_interval in [('G', G, G_opt_kwargs, G_reg_interval), ('D', D, D_opt_kwargs, D_reg_interval)]:
- if reg_interval is None:
- opt = dnnlib.util.construct_class_by_name(params=module.parameters(), **opt_kwargs) # subclass of torch.optim.Optimizer
- phases += [dnnlib.EasyDict(name=name+'both', module=module, opt=opt, interval=1)]
- else: # Lazy regularization.
- mb_ratio = reg_interval / (reg_interval + 1)
- opt_kwargs = dnnlib.EasyDict(opt_kwargs)
- opt_kwargs.lr = opt_kwargs.lr * mb_ratio
- opt_kwargs.betas = [beta ** mb_ratio for beta in opt_kwargs.betas]
- opt = dnnlib.util.construct_class_by_name(module.parameters(), **opt_kwargs) # subclass of torch.optim.Optimizer
- phases += [dnnlib.EasyDict(name=name+'main', module=module, opt=opt, interval=1)]
- phases += [dnnlib.EasyDict(name=name+'reg', module=module, opt=opt, interval=reg_interval)]
- for phase in phases:
- phase.start_event = None
- phase.end_event = None
- if rank == 0:
- phase.start_event = torch.cuda.Event(enable_timing=True)
- phase.end_event = torch.cuda.Event(enable_timing=True)
-
- # Export sample images.
- grid_size = None
- grid_z = None
- grid_c = None
- if rank == 0:
- print('Exporting sample images...')
- grid_size, images, labels = setup_snapshot_image_grid(training_set=training_set)
- save_image_grid(images, os.path.join(run_dir, 'reals.png'), drange=[0,255], grid_size=grid_size)
- grid_z = torch.randn([labels.shape[0], G.z_dim], device=device).split(batch_gpu)
- grid_c = torch.from_numpy(labels).to(device).split(batch_gpu)
- images = torch.cat([G_ema(z=z, c=c, noise_mode='const').cpu() for z, c in zip(grid_z, grid_c)]).numpy()
- save_image_grid(images, os.path.join(run_dir, 'fakes_init.png'), drange=[-1,1], grid_size=grid_size)
-
- # Initialize logs.
- if rank == 0:
- print('Initializing logs...')
- stats_collector = training_stats.Collector(regex='.*')
- stats_metrics = dict()
- stats_jsonl = None
- stats_tfevents = None
- if rank == 0:
- stats_jsonl = open(os.path.join(run_dir, 'stats.jsonl'), 'wt')
- try:
- import torch.utils.tensorboard as tensorboard
- stats_tfevents = tensorboard.SummaryWriter(run_dir)
- except ImportError as err:
- print('Skipping tfevents export:', err)
-
- # Train.
- if rank == 0:
- print(f'Training for {total_kimg} kimg...')
- print()
- cur_nimg = resume_kimg * 1000
- cur_tick = 0
- tick_start_nimg = cur_nimg
- tick_start_time = time.time()
- maintenance_time = tick_start_time - start_time
- batch_idx = 0
- if progress_fn is not None:
- progress_fn(0, total_kimg)
- while True:
-
- # Fetch training data.
- with torch.autograd.profiler.record_function('data_fetch'):
- phase_real_img, phase_real_c = next(training_set_iterator)
- phase_real_img = (phase_real_img.to(device).to(torch.float32) / 127.5 - 1).split(batch_gpu)
- phase_real_c = phase_real_c.to(device).split(batch_gpu)
- all_gen_z = torch.randn([len(phases) * batch_size, G.z_dim], device=device)
- all_gen_z = [phase_gen_z.split(batch_gpu) for phase_gen_z in all_gen_z.split(batch_size)]
- all_gen_c = [training_set.get_label(np.random.randint(len(training_set))) for _ in range(len(phases) * batch_size)]
- all_gen_c = torch.from_numpy(np.stack(all_gen_c)).pin_memory().to(device)
- all_gen_c = [phase_gen_c.split(batch_gpu) for phase_gen_c in all_gen_c.split(batch_size)]
-
- # Execute training phases.
- for phase, phase_gen_z, phase_gen_c in zip(phases, all_gen_z, all_gen_c):
- if batch_idx % phase.interval != 0:
- continue
- if phase.start_event is not None:
- phase.start_event.record(torch.cuda.current_stream(device))
-
- # Accumulate gradients.
- phase.opt.zero_grad(set_to_none=True)
- phase.module.requires_grad_(True)
- for real_img, real_c, gen_z, gen_c in zip(phase_real_img, phase_real_c, phase_gen_z, phase_gen_c):
- loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, gain=phase.interval, cur_nimg=cur_nimg)
- phase.module.requires_grad_(False)
-
- # Update weights.
- with torch.autograd.profiler.record_function(phase.name + '_opt'):
- params = [param for param in phase.module.parameters() if param.grad is not None]
- if len(params) > 0:
- flat = torch.cat([param.grad.flatten() for param in params])
- if num_gpus > 1:
- torch.distributed.all_reduce(flat)
- flat /= num_gpus
- misc.nan_to_num(flat, nan=0, posinf=1e5, neginf=-1e5, out=flat)
- grads = flat.split([param.numel() for param in params])
- for param, grad in zip(params, grads):
- param.grad = grad.reshape(param.shape)
- phase.opt.step()
-
- # Phase done.
- if phase.end_event is not None:
- phase.end_event.record(torch.cuda.current_stream(device))
-
- # Update G_ema.
- with torch.autograd.profiler.record_function('Gema'):
- ema_nimg = ema_kimg * 1000
- if ema_rampup is not None:
- ema_nimg = min(ema_nimg, cur_nimg * ema_rampup)
- ema_beta = 0.5 ** (batch_size / max(ema_nimg, 1e-8))
- for p_ema, p in zip(G_ema.parameters(), G.parameters()):
- p_ema.copy_(p.lerp(p_ema, ema_beta))
- for b_ema, b in zip(G_ema.buffers(), G.buffers()):
- b_ema.copy_(b)
-
- # Update state.
- cur_nimg += batch_size
- batch_idx += 1
-
- # Execute ADA heuristic.
- if (ada_stats is not None) and (batch_idx % ada_interval == 0):
- ada_stats.update()
- adjust = np.sign(ada_stats['Loss/signs/real'] - ada_target) * (batch_size * ada_interval) / (ada_kimg * 1000)
- augment_pipe.p.copy_((augment_pipe.p + adjust).max(misc.constant(0, device=device)))
-
- # Perform maintenance tasks once per tick.
- done = (cur_nimg >= total_kimg * 1000)
- if (not done) and (cur_tick != 0) and (cur_nimg < tick_start_nimg + kimg_per_tick * 1000):
- continue
-
- # Print status line, accumulating the same information in training_stats.
- tick_end_time = time.time()
- fields = []
- fields += [f"tick {training_stats.report0('Progress/tick', cur_tick):<5d}"]
- fields += [f"kimg {training_stats.report0('Progress/kimg', cur_nimg / 1e3):<8.1f}"]
- fields += [f"time {dnnlib.util.format_time(training_stats.report0('Timing/total_sec', tick_end_time - start_time)):<12s}"]
- fields += [f"sec/tick {training_stats.report0('Timing/sec_per_tick', tick_end_time - tick_start_time):<7.1f}"]
- fields += [f"sec/kimg {training_stats.report0('Timing/sec_per_kimg', (tick_end_time - tick_start_time) / (cur_nimg - tick_start_nimg) * 1e3):<7.2f}"]
- fields += [f"maintenance {training_stats.report0('Timing/maintenance_sec', maintenance_time):<6.1f}"]
- fields += [f"cpumem {training_stats.report0('Resources/cpu_mem_gb', psutil.Process(os.getpid()).memory_info().rss / 2**30):<6.2f}"]
- fields += [f"gpumem {training_stats.report0('Resources/peak_gpu_mem_gb', torch.cuda.max_memory_allocated(device) / 2**30):<6.2f}"]
- fields += [f"reserved {training_stats.report0('Resources/peak_gpu_mem_reserved_gb', torch.cuda.max_memory_reserved(device) / 2**30):<6.2f}"]
- torch.cuda.reset_peak_memory_stats()
- fields += [f"augment {training_stats.report0('Progress/augment', float(augment_pipe.p.cpu()) if augment_pipe is not None else 0):.3f}"]
- training_stats.report0('Timing/total_hours', (tick_end_time - start_time) / (60 * 60))
- training_stats.report0('Timing/total_days', (tick_end_time - start_time) / (24 * 60 * 60))
- if rank == 0:
- print(' '.join(fields))
-
- # Check for abort.
- if (not done) and (abort_fn is not None) and abort_fn():
- done = True
- if rank == 0:
- print()
- print('Aborting...')
-
- # Save image snapshot.
- if (rank == 0) and (image_snapshot_ticks is not None) and (done or cur_tick % image_snapshot_ticks == 0):
- images = torch.cat([G_ema(z=z, c=c, noise_mode='const').cpu() for z, c in zip(grid_z, grid_c)]).numpy()
- save_image_grid(images, os.path.join(run_dir, f'fakes{cur_nimg//1000:06d}.png'), drange=[-1,1], grid_size=grid_size)
-
- # Save network snapshot.
- snapshot_pkl = None
- snapshot_data = None
- if (network_snapshot_ticks is not None) and (done or cur_tick % network_snapshot_ticks == 0):
- snapshot_data = dict(G=G, D=D, G_ema=G_ema, augment_pipe=augment_pipe, training_set_kwargs=dict(training_set_kwargs))
- for key, value in snapshot_data.items():
- if isinstance(value, torch.nn.Module):
- value = copy.deepcopy(value).eval().requires_grad_(False)
- if num_gpus > 1:
- misc.check_ddp_consistency(value, ignore_regex=r'.*\.[^.]+_(avg|ema)')
- for param in misc.params_and_buffers(value):
- torch.distributed.broadcast(param, src=0)
- snapshot_data[key] = value.cpu()
- del value # conserve memory
- snapshot_pkl = os.path.join(run_dir, f'network-snapshot-{cur_nimg//1000:06d}.pkl')
- if rank == 0:
- with open(snapshot_pkl, 'wb') as f:
- pickle.dump(snapshot_data, f)
-
- # Evaluate metrics.
- if (snapshot_data is not None) and (len(metrics) > 0):
- if rank == 0:
- print('Evaluating metrics...')
- for metric in metrics:
- result_dict = metric_main.calc_metric(metric=metric, G=snapshot_data['G_ema'],
- dataset_kwargs=training_set_kwargs, num_gpus=num_gpus, rank=rank, device=device)
- if rank == 0:
- metric_main.report_metric(result_dict, run_dir=run_dir, snapshot_pkl=snapshot_pkl)
- stats_metrics.update(result_dict.results)
- del snapshot_data # conserve memory
-
- # Collect statistics.
- for phase in phases:
- value = []
- if (phase.start_event is not None) and (phase.end_event is not None):
- phase.end_event.synchronize()
- value = phase.start_event.elapsed_time(phase.end_event)
- training_stats.report0('Timing/' + phase.name, value)
- stats_collector.update()
- stats_dict = stats_collector.as_dict()
-
- # Update logs.
- timestamp = time.time()
- if stats_jsonl is not None:
- fields = dict(stats_dict, timestamp=timestamp)
- stats_jsonl.write(json.dumps(fields) + '\n')
- stats_jsonl.flush()
- if stats_tfevents is not None:
- global_step = int(cur_nimg / 1e3)
- walltime = timestamp - start_time
- for name, value in stats_dict.items():
- stats_tfevents.add_scalar(name, value.mean, global_step=global_step, walltime=walltime)
- for name, value in stats_metrics.items():
- stats_tfevents.add_scalar(f'Metrics/{name}', value, global_step=global_step, walltime=walltime)
- stats_tfevents.flush()
- if progress_fn is not None:
- progress_fn(cur_nimg // 1000, total_kimg)
-
- # Update state.
- cur_tick += 1
- tick_start_nimg = cur_nimg
- tick_start_time = time.time()
- maintenance_time = tick_start_time - tick_end_time
- if done:
- break
-
- # Done.
- if rank == 0:
- print()
- print('Exiting...')
-
-#----------------------------------------------------------------------------
diff --git a/spaces/ECCV2022/PARSeq-OCR/README.md b/spaces/ECCV2022/PARSeq-OCR/README.md
deleted file mode 100644
index 25976e88ef1520b7bc736749f2f798f3caaedcc7..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/PARSeq-OCR/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: PARSeq OCR
-emoji: 📚
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.1.3
-python_version: 3.9.13
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/matching.py b/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/matching.py
deleted file mode 100644
index 01d07da874a793c06eecba172d1e44c7a368234b..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/matching.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import cv2
-import numpy as np
-import lap
-from scipy.spatial.distance import cdist
-
-from cython_bbox import bbox_overlaps as bbox_ious
-from yolox.motdt_tracker import kalman_filter
-
-
-def _indices_to_matches(cost_matrix, indices, thresh):
- matched_cost = cost_matrix[tuple(zip(*indices))]
- matched_mask = (matched_cost <= thresh)
-
- matches = indices[matched_mask]
- unmatched_a = tuple(set(range(cost_matrix.shape[0])) - set(matches[:, 0]))
- unmatched_b = tuple(set(range(cost_matrix.shape[1])) - set(matches[:, 1]))
-
- return matches, unmatched_a, unmatched_b
-
-
-def linear_assignment(cost_matrix, thresh):
- if cost_matrix.size == 0:
- return np.empty((0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(range(cost_matrix.shape[1]))
- matches, unmatched_a, unmatched_b = [], [], []
- cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh)
- for ix, mx in enumerate(x):
- if mx >= 0:
- matches.append([ix, mx])
- unmatched_a = np.where(x < 0)[0]
- unmatched_b = np.where(y < 0)[0]
- matches = np.asarray(matches)
- return matches, unmatched_a, unmatched_b
-
-
-def ious(atlbrs, btlbrs):
- """
- Compute cost based on IoU
- :type atlbrs: list[tlbr] | np.ndarray
-    :type btlbrs: list[tlbr] | np.ndarray
- :rtype ious np.ndarray
- """
-    ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float64)
- if ious.size == 0:
- return ious
-
- ious = bbox_ious(
-        np.ascontiguousarray(atlbrs, dtype=np.float64),
-        np.ascontiguousarray(btlbrs, dtype=np.float64)
- )
-
- return ious
-
-
-def iou_distance(atracks, btracks):
- """
- Compute cost based on IoU
- :type atracks: list[STrack]
- :type btracks: list[STrack]
- :rtype cost_matrix np.ndarray
- """
- atlbrs = [track.tlbr for track in atracks]
- btlbrs = [track.tlbr for track in btracks]
- _ious = ious(atlbrs, btlbrs)
- cost_matrix = 1 - _ious
-
- return cost_matrix
-
-
-def nearest_reid_distance(tracks, detections, metric='cosine'):
- """
- Compute cost based on ReID features
- :type tracks: list[STrack]
- :type detections: list[BaseTrack]
- :rtype cost_matrix np.ndarray
- """
-    cost_matrix = np.zeros((len(tracks), len(detections)), dtype=np.float64)
- if cost_matrix.size == 0:
- return cost_matrix
-
- det_features = np.asarray([track.curr_feature for track in detections], dtype=np.float32)
- for i, track in enumerate(tracks):
- cost_matrix[i, :] = np.maximum(0.0, cdist(track.features, det_features, metric).min(axis=0))
-
- return cost_matrix
-
-
-def mean_reid_distance(tracks, detections, metric='cosine'):
- """
- Compute cost based on ReID features
- :type tracks: list[STrack]
- :type detections: list[BaseTrack]
- :type metric: str
- :rtype cost_matrix np.ndarray
- """
-    cost_matrix = np.empty((len(tracks), len(detections)), dtype=np.float64)
- if cost_matrix.size == 0:
- return cost_matrix
-
- track_features = np.asarray([track.curr_feature for track in tracks], dtype=np.float32)
- det_features = np.asarray([track.curr_feature for track in detections], dtype=np.float32)
- cost_matrix = cdist(track_features, det_features, metric)
-
- return cost_matrix
-
-
-def gate_cost_matrix(kf, cost_matrix, tracks, detections, only_position=False):
- if cost_matrix.size == 0:
- return cost_matrix
- gating_dim = 2 if only_position else 4
- gating_threshold = kalman_filter.chi2inv95[gating_dim]
- measurements = np.asarray([det.to_xyah() for det in detections])
- for row, track in enumerate(tracks):
- gating_distance = kf.gating_distance(
- track.mean, track.covariance, measurements, only_position)
- cost_matrix[row, gating_distance > gating_threshold] = np.inf
- return cost_matrix
\ No newline at end of file
diff --git a/spaces/EstebanDC/UCS_JG/app.py b/spaces/EstebanDC/UCS_JG/app.py
deleted file mode 100644
index aed3bceed71e1a8b53f3443a8842cadfc537889b..0000000000000000000000000000000000000000
--- a/spaces/EstebanDC/UCS_JG/app.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import pickle
-import numpy as np
-import gradio as gr
-import sklearn
-import pandas as pd
-from sklearn.model_selection import train_test_split
-from sklearn.ensemble import ExtraTreesRegressor
-
-filename = 'Dataset_RCS_3.csv'
-names0 = ['JET', "Suelo",'SPT', 'WtoC', 'Presion', 'Velocidad','RCS']
-dataset=pd.read_csv(filename, names=names0)
-
-y = dataset['RCS']
-X = dataset.drop('RCS', axis=1)
-
-validation_size = 0.20
-seed = 10
-X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=validation_size, random_state=seed)
-
-
-modelodef=ExtraTreesRegressor(
- n_estimators=1000,
- max_depth=9,
- min_samples_leaf=1,
- random_state=seed)
-modelodef.fit(X_train, y_train)
-
-pickle.dump(modelodef, open("modelodef.pkl", "wb"))
-
-
-def RCS(JET, Suelo, SPT, WtoC, Presion, Velocidad):
-    modelodef = pickle.load(open("modelodef.pkl", "rb"))
-    # The radio inputs arrive as strings; cast them so the feature vector is fully numeric.
-    prediction0 = modelodef.predict([[float(JET), float(Suelo), SPT, WtoC, Presion, Velocidad]])
-    prediction = np.round(prediction0[0], 2)
-    return prediction
-
-title = "ASSESSMENT OF UNIAXIAL COMPRESSIVE STRENGTH OF JET GROUTING"
-description = "This app corresponds to the research paper: Assessment of compressive strength of jet grouting by machine learning"
-article = """
- Notes:
- - Click submit/enviar button to obtain the UCS prediction
- - Click clear/limpiar button to refresh text
- - Please note the application ranges of the variables in the above-referenced paper (https://doi.org/10.1016/j.jrmge.2023.03.008). Outside these ranges, the predictions may not be reliable
- - As a decimal separator you can use either a point or a comma
- """
-
-app = gr.Interface(
- RCS,
- inputs=[
- gr.Radio(['1', '2', '3'], label="Jet system. 1: Single. 2: Double. 3: Triple",value="1"),
- gr.Radio(['1', '2', '3', '4'], label="Soil type. 1: Coarse without fines. 2: Coarse with fines. 3: Fine. 4: Organic",value="1"),
- gr.Number(value=1, label="Nspt"),
- gr.Number(value=1, label="W/C"),
- gr.Number(value=1, label="Grout pressure (MPa)"),
- gr.Number(value=1, label="Rotation speed (rpm)"),
-
- ],
- outputs=[gr.Text(label="UCS (MPa)")],
- title=title,
- description=description,
- article = article,
- theme="dark-seafoam"
-)
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/sar/sar_r31_parallel_decoder_toy_dataset.py b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/sar/sar_r31_parallel_decoder_toy_dataset.py
deleted file mode 100644
index 40688d1290080c010beccc271214e5b246b45a32..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/sar/sar_r31_parallel_decoder_toy_dataset.py
+++ /dev/null
@@ -1,30 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py', '../../_base_/recog_models/sar.py',
- '../../_base_/schedules/schedule_adam_step_5e.py',
- '../../_base_/recog_pipelines/sar_pipeline.py',
- '../../_base_/recog_datasets/toy_data.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-data = dict(
- workers_per_gpu=2,
- samples_per_gpu=8,
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/Felladrin/MiniSearch/src/modules/urlParams.ts b/spaces/Felladrin/MiniSearch/src/modules/urlParams.ts
deleted file mode 100644
index 1802fcc4255db54bc72b491e9ab4e125d75b2562..0000000000000000000000000000000000000000
--- a/spaces/Felladrin/MiniSearch/src/modules/urlParams.ts
+++ /dev/null
@@ -1,4 +0,0 @@
-const urlParams = new URLSearchParams(window.location.search);
-export const debug = urlParams.has("debug");
-export const query = urlParams.get("q");
-export const disableWorkers = urlParams.has("disableWorkers");
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/vocoder.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/vocoder.py
deleted file mode 100644
index bbaa47f64fd5a3191a24dfaa054c423fa86e5bae..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/vocoder.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import torch
-from vdecoder.nsf_hifigan.nvSTFT import STFT
-from vdecoder.nsf_hifigan.models import load_model, load_config
-from torchaudio.transforms import Resample
-
-
-class Vocoder:
- def __init__(self, vocoder_type, vocoder_ckpt, device = None):
- if device is None:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- self.device = device
-
- if vocoder_type == 'nsf-hifigan':
- self.vocoder = NsfHifiGAN(vocoder_ckpt, device = device)
- elif vocoder_type == 'nsf-hifigan-log10':
- self.vocoder = NsfHifiGANLog10(vocoder_ckpt, device = device)
- else:
- raise ValueError(f" [x] Unknown vocoder: {vocoder_type}")
-
- self.resample_kernel = {}
- self.vocoder_sample_rate = self.vocoder.sample_rate()
- self.vocoder_hop_size = self.vocoder.hop_size()
- self.dimension = self.vocoder.dimension()
-
- def extract(self, audio, sample_rate, keyshift=0):
-
- # resample
- if sample_rate == self.vocoder_sample_rate:
- audio_res = audio
- else:
- key_str = str(sample_rate)
- if key_str not in self.resample_kernel:
- self.resample_kernel[key_str] = Resample(sample_rate, self.vocoder_sample_rate, lowpass_filter_width = 128).to(self.device)
- audio_res = self.resample_kernel[key_str](audio)
-
- # extract
- mel = self.vocoder.extract(audio_res, keyshift=keyshift) # B, n_frames, bins
- return mel
-
- def infer(self, mel, f0):
- f0 = f0[:,:mel.size(1),0] # B, n_frames
- audio = self.vocoder(mel, f0)
- return audio
-
-
-class NsfHifiGAN(torch.nn.Module):
- def __init__(self, model_path, device=None):
- super().__init__()
- if device is None:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- self.device = device
- self.model_path = model_path
- self.model = None
- self.h = load_config(model_path)
- self.stft = STFT(
- self.h.sampling_rate,
- self.h.num_mels,
- self.h.n_fft,
- self.h.win_size,
- self.h.hop_size,
- self.h.fmin,
- self.h.fmax)
-
- def sample_rate(self):
- return self.h.sampling_rate
-
- def hop_size(self):
- return self.h.hop_size
-
- def dimension(self):
- return self.h.num_mels
-
- def extract(self, audio, keyshift=0):
- mel = self.stft.get_mel(audio, keyshift=keyshift).transpose(1, 2) # B, n_frames, bins
- return mel
-
- def forward(self, mel, f0):
- if self.model is None:
- print('| Load HifiGAN: ', self.model_path)
- self.model, self.h = load_model(self.model_path, device=self.device)
- with torch.no_grad():
- c = mel.transpose(1, 2)
- audio = self.model(c, f0)
- return audio
-
-class NsfHifiGANLog10(NsfHifiGAN):
- def forward(self, mel, f0):
- if self.model is None:
- print('| Load HifiGAN: ', self.model_path)
- self.model, self.h = load_model(self.model_path, device=self.device)
- with torch.no_grad():
- c = 0.434294 * mel.transpose(1, 2)
- audio = self.model(c, f0)
- return audio
\ No newline at end of file
diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/xf.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/xf.py
deleted file mode 100644
index 5dfff440b489f3cc3c62450dc28c2f35f692dd94..0000000000000000000000000000000000000000
--- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/xf.py
+++ /dev/null
@@ -1,130 +0,0 @@
-"""
-Transformer implementation adapted from CLIP ViT:
-https://github.com/openai/CLIP/blob/4c0275784d6d9da97ca1f47eaaee31de1867da91/clip/model.py
-"""
-
-import math
-
-import torch as th
-import torch.nn as nn
-
-
-def convert_module_to_f16(l):
- """
- Convert primitive modules to float16.
- """
- if isinstance(l, (nn.Linear, nn.Conv2d, nn.ConvTranspose2d)):
- l.weight.data = l.weight.data.half()
- if l.bias is not None:
- l.bias.data = l.bias.data.half()
-
-
-class LayerNorm(nn.LayerNorm):
- """
- Implementation that supports fp16 inputs but fp32 gains/biases.
- """
-
- def forward(self, x: th.Tensor):
- return super().forward(x.float()).to(x.dtype)
-
-
-class MultiheadAttention(nn.Module):
- def __init__(self, n_ctx, width, heads):
- super().__init__()
- self.n_ctx = n_ctx
- self.width = width
- self.heads = heads
- self.c_qkv = nn.Linear(width, width * 3)
- self.c_proj = nn.Linear(width, width)
- self.attention = QKVMultiheadAttention(heads, n_ctx)
-
- def forward(self, x):
- x = self.c_qkv(x)
- x = self.attention(x)
- x = self.c_proj(x)
- return x
-
-
-class MLP(nn.Module):
- def __init__(self, width):
- super().__init__()
- self.width = width
- self.c_fc = nn.Linear(width, width * 4)
- self.c_proj = nn.Linear(width * 4, width)
- self.gelu = nn.GELU()
-
- def forward(self, x):
- return self.c_proj(self.gelu(self.c_fc(x)))
-
-
-class QKVMultiheadAttention(nn.Module):
- def __init__(self, n_heads: int, n_ctx: int):
- super().__init__()
- self.n_heads = n_heads
- self.n_ctx = n_ctx
-
- def forward(self, qkv):
- bs, n_ctx, width = qkv.shape
- attn_ch = width // self.n_heads // 3
- scale = 1 / math.sqrt(math.sqrt(attn_ch))
- qkv = qkv.view(bs, n_ctx, self.n_heads, -1)
- q, k, v = th.split(qkv, attn_ch, dim=-1)
- weight = th.einsum(
- "bthc,bshc->bhts", q * scale, k * scale
- ) # More stable with f16 than dividing afterwards
- wdtype = weight.dtype
- weight = th.softmax(weight.float(), dim=-1).type(wdtype)
- return th.einsum("bhts,bshc->bthc", weight, v).reshape(bs, n_ctx, -1)
-
-
-class ResidualAttentionBlock(nn.Module):
- def __init__(
- self,
- n_ctx: int,
- width: int,
- heads: int,
- ):
- super().__init__()
-
- self.attn = MultiheadAttention(
- n_ctx,
- width,
- heads,
- )
- self.ln_1 = LayerNorm(width)
- self.mlp = MLP(width)
- self.ln_2 = LayerNorm(width)
-
- def forward(self, x: th.Tensor):
- x = x + self.attn(self.ln_1(x))
- x = x + self.mlp(self.ln_2(x))
- return x
-
-
-class Transformer(nn.Module):
- def __init__(
- self,
- n_ctx: int,
- width: int,
- layers: int,
- heads: int,
- ):
- super().__init__()
- self.n_ctx = n_ctx
- self.width = width
- self.layers = layers
- self.resblocks = nn.ModuleList(
- [
- ResidualAttentionBlock(
- n_ctx,
- width,
- heads,
- )
- for _ in range(layers)
- ]
- )
-
- def forward(self, x: th.Tensor):
- for block in self.resblocks:
- x = block(x)
- return x
diff --git a/spaces/Fuyuka29/Anime_Background_Remover/README.md b/spaces/Fuyuka29/Anime_Background_Remover/README.md
deleted file mode 100644
index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000
--- a/spaces/Fuyuka29/Anime_Background_Remover/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Anime Remove Background
-emoji: 🪄🖼️
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: skytnt/anime-remove-background
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/GXSA/bingo/src/app/layout.tsx b/spaces/GXSA/bingo/src/app/layout.tsx
deleted file mode 100644
index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/app/layout.tsx
+++ /dev/null
@@ -1,47 +0,0 @@
-import { Metadata } from 'next'
-import { Toaster } from 'react-hot-toast'
-import { TailwindIndicator } from '@/components/tailwind-indicator'
-import { Providers } from '@/components/providers'
-import { Header } from '@/components/header'
-
-import '@/app/globals.scss'
-
-
-export const metadata: Metadata = {
- title: {
- default: 'Bing AI Chatbot',
- template: `%s - Bing AI Chatbot`
- },
- description: 'Bing AI Chatbot Web App.',
- themeColor: [
- { media: '(prefers-color-scheme: light)', color: 'white' },
- { media: '(prefers-color-scheme: dark)', color: 'dark' }
- ],
- icons: {
- icon: '/favicon.ico',
- shortcut: '../assets/images/logo.svg',
- apple: '../assets/images/logo.svg'
- }
-}
-
-interface RootLayoutProps {
- children: React.ReactNode
-}
-
-export default function RootLayout({ children }: RootLayoutProps) {
-  // The original JSX markup was stripped from this copy; the tree below is a
-  // minimal reconstruction that only uses the components imported above.
-  return (
-    <html>
-      <body>
-        <Providers>
-          {/* @ts-ignore */}
-          <Header />
-          {children}
-          <Toaster />
-          <TailwindIndicator />
-        </Providers>
-      </body>
-    </html>
-  )
-}
diff --git a/spaces/Gallifraid/prompthero-openjourney-v2/app.py b/spaces/Gallifraid/prompthero-openjourney-v2/app.py
deleted file mode 100644
index 4fa45eda1d4a0af263ec59b35e375b837fe1ecf1..0000000000000000000000000000000000000000
--- a/spaces/Gallifraid/prompthero-openjourney-v2/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/prompthero/openjourney-v2").launch()
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/backbone_full.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/backbone_full.py
deleted file mode 100644
index 9b99b145d2c84444771045ad74992d0bf360f39b..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/models/backbone_full.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Backbone modules.
-"""
-from collections import OrderedDict
-
-import torch
-import torch.nn.functional as F
-import torchvision
-from timm.models import create_model
-from torch import nn
-from torchvision.models._utils import IntermediateLayerGetter
-
-from cliport.models.misc import NestedTensor
-
-
-
-class FrozenBatchNorm2d(torch.nn.Module):
- """
- BatchNorm2d where the batch statistics and the affine parameters are fixed.
-
- Copy-paste from torchvision.misc.ops with added eps before rqsrt,
- without which any other models than torchvision.models.resnet[18,34,50,101]
- produce nans.
- """
-
- def __init__(self, n):
- super(FrozenBatchNorm2d, self).__init__()
- self.register_buffer("weight", torch.ones(n))
- self.register_buffer("bias", torch.zeros(n))
- self.register_buffer("running_mean", torch.zeros(n))
- self.register_buffer("running_var", torch.ones(n))
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- num_batches_tracked_key = prefix + "num_batches_tracked"
- if num_batches_tracked_key in state_dict:
- del state_dict[num_batches_tracked_key]
-
- super(FrozenBatchNorm2d, self)._load_from_state_dict(
- state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- )
-
- def forward(self, x):
- # move reshapes to the beginning
- # to make it fuser-friendly
- w = self.weight.reshape(1, -1, 1, 1)
- b = self.bias.reshape(1, -1, 1, 1)
- rv = self.running_var.reshape(1, -1, 1, 1)
- rm = self.running_mean.reshape(1, -1, 1, 1)
- eps = 1e-5
- scale = w * (rv + eps).rsqrt()
- bias = b - rm * scale
- return x * scale + bias
-
-
-class BackboneBase(nn.Module):
- def __init__(self, backbone: nn.Module, train_backbone: bool, num_channels: int, return_interm_layers: bool):
- super().__init__()
- for name, parameter in backbone.named_parameters():
- if not train_backbone or "layer2" not in name and "layer3" not in name and "layer4" not in name:
- parameter.requires_grad_(False)
- if return_interm_layers:
- return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"}
- else:
-            return_layers = {"layer4": "0"}
- self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)
- self.num_channels = num_channels
-
- def forward(self, tensor_list):
- xs = self.body(tensor_list.tensors)
- out = OrderedDict()
- for name, x in xs.items():
- mask = F.interpolate(tensor_list.mask[None].float(), size=x.shape[-2:]).bool()[0]
- out[name] = NestedTensor(x, mask)
- return out
-
-
-class Backbone(BackboneBase):
- """ResNet backbone with frozen BatchNorm."""
-
- def __init__(self, name: str, train_backbone: bool, return_interm_layers: bool, dilation: bool):
- backbone = getattr(torchvision.models, name)(
- replace_stride_with_dilation=[False, False, dilation], pretrained=False, norm_layer=FrozenBatchNorm2d
- )
- num_channels = 512 if name in ("resnet18", "resnet34") else 2048
- super().__init__(backbone, train_backbone, num_channels, return_interm_layers)
-
-
-class GroupNorm32(torch.nn.GroupNorm):
- def __init__(self, num_channels, num_groups=32, **kargs):
- super().__init__(num_groups, num_channels, **kargs)
-
-
-class GroupNormBackbone(BackboneBase):
- """ResNet backbone with GroupNorm with 32 channels."""
-
- def __init__(self, name: str, train_backbone: bool, return_interm_layers: bool, dilation: bool):
- name_map = {
- "resnet50-gn": ("resnet50", "/checkpoint/szagoruyko/imagenet/22014122/checkpoint.pth"),
- "resnet101-gn": ("resnet101", "/checkpoint/szagoruyko/imagenet/22080524/checkpoint.pth"),
- }
- backbone = getattr(torchvision.models, name_map[name][0])(
- replace_stride_with_dilation=[False, False, dilation], pretrained=False, norm_layer=GroupNorm32
- )
- checkpoint = torch.load(name_map[name][1], map_location="cpu")
- state_dict = {k[7:]: p for k, p in checkpoint["model"].items()}
- backbone.load_state_dict(state_dict)
- num_channels = 512 if name_map[name][0] in ("resnet18", "resnet34") else 2048
- super().__init__(backbone, train_backbone, num_channels, return_interm_layers)
-
-
-def replace_bn(m, name=""):
- for attr_str in dir(m):
- target_attr = getattr(m, attr_str)
- if isinstance(target_attr, torch.nn.BatchNorm2d):
- frozen = FrozenBatchNorm2d(target_attr.num_features)
- bn = getattr(m, attr_str)
- frozen.weight.data.copy_(bn.weight)
- frozen.bias.data.copy_(bn.bias)
- frozen.running_mean.data.copy_(bn.running_mean)
- frozen.running_var.data.copy_(bn.running_var)
- setattr(m, attr_str, frozen)
- for n, ch in m.named_children():
- replace_bn(ch, n)
-
-
-class GN_8(nn.Module):
- def __init__(self, num_channels):
- super().__init__()
- self.gn = torch.nn.GroupNorm(8, num_channels)
-
- def forward(self, x):
- return self.gn(x)
-
-
-class TimmBackbone(nn.Module):
- def __init__(self, name, return_interm_layers, main_layer=-1, group_norm=False):
- super().__init__()
- backbone = create_model(name, pretrained=True, in_chans=3, features_only=True, out_indices=(1, 2, 3, 4))
-
- with torch.no_grad():
- replace_bn(backbone)
- num_channels = backbone.feature_info.channels()[-1]
- self.body = backbone
- self.num_channels = num_channels
- self.interm = return_interm_layers
- self.main_layer = main_layer
-
- def forward(self, tensor_list):
- xs = self.body(tensor_list.tensors)
- if not self.interm:
- xs = [xs[self.main_layer]]
- out = OrderedDict()
- for i, x in enumerate(xs):
- mask = F.interpolate(tensor_list.mask[None].float(), size=x.shape[-2:]).bool()[0]
- out[f"layer{i}"] = NestedTensor(x, mask)
- return out
-
-
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py
deleted file mode 100644
index 49ab539aa4cdf7c396b6f109efe2dc7a6d596a2a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/cascade_mask_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py
deleted file mode 100644
index 23d72852f22d025c9eaf2328721909f75b34e2e9..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py'
-model = dict(roi_head=dict(bbox_head=dict(num_classes=3)))
-classes = ('person', 'bicycle', 'car')
-data = dict(
- train=dict(classes=classes),
- val=dict(classes=classes),
- test=dict(classes=classes))
-
-load_from = 'http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth' # noqa
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py
deleted file mode 100644
index 89caaafbc17d871d836e810ba7c038648937254c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py
+++ /dev/null
@@ -1,15 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='open-mmlab://contrib/resnet50_gn',
- backbone=dict(norm_cfg=norm_cfg),
- neck=dict(norm_cfg=norm_cfg),
- roi_head=dict(
- bbox_head=dict(
- type='Shared4Conv1FCBBoxHead',
- conv_out_channels=256,
- norm_cfg=norm_cfg),
- mask_head=dict(norm_cfg=norm_cfg)))
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py
deleted file mode 100644
index ebeef6ff6640e83378391d3ce7072aa296826c32..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py
+++ /dev/null
@@ -1,16 +0,0 @@
-_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch',
- dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 22aaf857c3212d0b36b0b04e7990616025a3ef9b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/danet_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/README.md
deleted file mode 100644
index 7356a0ec4d7205782fe8b27e480311b58d4293ff..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/README.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# MobileNetV2: Inverted Residuals and Linear Bottlenecks
-
-## Introduction
-
-
-
-```latex
-@inproceedings{sandler2018mobilenetv2,
- title={Mobilenetv2: Inverted residuals and linear bottlenecks},
- author={Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
- booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
- pages={4510--4520},
- year={2018}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ---------- | -------- | --------- | ------: | -------: | -------------- | ----: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| FCN | M-V2-D8 | 512x1024 | 80000 | 3.4 | 14.2 | 61.54 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes/fcn_m-v2-d8_512x1024_80k_cityscapes_20200825_124817-d24c28c1.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes/fcn_m-v2-d8_512x1024_80k_cityscapes-20200825_124817.log.json) |
-| PSPNet | M-V2-D8 | 512x1024 | 80000 | 3.6 | 11.2 | 70.23 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/pspnet_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x1024_80k_cityscapes/pspnet_m-v2-d8_512x1024_80k_cityscapes_20200825_124817-19e81d51.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x1024_80k_cityscapes/pspnet_m-v2-d8_512x1024_80k_cityscapes-20200825_124817.log.json) |
-| DeepLabV3 | M-V2-D8 | 512x1024 | 80000 | 3.9 | 8.4 | 73.84 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes/deeplabv3_m-v2-d8_512x1024_80k_cityscapes_20200825_124836-bef03590.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes/deeplabv3_m-v2-d8_512x1024_80k_cityscapes-20200825_124836.log.json) |
-| DeepLabV3+ | M-V2-D8 | 512x1024 | 80000 | 5.1 | 8.4 | 75.20 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes_20200825_124836-d256dd4b.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes-20200825_124836.log.json) |
-
-### ADE20k
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ---------- | -------- | --------- | ------: | -------: | -------------- | ----: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| FCN | M-V2-D8 | 512x512 | 160000 | 6.5 | 64.4 | 19.71 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k/fcn_m-v2-d8_512x512_160k_ade20k_20200825_214953-c40e1095.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k/fcn_m-v2-d8_512x512_160k_ade20k-20200825_214953.log.json) |
-| PSPNet | M-V2-D8 | 512x512 | 160000 | 6.5 | 57.7 | 29.68 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k/pspnet_m-v2-d8_512x512_160k_ade20k_20200825_214953-f5942f7a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k/pspnet_m-v2-d8_512x512_160k_ade20k-20200825_214953.log.json) |
-| DeepLabV3 | M-V2-D8 | 512x512 | 160000 | 6.8 | 39.9 | 34.08 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k/deeplabv3_m-v2-d8_512x512_160k_ade20k_20200825_223255-63986343.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k/deeplabv3_m-v2-d8_512x512_160k_ade20k-20200825_223255.log.json) |
-| DeepLabV3+ | M-V2-D8 | 512x512 | 160000 | 8.2 | 43.1 | 34.02 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k/deeplabv3plus_m-v2-d8_512x512_160k_ade20k_20200825_223255-465a01d4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k/deeplabv3plus_m-v2-d8_512x512_160k_ade20k-20200825_223255.log.json) |
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/version.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/version.py
deleted file mode 100644
index e090d9f31aae3ce0a8fd6392d519163130f437dc..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/version.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Open-MMLab. All rights reserved.
-
-__version__ = '0.13.0'
-
-
-def parse_version_info(version_str):
- version_info = []
- for x in version_str.split('.'):
- if x.isdigit():
- version_info.append(int(x))
- elif x.find('rc') != -1:
- patch_version = x.split('rc')
- version_info.append(int(patch_version[0]))
- version_info.append(f'rc{patch_version[1]}')
- return tuple(version_info)
-
-
-version_info = parse_version_info(__version__)
diff --git a/spaces/Grezz/generate_human_motion/pyrender/pyrender/constants.py b/spaces/Grezz/generate_human_motion/pyrender/pyrender/constants.py
deleted file mode 100644
index 8a5785b6fdb21910a174252c5af2f05b40ece4a5..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/pyrender/pyrender/constants.py
+++ /dev/null
@@ -1,149 +0,0 @@
-DEFAULT_Z_NEAR = 0.05 # Near clipping plane, in meters
-DEFAULT_Z_FAR = 100.0 # Far clipping plane, in meters
-DEFAULT_SCENE_SCALE = 2.0 # Default scene scale
-MAX_N_LIGHTS = 4 # Maximum number of lights of each type allowed
-TARGET_OPEN_GL_MAJOR = 4 # Target OpenGL Major Version
-TARGET_OPEN_GL_MINOR = 1 # Target OpenGL Minor Version
-MIN_OPEN_GL_MAJOR = 3 # Minimum OpenGL Major Version
-MIN_OPEN_GL_MINOR = 3 # Minimum OpenGL Minor Version
-FLOAT_SZ = 4 # Byte size of GL float32
-UINT_SZ = 4 # Byte size of GL uint32
-SHADOW_TEX_SZ = 2048 # Width and Height of Shadow Textures
-TEXT_PADDING = 20 # Width of padding for rendering text (px)
-
-
-# Flags for render type
-class RenderFlags(object):
- """Flags for rendering in the scene.
-
- Combine them with the bitwise or. For example,
-
- >>> flags = OFFSCREEN | SHADOWS_DIRECTIONAL | VERTEX_NORMALS
-
- would result in an offscreen render with directional shadows and
- vertex normals enabled.
- """
- NONE = 0
- """Normal PBR Render."""
- DEPTH_ONLY = 1
- """Only render the depth buffer."""
- OFFSCREEN = 2
- """Render offscreen and return the depth and (optionally) color buffers."""
- FLIP_WIREFRAME = 4
- """Invert the status of wireframe rendering for each mesh."""
- ALL_WIREFRAME = 8
- """Render all meshes as wireframes."""
- ALL_SOLID = 16
- """Render all meshes as solids."""
- SHADOWS_DIRECTIONAL = 32
- """Render shadows for directional lights."""
- SHADOWS_POINT = 64
- """Render shadows for point lights."""
- SHADOWS_SPOT = 128
- """Render shadows for spot lights."""
- SHADOWS_ALL = 32 | 64 | 128
- """Render shadows for all lights."""
- VERTEX_NORMALS = 256
- """Render vertex normals."""
- FACE_NORMALS = 512
- """Render face normals."""
- SKIP_CULL_FACES = 1024
- """Do not cull back faces."""
- RGBA = 2048
- """Render the color buffer with the alpha channel enabled."""
- FLAT = 4096
- """Render the color buffer flat, with no lighting computations."""
- SEG = 8192
-
-
-class TextAlign:
- """Text alignment options for captions.
-
- Only use one at a time.
- """
- CENTER = 0
- """Center the text by width and height."""
- CENTER_LEFT = 1
- """Center the text by height and left-align it."""
- CENTER_RIGHT = 2
- """Center the text by height and right-align it."""
- BOTTOM_LEFT = 3
- """Put the text in the bottom-left corner."""
- BOTTOM_RIGHT = 4
- """Put the text in the bottom-right corner."""
- BOTTOM_CENTER = 5
- """Center the text by width and fix it to the bottom."""
- TOP_LEFT = 6
- """Put the text in the top-left corner."""
- TOP_RIGHT = 7
- """Put the text in the top-right corner."""
- TOP_CENTER = 8
- """Center the text by width and fix it to the top."""
-
-
-class GLTF(object):
- """Options for GL objects."""
- NEAREST = 9728
- """Nearest neighbor interpolation."""
- LINEAR = 9729
- """Linear interpolation."""
-    NEAREST_MIPMAP_NEAREST = 9984
-    """Nearest-neighbor sampling from the nearest mipmap level."""
-    LINEAR_MIPMAP_NEAREST = 9985
-    """Linear sampling within the nearest mipmap level."""
-    NEAREST_MIPMAP_LINEAR = 9986
-    """Nearest-neighbor sampling, linearly blended between mipmap levels."""
-    LINEAR_MIPMAP_LINEAR = 9987
-    """Linear sampling, linearly blended between mipmap levels (trilinear)."""
- CLAMP_TO_EDGE = 33071
- """Clamp to the edge of the texture."""
- MIRRORED_REPEAT = 33648
- """Mirror the texture."""
- REPEAT = 10497
- """Repeat the texture."""
- POINTS = 0
- """Render as points."""
- LINES = 1
- """Render as lines."""
- LINE_LOOP = 2
- """Render as a line loop."""
- LINE_STRIP = 3
- """Render as a line strip."""
- TRIANGLES = 4
- """Render as triangles."""
- TRIANGLE_STRIP = 5
- """Render as a triangle strip."""
- TRIANGLE_FAN = 6
- """Render as a triangle fan."""
-
-
-class BufFlags(object):
- POSITION = 0
- NORMAL = 1
- TANGENT = 2
- TEXCOORD_0 = 4
- TEXCOORD_1 = 8
- COLOR_0 = 16
- JOINTS_0 = 32
- WEIGHTS_0 = 64
-
-
-class TexFlags(object):
- NONE = 0
- NORMAL = 1
- OCCLUSION = 2
- EMISSIVE = 4
- BASE_COLOR = 8
- METALLIC_ROUGHNESS = 16
- DIFFUSE = 32
- SPECULAR_GLOSSINESS = 64
-
-
-class ProgramFlags:
- NONE = 0
- USE_MATERIAL = 1
- VERTEX_NORMALS = 2
- FACE_NORMALS = 4
-
-
-__all__ = ['RenderFlags', 'TextAlign', 'GLTF']
diff --git a/spaces/Guldeniz/aerial-to-map/README.md b/spaces/Guldeniz/aerial-to-map/README.md
deleted file mode 100644
index 43121c30c24eebb7e2848743909c3da2d8330c5b..0000000000000000000000000000000000000000
--- a/spaces/Guldeniz/aerial-to-map/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Maps To Aerial
-emoji: 📈
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-sdk_version: 2.9.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Haitangtangtangtang/AnimeBackgroundGAN/README.md b/spaces/Haitangtangtangtang/AnimeBackgroundGAN/README.md
deleted file mode 100644
index 9fde1b0be30d306bef54c19fa2057acad76d3fe8..0000000000000000000000000000000000000000
--- a/spaces/Haitangtangtangtang/AnimeBackgroundGAN/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: AnimeBackgroundGAN
-emoji: 🖼
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: true
-duplicated_from: akiyamasho/AnimeBackgroundGAN
----
-
-# Configuration
-
-`title`: _string_
-Anime Background GAN
-
-`emoji`: _string_
-🖼
-
-`colorFrom`: _string_
-red
-
-`colorTo`: _string_
-indigo
-
-`sdk`: _string_
-gradio
-
-`app_file`: _string_
-app.py
-
-`pinned`: _boolean_
-true
\ No newline at end of file
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/shuffled_word_order/README.finetuning.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/shuffled_word_order/README.finetuning.md
deleted file mode 100644
index ecbcb65884640c3327a2cbaef8aad4f3cfe812f7..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/shuffled_word_order/README.finetuning.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# Fine-tuning details
-
-For each task (GLUE and PAWS), we perform a hyperparameter search for each model and report the mean and standard deviation of the best configuration across 5 seeds. First, get the datasets by following the instructions in the [RoBERTa fine-tuning README](../roberta/README.glue.md). Alternatively, you can use [huggingface datasets](https://huggingface.co/docs/datasets/) to get the task data:
-
-```python
-from datasets import load_dataset
-import pandas as pd
-from pathlib import Path
-
-key2file = {
-"paws": {
- "loc": "paws_data",
- "columns": ["id", "sentence1", "sentence2", "label"],
- "train": "train.tsv",
- "validation": "dev.tsv",
- "test": "test.tsv"
- }
-}
-
-task_data = load_dataset("paws", "labeled_final")
-task_config = key2file["paws"]
-save_path = Path(task_config["loc"])
-save_path.mkdir(exist_ok=True, parents=True)
-for key, fl in task_config.items():
- if key in ["loc", "columns"]:
- continue
- print(f"Reading {key}")
- columns = task_config["columns"]
- df = pd.DataFrame(task_data[key])
- print(df.columns)
- df = df[columns]
- print(f"Got {len(df)} records")
- save_loc = save_path / fl
- print(f"Saving to : {save_loc}")
- df.to_csv(save_loc, sep="\t", header=None, index=None)
-
-```
-
-- Preprocess using the RoBERTa GLUE preprocessing script, keeping in mind the column numbers for `sentence1`, `sentence2`, and `label` (which are 0, 1, and 2 if you save the data according to the above example).
-- Then, fine-tuning is performed similarly to RoBERTa (for example, in the case of RTE):
-
-```bash
-TOTAL_NUM_UPDATES=30875 # 10 epochs through RTE for bsz 16
-WARMUP_UPDATES=1852 # 6 percent of the number of updates
-LR=2e-05 # Peak LR for polynomial LR scheduler.
-NUM_CLASSES=2
-MAX_SENTENCES=16 # Batch size.
-SHUFFLED_ROBERTA_PATH=/path/to/shuffled_roberta/model.pt
-
-CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin/ \
- --restore-file $SHUFFLED_ROBERTA_PATH \
- --max-positions 512 \
- --batch-size $MAX_SENTENCES \
- --max-tokens 4400 \
- --task sentence_prediction \
- --reset-optimizer --reset-dataloader --reset-meters \
- --required-batch-size-multiple 1 \
- --init-token 0 --separator-token 2 \
- --arch roberta_large \
- --criterion sentence_prediction \
- --num-classes $NUM_CLASSES \
- --dropout 0.1 --attention-dropout 0.1 \
- --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
- --clip-norm 0.0 \
- --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
- --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
- --max-epoch 10 \
- --find-unused-parameters \
- --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric;
-```
-
-- `TOTAL_NUM_UPDATES` is computed from the `--batch_size` value, the dataset size, and the number of training epochs.
-- `WARMUP_UPDATES` is computed as 6% of `TOTAL_NUM_UPDATES` (see the sketch after this list).
-- The best hyperparameters for `--lr` and `--batch_size` are reported below:
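-
-A minimal sketch of this arithmetic is shown below; it assumes only that you know the training-set size, the epoch count, and the batch size, and the example numbers are placeholders rather than the values used for the runs above:
-
-```python
-import math
-
-def compute_update_schedule(num_examples, batch_size, epochs, warmup_frac=0.06):
-    """Derive --total-num-update and --warmup-updates for fairseq-train."""
-    updates_per_epoch = math.ceil(num_examples / batch_size)  # optimizer steps per epoch
-    total_num_updates = updates_per_epoch * epochs            # value for TOTAL_NUM_UPDATES
-    warmup_updates = int(warmup_frac * total_num_updates)     # 6% warmup -> WARMUP_UPDATES
-    return total_num_updates, warmup_updates
-
-# Hypothetical dataset of 2,490 training examples, batch size 16, 10 epochs:
-print(compute_update_schedule(2490, 16, 10))  # (1560, 93)
-```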
-
-## `--lr`
-
-| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS |
-| --: | :----------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
-| 0 | original | 2e-05 | 2e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 |
-| 1 | n_1 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 2e-05 | 2e-05 |
-| 2 | n_2 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 3e-05 |
-| 3 | n_3 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 3e-05 | 1e-05 | 1e-05 | 2e-05 |
-| 4 | n_4 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 |
-| 5 | r512 | 1e-05 | 3e-05 | 2e-05 | 2e-05 | 3e-05 | 2e-05 | 3e-05 | 2e-05 |
-| 6 | rand_corpus | 2e-05 | 1e-05 | 3e-05 | 1e-05 | 3e-05 | 3e-05 | 3e-05 | 2e-05 |
-| 7 | rand_uniform | 2e-05 | 1e-05 | 3e-05 | 2e-05 | 3e-05 | 3e-05 | 3e-05 | 1e-05 |
-| 8 | rand_init | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 |
-| 9 | no_pos | 1e-05 | 3e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 |
-
-## `--batch_size`
-
-| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS |
-| --: | :----------- | --: | ---: | ----: | ---: | --: | ---: | ---: | ---: |
-| 0 | orig | 16 | 16 | 32 | 16 | 16 | 32 | 32 | 16 |
-| 1 | n_1 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 16 |
-| 2 | n_2 | 32 | 16 | 32 | 16 | 32 | 32 | 16 | 32 |
-| 3 | n_3 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 32 |
-| 4 | n_4 | 32 | 16 | 32 | 16 | 32 | 32 | 32 | 32 |
-| 5 | r512 | 32 | 16 | 16 | 32 | 32 | 16 | 16 | 16 |
-| 6 | rand_corpus | 16 | 16 | 16 | 16 | 32 | 16 | 16 | 32 |
-| 7 | rand_uniform | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 |
-| 8 | rand_init | 16 | 16 | 32 | 16 | 16 | 16 | 32 | 16 |
-| 9 | no_pos | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 |
-
-- Perform inference similarly to RoBERTa as well:
-
-```python
-from fairseq.models.roberta import RobertaModel
-
-roberta = RobertaModel.from_pretrained(
- 'checkpoints/',
- checkpoint_file='checkpoint_best.pt',
- data_name_or_path='PAWS-bin'
-)
-
-label_fn = lambda label: roberta.task.label_dictionary.string(
- [label + roberta.task.label_dictionary.nspecial]
-)
-ncorrect, nsamples = 0, 0
-roberta.cuda()
-roberta.eval()
-with open('paws_data/dev.tsv') as fin:
- fin.readline()
- for index, line in enumerate(fin):
- tokens = line.strip().split('\t')
- sent1, sent2, target = tokens[0], tokens[1], tokens[2]
- tokens = roberta.encode(sent1, sent2)
- prediction = roberta.predict('sentence_classification_head', tokens).argmax().item()
- prediction_label = label_fn(prediction)
- ncorrect += int(prediction_label == target)
- nsamples += 1
-print('| Accuracy: ', float(ncorrect)/float(nsamples))
-
-```
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/glow/train_glow.sh b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/glow/train_glow.sh
deleted file mode 100644
index f12939d5d4563de555bf49408fa7a27397e0dae3..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/glow/train_glow.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-
-gender='male'
-
-config='../../config/glow/'$gender'.json'
-modeldir='../../checkpoints/glow/'$gender
-logdir='../../logs/glow/'$gender
-init=1 # 1 to start from scratch, 0 to resume from the last checkpoint
-
-
-####################################################
-
-if [[ $init -eq 1 ]]
-then
- python ../../src/glow_tts/init.py -c $config -m $modeldir -l $logdir
-fi
-python ../../src/glow_tts/train.py -c $config -m $modeldir -l $logdir
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/num_to_word_on_sent.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/num_to_word_on_sent.py
deleted file mode 100644
index ce878a8c3ee6f5ef629abeaee418d5959f7179ed..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/num_to_word_on_sent.py
+++ /dev/null
@@ -1,1314 +0,0 @@
-import re
-import string
-
-# ----------------------------- indic_num.py -----------------------------
-supported_lang = {"en", "hi", "gu", "mr", "bn", "te", "ta", "kn", "or", "pa"}
-# supported_lang = {'eng', 'hin', 'guj', 'mar', 'ben', 'tel', 'tam', 'kan', 'ori', 'pan'}  # Three-letter language codes
-
-all_num = {
- "en": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
- "hi": ["०", "१", "२", "३", "४", "५", "६", "७", "८", "९"],
- "gu": ["૦", "૧", "૨", "૩", "૪", "૫", "૬", "૭", "૮", "૯"],
- "mr": ["०", "१", "२", "३", "४", "५", "६", "७", "८", "९"],
- "bn": ["০", "১", "২", "৩", "৪", "৫", "৬", "৭", "৮", "৯"],
- "te": ["౦", "౧", "౨", "౩", "౪", "౫", "౬", "౭", "౮", "౯"],
- "ta": ["0", "௧", "௨", "௩", "௪", "௫", "௬", "௭", "௮", "௯", "௰"],
- "kn": ["೦", "೧", "೨", "೩", "೪", "೫", "೬", "೭", "೮", "೯"],
- "or": ["୦", "୧", "୨", "୩", "୪", "୫", "୬", "୭", "୮", "୯"],
- "pa": ["੦", "੧", "੨", "੩", "੪", "੫", "੬", "੭", "੮", "੯"],
-}
-
-num_dict = dict()
-num_dict["en"] = {
- "0": "zero",
- "1": "one",
- "2": "two",
- "3": "three",
- "4": "four",
- "5": "five",
- "6": "six",
- "7": "seven",
- "8": "eight",
- "9": "nine",
- "10": "ten",
- "11": "eleven",
- "12": "twelve",
- "13": "thirteen",
- "14": "fourteen",
- "15": "fifteen",
- "16": "sixteen",
- "17": "seventeen",
- "18": "eighteen",
- "19": "nineteen",
- "20": "twenty",
- "21": "twenty-one",
- "22": "twenty-two",
- "23": "twenty-three",
- "24": "twenty-four",
- "25": "twenty-five",
- "26": "twenty-six",
- "27": "twenty-seven",
- "28": "twenty-eight",
- "29": "twenty-nine",
- "30": "thirty",
- "31": "thirty-one",
- "32": "thirty-two",
- "33": "thirty-three",
- "34": "thirty-four",
- "35": "thirty-five",
- "36": "thirty-six",
- "37": "thirty-seven",
- "38": "thirty-eight",
- "39": "thirty-nine",
- "40": "forty",
- "41": "forty-one",
- "42": "forty-two",
- "43": "forty-three",
- "44": "forty-four",
- "45": "forty-five",
- "46": "forty-six",
- "47": "forty-seven",
- "48": "forty-eight",
- "49": "forty-nine",
- "50": "fifty",
- "51": "fifty-one",
- "52": "fifty-two",
- "53": "fifty-three",
- "54": "fifty-four",
- "55": "fifty-five",
- "56": "fifty-six",
- "57": "fifty-seven",
- "58": "fifty-eight",
- "59": "fifty-nine",
- "60": "sixty",
- "61": "sixty-one",
- "62": "sixty-two",
- "63": "sixty-three",
- "64": "sixty-four",
- "65": "sixty-five",
- "66": "sixty-six",
- "67": "sixty-seven",
- "68": "sixty-eight",
- "69": "sixty-nine",
- "70": "seventy",
- "71": "seventy-one",
- "72": "seventy-two",
- "73": "seventy-three",
- "74": "seventy-four",
- "75": "seventy-five",
- "76": "seventy-six",
- "77": "seventy-seven",
- "78": "seventy-eight",
- "79": "seventy-nine",
- "80": "eighty",
- "81": "eighty-one",
- "82": "eighty-two",
- "83": "eighty-three",
- "84": "eighty-four",
- "85": "eighty-five",
- "86": "eighty-six",
- "87": "eighty-seven",
- "88": "eighty-eight",
- "89": "eighty-nine",
- "90": "ninety",
- "91": "ninety-one",
- "92": "ninety-two",
- "93": "ninety-three",
- "94": "ninety-four",
- "95": "ninety-five",
- "96": "ninety-six",
- "97": "ninety-seven",
- "98": "ninety-eight",
- "99": "ninety-nine",
- "100": "hundred",
- "1000": "thousand",
- "100000": "lac",
- "10000000": "crore",
- "1000000000": "arab",
-} # English-India
-num_dict["hi"] = {
- "0": "शून्य",
- "1": "एक",
- "2": "दो",
- "3": "तीन",
- "4": "चार",
- "5": "पाँच",
- "6": "छः",
- "7": "सात",
- "8": "आठ",
- "9": "नौ",
- "10": "दस",
- "11": "ग्यारह",
- "12": "बारह",
- "13": "तेरह",
- "14": "चौदह",
- "15": "पंद्रह",
- "16": "सोलह",
- "17": "सत्रह",
- "18": "अट्ठारह",
- "19": "उन्नीस",
- "20": "बीस",
- "21": "इक्कीस",
- "22": "बाईस",
- "23": "तेईस",
- "24": "चौबिस",
- "25": "पच्चीस",
- "26": "छब्बीस",
- "27": "सत्ताईस",
- "28": "अट्ठाईस",
- "29": "उनतीस",
- "30": "तीस",
- "31": "इकतीस",
- "32": "बत्तीस",
- "33": "तैंतीस",
- "34": "चौंतीस",
- "35": "पैंतीस",
- "36": "छत्तीस",
- "37": "सैंतीस",
- "38": "अड़तीस",
- "39": "उनतालीस",
- "40": "चालीस",
- "41": "इकतालीस",
- "42": "बयालीस",
- "43": "तैंतालीस",
- "44": "चौंतालीस",
- "45": "पैंतालीस",
- "46": "छियालीस",
- "47": "सैंतालीस",
- "48": "अड़तालीस",
- "49": "उनचास",
- "50": "पचास",
- "51": "इक्यावन",
- "52": "बावन",
- "53": "तिरेपन",
- "54": "चौवन",
- "55": "पचपन",
- "56": "छप्पन",
- "57": "सत्तावन",
- "58": "अट्ठावन",
- "59": "उनसठ",
- "60": "साठ",
- "61": "इकसठ",
- "62": "बासठ",
- "63": "तिरेसठ",
- "64": "चौंसठ",
- "65": "पैंसठ",
- "66": "छयासठ",
- "67": "सरसठ",
- "68": "अड़सठ",
- "69": "उनहत्तर",
- "70": "सत्तर",
- "71": "इकहत्तर",
- "72": "बहत्तर",
- "73": "तिहत्तर",
- "74": "चौहत्तर",
- "75": "पचहत्तर",
- "76": "छिहत्तर",
- "77": "सतहत्तर",
- "78": "अठहत्तर",
- "79": "उन्यासी",
- "80": "अस्सी",
- "81": "इक्यासी",
- "82": "बयासी",
- "83": "तिरासी",
- "84": "चौरासी",
- "85": "पचासी",
- "86": "छियासी",
- "87": "सत्तासी",
- "88": "अठासी",
- "89": "नवासी",
- "90": "नब्बे",
- "91": "इक्यानवे",
- "92": "बानवे",
- "93": "तिरानवे",
- "94": "चौरानवे",
- "95": "पचानवे",
- "96": "छियानवे",
- "97": "सत्तानवे",
- "98": "अट्ठानवे",
- "99": "निन्यानवे",
- "100": "सौ",
- "1000": "हज़ार",
- "100000": "लाख",
- "10000000": "करोड़",
- "1000000000": "अरब",
-} # Hindi
-num_dict["gu"] = {
- "0": "શૂન્ય",
- "1": "એક",
- "2": "બે",
- "3": "ત્રણ",
- "4": "ચાર",
- "5": "પાંચ",
- "6": "છ",
- "7": "સાત",
- "8": "આઠ",
- "9": "નવ",
- "10": "દસ",
- "11": "અગિયાર",
- "12": "બાર",
- "13": "તેર",
- "14": "ચૌદ",
- "15": "પંદર",
- "16": "સોળ",
- "17": "સત્તર",
- "18": "અઢાર",
- "19": "ઓગણિસ",
- "20": "વીસ",
- "21": "એકવીસ",
- "22": "બાવીસ",
- "23": "તેવીસ",
- "24": "ચોવીસ",
- "25": "પચ્ચીસ",
- "26": "છવીસ",
- "27": "સત્તાવીસ",
- "28": "અઠ્ઠાવીસ",
- "29": "ઓગણત્રીસ",
- "30": "ત્રીસ",
- "31": "એકત્રીસ",
- "32": "બત્રીસ",
- "33": "તેત્રીસ",
- "34": "ચોત્રીસ",
- "35": "પાંત્રીસ",
- "36": "છત્રીસ",
- "37": "સડત્રીસ",
- "38": "અડત્રીસ",
- "39": "ઓગણચાલીસ",
- "40": "ચાલીસ",
- "41": "એકતાલીસ",
- "42": "બેતાલીસ",
- "43": "ત્રેતાલીસ",
- "44": "ચુંમાલીસ",
- "45": "પિસ્તાલીસ",
- "46": "છેતાલીસ",
- "47": "સુડતાલીસ",
- "48": "અડતાલીસ",
- "49": "ઓગણપચાસ",
- "50": "પચાસ",
- "51": "એકાવન",
- "52": "બાવન",
- "53": "ત્રેપન",
- "54": "ચોપન",
- "55": "પંચાવન",
- "56": "છપ્પન",
- "57": "સત્તાવન",
- "58": "અઠ્ઠાવન",
- "59": "ઓગણસાઠ",
- "60": "સાઈઠ",
- "61": "એકસઠ",
- "62": "બાસઠ",
- "63": "ત્રેસઠ",
- "64": "ચોસઠ",
- "65": "પાંસઠ",
- "66": "છાસઠ",
- "67": "સડસઠ",
- "68": "અડસઠ",
- "69": "અગણોસિત્તેર",
- "70": "સિત્તેર",
- "71": "એકોતેર",
- "72": "બોતેર",
- "73": "તોતેર",
- "74": "ચુમોતેર",
- "75": "પંચોતેર",
- "76": "છોતેર",
- "77": "સિત્યોતેર",
- "78": "ઇઠ્યોતેર",
- "79": "ઓગણાએંસી",
- "80": "એંસી",
- "81": "એક્યાસી",
- "82": "બ્યાસી",
- "83": "ત્યાસી",
- "84": "ચોર્યાસી",
- "85": "પંચાસી",
- "86": "છ્યાસી",
- "87": "સિત્યાસી",
- "88": "ઈઠ્યાસી",
- "89": "નેવ્યાસી",
- "90": "નેવું",
- "91": "એકાણું",
- "92": "બાણું",
- "93": "ત્રાણું",
- "94": "ચોરાણું",
- "95": "પંચાણું",
- "96": "છન્નું",
- "97": "સત્તાણું",
- "98": "અઠ્ઠાણું",
- "99": "નવ્વાણું",
- "100": "સો",
- "1000": "હજાર",
- "100000": "લાખ",
- "1000000": "દસ લાખ",
- "10000000": "કરોડ઼",
-} # Gujarati
-num_dict["mr"] = {
- "0": "शून्य",
- "1": "एक",
- "2": "दोन",
- "3": "तीन",
- "4": "चार",
- "5": "पाच",
- "6": "सहा",
- "7": "सात",
- "8": "आठ",
- "9": "नऊ",
- "10": "दहा",
- "11": "अकरा",
- "12": "बारा",
- "13": "तेरा",
- "14": "चौदा",
- "15": "पंधरा",
- "16": "सोळा",
- "17": "सतरा",
- "18": "अठरा",
- "19": "एकोणीस",
- "20": "वीस",
- "21": "एकवीस",
- "22": "बावीस",
- "23": "तेवीस",
- "24": "चोवीस",
- "25": "पंचवीस",
- "26": "सव्वीस",
- "27": "सत्तावीस",
- "28": "अठ्ठावीस",
- "29": "एकोणतीस",
- "30": "तीस",
- "31": "एकतीस",
- "32": "बत्तीस",
- "33": "तेहेतीस",
- "34": "चौतीस",
- "35": "पस्तीस",
- "36": "छत्तीस",
- "37": "सदतीस",
- "38": "अडतीस",
- "39": "एकोणचाळीस",
- "40": "चाळीस",
- "41": "एक्केचाळीस",
- "42": "बेचाळीस",
- "43": "त्रेचाळीस",
- "44": "चव्वेचाळीस",
- "45": "पंचेचाळीस",
- "46": "सेहेचाळीस",
- "47": "सत्तेचाळीस",
- "48": "अठ्ठेचाळीस",
- "49": "एकोणपन्नास",
- "50": "पन्नास",
- "51": "एक्कावन्न",
- "52": "बावन्न",
- "53": "त्रेपन्न",
- "54": "चोपन्न",
- "55": "पंचावन्न",
- "56": "छप्पन्न",
- "57": "सत्तावन्न",
- "58": "अठ्ठावन्न",
- "59": "एकोणसाठ",
- "60": "साठ",
- "61": "एकसष्ठ",
- "62": "बासष्ठ",
- "63": "त्रेसष्ठ",
- "64": "चौसष्ठ",
- "65": "पासष्ठ",
- "66": "सहासष्ठ",
- "67": "सदुसष्ठ",
- "68": "अडुसष्ठ",
- "69": "एकोणसत्तर",
- "70": "सत्तर",
- "71": "एक्काहत्तर",
- "72": "बाहत्तर",
- "73": "त्र्याहत्तर",
- "74": "चौर्याहत्तर",
- "75": "पंच्याहत्तर",
- "76": "शहात्तर",
- "77": "सत्याहत्तर",
- "78": "अठ्ठ्याहत्तर",
- "79": "एकोण ऐंशी",
- "80": "ऐंशी",
- "81": "एक्क्याऐंशी",
- "82": "ब्याऐंशी",
- "83": "त्र्याऐंशी",
- "84": "चौऱ्याऐंशी",
- "85": "पंच्याऐंशी",
- "86": "शहाऐंशी",
- "87": "सत्त्याऐंशी",
- "88": "अठ्ठ्याऐंशी",
- "89": "एकोणनव्वद",
- "90": "नव्वद",
- "91": "एक्क्याण्णव",
- "92": "ब्याण्णव",
- "93": "त्र्याण्णव",
- "94": "चौऱ्याण्णव",
- "95": "पंच्याण्णव",
- "96": "शहाण्णव",
- "97": "सत्त्याण्णव",
- "98": "अठ्ठ्याण्णव",
- "99": "नव्व्याण्णव",
- "100": "शे",
- "1000": "हजार",
- "100000": "लाख",
- "10000000": "कोटी",
- "1000000000": "अब्ज",
-} # Marathi
-num_dict["bn"] = {
- "0": "শূন্য",
- "1": "এক",
- "2": "দুই",
- "3": "তিন",
- "4": "চার",
- "5": "পাঁচ",
- "6": "ছয়",
- "7": "সাত",
- "8": "আট",
- "9": "নয়",
- "10": "দশ",
- "11": "এগার",
- "12": "বার",
- "13": "তের",
- "14": "চৌদ্দ",
- "15": "পনের",
- "16": "ষোল",
- "17": "সতের",
- "18": "আঠার",
- "19": "ঊনিশ",
- "20": "বিশ",
- "21": "একুশ",
- "22": "বাইশ",
- "23": "তেইশ",
- "24": "চব্বিশ",
- "25": "পঁচিশ",
- "26": "ছাব্বিশ",
- "27": "সাতাশ",
- "28": "আঠাশ",
- "29": "ঊনত্রিশ",
- "30": "ত্রিশ",
- "31": "একত্রিশ",
- "32": "বত্রিশ",
- "33": "তেত্রিশ",
- "34": "চৌত্রিশ",
- "35": "পঁয়ত্রিশ",
- "36": "ছত্রিশ",
- "37": "সাঁইত্রিশ",
- "38": "আটত্রিশ",
- "39": "ঊনচল্লিশ",
- "40": "চল্লিশ",
- "41": "একচল্লিশ",
- "42": "বিয়াল্লিশ",
- "43": "তেতাল্লিশ",
- "44": "চুয়াল্লিশ",
- "45": "পঁয়তাল্লিশ",
- "46": "ছেচল্লিশ",
- "47": "সাতচল্লিশ",
- "48": "আটচল্লিশ",
- "49": "ঊনপঞ্চাশ",
- "50": "পঞ্চাশ",
- "51": "একান্ন",
- "52": "বায়ান্ন",
- "53": "তিপ্পান্ন",
- "54": "চুয়ান্ন",
- "55": "পঞ্চান্ন",
- "56": "ছাপ্পান্ন",
- "57": "সাতান্ন",
- "58": "আটান্ন",
- "59": "ঊনষাট",
- "60": "ষাট",
- "61": "একষট্টি",
- "62": "বাষট্টি",
- "63": "তেষট্টি",
- "64": "চৌষট্টি",
- "65": "পঁয়ষট্টি",
- "66": "ছেষট্টি",
- "67": "সাতষট্টি",
- "68": "আটষট্টি",
- "69": "ঊনসত্তর",
- "70": "সত্তর",
- "71": "একাত্তর",
- "72": "বাহাত্তর",
- "73": "তিয়াত্তর",
- "74": "চুয়াত্তর",
- "75": "পঁচাত্তর",
- "76": "ছিয়াত্তর",
- "77": "সাতাত্তর",
- "78": "আটাত্তর",
- "79": "ঊনআশি",
- "80": "আশি",
- "81": "একাশি",
- "82": "বিরাশি",
- "83": "তিরাশি",
- "84": "চুরাশি",
- "85": "পঁচাশি",
- "86": "ছিয়াশি",
- "87": "সাতাশি",
- "88": "আটাশি",
- "89": "ঊননব্বই",
- "90": "নব্বই",
- "91": "একানব্বই",
- "92": "বিরানব্বই",
- "93": "তিরানব্বই",
- "94": "চুরানব্বই",
- "95": "পঁচানব্বই",
- "96": "ছিয়ানব্বই",
- "97": "সাতানব্বই",
- "98": "আটানব্বই",
- "99": "নিরানব্বই",
- "100": "শো",
- "1000": "হাজার",
- "100000": "লাখ",
- "10000000": "কোটি",
- "1000000000": "একশ’ কোটি",
-} # Bengali
-num_dict["te"] = {
- "0": "సున్నా",
- "1": "ఒకటి",
- "2": "రెండు",
- "3": "మూడు",
- "4": "నాలుగు",
- "5": "ఐదు",
- "6": "ఆరు",
- "7": "ఏడు",
- "8": "ఎనిమిది",
- "9": "తొమ్మిది",
- "10": "పది",
- "11": "పదకొండు",
- "12": "పన్నెండు",
- "13": "పదమూడు",
- "14": "పద్నాలుగు",
- "15": "పదిహేను",
- "16": "పదహారు",
- "17": "పదిహేడు",
- "18": "పద్దెనిమిది",
- "19": "పందొమ్మిది",
- "20": "ఇరవై",
- "21": "ఇరవై ఒకటి",
- "22": "ఇరవై రెండు",
- "23": "ఇరవై మూడు",
- "24": "ఇరవై నాలుగు",
- "25": "ఇరవై ఐదు",
- "26": "ఇరవై ఆరు",
- "27": "ఇరవై ఏడు",
- "28": "ఇరవై ఎనిమిది",
- "29": "ఇరవై తొమ్మిది",
- "30": "ముప్పై",
- "31": "ముప్పై ఒకటి",
- "32": "ముప్పై రెండు",
- "33": "ముప్పై మూడు",
- "34": "ముప్పై నాలుగు",
- "35": "ముప్పై ఐదు",
- "36": "ముప్పై ఆరు",
- "37": "ముప్పై ఏడు",
- "38": "ముప్పై ఎనిమిది",
- "39": "ముప్పై తొమ్మిది",
- "40": "నలభై",
- "41": "నలభై ఒకటి",
- "42": "నలభై రెండు",
- "43": "నలభై మూడు",
- "44": "నలభై నాలుగు",
- "45": "నలభై ఐదు",
- "46": "నలభై ఆరు",
- "47": "నలభై ఏడు",
- "48": "నలభై ఎనిమిది",
- "49": "నలభై తొమ్మిది",
- "50": "యాభై",
- "51": "యాభై ఒకటి",
- "52": "యాభై రెండు",
- "53": "యాభై మూడు",
- "54": "యాభై నాలుగు",
- "55": "యాభై ఐదు",
- "56": "యాభై ఆరు",
- "57": "యాభై ఏడు",
- "58": "యాభై ఎనిమిది",
- "59": "యాభై తొమ్మిది",
- "60": "అరవై",
- "61": "అరవై ఒకటి",
- "62": "అరవై రెండు",
- "63": "అరవై మూడు",
- "64": "అరవై నాలుగు",
- "65": "అరవై ఐదు",
- "66": "అరవై ఆరు",
- "67": "అరవై ఏడు",
- "68": "అరవై ఎనిమిది",
- "69": "అరవై తొమ్మిది",
- "70": "డెబ్బై",
- "71": "డెబ్బై ఒకటి",
- "72": "డెబ్బై రెండు",
- "73": "డెబ్బై మూడు",
- "74": "డెబ్బై నాలుగు",
- "75": "డెబ్బై ఐదు",
- "76": "డెబ్బై ఆరు",
- "77": "డెబ్బై ఏడు",
- "78": "డెబ్బై ఎనిమిది",
- "79": "డెబ్బై తొమ్మిది",
- "80": "ఎనభై",
- "81": "ఎనభై ఒకటి",
- "82": "ఎనభై రెండు",
- "83": "ఎనభై మూడు",
- "84": "ఎనభై నాలుగు",
- "85": "ఎనభై ఐదు",
- "86": "ఎనభై ఆరు",
- "87": "ఎనభై ఏడు",
- "88": "ఎనభై ఎనిమిది",
- "89": "ఎనభై తొమ్మిది",
- "90": "తొంభై",
- "91": "తొంభై ఒకటి",
- "92": "తొంభై రెండు",
- "93": "తొంభై మూడు",
- "94": "తొంభై నాలుగు",
- "95": "తొంభై ఐదు",
- "96": "తొంభై ఆరు",
- "97": "తొంభై ఏడు",
- "98": "తొంభై ఎనిమిది",
- "99": "తొంభై తొమ్మిది",
- "100": "వందల",
- "1000": "వేల",
- "100000": "లక్షల",
- "10000000": "కోట్ల",
- "1000000000": "బిలియన్",
-} # Telugu
-num_dict["ta"] = {
- "0": "பூஜ்ஜியம்",
- "1": "ஒன்று",
- "2": "இரண்டு",
- "3": "மூன்று",
- "4": "நான்கு",
- "5": "ஐந்து",
- "6": "ஆறு",
- "7": "ஏழு",
- "8": "எட்டு",
- "9": "ஒன்பது",
- "10": "பத்து",
- "11": "பதினொன்று",
- "12": "பன்னிரண்டு",
- "13": "பதிமூன்று",
- "14": "பதினான்கு",
- "15": "பதினைந்து",
- "16": "பதினாறு",
- "17": "பதினேழு",
- "18": "பதினெட்டு",
- "19": "பத்தொன்பது",
- "20": "இருபது",
-    "21": "இருபத்து ஒன்று",
- "22": "இருபத்து இரண்டு",
- "23": "இருபத்து மூன்று",
- "24": "இருபத்து நான்கு",
- "25": "இருபத்து ஐந்து",
- "26": "இருபத்து ஆறு",
- "27": "இருபத்து ஏழு",
- "28": "இருபத்து எட்டு",
- "29": "இருபத்து ஒன்பது",
- "30": "முப்பது",
- "31": "முப்பத்து ஒன்று",
- "32": "முப்பத்து இரண்டு",
- "33": "முப்பத்து மூன்று",
- "34": "முப்பத்து நான்கு",
- "35": "முப்பத்து ஐந்து",
- "36": "முப்பத்து ஆறு",
- "37": "முப்பத்து ஏழு",
- "38": "முப்பத்து எட்டு",
- "39": "முப்பத்து ஒன்பது",
- "40": "நாற்பது",
- "41": "நாற்பத்து ஒன்று",
- "42": "நாற்பத்து இரண்டு",
- "43": "நாற்பத்து மூன்று",
- "44": "நாற்பத்து நான்கு",
- "45": "நாற்பத்து ஐந்து",
- "46": "நாற்பத்து ஆறு",
-    "47": "நாற்பத்து ஏழு",
- "48": "நாற்பத்து எட்டு",
- "49": "நாற்பத்து ஒன்பது",
- "50": "ஐம்பது",
- "51": "ஐம்பத்து ஒன்று",
- "52": "ஐம்பத்து இரண்டு",
- "53": "ஐம்பத்து மூன்று",
- "54": "ஐம்பத்து நான்கு",
- "55": "ஐம்பத்து ஐந்து",
- "56": "ஐம்பத்து ஆறு",
- "57": "ஐம்பத்து ஏழு",
- "58": "ஐம்பத்து எட்டு",
- "59": "ஐம்பத்து ஒன்பது",
- "60": "அறுபது",
- "61": "அறுபத்து ஒன்று",
- "62": "அறுபத்து இரண்டு",
- "63": "அறுபத்து மூன்று",
- "64": "அறுபத்து நான்கு",
- "65": "அறுபத்து ஐந்து",
- "66": "அறுபத்து ஆறு",
- "67": "அறுபத்து ஏழு",
- "68": "அறுபத்து எட்டு",
- "69": "அறுபத்து ஒன்பது",
- "70": "எழுபது",
- "71": "எழுபத்தி ஒன்று",
- "72": "எழுபத்தி இரண்டு",
-    "73": "எழுபத்தி மூன்று",
- "74": "எழுபத்தி நான்கு",
- "75": "எழுபத்தி ஐந்து",
- "76": "எழுபத்தி ஆறு",
- "77": "எழுபத்தி ஏழு",
- "78": "எழுபத்தி எட்டு",
- "79": "எழுபத்தி ஒன்பது",
- "80": "எண்பது",
- "81": "எண்பத்தியொன்று",
- "82": "எண்பத்திரண்டு",
- "83": "எண்பத்திமூன்று",
-    "84": "எண்பத்திநான்கு",
-    "85": "எண்பத்திஐந்து",
- "86": "எண்பத்திஆறு",
- "87": "எண்பத்திஏழு",
- "88": "எண்பத்தியெட்டு",
- "89": "எண்பத்தியொன்பது",
- "90": "தொன்னூறு",
- "91": "தொண்ணூற்றியொன்று",
- "92": "தொண்ணூற்றிரண்டு",
- "93": "தொண்ணூற்றிமூன்று",
- "94": "தொண்ணூற்றிநான்கு",
- "95": "தொண்ணூற்றிஐந்து",
- "96": "தொண்ணூற்றியாறு",
- "97": "தொண்ணூற்றியேழு",
- "98": "தொண்ணூற்றியெட்டு",
- "99": "தொண்ணூற்றிஒன்பது",
- "100": "நூறு",
- "1000": "ஆயிரம்",
- "100000": "இலட்சம்",
- "10000000": "கோடி",
- "1000000000": "பில்லியன்",
-} # Tamil
-num_dict["kn"] = {
- "0": "ಸೊನ್ನೆ",
- "1": "ಒಂದು",
- "2": "ಎರಡು",
- "3": "ಮೂರು",
- "4": "ನಾಲ್ಕು",
- "5": "ಅಯ್ದು",
- "6": "ಆರು",
- "7": "ಏಳು",
- "8": "ಎಂಟು",
- "9": "ಒಂಬತ್ತು",
- "10": "ಹತ್ತು",
- "11": "ಹನ್ನೊಂದು",
- "12": "ಹನ್ನೆರಡು",
- "13": "ಹದಿಮೂರು",
- "14": "ಹದಿನಾಲ್ಕು",
- "15": "ಹದಿನೈದು",
- "16": "ಹದಿನಾರು",
- "17": "ಹದಿನೇಳು",
- "18": "ಹದಿನೆಂಟು",
- "19": "ಹತ್ತೊಂಬತ್ತು",
- "20": "ಇಪ್ಪತ್ತು",
- "21": "ಇಪ್ಪತ್ತ್’ಒಂದು",
- "22": "ಇಪ್ಪತ್ತ್’ಎರಡು",
- "23": "ಇಪ್ಪತ್ತ್’ಮೂರು",
- "24": "ಇಪ್ಪತ್ತ್’ನಾಲ್ಕು",
- "25": "ಇಪ್ಪತ್ತ್’ಐದು",
- "26": "ಇಪ್ಪತ್ತ್’ಆರು",
- "27": "ಇಪ್ಪತ್ತ್’ಏಳು",
- "28": "ಇಪ್ಪತ್ತ್’ಎಂಟು",
- "29": "ಇಪ್ಪತ್ತ್’ಒಂಬತ್ತು",
- "30": "ಮೂವತ್ತು",
- "31": "ಮುವತ್ತ್’ಒಂದು",
- "32": "ಮುವತ್ತ್’ಎರಡು",
- "33": "ಮುವತ್ತ್’ಮೂರು",
- "34": "ಮೂವತ್ತ್’ನಾಲ್ಕು",
- "35": "ಮೂವತ್ತ್’ಐದು",
- "36": "ಮೂವತ್ತ್’ಆರು",
- "37": "ಮೂವತ್ತ್’ಏಳು",
- "38": "ಮೂವತ್ತ್’ಎಂಟು",
- "39": "ಮೂವತ್ತ್’ಒಂಬತ್ತು",
- "40": "ನಲವತ್ತು",
- "41": "ನಲವತ್ತೊಂದು",
- "42": "ನಲವತ್ತ್ ಎರಡು",
- "43": "ನಲವತ್ತ್ ಮೂರು",
- "44": "ನಲವತ್ತ್ ನಾಲ್ಕು",
- "45": "ನಲವತ್ತೈದು",
- "46": "ನಲವತ್ತಾರು",
- "47": "ನಲವತ್ತೇಳು",
- "48": "ನಲವತ್ತೆಂಟು",
- "49": "ನಲವತ್ತೊಂಬತ್ತು",
- "50": "ಐವತ್ತು",
- "51": "ಐವತ್ತೊಂದು",
- "52": "ಐವತ್ತೆರಡು",
- "53": "ಐವತ್ತಮೂರು",
- "54": "ಐವತ್ತ್ನಾಲ್ಕು",
- "55": "ಐವತ್ತೈದು",
- "56": "ಐವತ್ತಾರು",
- "57": "ಐವತ್ತೇಳು",
- "58": "ಐವತ್ತೆಂಟು",
- "59": "ಐವತ್ತೊಂಬತ್ತು",
- "60": "ಅರವತ್ತು",
- "61": "ಅರವತ್ತೊಂದು",
- "62": "ಅರವತ್ತೆರಡು",
- "63": "ಅರವತ್ತ್ ಮೂರು",
- "64": "ಅರವತ್ತ್ ನಾಲ್ಕು",
- "65": "ಅರವತ್ತೈದು",
- "66": "ಅರವತ್ತಾರು",
- "67": "ಅರವತ್ತೇಳು",
- "68": "ಅರವತ್ತೆಂಟು",
- "69": "ಅರವತ್ತೊಂಬತ್ತು",
- "70": "ಎಪ್ಪತ್ತು",
- "71": "ಎಪ್ಪತ್ತೊಂದು",
- "72": "ಎಪ್ಪತ್ತೆರಡು",
- "73": "ಎಪ್ಪತ್ತ್ ಮೂರು",
- "74": "ಎಪ್ಪತ್ತ್ ನಾಲ್ಕು",
- "75": "ಎಪ್ಪತ್ತೈದು",
- "76": "ಎಪ್ಪತ್ತಾರು",
- "77": "ಎಪ್ಪತ್ತೇಳು",
- "78": "ಎಪ್ಪತ್ತೆಂಟು",
- "79": "ಎಪ್ಪತ್ತೊಂಬತ್ತು",
- "80": "ಎಂಬತ್ತು",
- "81": "ಎಂಬತ್ತೊಂದು",
- "82": "ಎಂಬತ್ತೆರಡು",
- "83": "ಎಂಬತ್ತ್ ಮೂರು",
- "84": "ಎಂಬತ್ತ್ ನಾಲ್ಕು",
- "85": "ಎಂಬತ್ತೈದು",
- "86": "ಎಂಬತ್ತಾರು",
- "87": "ಎಂಬತ್ತೇಳು",
- "88": "ಎಂಬತ್ತೆಂಟು",
- "89": "ಎಂಬತ್ತೊಂಬತ್ತು",
- "90": "ತೊಂಬತ್ತು",
- "91": "ತೊಂಬತ್ತೊಂದು",
- "92": "ತೊಂಬತ್ತೆರಡು",
- "93": "ತೊಂಬತ್ತ ಮೂರು",
- "94": "ತೊಂಬತ್ತ ನಾಲ್ಕು",
- "95": "ತೊಂಬತ್ತೈದು",
- "96": "ತೊಂಬತ್ತಾರು",
- "97": "ತೊಂಬತ್ತೇಳು",
- "98": "ತೊಂಬತ್ತೆಂಟು",
- "99": "ತೊಂಬತ್ತೊಂಬತ್ತು",
- "100": "ನೂರ",
- "1000": "ಸಾವಿರದ",
- "100000": "ಲಕ್ಷದ",
- "10000000": "ಕೋಟಿ",
- "1000000000": "ಶತಕೋಟಿ",
-} # Kannada
-num_dict["or"] = {
- "0": "ଶୁନ୍ୟ",
- "1": "ଏକ",
- "2": "ଦୁଇ",
- "3": "ତିନି",
- "4": "ଚାରି",
- "5": "ପାଞ୍ଚ",
- "6": "ଛଅ",
- "7": "ସାତ",
- "8": "ଆଠ",
- "9": "ନଅ",
-    "10": "ଦଶ",
- "11": "ଏଗାର",
- "12": "ବାର",
- "13": "ତେର",
- "14": "ଚଉଦ",
- "15": "ପନ୍ଦର",
- "16": "ଷୋହଳ",
- "17": "ସତର",
- "18": "ଅଠର",
- "19": "ଊଣାଇଶ",
- "20": "କୋଡିଏ",
- "21": "ଏକୋଇଶି",
- "22": "ବାଇଶି",
- "23": "ତେଇଶି",
- "24": "ଚବିଶି",
- "25": "ପଚିଶି",
- "26": "ଛବିଶି",
- "27": "ସତାଇଶି",
- "28": "ଅଠାଇଶି",
- "29": "ଅଣତିରିଶି",
- "30": "ତିରିଶି",
- "31": "ଏକତିରିଶି",
- "32": "ବତିଶି",
- "33": "ତେତିଶି",
- "34": "ଚଉତିରିଶି",
- "35": "ପଞ୍ଚତିରିଶି",
- "36": "ଛତିଶି",
- "37": "ସଂଇତିରିଶି",
- "38": "ଅଠତିରିଶି",
- "39": "ଅଣଚାଳିଶି",
- "40": "ଚାଳିଶି",
- "41": "ଏକଚାଳିଶି",
- "42": "ବୟାଳିଶି",
- "43": "ତେୟାଳିଶି",
- "44": "ଚଉରାଳିଶି",
- "45": "ପଞ୍ଚଚାଳିଶି",
- "46": "ଛୟାଳିଶି",
- "47": "ସତଚାଳିଶି",
- "48": "ଅଠଚାଳିଶି",
- "49": "ଅଣଚାଶ",
- "50": "ପଚାଶ",
- "51": "ଏକାବନ",
- "52": "ବାଉନ",
- "53": "ତେପନ",
- "54": "ଚଉବନ",
- "55": "ପଞ୍ଚାବନ",
- "56": "ଛପନ",
- "57": "ସତାବନ",
- "58": "ଅଠାବନ",
- "59": "ଅଣଷଠି",
- "60": "ଷାଠିଏ",
- "61": "ଏକଷଠି",
- "62": "ବାଷଠି",
- "63": "ତେଷଠି",
- "64": "ଚଉଷଠି",
- "65": "ପଞ୍ଚଷଠି",
- "66": "ଛଅଷଠି",
- "67": "ସତଷଠି",
- "68": "ଅଠଷଠି",
- "69": "ଅଣସ୍ତରୀ",
- "70": "ସତୂରୀ",
- "71": "ଏକସ୍ତରୀ",
- "72": "ବାସ୍ତରୀ",
- "73": "ତେସ୍ତରୀ",
- "74": "ଚଉସ୍ତରୀ",
- "75": "ପଞ୍ଚସ୍ତରୀ",
- "76": "ଛଅସ୍ତରୀ",
- "77": "ସତସ୍ତରୀ",
- "78": "ଅଠସ୍ତରୀ",
- "79": "ଅଣାଅଶୀ",
- "80": "ଅଶୀ",
- "81": "ଏକାଅଶୀ",
- "82": "ବୟାଅଶୀ",
- "83": "ତେୟାଅଶୀ",
- "84": "ଚଉରାଅଶୀ",
- "85": "ପଞ୍ଚାଅଶୀ",
- "86": "ଛୟାଅଶୀ",
- "87": "ସତାଅଶୀ",
- "88": "ଅଠାଅଶୀ",
- "89": "ଅଣାନବେ",
- "90": "ନବେ",
- "91": "ଏକାନବେ",
- "92": "ବୟାନବେ",
- "93": "ତେୟାନବେ",
- "94": "ଚଉରାନବେ",
- "95": "ପଞ୍ଚାନବେ",
- "96": "ଛୟାନବେ",
- "97": "ସତାନବେ",
- "98": "ଅଠାନବେ",
- "99": "ଅନେଶତ",
- "100": "ଶହେ",
- "1000": "ହଜାର",
- "100000": "ଲକ୍ଷ",
- "10000000": "କୋଟି",
- "1000000000": "କୋଟି",
-} # Oriya
-num_dict["pa"] = {
-    "0": "ਸਿਫਰ",
- "1": "ਇੱਕ",
- "2": "ਦੋ",
- "3": "ਤਿੰਨ",
- "4": "ਚਾਰ",
- "5": "ਪੰਜ",
- "6": "ਛੇ",
- "7": "ਸੱਤ",
- "8": "ਅੱਠ",
- "9": "ਨੌਂ",
- "10": "ਦੱਸ",
- "11": "ਗਿਆਰਾਂ",
- "12": "ਬਾਰਾਂ",
- "13": "ਤੇਰਾਂ",
- "14": "ਚੌਦਾਂ",
- "15": "ਪੰਦਰਾਂ",
- "16": "ਸੋਲ਼ਾਂ",
- "17": "ਸਤਾਰਾਂ",
- "18": "ਅਠਾਰਾਂ",
- "19": "ਉਨੀ",
- "20": "ਵੀਹ",
- "21": "ਇੱਕੀ",
- "22": "ਬਾਈ",
- "23": "ਤੇਈ",
- "24": "ਚੌਵੀ",
- "25": "ਪੰਝੀ",
- "26": "ਛੱਬੀ",
- "27": "ਸਤਾਈ",
- "28": "ਅਠਾਈ",
- "29": "ਉਨੱਤੀ",
- "30": "ਤੀਹ",
- "31": "ਇਕੱਤੀ",
- "32": "ਬੱਤੀ",
- "33": "ਤੇਤੀ",
- "34": "ਚੌਂਤੀ",
- "35": "ਪੈਂਤੀ",
- "36": "ਛੱਤੀ",
- "37": "ਸੈਂਤੀ",
- "38": "ਅਠੱਤੀ",
- "39": "ਉਨਤਾਲੀ",
- "40": "ਚਾਲੀ",
- "41": "ਇਕਤਾਲੀ",
- "42": "ਬਤਾਲੀ",
- "43": "ਤਰਤਾਲੀ",
- "44": "ਚੌਤਾਲੀ",
- "45": "ਪੰਜਤਾਲੀ",
- "46": "ਛਿਆਲੀ",
- "47": "ਸੰਤਾਲੀ",
- "48": "ਅੱਠਤਾਲੀ",
- "49": "ਉਣਿੰਜਾ",
- "50": "ਪੰਜਾਹ",
- "51": "ਇਕਵਿੰਜਾ",
- "52": "ਬਵਿੰਜਾ",
- "53": "ਤਰਵਿੰਜਾ",
- "54": "ਚਰਿੰਜਾ",
- "55": "ਪਚਵਿੰਜਾ",
- "56": "ਛਪਿੰਜਾ",
- "57": "ਸਤਵਿੰਜਾ",
- "58": "ਅੱਠਵਿੰਜਾ",
- "59": "ਉਣਾਠ",
- "60": "ਸੱਠ",
- "61": "ਇਕਾਠ",
- "62": "ਬਾਠ੍ਹ",
- "63": "ਤਰੇਠ੍ਹ",
- "64": "ਚੌਠ੍ਹ",
- "65": "ਪੈਂਠ",
- "66": "ਛਿਆਠ",
- "67": "ਸਤਾਹਠ",
- "68": "ਅੱਠਾਠ",
- "69": "ਉਣੱਤਰ",
- "70": "ਸੱਤਰ",
- "71": "ਇਕ੍ਹੱਤਰ",
- "72": "ਬਹੱਤਰ",
- "73": "ਤਹੱਤਰ",
- "74": "ਚੌਹੱਤਰ",
- "75": "ਪੰਜੱਤਰ",
- "76": "ਛਿਹੱਤਰ",
- "77": "ਸਤੱਤਰ",
- "78": "ਅਠੱਤਰ",
- "79": "ਉਣਾਸੀ",
- "80": "ਅੱਸੀ",
- "81": "ਇਕਾਸੀ",
- "82": "ਬਿਆਸੀ",
- "83": "ਤਰਾਸੀ",
- "84": "ਚਰਾਸੀ",
- "85": "ਪੰਜਾਸੀ",
- "86": "ਛਿਆਸੀ",
- "87": "ਸਤਾਸੀ",
- "88": "ਅਠਾਸੀ",
- "89": "ਉਣਾਨਵੇਂ",
- "90": "ਨੱਬੇ",
- "91": "ਇਕਾਨਵੇਂ",
- "92": "ਬਿਆਨਵੇਂ",
- "93": "ਤਰਾਨਵੇਂ",
- "94": "ਚਰਾਨਵੇਂ",
- "95": "ਪਚਾਨਵੇਂ",
- "96": "ਛਿਆਨਵੇਂ",
- "97": "ਸਤਾਨਵੇਂ",
- "98": "ਅਠਾਨਵੇਂ",
- "99": "ਨਿੜਾਨਵੇਂ",
- "100": "ਸੌ",
- "1000": "ਹਜਾਰ",
- "100000": "ਲੱਖ",
- "10000000": "ਕਰੋੜ",
- "1000000000": "ਅਰਬ",
-} # Punjabi
-
-# --------------------------- num_to_word.py ------------------------------
-"""
-Method to convert numbers to words
-for Indian languages
-
-Use cases:-
-1) Speech recognition pre-processing
-2) Language modeling Data pre-processing
-
--------------------------
-check indic_numbers.py to add support
-for any indian language
-"""
-
-
-def language_specific_exception(words, lang, combiner):
- """
-    Language-specific exceptions are handled here
- """
-
- def occurs_at_end(piece):
- return words[-len(piece) :] == piece
-
- if lang == "mr":
- words = words.replace("एक" + combiner + "शे", "शंभर")
- elif lang == "gu":
- words = words.replace("બે" + combiner + "સો", "બસ્સો")
- elif lang == "te":
- exception_dict = {
- "1": "ఒక",
- "100": "వంద",
- "100+": "వందలు",
- "1000": "వెయ్యి",
- "1000+": "వేలు",
- "100000": "లక్ష",
- "100000+": "లక్షలు",
- "10000000": "కోటి",
- "10000000+": "కోట్లు",
- }
-
- test_case = ["100", "1000", "100000", "10000000"]
- for test in test_case:
- test_word = num_dict["te"][test]
- match = num_dict["te"]["1"] + combiner + test_word
- # for numbers like : 100, 1000, 100000
- if words == match:
- return exception_dict[test]
- # for numbers like : 200, 4000, 800000
- elif occurs_at_end(test_word):
- words = words.replace(test_word, exception_dict[test + "+"])
- # for numbers like : 105, 1076, 123993
- elif not occurs_at_end(match):
- replacement = exception_dict["1"] + combiner + exception_dict[test]
- words = words.replace(match, replacement)
-
- # Exception case for 101...199
- special_case = "ఒక" + combiner + "వంద"
- words = words.replace(special_case, "నూట")
- elif lang == "kn":
- # special case for 100
- if words == ("ಒಂದು" + combiner + "ನೂರ"):
- return "ನೂರು"
- exception_dict = {
- "ನೂರ": "ನೂರು",
- "ಸಾವಿರದ": "ಸಾವಿರ",
- "ಲಕ್ಷದ": "ಲಕ್ಷ",
- "ಕೋಟಿಯ": "ಕೋಟಿ",
- }
- for expt in exception_dict:
- if occurs_at_end(expt):
- words = words.replace(expt, exception_dict[expt])
- return words
-
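The effect of these language-specific rewrites is easiest to see on concrete inputs. A minimal sketch, assuming `language_specific_exception` and `num_dict` from this file are in scope; the expected outputs simply follow the replacement rules above:

```python
# Illustration only: the inputs are the literal "one hundred" forms built from num_dict.
print(language_specific_exception("एक शे", "mr", " "))      # "शंभर"  (Marathi 100)
print(language_specific_exception("ಒಂದು ನ�ೂರ", "kn", " "))    # "ನೂರು"  (Kannada 100)
print(language_specific_exception("ఒకటి వందల", "te", " "))   # "వంద"   (Telugu 100)
```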
-
-def num_to_word(num, lang, separator=", ", combiner=" "):
- """
- Main Method
-    :param num: number to convert; its digits may be written in any supported Indian script
-    :param lang: language code from the supported languages
-    :param separator: string placed between groups, e.g. separator=', ' --> 'two-hundred, sixty'
-    :param combiner: string joining a digit with its place value, e.g. combiner='-' --> 'two-hundred' (default for 'en')
- :return: UTF-8 String of numbers in words
- """
- lang = lang.lower()
- num = str(num)
-
- # Load dictionary according to language code
- assert lang in supported_lang, "Language not supported"
- num_dic = num_dict[lang]
-
- # dash default combiner for english-india
-    if (lang == "en") and (combiner == " "):
- combiner = "-"
-
- # Remove punctuations from numbers
- num = str(num).replace(",", "").replace(" ", "")
-
- # Replace native language numbers with english digits
- for language in supported_lang:
- for num_index in range(10):
- num = num.replace(all_num[language][num_index], all_num["en"][num_index])
-
- # Assert that input contains only integer number
- for digit in num:
-        assert digit in all_num["en"], "Input must contain only digits"
-
- # Process
- # For Number longer than 9 digits
- def all_two_digit(digits_2):
- if len(digits_2) <= 1: # Provided only one/zero digit
- return num_dic.get(digits_2, "")
- elif digits_2 == "00": # Two Zero provided
- return num_dic["0"] + separator + num_dic["0"]
- elif digits_2[0] == "0": # First digit is zero
- return num_dic["0"] + separator + num_dic[digits_2[1]]
- else: # Both digit provided
- return num_dic[digits_2]
-
- # For Number less than 9 digits
- def two_digit(digits_2):
- digits_2 = digits_2.lstrip("0")
- if len(digits_2) != 0:
- return num_dic[digits_2]
- else:
- return ""
-
- def all_digit(digits):
- digits = digits.lstrip("0")
- digit_len = len(digits)
- if digit_len > 3:
- num_of_digits_to_process = (digit_len % 2) + 1
- process_digits = digits[:num_of_digits_to_process]
- base = str(10 ** (int(digit_len / 2) * 2 - 1))
- remain_digits = digits[num_of_digits_to_process:]
- return (
- num_dic[process_digits]
- + combiner
- + num_dic[base]
- + separator
- + all_digit(remain_digits)
- )
- elif len(digits) == 3:
- return (
- num_dic[digits[:1]]
- + combiner
- + num_dic["100"]
- + separator
- + two_digit(digits[1:])
- )
- else:
- return two_digit(digits)
-
- num = num.lstrip("0")
- full_digit_len = len(num)
-
- if full_digit_len == 0:
- output = num_dic["0"]
- elif full_digit_len <= 9:
- output = all_digit(num)
- else:
- iteration = round(full_digit_len / 2)
-        output = all_two_digit(num[:2])  # First two digits
- for i in range(1, iteration):
- output = (
- output + separator + all_two_digit(num[i * 2 : (i + 1) * 2])
- ) # Next two digit pairs
- remaining_digits = num[iteration * 2 :]
- if not all_two_digit(remaining_digits) == "":
- output = (
- output + separator + all_two_digit(remaining_digits)
- ) # remaining Last one/two digits
-
- output = output.strip(separator)
-
- output = language_specific_exception(output, lang, combiner)
-
- return output
-
-
-# --------------------------------- num_to_word_on_a_sent ---------------------------------
-
-
-def is_digit(word, digit_pattern):
- return re.search(digit_pattern, word)
-
-
-def remove_punct(sent):
- clean = re.sub("[%s]" % re.escape(string.punctuation), " ", sent)
- return " ".join([word for word in clean.split() if word])
-
-
-def normalize_nums(text, lang):
- """
-    text: input string that may contain digit tokens
-    lang: language code, e.g. 'en' or 'hi'
-
-    returns: the input string with digit tokens expanded into words
- """
-
- if lang in supported_lang:
- words = text.split()
- lang_digits = [str(i) for i in range(0, 10)]
-
- digit_pattern = "[" + "".join(lang_digits) + "]"
- num_indices = [
- ind for ind, word in enumerate(words) if is_digit(word, digit_pattern)
- ]
-
- words_up = [
- num_to_word(word, lang, separator=" ", combiner=" ")
- if ind in num_indices
- else word
- for ind, word in enumerate(words)
- ]
- return " ".join(words_up)
- else:
- return text
-
-
-if __name__ == "__main__":
- print(normalize_nums("रीटा के पास 16 बिल्लियाँ हैं।", "hi"))
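For reference, a short usage sketch of the two entry points defined above, assuming they are imported from this module; the expected outputs are read off the dictionaries and grouping rules in this file rather than verified against the original repository:

```python
print(num_to_word(250, "en"))                         # "two-hundred, fifty" ('en' switches the combiner to '-')
print(num_to_word(16, "hi"))                          # "सोलह"
print(normalize_nums("There are 45 students", "en"))  # "There are forty-five students"
```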
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/tts_infer/transliterate.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/tts_infer/transliterate.py
deleted file mode 100644
index 575430562683434cd44fd8d2e77d26dab9ced73b..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/tts_infer/transliterate.py
+++ /dev/null
@@ -1,919 +0,0 @@
-import torch
-import torch.nn as nn
-import numpy as np
-import pandas as pd
-import random
-import sys
-import os
-import json
-import enum
-import traceback
-import re
-
-F_DIR = os.path.dirname(os.environ.get('translit_model_base_path', os.path.realpath(__file__)))
-
-
-class XlitError(enum.Enum):
-    lang_err = "Unsupported language ID requested ;( Please check available languages."
-    string_err = "String passed is incompatible ;("
- internal_err = "Internal crash ;("
- unknown_err = "Unknown Failure"
- loading_err = "Loading failed ;( Check if metadata/paths are correctly configured."
-
-
-##=================== Network ==================================================
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- input_dim,
- embed_dim,
- hidden_dim,
- rnn_type="gru",
- layers=1,
- bidirectional=False,
- dropout=0,
- device="cpu",
- ):
- super(Encoder, self).__init__()
-
- self.input_dim = input_dim # src_vocab_sz
- self.enc_embed_dim = embed_dim
- self.enc_hidden_dim = hidden_dim
- self.enc_rnn_type = rnn_type
- self.enc_layers = layers
- self.enc_directions = 2 if bidirectional else 1
- self.device = device
-
- self.embedding = nn.Embedding(self.input_dim, self.enc_embed_dim)
-
- if self.enc_rnn_type == "gru":
- self.enc_rnn = nn.GRU(
- input_size=self.enc_embed_dim,
- hidden_size=self.enc_hidden_dim,
- num_layers=self.enc_layers,
- bidirectional=bidirectional,
- )
- elif self.enc_rnn_type == "lstm":
- self.enc_rnn = nn.LSTM(
- input_size=self.enc_embed_dim,
- hidden_size=self.enc_hidden_dim,
- num_layers=self.enc_layers,
- bidirectional=bidirectional,
- )
- else:
- raise Exception("XlitError: unknown RNN type mentioned")
-
- def forward(self, x, x_sz, hidden=None):
- """
- x_sz: (batch_size, 1) - Unpadded sequence lengths used for pack_pad
- """
- batch_sz = x.shape[0]
- # x: batch_size, max_length, enc_embed_dim
- x = self.embedding(x)
-
- ## pack the padded data
- # x: max_length, batch_size, enc_embed_dim -> for pack_pad
- x = x.permute(1, 0, 2)
- x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad
-
- # output: packed_size, batch_size, enc_embed_dim
-        # hidden: n_layer*num_directions, batch_size, hidden_dim | if LSTM (h_n, c_n)
- output, hidden = self.enc_rnn(
- x
- ) # gru returns hidden state of all timesteps as well as hidden state at last timestep
-
- ## pad the sequence to the max length in the batch
- # output: max_length, batch_size, enc_emb_dim*directions)
- output, _ = nn.utils.rnn.pad_packed_sequence(output)
-
- # output: batch_size, max_length, hidden_dim
- output = output.permute(1, 0, 2)
-
- return output, hidden
-
- def get_word_embedding(self, x):
- """ """
- x_sz = torch.tensor([len(x)])
- x_ = torch.tensor(x).unsqueeze(0).to(dtype=torch.long)
- # x: 1, max_length, enc_embed_dim
- x = self.embedding(x_)
-
- ## pack the padded data
- # x: max_length, 1, enc_embed_dim -> for pack_pad
- x = x.permute(1, 0, 2)
- x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad
-
- # output: packed_size, 1, enc_embed_dim
-        # hidden: n_layer*num_directions, 1, hidden_dim | if LSTM (h_n, c_n)
- output, hidden = self.enc_rnn(
- x
- ) # gru returns hidden state of all timesteps as well as hidden state at last timestep
-
- out_embed = hidden[0].squeeze()
-
- return out_embed
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- output_dim,
- embed_dim,
- hidden_dim,
- rnn_type="gru",
- layers=1,
- use_attention=True,
- enc_outstate_dim=None, # enc_directions * enc_hidden_dim
- dropout=0,
- device="cpu",
- ):
- super(Decoder, self).__init__()
-
- self.output_dim = output_dim # tgt_vocab_sz
- self.dec_hidden_dim = hidden_dim
- self.dec_embed_dim = embed_dim
- self.dec_rnn_type = rnn_type
- self.dec_layers = layers
- self.use_attention = use_attention
- self.device = device
- if self.use_attention:
- self.enc_outstate_dim = enc_outstate_dim if enc_outstate_dim else hidden_dim
- else:
- self.enc_outstate_dim = 0
-
- self.embedding = nn.Embedding(self.output_dim, self.dec_embed_dim)
-
- if self.dec_rnn_type == "gru":
- self.dec_rnn = nn.GRU(
- input_size=self.dec_embed_dim
- + self.enc_outstate_dim, # to concat attention_output
- hidden_size=self.dec_hidden_dim, # previous Hidden
- num_layers=self.dec_layers,
- batch_first=True,
- )
- elif self.dec_rnn_type == "lstm":
- self.dec_rnn = nn.LSTM(
- input_size=self.dec_embed_dim
- + self.enc_outstate_dim, # to concat attention_output
- hidden_size=self.dec_hidden_dim, # previous Hidden
- num_layers=self.dec_layers,
- batch_first=True,
- )
- else:
- raise Exception("XlitError: unknown RNN type mentioned")
-
- self.fc = nn.Sequential(
- nn.Linear(self.dec_hidden_dim, self.dec_embed_dim),
- nn.LeakyReLU(),
- # nn.Linear(self.dec_embed_dim, self.dec_embed_dim), nn.LeakyReLU(), # removing to reduce size
- nn.Linear(self.dec_embed_dim, self.output_dim),
- )
-
- ##----- Attention ----------
- if self.use_attention:
- self.W1 = nn.Linear(self.enc_outstate_dim, self.dec_hidden_dim)
- self.W2 = nn.Linear(self.dec_hidden_dim, self.dec_hidden_dim)
- self.V = nn.Linear(self.dec_hidden_dim, 1)
-
- def attention(self, x, hidden, enc_output):
- """
- x: (batch_size, 1, dec_embed_dim) -> after Embedding
- enc_output: batch_size, max_length, enc_hidden_dim *num_directions
- hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n)
- """
-
- ## perform addition to calculate the score
-
- # hidden_with_time_axis: batch_size, 1, hidden_dim
- ## hidden_with_time_axis = hidden.permute(1, 0, 2) ## replaced with below 2lines
- hidden_with_time_axis = (
- torch.sum(hidden, axis=0)
- if self.dec_rnn_type != "lstm"
- else torch.sum(hidden[0], axis=0)
- ) # h_n
-
- hidden_with_time_axis = hidden_with_time_axis.unsqueeze(1)
-
- # score: batch_size, max_length, hidden_dim
- score = torch.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis))
-
- # attention_weights: batch_size, max_length, 1
- # we get 1 at the last axis because we are applying score to self.V
- attention_weights = torch.softmax(self.V(score), dim=1)
-
- # context_vector shape after sum == (batch_size, hidden_dim)
- context_vector = attention_weights * enc_output
- context_vector = torch.sum(context_vector, dim=1)
- # context_vector: batch_size, 1, hidden_dim
- context_vector = context_vector.unsqueeze(1)
-
- # attend_out (batch_size, 1, dec_embed_dim + hidden_size)
- attend_out = torch.cat((context_vector, x), -1)
-
- return attend_out, attention_weights
-
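The attention above is the standard additive (Bahdanau-style) form. Below is a standalone toy restatement of the score/weight/context computation, with made-up dimensions, purely to make the shape comments concrete; it is not the model itself:

```python
import torch
import torch.nn as nn

batch, max_len, enc_dim, dec_hid = 2, 5, 8, 8          # toy sizes, not the model's
enc_output = torch.randn(batch, max_len, enc_dim)      # encoder states
hidden = torch.randn(1, batch, dec_hid)                # decoder hidden (n_layers=1)

W1, W2, V = nn.Linear(enc_dim, dec_hid), nn.Linear(dec_hid, dec_hid), nn.Linear(dec_hid, 1)

h = torch.sum(hidden, axis=0).unsqueeze(1)             # (batch, 1, dec_hid)
score = torch.tanh(W1(enc_output) + W2(h))             # (batch, max_len, dec_hid)
attn = torch.softmax(V(score), dim=1)                  # (batch, max_len, 1)
context = torch.sum(attn * enc_output, dim=1)          # (batch, enc_dim)
print(context.shape)                                   # torch.Size([2, 8])
```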
- def forward(self, x, hidden, enc_output):
- """
- x: (batch_size, 1)
- enc_output: batch_size, max_length, dec_embed_dim
- hidden: n_layer, batch_size, hidden_size | lstm: (h_n, c_n)
- """
- if (hidden is None) and (self.use_attention is False):
- raise Exception(
- "XlitError: No use of a decoder with No attention and No Hidden"
- )
-
- batch_sz = x.shape[0]
-
- if hidden is None:
- # hidden: n_layers, batch_size, hidden_dim
- hid_for_att = torch.zeros(
- (self.dec_layers, batch_sz, self.dec_hidden_dim)
- ).to(self.device)
- elif self.dec_rnn_type == "lstm":
- hid_for_att = hidden[1] # c_n
-
- # x (batch_size, 1, dec_embed_dim) -> after embedding
- x = self.embedding(x)
-
- if self.use_attention:
- # x (batch_size, 1, dec_embed_dim + hidden_size) -> after attention
- # aw: (batch_size, max_length, 1)
- x, aw = self.attention(x, hidden, enc_output)
- else:
- x, aw = x, 0
-
- # passing the concatenated vector to the GRU
- # output: (batch_size, n_layers, hidden_size)
- # hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n)
- output, hidden = (
- self.dec_rnn(x, hidden) if hidden is not None else self.dec_rnn(x)
- )
-
- # output :shp: (batch_size * 1, hidden_size)
- output = output.view(-1, output.size(2))
-
- # output :shp: (batch_size * 1, output_dim)
- output = self.fc(output)
-
- return output, hidden, aw
-
-
-class Seq2Seq(nn.Module):
- """
- Class dependency: Encoder, Decoder
- """
-
- def __init__(
- self, encoder, decoder, pass_enc2dec_hid=False, dropout=0, device="cpu"
- ):
- super(Seq2Seq, self).__init__()
-
- self.encoder = encoder
- self.decoder = decoder
- self.device = device
- self.pass_enc2dec_hid = pass_enc2dec_hid
- _force_en2dec_hid_conv = False
-
- if self.pass_enc2dec_hid:
- assert (
- decoder.dec_hidden_dim == encoder.enc_hidden_dim
- ), "Hidden Dimension of encoder and decoder must be same, or unset `pass_enc2dec_hid`"
- if decoder.use_attention:
- assert (
- decoder.enc_outstate_dim
- == encoder.enc_directions * encoder.enc_hidden_dim
-                ), "Set `enc_outstate_dim` correctly in decoder"
- assert (
- self.pass_enc2dec_hid or decoder.use_attention
- ), "No use of a decoder with No attention and No Hidden from Encoder"
-
- self.use_conv_4_enc2dec_hid = False
- if (
- self.pass_enc2dec_hid
- and (encoder.enc_directions * encoder.enc_layers != decoder.dec_layers)
- ) or _force_en2dec_hid_conv:
-            if encoder.enc_rnn_type == "lstm" or decoder.dec_rnn_type == "lstm":
- raise Exception(
- "XlitError: conv for enc2dec_hid not implemented; Change the layer numbers appropriately"
- )
-
- self.use_conv_4_enc2dec_hid = True
- self.enc_hid_1ax = encoder.enc_directions * encoder.enc_layers
- self.dec_hid_1ax = decoder.dec_layers
- self.e2d_hidden_conv = nn.Conv1d(self.enc_hid_1ax, self.dec_hid_1ax, 1)
-
- def enc2dec_hidden(self, enc_hidden):
- """
- enc_hidden: n_layer, batch_size, hidden_dim*num_directions
-        TODO: Implement the logic for LSTM-based models
- """
- # hidden: batch_size, enc_layer*num_directions, enc_hidden_dim
- hidden = enc_hidden.permute(1, 0, 2).contiguous()
- # hidden: batch_size, dec_layers, dec_hidden_dim -> [N,C,Tstep]
- hidden = self.e2d_hidden_conv(hidden)
-
- # hidden: dec_layers, batch_size , dec_hidden_dim
- hidden_for_dec = hidden.permute(1, 0, 2).contiguous()
-
- return hidden_for_dec
-
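A quick shape check of the Conv1d hidden-state adapter above, for an assumed configuration where encoder layers × directions (2) differs from decoder layers (1); the numbers here are illustrative only:

```python
import torch
import torch.nn as nn

enc_states, dec_layers, hidden_dim, batch = 2, 1, 512, 4
e2d_hidden_conv = nn.Conv1d(enc_states, dec_layers, 1)

enc_hidden = torch.randn(enc_states, batch, hidden_dim)
h = enc_hidden.permute(1, 0, 2).contiguous()   # (batch, enc_layers*directions, hidden)
h = e2d_hidden_conv(h)                         # (batch, dec_layers, hidden)
dec_hidden = h.permute(1, 0, 2).contiguous()   # (dec_layers, batch, hidden)
print(dec_hidden.shape)                        # torch.Size([1, 4, 512])
```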
- def active_beam_inference(self, src, beam_width=3, max_tgt_sz=50):
- """Search based decoding
- src: (sequence_len)
- """
-
- def _avg_score(p_tup):
- """Used for Sorting
-            TODO: divide by sequence length raised to a power alpha (hyperparameter)
- """
- return p_tup[0]
-
- import sys
-
- batch_size = 1
- start_tok = src[0]
- end_tok = src[-1]
- src_sz = torch.tensor([len(src)])
- src_ = src.unsqueeze(0)
-
- # enc_output: (batch_size, padded_seq_length, enc_hidden_dim*num_direction)
- # enc_hidden: (enc_layers*num_direction, batch_size, hidden_dim)
- enc_output, enc_hidden = self.encoder(src_, src_sz)
-
- if self.pass_enc2dec_hid:
- # dec_hidden: dec_layers, batch_size , dec_hidden_dim
- if self.use_conv_4_enc2dec_hid:
- init_dec_hidden = self.enc2dec_hidden(enc_hidden)
- else:
- init_dec_hidden = enc_hidden
- else:
- # dec_hidden -> Will be initialized to zeros internally
- init_dec_hidden = None
-
- # top_pred[][0] = Σ-log_softmax
- # top_pred[][1] = sequence torch.tensor shape: (1)
- # top_pred[][2] = dec_hidden
- top_pred_list = [(0, start_tok.unsqueeze(0), init_dec_hidden)]
-
- for t in range(max_tgt_sz):
- cur_pred_list = []
-
- for p_tup in top_pred_list:
- if p_tup[1][-1] == end_tok:
- cur_pred_list.append(p_tup)
- continue
-
- # dec_hidden: dec_layers, 1, hidden_dim
- # dec_output: 1, output_dim
- dec_output, dec_hidden, _ = self.decoder(
- x=p_tup[1][-1].view(1, 1), # dec_input: (1,1)
- hidden=p_tup[2],
- enc_output=enc_output,
- )
-
-                ## product of probabilities is tracked as a sum of log-probabilities -> prevents underflow
- # dec_output: (1, output_dim)
- dec_output = nn.functional.log_softmax(dec_output, dim=1)
- # pred_topk.values & pred_topk.indices: (1, beam_width)
- pred_topk = torch.topk(dec_output, k=beam_width, dim=1)
-
- for i in range(beam_width):
- sig_logsmx_ = p_tup[0] + pred_topk.values[0][i]
- # seq_tensor_ : (seq_len)
- seq_tensor_ = torch.cat((p_tup[1], pred_topk.indices[0][i].view(1)))
-
- cur_pred_list.append((sig_logsmx_, seq_tensor_, dec_hidden))
-
- cur_pred_list.sort(key=_avg_score, reverse=True) # Maximized order
- top_pred_list = cur_pred_list[:beam_width]
-
- # check if end_tok of all topk
- end_flags_ = [1 if t[1][-1] == end_tok else 0 for t in top_pred_list]
- if beam_width == sum(end_flags_):
- break
-
- pred_tnsr_list = [t[1] for t in top_pred_list]
-
- return pred_tnsr_list
-
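The beam bookkeeping in `active_beam_inference` boils down to: keep (sum of log-probabilities, token sequence) pairs, extend each by the top-k next tokens, then sort and truncate to the beam width. A toy, model-free restatement of that inner step, with made-up probabilities:

```python
import torch

beam_width = 2
hypotheses = [(0.0, [1])]                                  # (score, sequence) - start token only
log_probs = torch.log_softmax(torch.tensor([[2.0, 1.0, 0.5]]), dim=1)  # fake decoder output

candidates = []
for score, seq in hypotheses:
    topk = torch.topk(log_probs, k=beam_width, dim=1)
    for i in range(beam_width):
        candidates.append((score + topk.values[0][i].item(),
                           seq + [topk.indices[0][i].item()]))
candidates.sort(key=lambda t: t[0], reverse=True)          # best-first
hypotheses = candidates[:beam_width]
print(hypotheses)                                          # two highest-scoring extensions
```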
-
-##===================== Glyph handlers =======================================
-
-
-class GlyphStrawboss:
- def __init__(self, glyphs="en"):
-        """Holds the glyphs (letters) of a script and their index mapping.
-        glyphs: "en" for the built-in lowercase Latin set, or the path to a
-        JSON file with script information (glyphs and numsym_map)
- """
- if glyphs == "en":
- # Smallcase alone
- self.glyphs = [chr(alpha) for alpha in range(97, 122 + 1)]
- else:
- self.dossier = json.load(open(glyphs, encoding="utf-8"))
- self.glyphs = self.dossier["glyphs"]
- self.numsym_map = self.dossier["numsym_map"]
-
- self.char2idx = {}
- self.idx2char = {}
- self._create_index()
-
- def _create_index(self):
-
- self.char2idx["_"] = 0 # pad
- self.char2idx["$"] = 1 # start
- self.char2idx["#"] = 2 # end
- self.char2idx["*"] = 3 # Mask
- self.char2idx["'"] = 4 # apostrophe U+0027
- self.char2idx["%"] = 5 # unused
- self.char2idx["!"] = 6 # unused
-
- # letter to index mapping
- for idx, char in enumerate(self.glyphs):
- self.char2idx[char] = idx + 7 # +7 token initially
-
- # index to letter mapping
- for char, idx in self.char2idx.items():
- self.idx2char[idx] = char
-
- def size(self):
- return len(self.char2idx)
-
- def word2xlitvec(self, word):
-        """Converts a given string of glyphs (word) to a vector (numpy).
- Also adds tokens for start and end
- """
- try:
- vec = [self.char2idx["$"]] # start token
- for i in list(word):
- vec.append(self.char2idx[i])
- vec.append(self.char2idx["#"]) # end token
-
- vec = np.asarray(vec, dtype=np.int64)
- return vec
-
- except Exception as error:
- print("XlitError: In word:", word, "Error Char not in Token:", error)
- sys.exit()
-
- def xlitvec2word(self, vector):
- """Converts vector(numpy) to string of glyphs(word)"""
- char_list = []
- for i in vector:
- char_list.append(self.idx2char[i])
-
- word = "".join(char_list).replace("$", "").replace("#", "") # remove tokens
- word = word.replace("_", "").replace("*", "") # remove tokens
- return word
-
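A quick check of the index scheme above using the built-in English glyph set ('$' = 1 is the start token, '#' = 2 the end token, letters begin at index 7 with 'a'); it assumes the class above is in scope:

```python
gb = GlyphStrawboss("en")
vec = gb.word2xlitvec("cab")
print(vec)                    # [1 9 7 8 2]  ->  $ c a b #
print(gb.xlitvec2word(vec))   # "cab" (start/end tokens stripped)
```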
-
-class VocabSanitizer:
- def __init__(self, data_file):
- """
-        data_file: path to file containing vocabulary list
- """
- extension = os.path.splitext(data_file)[-1]
- if extension == ".json":
- self.vocab_set = set(json.load(open(data_file, encoding="utf-8")))
- elif extension == ".csv":
- self.vocab_df = pd.read_csv(data_file).set_index("WORD")
- self.vocab_set = set(self.vocab_df.index)
- else:
- print("XlitError: Only Json/CSV file extension supported")
-
- def reposition(self, word_list):
-        """Reorder words so that entries present in the vocabulary come first."""
- new_list = []
- temp_ = word_list.copy()
- for v in word_list:
- if v in self.vocab_set:
- new_list.append(v)
- temp_.remove(v)
- new_list.extend(temp_)
-
- return new_list
-
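Behaviourally, `reposition` moves entries found in the vocabulary to the front and leaves everything else in its original relative order. A small sketch; the object is built without a data file purely for this illustration, and the vocabulary contents are made up:

```python
vs = VocabSanitizer.__new__(VocabSanitizer)   # skip __init__: no vocab file needed here
vs.vocab_set = {"नमस्ते"}
print(vs.reposition(["namaste", "नमस्ते", "नमस्त"]))
# ['नमस्ते', 'namaste', 'नमस्त']
```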
-
-##=============== INSTANTIATION ================================================
-
-
-class XlitPiston:
- """
- For handling prediction & post-processing of transliteration for a single language
- Class dependency: Seq2Seq, GlyphStrawboss, VocabSanitizer
- Global Variables: F_DIR
- """
-
- def __init__(
- self,
- weight_path,
- vocab_file,
- tglyph_cfg_file,
- iglyph_cfg_file="en",
- device="cpu",
- ):
-
- self.device = device
- self.in_glyph_obj = GlyphStrawboss(iglyph_cfg_file)
- self.tgt_glyph_obj = GlyphStrawboss(glyphs=tglyph_cfg_file)
- self.voc_sanity = VocabSanitizer(vocab_file)
-
- self._numsym_set = set(
- json.load(open(tglyph_cfg_file, encoding="utf-8"))["numsym_map"].keys()
- )
- self._inchar_set = set("abcdefghijklmnopqrstuvwxyz")
- self._natscr_set = set().union(
- self.tgt_glyph_obj.glyphs, sum(self.tgt_glyph_obj.numsym_map.values(), [])
- )
-
-        ## Static model config. TODO: add support for defining these in JSON
- input_dim = self.in_glyph_obj.size()
- output_dim = self.tgt_glyph_obj.size()
- enc_emb_dim = 300
- dec_emb_dim = 300
- enc_hidden_dim = 512
- dec_hidden_dim = 512
- rnn_type = "lstm"
- enc2dec_hid = True
- attention = True
- enc_layers = 1
- dec_layers = 2
- m_dropout = 0
- enc_bidirect = True
- enc_outstate_dim = enc_hidden_dim * (2 if enc_bidirect else 1)
-
- enc = Encoder(
- input_dim=input_dim,
- embed_dim=enc_emb_dim,
- hidden_dim=enc_hidden_dim,
- rnn_type=rnn_type,
- layers=enc_layers,
- dropout=m_dropout,
- device=self.device,
- bidirectional=enc_bidirect,
- )
- dec = Decoder(
- output_dim=output_dim,
- embed_dim=dec_emb_dim,
- hidden_dim=dec_hidden_dim,
- rnn_type=rnn_type,
- layers=dec_layers,
- dropout=m_dropout,
- use_attention=attention,
- enc_outstate_dim=enc_outstate_dim,
- device=self.device,
- )
- self.model = Seq2Seq(enc, dec, pass_enc2dec_hid=enc2dec_hid, device=self.device)
- self.model = self.model.to(self.device)
- weights = torch.load(weight_path, map_location=torch.device(self.device))
-
- self.model.load_state_dict(weights)
- self.model.eval()
-
- def character_model(self, word, beam_width=1):
- in_vec = torch.from_numpy(self.in_glyph_obj.word2xlitvec(word)).to(self.device)
- ## change to active or passive beam
- p_out_list = self.model.active_beam_inference(in_vec, beam_width=beam_width)
- p_result = [
- self.tgt_glyph_obj.xlitvec2word(out.cpu().numpy()) for out in p_out_list
- ]
-
- result = self.voc_sanity.reposition(p_result)
-
- # List type
- return result
-
- def numsym_model(self, seg):
- """tgt_glyph_obj.numsym_map[x] returns a list object"""
- if len(seg) == 1:
- return [seg] + self.tgt_glyph_obj.numsym_map[seg]
-
- a = [self.tgt_glyph_obj.numsym_map[n][0] for n in seg]
- return [seg] + ["".join(a)]
-
- def _word_segementer(self, sequence):
-
- sequence = sequence.lower()
- accepted = set().union(self._numsym_set, self._inchar_set, self._natscr_set)
- # sequence = ''.join([i for i in sequence if i in accepted])
-
- segment = []
- idx = 0
- seq_ = list(sequence)
- while len(seq_):
- # for Number-Symbol
- temp = ""
- while len(seq_) and seq_[0] in self._numsym_set:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- # for Target Chars
- temp = ""
- while len(seq_) and seq_[0] in self._natscr_set:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- # for Input-Roman Chars
- temp = ""
- while len(seq_) and seq_[0] in self._inchar_set:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- temp = ""
- while len(seq_) and seq_[0] not in accepted:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- return segment
-
- def inferencer(self, sequence, beam_width=10):
-
- seg = self._word_segementer(sequence[:120])
- lit_seg = []
-
- p = 0
- while p < len(seg):
- if seg[p][0] in self._natscr_set:
- lit_seg.append([seg[p]])
- p += 1
-
- elif seg[p][0] in self._inchar_set:
- lit_seg.append(self.character_model(seg[p], beam_width=beam_width))
- p += 1
-
- elif seg[p][0] in self._numsym_set: # num & punc
- lit_seg.append(self.numsym_model(seg[p]))
- p += 1
- else:
- lit_seg.append([seg[p]])
- p += 1
-
-        ## If there are at most 2 segments, return the combinatorial join of their candidates;
-        ## else return only the top-1 candidate of each segment, concatenated
- if len(lit_seg) == 1:
- final_result = lit_seg[0]
-
- elif len(lit_seg) == 2:
- final_result = [""]
- for seg in lit_seg:
- new_result = []
- for s in seg:
- for f in final_result:
- new_result.append(f + s)
- final_result = new_result
-
- else:
- new_result = []
- for seg in lit_seg:
- new_result.append(seg[0])
- final_result = ["".join(new_result)]
-
- return final_result
-
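The two-segment case above is a combinatorial join: every candidate for the first segment is paired with every candidate for the second. Isolated with toy candidate lists (no model involved), that branch behaves like this:

```python
lit_seg = [["namaste", "namasthe"], ["123"]]   # e.g. beam outputs for a Roman word + a digit run
final_result = [""]
for seg in lit_seg:
    new_result = []
    for s in seg:
        for f in final_result:
            new_result.append(f + s)
    final_result = new_result
print(final_result)   # ['namaste123', 'namasthe123']
```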
-
-from collections.abc import Iterable
-from pydload import dload
-import zipfile
-
-MODEL_DOWNLOAD_URL_PREFIX = "https://github.com/AI4Bharat/IndianNLP-Transliteration/releases/download/xlit_v0.5.0/"
-
-
-def is_folder_writable(folder):
- try:
- os.makedirs(folder, exist_ok=True)
- tmp_file = os.path.join(folder, ".write_test")
- with open(tmp_file, "w") as f:
- f.write("Permission Check")
- os.remove(tmp_file)
- return True
-    except Exception:
- return False
-
-
-def is_directory_writable(path):
- if os.name == "nt":
- return is_folder_writable(path)
- return os.access(path, os.W_OK | os.X_OK)
-
-
-class XlitEngine:
- """
- For Managing the top level tasks and applications of transliteration
- Global Variables: F_DIR
- """
-
- def __init__(
- self, lang2use="all", config_path="translit_models/default_lineup.json"
- ):
-
- lineup = json.load(open(os.path.join(F_DIR, config_path), encoding="utf-8"))
- self.lang_config = {}
- if isinstance(lang2use, str):
- if lang2use == "all":
- self.lang_config = lineup
- elif lang2use in lineup:
- self.lang_config[lang2use] = lineup[lang2use]
- else:
- raise Exception(
-                    "XlitError: The entered language code was not found. Available: {}".format(
- lineup.keys()
- )
- )
-
- elif isinstance(lang2use, Iterable):
- for l in lang2use:
- try:
- self.lang_config[l] = lineup[l]
-                except KeyError:
- print(
- "XlitError: Language code {} not found, Skipping...".format(l)
- )
- else:
- raise Exception(
-                "XlitError: lang2use must be a list of language codes or a single language code string"
- )
-
- if is_directory_writable(F_DIR):
- models_path = os.path.join(F_DIR, "translit_models")
- else:
- user_home = os.path.expanduser("~")
- models_path = os.path.join(user_home, ".AI4Bharat_Xlit_Models")
- os.makedirs(models_path, exist_ok=True)
- self.download_models(models_path)
-
- self.langs = {}
- self.lang_model = {}
- for la in self.lang_config:
- try:
- print("Loading {}...".format(la))
- self.lang_model[la] = XlitPiston(
- weight_path=os.path.join(
- models_path, self.lang_config[la]["weight"]
- ),
- vocab_file=os.path.join(models_path, self.lang_config[la]["vocab"]),
- tglyph_cfg_file=os.path.join(
- models_path, self.lang_config[la]["script"]
- ),
- iglyph_cfg_file="en",
- )
- self.langs[la] = self.lang_config[la]["name"]
- except Exception as error:
- print("XlitError: Failure in loading {} \n".format(la), error)
- print(XlitError.loading_err.value)
-
- def download_models(self, models_path):
- """
-        Download models from GitHub Releases if they are not present locally
- """
- for l in self.lang_config:
- lang_name = self.lang_config[l]["eng_name"]
- lang_model_path = os.path.join(models_path, lang_name)
- if not os.path.isdir(lang_model_path):
- print("Downloading model for language: %s" % lang_name)
- remote_url = MODEL_DOWNLOAD_URL_PREFIX + lang_name + ".zip"
- downloaded_zip_path = os.path.join(models_path, lang_name + ".zip")
- dload(url=remote_url, save_to_path=downloaded_zip_path, max_time=None)
-
- if not os.path.isfile(downloaded_zip_path):
- exit(
- f"ERROR: Unable to download model from {remote_url} into {models_path}"
- )
-
- with zipfile.ZipFile(downloaded_zip_path, "r") as zip_ref:
- zip_ref.extractall(models_path)
-
- if os.path.isdir(lang_model_path):
- os.remove(downloaded_zip_path)
- else:
- exit(
- f"ERROR: Unable to find models in {lang_model_path} after download"
- )
- return
-
- def translit_word(self, eng_word, lang_code="default", topk=7, beam_width=10):
- if eng_word == "":
- return []
-
- if lang_code in self.langs:
- try:
- res_list = self.lang_model[lang_code].inferencer(
- eng_word, beam_width=beam_width
- )
- return res_list[:topk]
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- elif lang_code == "default":
- try:
- res_dict = {}
- for la in self.lang_model:
- res = self.lang_model[la].inferencer(
- eng_word, beam_width=beam_width
- )
- res_dict[la] = res[:topk]
- return res_dict
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- else:
-            print("XlitError: Unknown language requested", lang_code)
- print(XlitError.lang_err.value)
- return XlitError.lang_err
-
- def translit_sentence(self, eng_sentence, lang_code="default", beam_width=10):
- if eng_sentence == "":
- return []
-
- if lang_code in self.langs:
- try:
- out_str = ""
- for word in eng_sentence.split():
- res_ = self.lang_model[lang_code].inferencer(
- word, beam_width=beam_width
- )
- out_str = out_str + res_[0] + " "
- return out_str[:-1]
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- elif lang_code == "default":
- try:
- res_dict = {}
- for la in self.lang_model:
- out_str = ""
- for word in eng_sentence.split():
- res_ = self.lang_model[la].inferencer(
- word, beam_width=beam_width
- )
- out_str = out_str + res_[0] + " "
- res_dict[la] = out_str[:-1]
- return res_dict
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- else:
-            print("XlitError: Unknown language requested", lang_code)
- print(XlitError.lang_err.value)
- return XlitError.lang_err
-
-
-if __name__ == "__main__":
-
- available_lang = [
- "bn",
- "gu",
- "hi",
- "kn",
- "gom",
- "mai",
- "ml",
- "mr",
- "pa",
- "sd",
- "si",
- "ta",
- "te",
- "ur",
- ]
-
- reg = re.compile(r"[a-zA-Z]")
- lang = "hi"
- engine = XlitEngine(
- lang
- ) # if you don't specify lang code here, this will give results in all langs available
- sent = "Hello World! ABCD क्या हाल है आपका?"
- words = [
- engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word
- for word in sent.split()
-    ]  # transliterate only the English words; everything else is left as-is
- updated_sent = " ".join(words)
-
- print(updated_sent)
-
- # output : हेलो वर्ल्ड! क्या हाल है आपका?
-
- # y = engine.translit_sentence("Hello World !")['hi']
- # print(y)
diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/normalize/__init__.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/normalize/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Hasan777/IlluminatiAI-Illuminati_Diffusion_v1.0/README.md b/spaces/Hasan777/IlluminatiAI-Illuminati_Diffusion_v1.0/README.md
deleted file mode 100644
index 16118f711e72d59b6012922f8f87d106ec7e4443..0000000000000000000000000000000000000000
--- a/spaces/Hasan777/IlluminatiAI-Illuminati_Diffusion_v1.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: IlluminatiAI-Illuminati Diffusion V1.0
-emoji: 💩
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HgMenon/Transcribe_V0.2/src/whisper/abstractWhisperContainer.py b/spaces/HgMenon/Transcribe_V0.2/src/whisper/abstractWhisperContainer.py
deleted file mode 100644
index d14fb23d24256e3f1c12d8ae1db6ece891d49ec8..0000000000000000000000000000000000000000
--- a/spaces/HgMenon/Transcribe_V0.2/src/whisper/abstractWhisperContainer.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import abc
-from typing import List
-from src.config import ModelConfig, VadInitialPromptMode
-
-from src.hooks.progressListener import ProgressListener
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-
-class AbstractWhisperCallback:
- @abc.abstractmethod
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
-        Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
-        segment_index: int
-            The index of the audio segment that is being transcribed.
-        prompt: str
-            The prompt to condition this segment on (see _get_initial_prompt).
-        detected_language: str
-            The language detected for the audio, if any.
- progress_listener: ProgressListener
- A callback to receive progress updates.
- """
- raise NotImplementedError()
-
- def _get_initial_prompt(self, initial_prompt: str, initial_prompt_mode: VadInitialPromptMode,
- prompt: str, segment_index: int):
- if (initial_prompt_mode == VadInitialPromptMode.PREPEND_ALL_SEGMENTS):
- return self._concat_prompt(initial_prompt, prompt)
- elif (initial_prompt_mode == VadInitialPromptMode.PREPREND_FIRST_SEGMENT):
- return self._concat_prompt(initial_prompt, prompt) if segment_index == 0 else prompt
- else:
- raise ValueError(f"Unknown initial prompt mode {initial_prompt_mode}")
-
- def _concat_prompt(self, prompt1, prompt2):
- if (prompt1 is None):
- return prompt2
- elif (prompt2 is None):
- return prompt1
- else:
- return prompt1 + " " + prompt2
-
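The two prompt modes can be sketched with a throwaway concrete subclass; the subclass and the sample strings exist only for this illustration, and it assumes the imports at the top of this file (notably `VadInitialPromptMode`) are available:

```python
class _SketchCallback(AbstractWhisperCallback):
    def invoke(self, audio, segment_index, prompt, detected_language, progress_listener=None):
        pass  # not needed for this sketch

cb = _SketchCallback()
init, prev = "Glossary: Vakyansh", "previous segment text"
print(cb._get_initial_prompt(init, VadInitialPromptMode.PREPEND_ALL_SEGMENTS, prev, 3))
# "Glossary: Vakyansh previous segment text"   (prefix added to every segment)
print(cb._get_initial_prompt(init, VadInitialPromptMode.PREPREND_FIRST_SEGMENT, prev, 3))
# "previous segment text"                      (only segment 0 would get the prefix)
```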
-class AbstractWhisperContainer:
- def __init__(self, model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- self.model_name = model_name
- self.device = device
- self.compute_type = compute_type
- self.download_root = download_root
- self.cache = cache
-
- # Will be created on demand
- self.model = None
-
- # List of known models
- self.models = models
-
- def get_model(self):
- if self.model is None:
-
- if (self.cache is None):
- self.model = self._create_model()
- else:
- model_key = "WhisperContainer." + self.model_name + ":" + (self.device if self.device else '')
- self.model = self.cache.get(model_key, self._create_model)
- return self.model
-
- @abc.abstractmethod
- def _create_model(self):
- raise NotImplementedError()
-
- def ensure_downloaded(self):
- pass
-
- @abc.abstractmethod
- def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None,
- initial_prompt_mode: VadInitialPromptMode = VadInitialPromptMode.PREPREND_FIRST_SEGMENT,
- **decodeOptions: dict) -> AbstractWhisperCallback:
- """
-        Create a WhisperCallback object that can be used to transcribe audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- initial_prompt: str
- The initial prompt to use for the transcription.
- initial_prompt_mode: VadInitialPromptMode
- The mode to use for the initial prompt. If set to PREPEND_FIRST_SEGMENT, the initial prompt will be prepended to the first segment of audio.
- If set to PREPEND_ALL_SEGMENTS, the initial prompt will be prepended to all segments of audio.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- raise NotImplementedError()
-
- # This is required for multiprocessing
- def __getstate__(self):
- return {
- "model_name": self.model_name,
- "device": self.device,
- "download_root": self.download_root,
- "models": self.models,
- "compute_type": self.compute_type
- }
-
- def __setstate__(self, state):
- self.model_name = state["model_name"]
- self.device = state["device"]
- self.download_root = state["download_root"]
- self.models = state["models"]
- self.compute_type = state["compute_type"]
- self.model = None
- # Depickled objects must use the global cache
- self.cache = GLOBAL_MODEL_CACHE
\ No newline at end of file
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/blocks.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/blocks.py
deleted file mode 100644
index dad4090c747cba3d38689642f4b5f17f5a004a58..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/blocks.py
+++ /dev/null
@@ -1,1673 +0,0 @@
-from __future__ import annotations
-
-import copy
-import getpass
-import inspect
-import json
-import os
-import pkgutil
-import random
-import sys
-import time
-import warnings
-import webbrowser
-from abc import abstractmethod
-from pathlib import Path
-from types import ModuleType
-from typing import TYPE_CHECKING, Any, Callable, Dict, Iterator, List, Set, Tuple, Type
-
-import anyio
-import requests
-from anyio import CapacityLimiter
-from typing_extensions import Literal
-
-from gradio import (
- components,
- encryptor,
- external,
- networking,
- queueing,
- routes,
- strings,
- utils,
-)
-from gradio.context import Context
-from gradio.deprecation import check_deprecated_parameters
-from gradio.documentation import document, set_documentation_group
-from gradio.exceptions import DuplicateBlockError, InvalidApiName
-from gradio.helpers import create_tracker, skip, special_args
-from gradio.tunneling import CURRENT_TUNNELS
-from gradio.utils import (
- TupleNoPrint,
- check_function_inputs_match,
- component_or_layout_class,
- delete_none,
- get_cancel_function,
- get_continuous_fn,
-)
-
-set_documentation_group("blocks")
-
-
-if TYPE_CHECKING: # Only import for type checking (is False at runtime).
- import comet_ml
- from fastapi.applications import FastAPI
-
- from gradio.components import Component
-
-
-class Block:
- def __init__(
- self,
- *,
- render: bool = True,
- elem_id: str | None = None,
- visible: bool = True,
- root_url: str | None = None, # URL that is prepended to all file paths
- _skip_init_processing: bool = False, # Used for loading from Spaces
- **kwargs,
- ):
- self._id = Context.id
- Context.id += 1
- self.visible = visible
- self.elem_id = elem_id
- self.root_url = root_url
- self._skip_init_processing = _skip_init_processing
- self._style = {}
- self.parent: BlockContext | None = None
-
- if render:
- self.render()
- check_deprecated_parameters(self.__class__.__name__, **kwargs)
-
- def render(self):
- """
- Adds self into appropriate BlockContext
- """
- if Context.root_block is not None and self._id in Context.root_block.blocks:
- raise DuplicateBlockError(
- f"A block with id: {self._id} has already been rendered in the current Blocks."
- )
- if Context.block is not None:
- Context.block.add(self)
- if Context.root_block is not None:
- Context.root_block.blocks[self._id] = self
- if isinstance(self, components.TempFileManager):
- Context.root_block.temp_file_sets.append(self.temp_files)
- return self
-
- def unrender(self):
- """
- Removes self from BlockContext if it has been rendered (otherwise does nothing).
- Removes self from the layout and collection of blocks, but does not delete any event triggers.
- """
- if Context.block is not None:
- try:
- Context.block.children.remove(self)
- except ValueError:
- pass
- if Context.root_block is not None:
- try:
- del Context.root_block.blocks[self._id]
- except KeyError:
- pass
- return self
-
- def get_block_name(self) -> str:
- """
- Gets block's class name.
-
- If it is template component it gets the parent's class name.
-
- @return: class name
- """
- return (
- self.__class__.__base__.__name__.lower()
- if hasattr(self, "is_template")
- else self.__class__.__name__.lower()
- )
-
- def get_expected_parent(self) -> Type[BlockContext] | None:
- return None
-
- def set_event_trigger(
- self,
- event_name: str,
- fn: Callable | None,
- inputs: Component | List[Component] | Set[Component] | None,
- outputs: Component | List[Component] | None,
- preprocess: bool = True,
- postprocess: bool = True,
- scroll_to_output: bool = False,
- show_progress: bool = True,
- api_name: str | None = None,
- js: str | None = None,
- no_target: bool = False,
- queue: bool | None = None,
- batch: bool = False,
- max_batch_size: int = 4,
- cancels: List[int] | None = None,
- every: float | None = None,
- ) -> Dict[str, Any]:
- """
- Adds an event to the component's dependencies.
- Parameters:
- event_name: event name
- fn: Callable function
- inputs: input list
- outputs: output list
- preprocess: whether to run the preprocess methods of components
- postprocess: whether to run the postprocess methods of components
- scroll_to_output: whether to scroll to output of dependency on trigger
- show_progress: whether to show progress animation while running.
- api_name: Defining this parameter exposes the endpoint in the api docs
- js: Optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components
- no_target: if True, sets "targets" to [], used for Blocks "load" event
- batch: whether this function takes in a batch of inputs
- max_batch_size: the maximum batch size to send to the function
- cancels: a list of other events to cancel when this event is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another components .click method.
- Returns: the dependency dictionary that was added to the Blocks config
- """
- # Support for singular parameter
- if isinstance(inputs, set):
- inputs_as_dict = True
- inputs = sorted(inputs, key=lambda x: x._id)
- else:
- inputs_as_dict = False
- if inputs is None:
- inputs = []
- elif not isinstance(inputs, list):
- inputs = [inputs]
-
- if isinstance(outputs, set):
- outputs = sorted(outputs, key=lambda x: x._id)
- else:
- if outputs is None:
- outputs = []
- elif not isinstance(outputs, list):
- outputs = [outputs]
-
- if fn is not None and not cancels:
- check_function_inputs_match(fn, inputs, inputs_as_dict)
-
- if Context.root_block is None:
- raise AttributeError(
- f"{event_name}() and other events can only be called within a Blocks context."
- )
- if every is not None and every <= 0:
- raise ValueError("Parameter every must be positive or None")
- if every and batch:
- raise ValueError(
- f"Cannot run {event_name} event in a batch and every {every} seconds. "
- "Either batch is True or every is non-zero but not both."
- )
-
- if every and fn:
- fn = get_continuous_fn(fn, every)
- elif every:
- raise ValueError("Cannot set a value for `every` without a `fn`.")
-
- Context.root_block.fns.append(
- BlockFunction(fn, inputs, outputs, preprocess, postprocess, inputs_as_dict)
- )
- if api_name is not None:
- api_name_ = utils.append_unique_suffix(
- api_name, [dep["api_name"] for dep in Context.root_block.dependencies]
- )
- if not (api_name == api_name_):
- warnings.warn(
- "api_name {} already exists, using {}".format(api_name, api_name_)
- )
- api_name = api_name_
-
- dependency = {
- "targets": [self._id] if not no_target else [],
- "trigger": event_name,
- "inputs": [block._id for block in inputs],
- "outputs": [block._id for block in outputs],
- "backend_fn": fn is not None,
- "js": js,
- "queue": False if fn is None else queue,
- "api_name": api_name,
- "scroll_to_output": scroll_to_output,
- "show_progress": show_progress,
- "every": every,
- "batch": batch,
- "max_batch_size": max_batch_size,
- "cancels": cancels or [],
- }
- Context.root_block.dependencies.append(dependency)
- return dependency
-
- def get_config(self):
- return {
- "visible": self.visible,
- "elem_id": self.elem_id,
- "style": self._style,
- "root_url": self.root_url,
- }
-
- @staticmethod
- @abstractmethod
- def update(**kwargs) -> Dict:
- return {}
-
- @classmethod
- def get_specific_update(cls, generic_update: Dict[str, Any]) -> Dict:
- del generic_update["__type__"]
- specific_update = cls.update(**generic_update)
- return specific_update
-
-
-class BlockContext(Block):
- def __init__(
- self,
- visible: bool = True,
- render: bool = True,
- **kwargs,
- ):
- """
- Parameters:
- visible: If False, this will be hidden but included in the Blocks config file (its visibility can later be updated).
- render: If False, this will not be included in the Blocks config file at all.
- """
- self.children: List[Block] = []
- super().__init__(visible=visible, render=render, **kwargs)
-
- def __enter__(self):
- self.parent = Context.block
- Context.block = self
- return self
-
- def add(self, child: Block):
- child.parent = self
- self.children.append(child)
-
- def fill_expected_parents(self):
- children = []
- pseudo_parent = None
- for child in self.children:
- expected_parent = child.get_expected_parent()
- if not expected_parent or isinstance(self, expected_parent):
- pseudo_parent = None
- children.append(child)
- else:
- if pseudo_parent is not None and isinstance(
- pseudo_parent, expected_parent
- ):
- pseudo_parent.children.append(child)
- else:
- pseudo_parent = expected_parent(render=False)
- children.append(pseudo_parent)
- pseudo_parent.children = [child]
- if Context.root_block:
- Context.root_block.blocks[pseudo_parent._id] = pseudo_parent
- child.parent = pseudo_parent
- self.children = children
-
- def __exit__(self, *args):
- if getattr(self, "allow_expected_parents", True):
- self.fill_expected_parents()
- Context.block = self.parent
-
- def postprocess(self, y):
- """
- Any postprocessing needed to be performed on a block context.
- """
- return y
-
-
-class BlockFunction:
- def __init__(
- self,
- fn: Callable | None,
- inputs: List[Component],
- outputs: List[Component],
- preprocess: bool,
- postprocess: bool,
- inputs_as_dict: bool,
- ):
- self.fn = fn
- self.inputs = inputs
- self.outputs = outputs
- self.preprocess = preprocess
- self.postprocess = postprocess
- self.total_runtime = 0
- self.total_runs = 0
- self.inputs_as_dict = inputs_as_dict
-
- def __str__(self):
- return str(
- {
- "fn": getattr(self.fn, "__name__", "fn")
- if self.fn is not None
- else None,
- "preprocess": self.preprocess,
- "postprocess": self.postprocess,
- }
- )
-
- def __repr__(self):
- return str(self)
-
-
-class class_or_instancemethod(classmethod):
- def __get__(self, instance, type_):
- descr_get = super().__get__ if instance is None else self.__func__.__get__
- return descr_get(instance, type_)
-
-
-def postprocess_update_dict(block: Block, update_dict: Dict, postprocess: bool = True):
- """
- Converts a dictionary of updates into a format that can be sent to the frontend.
- E.g. {"__type__": "generic_update", "value": "2", "interactive": False}
- Into -> {"__type__": "update", "value": 2.0, "mode": "static"}
-
- Parameters:
- block: The Block that is being updated with this update dictionary.
- update_dict: The original update dictionary
- postprocess: Whether to postprocess the "value" key of the update dictionary.
- """
- if update_dict.get("__type__", "") == "generic_update":
- update_dict = block.get_specific_update(update_dict)
- if update_dict.get("value") is components._Keywords.NO_VALUE:
- update_dict.pop("value")
- prediction_value = delete_none(update_dict, skip_value=True)
- if "value" in prediction_value and postprocess:
- assert isinstance(
- block, components.IOComponent
- ), f"Component {block.__class__} does not support value"
- prediction_value["value"] = block.postprocess(prediction_value["value"])
- return prediction_value
-
-
-def convert_component_dict_to_list(
- outputs_ids: List[int], predictions: Dict
-) -> List | Dict:
- """
- Converts a dictionary of component updates into a list of updates in the order of
- the outputs_ids and including every output component. Leaves other types of dictionaries unchanged.
- E.g. {"textbox": "hello", "number": {"__type__": "generic_update", "value": "2"}}
- Into -> ["hello", {"__type__": "generic_update"}, {"__type__": "generic_update", "value": "2"}]
- """
- keys_are_blocks = [isinstance(key, Block) for key in predictions.keys()]
- if all(keys_are_blocks):
- reordered_predictions = [skip() for _ in outputs_ids]
- for component, value in predictions.items():
- if component._id not in outputs_ids:
- raise ValueError(
- f"Returned component {component} not specified as output of function."
- )
- output_index = outputs_ids.index(component._id)
- reordered_predictions[output_index] = value
- predictions = utils.resolve_singleton(reordered_predictions)
- elif any(keys_are_blocks):
- raise ValueError(
- "Returned dictionary included some keys as Components. Either all keys must be Components to assign Component values, or return a List of values to assign output values in order."
- )
- return predictions
-
-
-@document("load")
-class Blocks(BlockContext):
- """
- Blocks is Gradio's low-level API that allows you to create more custom web
- applications and demos than Interfaces (yet still entirely in Python).
-
-
- Compared to the Interface class, Blocks offers more flexibility and control over:
- (1) the layout of components (2) the events that
- trigger the execution of functions (3) data flows (e.g. inputs can trigger outputs,
- which can trigger the next level of outputs). Blocks also offers ways to group
- together related demos such as with tabs.
-
-
- The basic usage of Blocks is as follows: create a Blocks object, then use it as a
- context (with the "with" statement), and then define layouts, components, or events
- within the Blocks context. Finally, call the launch() method to launch the demo.
-
- Example:
- import gradio as gr
- def update(name):
- return f"Welcome to Gradio, {name}!"
-
- with gr.Blocks() as demo:
- gr.Markdown("Start typing below and then click **Run** to see the output.")
- with gr.Row():
- inp = gr.Textbox(placeholder="What is your name?")
- out = gr.Textbox()
- btn = gr.Button("Run")
- btn.click(fn=update, inputs=inp, outputs=out)
-
- demo.launch()
- Demos: blocks_hello, blocks_flipper, blocks_speech_text_sentiment, generate_english_german, sound_alert
- Guides: blocks_and_event_listeners, controlling_layout, state_in_blocks, custom_CSS_and_JS, custom_interpretations_with_blocks, using_blocks_like_functions
- """
-
- def __init__(
- self,
- theme: str = "default",
- analytics_enabled: bool | None = None,
- mode: str = "blocks",
- title: str = "Gradio",
- css: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- theme: which theme to use - right now, only "default" is supported.
- analytics_enabled: whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable or default to True.
- mode: a human-friendly name for the kind of Blocks or Interface being created.
- title: The tab title to display when this is opened in a browser window.
- css: custom css or path to custom css file to apply to entire Blocks
- """
- # Cleanup shared parameters with Interface #TODO: is this part still necessary after Interface with Blocks?
- self.limiter = None
- self.save_to = None
- self.theme = theme
- self.encrypt = False
- self.share = False
- self.enable_queue = None
- self.max_threads = 40
- self.show_error = True
- if css is not None and Path(css).exists():
- with open(css) as css_file:
- self.css = css_file.read()
- else:
- self.css = css
-
- # For analytics_enabled and allow_flagging: (1) first check for
- # parameter, (2) check for env variable, (3) default to True/"manual"
- self.analytics_enabled = (
- analytics_enabled
- if analytics_enabled is not None
- else os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True"
- )
-
- super().__init__(render=False, **kwargs)
- self.blocks: Dict[int, Block] = {}
- self.fns: List[BlockFunction] = []
- self.dependencies = []
- self.mode = mode
-
- self.is_running = False
- self.local_url = None
- self.share_url = None
- self.width = None
- self.height = None
- self.api_open = True
-
- self.ip_address = ""
- self.is_space = True if os.getenv("SYSTEM") == "spaces" else False
- self.favicon_path = None
- self.auth = None
- self.dev_mode = True
- self.app_id = random.getrandbits(64)
- self.temp_file_sets = []
- self.title = title
- self.show_api = True
-
- # Only used when an Interface is loaded from a config
- self.predict = None
- self.input_components = None
- self.output_components = None
- self.__name__ = None
- self.api_mode = None
-
- if self.analytics_enabled:
- self.ip_address = utils.get_local_ip_address()
- data = {
- "mode": self.mode,
- "ip_address": self.ip_address,
- "custom_css": self.css is not None,
- "theme": self.theme,
- "version": (pkgutil.get_data(__name__, "version.txt") or b"")
- .decode("ascii")
- .strip(),
- }
- utils.initiated_analytics(data)
-
- @classmethod
- def from_config(
- cls, config: dict, fns: List[Callable], root_url: str | None = None
- ) -> Blocks:
- """
- Factory method that creates a Blocks from a config and list of functions.
-
- Parameters:
- config: a dictionary containing the configuration of the Blocks.
- fns: a list of functions that are used in the Blocks. Must be in the same order as the dependencies in the config.
- root_url: an optional root url to use for the components in the Blocks. Allows serving files from an external URL.
- """
- config = copy.deepcopy(config)
- components_config = config["components"]
- original_mapping: Dict[int, Block] = {}
-
- def get_block_instance(id: int) -> Block:
- for block_config in components_config:
- if block_config["id"] == id:
- break
- else:
- raise ValueError("Cannot find block with id {}".format(id))
- cls = component_or_layout_class(block_config["type"])
- block_config["props"].pop("type", None)
- block_config["props"].pop("name", None)
- style = block_config["props"].pop("style", None)
- if block_config["props"].get("root_url") is None and root_url:
- block_config["props"]["root_url"] = root_url + "/"
- # Any component has already processed its initial value, so we skip that step here
- block = cls(**block_config["props"], _skip_init_processing=True)
- if style and isinstance(block, components.IOComponent):
- block.style(**style)
- return block
-
- def iterate_over_children(children_list):
- for child_config in children_list:
- id = child_config["id"]
- block = get_block_instance(id)
-
- original_mapping[id] = block
-
- children = child_config.get("children")
- if children is not None:
- assert isinstance(
- block, BlockContext
- ), f"Invalid config, Block with id {id} has children but is not a BlockContext."
- with block:
- iterate_over_children(children)
-
- with Blocks(theme=config["theme"], css=config["css"]) as blocks:
- # ID 0 should be the root Blocks component
- original_mapping[0] = Context.root_block or blocks
-
- iterate_over_children(config["layout"]["children"])
-
- first_dependency = None
-
- # add the event triggers
- for dependency, fn in zip(config["dependencies"], fns):
- # We used to add a "fake_event" to the config to cache examples
- # without removing it. This was causing bugs in calling gr.Interface.load
- # We fixed the issue by removing "fake_event" from the config in examples.py
- # but we still need to skip these events when loading the config to support
- # older demos
- if dependency["trigger"] == "fake_event":
- continue
- targets = dependency.pop("targets")
- trigger = dependency.pop("trigger")
- dependency.pop("backend_fn")
- dependency.pop("documentation", None)
- dependency["inputs"] = [
- original_mapping[i] for i in dependency["inputs"]
- ]
- dependency["outputs"] = [
- original_mapping[o] for o in dependency["outputs"]
- ]
- dependency.pop("status_tracker", None)
- dependency["preprocess"] = False
- dependency["postprocess"] = False
-
- for target in targets:
- dependency = original_mapping[target].set_event_trigger(
- event_name=trigger, fn=fn, **dependency
- )
- if first_dependency is None:
- first_dependency = dependency
-
- # Allows some use of Interface-specific methods with loaded Spaces
- if first_dependency and Context.root_block:
- blocks.predict = [fns[0]]
- blocks.input_components = [
- Context.root_block.blocks[i] for i in first_dependency["inputs"]
- ]
- blocks.output_components = [
- Context.root_block.blocks[o] for o in first_dependency["outputs"]
- ]
- blocks.__name__ = "Interface"
- blocks.api_mode = True
-
- return blocks
-
- def __str__(self):
- return self.__repr__()
-
- def __repr__(self):
- num_backend_fns = len([d for d in self.dependencies if d["backend_fn"]])
- repr = f"Gradio Blocks instance: {num_backend_fns} backend functions"
- repr += "\n" + "-" * len(repr)
- for d, dependency in enumerate(self.dependencies):
- if dependency["backend_fn"]:
- repr += f"\nfn_index={d}"
- repr += "\n inputs:"
- for input_id in dependency["inputs"]:
- block = self.blocks[input_id]
- repr += "\n |-{}".format(str(block))
- repr += "\n outputs:"
- for output_id in dependency["outputs"]:
- block = self.blocks[output_id]
- repr += "\n |-{}".format(str(block))
- return repr
-
- def render(self):
- if Context.root_block is not None:
- if self._id in Context.root_block.blocks:
- raise DuplicateBlockError(
- f"A block with id: {self._id} has already been rendered in the current Blocks."
- )
- if not set(Context.root_block.blocks).isdisjoint(self.blocks):
- raise DuplicateBlockError(
- "At least one block in this Blocks has already been rendered."
- )
-
- Context.root_block.blocks.update(self.blocks)
- Context.root_block.fns.extend(self.fns)
- dependency_offset = len(Context.root_block.dependencies)
- for i, dependency in enumerate(self.dependencies):
- api_name = dependency["api_name"]
- if api_name is not None:
- api_name_ = utils.append_unique_suffix(
- api_name,
- [dep["api_name"] for dep in Context.root_block.dependencies],
- )
- if not (api_name == api_name_):
- warnings.warn(
- "api_name {} already exists, using {}".format(
- api_name, api_name_
- )
- )
- dependency["api_name"] = api_name_
- dependency["cancels"] = [
- c + dependency_offset for c in dependency["cancels"]
- ]
- # Recreate the cancel function so that it has the latest
- # dependency fn indices. This is necessary to properly cancel
- # events in the backend
- if dependency["cancels"]:
- updated_cancels = [
- Context.root_block.dependencies[i]
- for i in dependency["cancels"]
- ]
- new_fn = BlockFunction(
- get_cancel_function(updated_cancels)[0],
- [],
- [],
- False,
- True,
- False,
- )
- Context.root_block.fns[dependency_offset + i] = new_fn
- Context.root_block.dependencies.append(dependency)
- Context.root_block.temp_file_sets.extend(self.temp_file_sets)
-
- if Context.block is not None:
- Context.block.children.extend(self.children)
- return self
-
- def is_callable(self, fn_index: int = 0) -> bool:
- """Checks if a particular Blocks function is callable (i.e. not stateful or a generator)."""
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
-
- if inspect.isasyncgenfunction(block_fn.fn):
- return False
- if inspect.isgeneratorfunction(block_fn.fn):
- return False
- for input_id in dependency["inputs"]:
- block = self.blocks[input_id]
- if getattr(block, "stateful", False):
- return False
- for output_id in dependency["outputs"]:
- block = self.blocks[output_id]
- if getattr(block, "stateful", False):
- return False
-
- return True
-
- def __call__(self, *inputs, fn_index: int = 0, api_name: str | None = None):
- """
- Allows Blocks objects to be called as functions. Supply the parameters to the
- function as positional arguments. To choose which function to call, use the
- fn_index parameter, which must be a keyword argument.
-
- Parameters:
- *inputs: the parameters to pass to the function
- fn_index: the index of the function to call (defaults to 0, which for Interfaces, is the default prediction function)
- api_name: The api_name of the dependency to call. Will take precedence over fn_index.
- """
- if api_name is not None:
- inferred_fn_index = next(
- (
- i
- for i, d in enumerate(self.dependencies)
- if d.get("api_name") == api_name
- ),
- None,
- )
- if inferred_fn_index is None:
- raise InvalidApiName(f"Cannot find a function with api_name {api_name}")
- fn_index = inferred_fn_index
- if not (self.is_callable(fn_index)):
- raise ValueError(
- "This function is not callable because it is either stateful or is a generator. Please use the .launch() method instead to create an interactive user interface."
- )
-
- inputs = list(inputs)
- processed_inputs = self.serialize_data(fn_index, inputs)
- batch = self.dependencies[fn_index]["batch"]
- if batch:
- processed_inputs = [[inp] for inp in processed_inputs]
-
- outputs = utils.synchronize_async(
- self.process_api,
- fn_index=fn_index,
- inputs=processed_inputs,
- request=None,
- state={},
- )
- outputs = outputs["data"]
-
- if batch:
- outputs = [out[0] for out in outputs]
-
- processed_outputs = self.deserialize_data(fn_index, outputs)
- processed_outputs = utils.resolve_singleton(processed_outputs)
-
- return processed_outputs
-
- async def call_function(
- self,
- fn_index: int,
- processed_input: List[Any],
- iterator: Iterator[Any] | None = None,
- requests: routes.Request | List[routes.Request] | None = None,
- event_id: str | None = None,
- ):
- """
- Calls function with given index and preprocessed input, and measures process time.
- Parameters:
- fn_index: index of function to call
- processed_input: preprocessed input to pass to function
- iterator: iterator to use if function is a generator
- requests: requests to pass to function
- event_id: id of event in queue
- """
- block_fn = self.fns[fn_index]
- assert block_fn.fn, f"function with index {fn_index} not defined."
- is_generating = False
-
- if block_fn.inputs_as_dict:
- processed_input = [
- {
- input_component: data
- for input_component, data in zip(block_fn.inputs, processed_input)
- }
- ]
-
- if isinstance(requests, list):
- request = requests[0]
- else:
- request = requests
- processed_input, progress_index = special_args(
- block_fn.fn,
- processed_input,
- request,
- )
- progress_tracker = (
- processed_input[progress_index] if progress_index is not None else None
- )
-
- start = time.time()
-
- if iterator is None: # If not a generator function that has already run
- if progress_tracker is not None and progress_index is not None:
- progress_tracker, fn = create_tracker(
- self, event_id, block_fn.fn, progress_tracker.track_tqdm
- )
- processed_input[progress_index] = progress_tracker
- else:
- fn = block_fn.fn
-
- if inspect.iscoroutinefunction(fn):
- prediction = await fn(*processed_input)
- else:
- prediction = await anyio.to_thread.run_sync(
- fn, *processed_input, limiter=self.limiter
- )
- else:
- prediction = None
-
- if inspect.isasyncgenfunction(block_fn.fn):
- raise ValueError("Gradio does not support async generators.")
- if inspect.isgeneratorfunction(block_fn.fn):
- if not self.enable_queue:
- raise ValueError("Need to enable queue to use generators.")
- try:
- if iterator is None:
- iterator = prediction
- prediction = await anyio.to_thread.run_sync(
- utils.async_iteration, iterator, limiter=self.limiter
- )
- is_generating = True
- except StopAsyncIteration:
- n_outputs = len(self.dependencies[fn_index].get("outputs"))
- prediction = (
- components._Keywords.FINISHED_ITERATING
- if n_outputs == 1
- else (components._Keywords.FINISHED_ITERATING,) * n_outputs
- )
- iterator = None
-
- duration = time.time() - start
-
- return {
- "prediction": prediction,
- "duration": duration,
- "is_generating": is_generating,
- "iterator": iterator,
- }
-
- def serialize_data(self, fn_index: int, inputs: List[Any]) -> List[Any]:
- dependency = self.dependencies[fn_index]
- processed_input = []
-
- for i, input_id in enumerate(dependency["inputs"]):
- block = self.blocks[input_id]
- assert isinstance(
- block, components.IOComponent
- ), f"{block.__class__} Component with id {input_id} not a valid input component."
- serialized_input = block.serialize(inputs[i])
- processed_input.append(serialized_input)
-
- return processed_input
-
- def deserialize_data(self, fn_index: int, outputs: List[Any]) -> List[Any]:
- dependency = self.dependencies[fn_index]
- predictions = []
-
- for o, output_id in enumerate(dependency["outputs"]):
- block = self.blocks[output_id]
- assert isinstance(
- block, components.IOComponent
- ), f"{block.__class__} Component with id {output_id} not a valid output component."
- deserialized = block.deserialize(outputs[o])
- predictions.append(deserialized)
-
- return predictions
-
- def preprocess_data(self, fn_index: int, inputs: List[Any], state: Dict[int, Any]):
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
-
- if block_fn.preprocess:
- processed_input = []
- for i, input_id in enumerate(dependency["inputs"]):
- block = self.blocks[input_id]
- assert isinstance(
- block, components.Component
- ), f"{block.__class__} Component with id {input_id} not a valid input component."
- if getattr(block, "stateful", False):
- processed_input.append(state.get(input_id))
- else:
- processed_input.append(block.preprocess(inputs[i]))
- else:
- processed_input = inputs
- return processed_input
-
- def postprocess_data(
- self, fn_index: int, predictions: List | Dict, state: Dict[int, Any]
- ):
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
- batch = dependency["batch"]
-
- if type(predictions) is dict and len(predictions) > 0:
- predictions = convert_component_dict_to_list(
- dependency["outputs"], predictions
- )
-
- if len(dependency["outputs"]) == 1 and not (batch):
- predictions = [
- predictions,
- ]
-
- output = []
- for i, output_id in enumerate(dependency["outputs"]):
- if predictions[i] is components._Keywords.FINISHED_ITERATING:
- output.append(None)
- continue
- block = self.blocks[output_id]
- if getattr(block, "stateful", False):
- if not utils.is_update(predictions[i]):
- state[output_id] = predictions[i]
- output.append(None)
- else:
- prediction_value = predictions[i]
- if utils.is_update(prediction_value):
- assert isinstance(prediction_value, dict)
- prediction_value = postprocess_update_dict(
- block=block,
- update_dict=prediction_value,
- postprocess=block_fn.postprocess,
- )
- elif block_fn.postprocess:
- assert isinstance(
- block, components.Component
- ), f"{block.__class__} Component with id {output_id} not a valid output component."
- prediction_value = block.postprocess(prediction_value)
- output.append(prediction_value)
- return output
-
- async def process_api(
- self,
- fn_index: int,
- inputs: List[Any],
- state: Dict[int, Any],
- request: routes.Request | List[routes.Request] | None = None,
- iterators: Dict[int, Any] | None = None,
- event_id: str | None = None,
- ) -> Dict[str, Any]:
- """
- Processes API calls from the frontend. First preprocesses the data,
- then runs the relevant function, then postprocesses the output.
- Parameters:
- fn_index: Index of function to run.
- inputs: input data received from the frontend
- state: data stored from stateful components for session (key is input block id)
- request: the Request object(s) received from the frontend, if any
- iterators: the in-progress iterators for each generator function (key is function index)
- event_id: id of the event in the queue, if the queue is enabled
- Returns: a dict containing the output data, generator state, and timing information
- """
- block_fn = self.fns[fn_index]
- batch = self.dependencies[fn_index]["batch"]
-
- if batch:
- max_batch_size = self.dependencies[fn_index]["max_batch_size"]
- batch_sizes = [len(inp) for inp in inputs]
- batch_size = batch_sizes[0]
- if inspect.isasyncgenfunction(block_fn.fn) or inspect.isgeneratorfunction(
- block_fn.fn
- ):
- raise ValueError("Gradio does not support generators in batch mode.")
- if not all(x == batch_size for x in batch_sizes):
- raise ValueError(
- f"All inputs to a batch function must have the same length but instead have sizes: {batch_sizes}."
- )
- if batch_size > max_batch_size:
- raise ValueError(
- f"Batch size ({batch_size}) exceeds the max_batch_size for this function ({max_batch_size})"
- )
-
- inputs = [
- self.preprocess_data(fn_index, list(i), state) for i in zip(*inputs)
- ]
- result = await self.call_function(
- fn_index, list(zip(*inputs)), None, request
- )
- preds = result["prediction"]
- data = [
- self.postprocess_data(fn_index, list(o), state) for o in zip(*preds)
- ]
- data = list(zip(*data))
- is_generating, iterator = None, None
- else:
- inputs = self.preprocess_data(fn_index, inputs, state)
- iterator = iterators.get(fn_index, None) if iterators else None
- result = await self.call_function(
- fn_index, inputs, iterator, request, event_id
- )
- data = self.postprocess_data(fn_index, result["prediction"], state)
- is_generating, iterator = result["is_generating"], result["iterator"]
-
- block_fn.total_runtime += result["duration"]
- block_fn.total_runs += 1
-
- return {
- "data": data,
- "is_generating": is_generating,
- "iterator": iterator,
- "duration": result["duration"],
- "average_duration": block_fn.total_runtime / block_fn.total_runs,
- }
-
- async def create_limiter(self):
- self.limiter = (
- None
- if self.max_threads == 40
- else CapacityLimiter(total_tokens=self.max_threads)
- )
-
- def get_config(self):
- return {"type": "column"}
-
- def get_config_file(self):
- config = {
- "version": routes.VERSION,
- "mode": self.mode,
- "dev_mode": self.dev_mode,
- "components": [],
- "theme": self.theme,
- "css": self.css,
- "title": self.title or "Gradio",
- "is_space": self.is_space,
- "enable_queue": getattr(self, "enable_queue", False), # launch attributes
- "show_error": getattr(self, "show_error", False),
- "show_api": self.show_api,
- "is_colab": utils.colab_check(),
- }
-
- def getLayout(block):
- if not isinstance(block, BlockContext):
- return {"id": block._id}
- children_layout = []
- for child in block.children:
- children_layout.append(getLayout(child))
- return {"id": block._id, "children": children_layout}
-
- config["layout"] = getLayout(self)
-
- for _id, block in self.blocks.items():
- config["components"].append(
- {
- "id": _id,
- "type": (block.get_block_name()),
- "props": utils.delete_none(block.get_config())
- if hasattr(block, "get_config")
- else {},
- }
- )
- config["dependencies"] = self.dependencies
- return config
-
- def __enter__(self):
- if Context.block is None:
- Context.root_block = self
- self.parent = Context.block
- Context.block = self
- return self
-
- def __exit__(self, *args):
- super().fill_expected_parents()
- Context.block = self.parent
- # Configure the load events before root_block is reset
- self.attach_load_events()
- if self.parent is None:
- Context.root_block = None
- else:
- self.parent.children.extend(self.children)
- self.config = self.get_config_file()
- self.app = routes.App.create_app(self)
-
- @class_or_instancemethod
- def load(
- self_or_cls,
- fn: Callable | None = None,
- inputs: List[Component] | None = None,
- outputs: List[Component] | None = None,
- api_name: str | None = None,
- scroll_to_output: bool = False,
- show_progress: bool = True,
- queue=None,
- batch: bool = False,
- max_batch_size: int = 4,
- preprocess: bool = True,
- postprocess: bool = True,
- every: float | None = None,
- _js: str | None = None,
- *,
- name: str | None = None,
- src: str | None = None,
- api_key: str | None = None,
- alias: str | None = None,
- **kwargs,
- ) -> Blocks | Dict[str, Any] | None:
- """
- For reverse compatibility reasons, this is both a class method and an instance
- method, the two of which, confusingly, do two completely different things.
-
-
- Class method: loads a demo from a Hugging Face Spaces repo, creates it locally, and returns a Blocks instance. Equivalent to gradio.Interface.load()
-
-
- Instance method: adds event that runs as soon as the demo loads in the browser. Example usage below.
- Parameters:
- name: Class Method - the name of the model (e.g. "gpt2" or "facebook/bart-base") or space (e.g. "flax-community/spanish-gpt2"), can include the `src` as prefix (e.g. "models/facebook/bart-base")
- src: Class Method - the source of the model: `models` or `spaces` (or leave empty if source is provided as a prefix in `name`)
- api_key: Class Method - optional access token for loading private Hugging Face Hub models or spaces. Find your token here: https://huggingface.co/settings/tokens
- alias: Class Method - optional string used as the name of the loaded model instead of the default name (only applies if loading a Space running Gradio 2.x)
- fn: Instance Method - the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
- inputs: Instance Method - List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
- outputs: Instance Method - List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
- api_name: Instance Method - Defining this parameter exposes the endpoint in the api docs
- scroll_to_output: Instance Method - If True, will scroll to output component on completion
- show_progress: Instance Method - If True, will show progress animation while pending
- queue: Instance Method - If True, will place the request on the queue, if the queue exists
- batch: Instance Method - If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
- max_batch_size: Instance Method - Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
- preprocess: Instance Method - If False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
- postprocess: Instance Method - If False, will not run postprocessing of component data before returning 'fn' output to the browser.
- every: Instance Method - Run this event 'every' number of seconds. Interpreted in seconds. Queue must be enabled.
- Example:
- import gradio as gr
- import datetime
- with gr.Blocks() as demo:
- def get_time():
- return datetime.datetime.now().time()
- dt = gr.Textbox(label="Current time")
- demo.load(get_time, inputs=None, outputs=dt)
- demo.launch()
- """
- # _js: Optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components.
- if isinstance(self_or_cls, type):
- if name is None:
- raise ValueError(
- "Blocks.load() requires passing parameters as keyword arguments"
- )
- return external.load_blocks_from_repo(name, src, api_key, alias, **kwargs)
- else:
- return self_or_cls.set_event_trigger(
- event_name="load",
- fn=fn,
- inputs=inputs,
- outputs=outputs,
- api_name=api_name,
- preprocess=preprocess,
- postprocess=postprocess,
- scroll_to_output=scroll_to_output,
- show_progress=show_progress,
- js=_js,
- queue=queue,
- batch=batch,
- max_batch_size=max_batch_size,
- every=every,
- no_target=True,
- )
-
- def clear(self):
- """Resets the layout of the Blocks object."""
- self.blocks = {}
- self.fns = []
- self.dependencies = []
- self.children = []
- return self
-
- @document()
- def queue(
- self,
- concurrency_count: int = 1,
- status_update_rate: float | Literal["auto"] = "auto",
- client_position_to_load_data: int | None = None,
- default_enabled: bool | None = None,
- api_open: bool = True,
- max_size: int | None = None,
- ):
- """
- You can control the rate of processed requests by creating a queue. This will allow you to set the number of requests to be processed at one time, and will let users know their position in the queue.
- Parameters:
- concurrency_count: Number of worker threads that will be processing requests from the queue concurrently. Increasing this number will increase the rate at which requests are processed, but will also increase the memory usage of the queue.
- status_update_rate: If "auto", Queue will send status estimations to all clients whenever a job is finished. Otherwise Queue will send status at regular intervals set by this parameter as the number of seconds.
- client_position_to_load_data: DEPRECATED. This parameter is deprecated and has no effect.
- default_enabled: Deprecated and has no effect.
- api_open: If True, the REST routes of the backend will be open, allowing requests made directly to those endpoints to skip the queue.
- max_size: The maximum number of events the queue will store at any given moment. If the queue is full, new events will not be added and a user will receive a message saying that the queue is full. If None, the queue size will be unlimited.
- Example:
- demo = gr.Interface(gr.Textbox(), gr.Image(), image_generator)
- demo.queue(concurrency_count=3)
- demo.launch()
- """
- if default_enabled is not None:
- warnings.warn(
- "The default_enabled parameter of queue has no effect and will be removed "
- "in a future version of gradio."
- )
- self.enable_queue = True
- self.api_open = api_open
- if client_position_to_load_data is not None:
- warnings.warn("The client_position_to_load_data parameter is deprecated.")
- self._queue = queueing.Queue(
- live_updates=status_update_rate == "auto",
- concurrency_count=concurrency_count,
- update_intervals=status_update_rate if status_update_rate != "auto" else 1,
- max_size=max_size,
- blocks_dependencies=self.dependencies,
- )
- self.config = self.get_config_file()
- return self
-
- def launch(
- self,
- inline: bool | None = None,
- inbrowser: bool = False,
- share: bool | None = None,
- debug: bool = False,
- enable_queue: bool | None = None,
- max_threads: int = 40,
- auth: Callable | Tuple[str, str] | List[Tuple[str, str]] | None = None,
- auth_message: str | None = None,
- prevent_thread_lock: bool = False,
- show_error: bool = False,
- server_name: str | None = None,
- server_port: int | None = None,
- show_tips: bool = False,
- height: int = 500,
- width: int | str = "100%",
- encrypt: bool = False,
- favicon_path: str | None = None,
- ssl_keyfile: str | None = None,
- ssl_certfile: str | None = None,
- ssl_keyfile_password: str | None = None,
- quiet: bool = False,
- show_api: bool = True,
- _frontend: bool = True,
- ) -> Tuple[FastAPI, str, str]:
- """
- Launches a simple web server that serves the demo. Can also be used to create a
- public link used by anyone to access the demo from their browser by setting share=True.
-
- Parameters:
- inline: whether to display in the interface inline in an iframe. Defaults to True in python notebooks; False otherwise.
- inbrowser: whether to automatically launch the interface in a new tab on the default browser.
- share: whether to create a publicly shareable link for the interface. Creates an SSH tunnel to make your UI accessible from anywhere. If not provided, it is set to False by default every time, except when running in Google Colab. When localhost is not accessible (e.g. Google Colab), setting share=False is not supported.
- debug: if True, blocks the main thread from running. If running in Google Colab, this is needed to print the errors in the cell output.
- auth: If provided, username and password (or list of username-password tuples) required to access interface. Can also provide function that takes username and password and returns True if valid login.
- auth_message: If provided, HTML message provided on login page.
- prevent_thread_lock: If True, the server will run without blocking the main thread (the demo will not keep the script alive). If False, launching from a script blocks the main thread until the server is closed.
- show_error: If True, any errors in the interface will be displayed in an alert modal and printed in the browser console log
- server_port: will start gradio app on this port (if available). Can be set by environment variable GRADIO_SERVER_PORT. If None, will search for an available port starting at 7860.
- server_name: to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".
- show_tips: if True, will occasionally show tips about new Gradio features
- enable_queue: DEPRECATED (use .queue() method instead.) if True, inference requests will be served through a queue instead of with parallel threads. Required for longer inference times (> 1min) to prevent timeout. The default option in HuggingFace Spaces is True. The default option elsewhere is False.
- max_threads: the maximum number of total threads that the Gradio app can generate in parallel. The default is inherited from the starlette library (currently 40). Applies whether the queue is enabled or not. But if queuing is enabled, this parameter is increased to be at least the concurrency_count of the queue.
- width: The width in pixels of the iframe element containing the interface (used if inline=True)
- height: The height in pixels of the iframe element containing the interface (used if inline=True)
- encrypt: If True, flagged data will be encrypted by key provided by creator at launch
- favicon_path: If a path to a file (.png, .gif, or .ico) is provided, it will be used as the favicon for the web page.
- ssl_keyfile: If a path to a file is provided, will use this as the private key file to create a local server running on https.
- ssl_certfile: If a path to a file is provided, will use this as the signed certificate for https. Needs to be provided if ssl_keyfile is provided.
- ssl_keyfile_password: If a password is provided, will use this with the ssl certificate for https.
- quiet: If True, suppresses most print statements.
- show_api: If True, shows the api docs in the footer of the app. Default True. If the queue is enabled, then api_open parameter of .queue() will determine if the api docs are shown, independent of the value of show_api.
- Returns:
- app: FastAPI app object that is running the demo
- local_url: Locally accessible link to the demo
- share_url: Publicly accessible link to the demo (if share=True, otherwise None)
- Example:
- import gradio as gr
- def reverse(text):
- return text[::-1]
- demo = gr.Interface(reverse, "text", "text")
- demo.launch(share=True, auth=("username", "password"))
- """
- self.dev_mode = False
- if (
- auth
- and not callable(auth)
- and not isinstance(auth[0], tuple)
- and not isinstance(auth[0], list)
- ):
- self.auth = [auth]
- else:
- self.auth = auth
- self.auth_message = auth_message
- self.show_tips = show_tips
- self.show_error = show_error
- self.height = height
- self.width = width
- self.favicon_path = favicon_path
- self.progress_tracking = any(
- block_fn.fn is not None and special_args(block_fn.fn)[1] is not None
- for block_fn in self.fns
- )
-
- if enable_queue is not None:
- self.enable_queue = enable_queue
- warnings.warn(
- "The `enable_queue` parameter has been deprecated. Please use the `.queue()` method instead.",
- DeprecationWarning,
- )
-
- if self.is_space:
- self.enable_queue = self.enable_queue is not False
- else:
- self.enable_queue = self.enable_queue is True
- if self.enable_queue and not hasattr(self, "_queue"):
- self.queue()
- self.show_api = self.api_open if self.enable_queue else show_api
-
- if not self.enable_queue and self.progress_tracking:
- raise ValueError("Progress tracking requires queuing to be enabled.")
-
- for dep in self.dependencies:
- for i in dep["cancels"]:
- if not self.queue_enabled_for_fn(i):
- raise ValueError(
- "In order to cancel an event, the queue for that event must be enabled! "
- "You may get this error by either 1) passing a function that uses the yield keyword "
- "into an interface without enabling the queue or 2) defining an event that cancels "
- "another event without enabling the queue. Both can be solved by calling .queue() "
- "before .launch()"
- )
- if dep["batch"] and (
- dep["queue"] is False
- or (dep["queue"] is None and not self.enable_queue)
- ):
- raise ValueError("In order to use batching, the queue must be enabled.")
-
- self.config = self.get_config_file()
- self.encrypt = encrypt
- self.max_threads = max(
- self._queue.max_thread_count if self.enable_queue else 0, max_threads
- )
- if self.encrypt:
- self.encryption_key = encryptor.get_key(
- getpass.getpass("Enter key for encryption: ")
- )
-
- if self.is_running:
- assert isinstance(
- self.local_url, str
- ), f"Invalid local_url: {self.local_url}"
- if not (quiet):
- print(
- "Rerunning server... use `close()` to stop if you need to change `launch()` parameters.\n----"
- )
- else:
- server_name, server_port, local_url, app, server = networking.start_server(
- self,
- server_name,
- server_port,
- ssl_keyfile,
- ssl_certfile,
- ssl_keyfile_password,
- )
- self.server_name = server_name
- self.local_url = local_url
- self.server_port = server_port
- self.server_app = app
- self.server = server
- self.is_running = True
- self.is_colab = utils.colab_check()
- self.protocol = (
- "https"
- if self.local_url.startswith("https") or self.is_colab
- else "http"
- )
-
- if self.enable_queue:
- self._queue.set_url(self.local_url)
-
- # Cannot run async functions in background other than app's scope.
- # Workaround by triggering the app endpoint
- requests.get(f"{self.local_url}startup-events")
-
- if self.enable_queue:
- if self.encrypt:
- raise ValueError("Cannot queue with encryption enabled.")
- utils.launch_counter()
-
- self.share = (
- share
- if share is not None
- else True
- if self.is_colab and self.enable_queue
- else False
- )
-
- # If running in a colab or not able to access localhost,
- # a shareable link must be created.
- if _frontend and (not networking.url_ok(self.local_url)) and (not self.share):
- raise ValueError(
- "When localhost is not accessible, a shareable link must be created. Please set share=True."
- )
-
- if self.is_colab:
- if not quiet:
- if debug:
- print(strings.en["COLAB_DEBUG_TRUE"])
- else:
- print(strings.en["COLAB_DEBUG_FALSE"])
- if not self.share:
- print(strings.en["COLAB_WARNING"].format(self.server_port))
- if self.enable_queue and not self.share:
- raise ValueError(
- "When using queueing in Colab, a shareable link must be created. Please set share=True."
- )
- else:
- print(
- strings.en["RUNNING_LOCALLY_SEPARATED"].format(
- self.protocol, self.server_name, self.server_port
- )
- )
-
- if self.share:
- if self.is_space:
- raise RuntimeError("Share is not supported when you are in Spaces")
- try:
- if self.share_url is None:
- self.share_url = networking.setup_tunnel(
- self.server_name, self.server_port
- )
- print(strings.en["SHARE_LINK_DISPLAY"].format(self.share_url))
- if not (quiet):
- print(strings.en["SHARE_LINK_MESSAGE"])
- except RuntimeError:
- if self.analytics_enabled:
- utils.error_analytics(self.ip_address, "Not able to set up tunnel")
- self.share_url = None
- self.share = False
- print(strings.en["COULD_NOT_GET_SHARE_LINK"])
- else:
- if not (quiet):
- print(strings.en["PUBLIC_SHARE_TRUE"])
- self.share_url = None
-
- if inbrowser:
- link = self.share_url if self.share and self.share_url else self.local_url
- webbrowser.open(link)
-
- # Check if running in a Python notebook in which case, display inline
- if inline is None:
- inline = utils.ipython_check() and (self.auth is None)
- if inline:
- if self.auth is not None:
- print(
- "Warning: authentication is not supported inline. Please"
- "click the link to access the interface in a new tab."
- )
- try:
- from IPython.display import HTML, Javascript, display # type: ignore
-
- if self.share and self.share_url:
- while not networking.url_ok(self.share_url):
- time.sleep(0.25)
- display(
- HTML(
- # iframe markup below is assumed; the original f-string content was stripped
- f'<div><iframe src="{self.share_url}" width="{self.width}" height="{self.height}" allow="autoplay; camera; microphone; clipboard-read; clipboard-write;" frameborder="0" allowfullscreen></iframe></div>'
- )
- )
- elif self.is_colab:
- # modified from /usr/local/lib/python3.7/dist-packages/google/colab/output/_util.py within Colab environment
- code = """(async (port, path, width, height, cache, element) => {
- if (!google.colab.kernel.accessAllowed && !cache) {
- return;
- }
- element.appendChild(document.createTextNode(''));
- const url = await google.colab.kernel.proxyPort(port, {cache});
-
- const external_link = document.createElement('div');
- // link markup below is assumed; the original template-literal content was stripped
- external_link.innerHTML = `
- <div style="font-family: monospace; margin-bottom: 0.5rem">Running on <a href=${new URL(path, url).toString()} target="_blank">${new URL(path, url).toString()}</a></div>
- `;
- element.appendChild(external_link);
-
- const iframe = document.createElement('iframe');
- iframe.src = new URL(path, url).toString();
- iframe.height = height;
- iframe.allow = "autoplay; camera; microphone; clipboard-read; clipboard-write;"
- iframe.width = width;
- iframe.style.border = 0;
- element.appendChild(iframe);
- })""" + "({port}, {path}, {width}, {height}, {cache}, window.element)".format(
- port=json.dumps(self.server_port),
- path=json.dumps("/"),
- width=json.dumps(self.width),
- height=json.dumps(self.height),
- cache=json.dumps(False),
- )
-
- display(Javascript(code))
- else:
- display(
- HTML(
- # iframe markup below is assumed; the original f-string content was stripped
- f'<div><iframe src="{self.local_url}" width="{self.width}" height="{self.height}" allow="autoplay; camera; microphone; clipboard-read; clipboard-write;" frameborder="0" allowfullscreen></iframe></div>'
- )
- )
- except ImportError:
- pass
-
- if getattr(self, "analytics_enabled", False):
- data = {
- "launch_method": "browser" if inbrowser else "inline",
- "is_google_colab": self.is_colab,
- "is_sharing_on": self.share,
- "share_url": self.share_url,
- "ip_address": self.ip_address,
- "enable_queue": self.enable_queue,
- "show_tips": self.show_tips,
- "server_name": server_name,
- "server_port": server_port,
- "is_spaces": self.is_space,
- "mode": self.mode,
- }
- utils.launch_analytics(data)
-
- utils.show_tip(self)
-
- # Block main thread if debug==True
- if debug or int(os.getenv("GRADIO_DEBUG", 0)) == 1:
- self.block_thread()
- # Block main thread if running in a script to stop script from exiting
- is_in_interactive_mode = bool(getattr(sys, "ps1", sys.flags.interactive))
-
- if not prevent_thread_lock and not is_in_interactive_mode:
- self.block_thread()
-
- return TupleNoPrint((self.server_app, self.local_url, self.share_url))
-
- def integrate(
- self,
- comet_ml: comet_ml.Experiment | None = None,
- wandb: ModuleType | None = None,
- mlflow: ModuleType | None = None,
- ) -> None:
- """
- A catch-all method for integrating with other libraries. This method should be run after launch()
- Parameters:
- comet_ml: If a comet_ml Experiment object is provided, will integrate with the experiment and appear on Comet dashboard
- wandb: If the wandb module is provided, will integrate with it and appear on WandB dashboard
- mlflow: If the mlflow module is provided, will integrate with the experiment and appear on ML Flow dashboard
- """
- analytics_integration = ""
- if comet_ml is not None:
- analytics_integration = "CometML"
- comet_ml.log_other("Created from", "Gradio")
- if self.share_url is not None:
- comet_ml.log_text("gradio: " + self.share_url)
- comet_ml.end()
- elif self.local_url:
- comet_ml.log_text("gradio: " + self.local_url)
- comet_ml.end()
- else:
- raise ValueError("Please run `launch()` first.")
- if wandb is not None:
- analytics_integration = "WandB"
- if self.share_url is not None:
- wandb.log(
- {
- "Gradio panel": wandb.Html(
- # iframe markup below is assumed; the original string content was stripped
- f'<iframe src="{self.share_url}" width="{self.width}" height="{self.height}" frameBorder="0"></iframe>'
- )
- }
- )
- else:
- print(
- "The WandB integration requires you to "
- "`launch(share=True)` first."
- )
- if mlflow is not None:
- analytics_integration = "MLFlow"
- if self.share_url is not None:
- mlflow.log_param("Gradio Interface Share Link", self.share_url)
- else:
- mlflow.log_param("Gradio Interface Local Link", self.local_url)
- if self.analytics_enabled and analytics_integration:
- data = {"integration": analytics_integration}
- utils.integration_analytics(data)
-
- def close(self, verbose: bool = True) -> None:
- """
- Closes the Interface that was launched and frees the port.
- """
- try:
- if self.enable_queue:
- self._queue.close()
- self.server.close()
- self.is_running = False
- if verbose:
- print("Closing server running on port: {}".format(self.server_port))
- except (AttributeError, OSError): # can't close if not running
- pass
-
- def block_thread(
- self,
- ) -> None:
- """Block main thread until interrupted by user."""
- try:
- while True:
- time.sleep(0.1)
- except (KeyboardInterrupt, OSError):
- print("Keyboard interruption in main thread... closing server.")
- self.server.close()
- for tunnel in CURRENT_TUNNELS:
- tunnel.kill()
-
- def attach_load_events(self):
- """Add a load event for every component whose initial value should be randomized."""
- if Context.root_block:
- for component in Context.root_block.blocks.values():
- if (
- isinstance(component, components.IOComponent)
- and component.load_event_to_attach
- ):
- load_fn, every = component.load_event_to_attach
- # Use set_event_trigger to avoid ambiguity between load class/instance method
- self.set_event_trigger(
- "load",
- load_fn,
- None,
- component,
- no_target=True,
- queue=False,
- every=every,
- )
-
- def startup_events(self):
- """Events that should be run when the app containing this block starts up."""
-
- if self.enable_queue:
- utils.run_coro_in_background(self._queue.start, (self.progress_tracking,))
- utils.run_coro_in_background(self.create_limiter)
-
- def queue_enabled_for_fn(self, fn_index: int):
- if self.dependencies[fn_index]["queue"] is None:
- return self.enable_queue
- return self.dependencies[fn_index]["queue"]
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/detect.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/detect.py
deleted file mode 100644
index 58b02802e6d9d3661c476dd88bf52b08b8445eef..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/detect.py
+++ /dev/null
@@ -1,259 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Run YOLOv5 detection inference on images, videos, directories, globs, YouTube, webcam, streams, etc.
-
-Usage - sources:
- $ python detect.py --weights yolov5s.pt --source 0 # webcam
- img.jpg # image
- vid.mp4 # video
- screen # screenshot
- path/ # directory
- 'path/*.jpg' # glob
- 'https://youtu.be/Zgi9g1ksQHc' # YouTube
- 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
-
-Usage - formats:
- $ python detect.py --weights yolov5s.pt # PyTorch
- yolov5s.torchscript # TorchScript
- yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
- yolov5s_openvino_model # OpenVINO
- yolov5s.engine # TensorRT
- yolov5s.mlmodel # CoreML (macOS-only)
- yolov5s_saved_model # TensorFlow SavedModel
- yolov5s.pb # TensorFlow GraphDef
- yolov5s.tflite # TensorFlow Lite
- yolov5s_edgetpu.tflite # TensorFlow Edge TPU
- yolov5s_paddle_model # PaddlePaddle
-"""
-
-import argparse
-import os
-import platform
-import sys
-from pathlib import Path
-
-import torch
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[0] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
-
-from models.common import DetectMultiBackend
-from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams
-from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2,
- increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh)
-from utils.plots import Annotator, colors, save_one_box
-from utils.torch_utils import select_device, smart_inference_mode
-
-
-@smart_inference_mode()
-def run(
- weights=ROOT / 'yolov5s.pt', # model path or triton URL
- source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam)
- data=ROOT / 'data/coco128.yaml', # dataset.yaml path
- imgsz=(640, 640), # inference size (height, width)
- conf_thres=0.25, # confidence threshold
- iou_thres=0.45, # NMS IOU threshold
- max_det=1000, # maximum detections per image
- device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
- view_img=False, # show results
- save_txt=False, # save results to *.txt
- save_conf=False, # save confidences in --save-txt labels
- save_crop=False, # save cropped prediction boxes
- nosave=False, # do not save images/videos
- classes=None, # filter by class: --class 0, or --class 0 2 3
- agnostic_nms=False, # class-agnostic NMS
- augment=False, # augmented inference
- visualize=False, # visualize features
- update=False, # update all models
- project=ROOT / 'runs/detect', # save results to project/name
- name='exp', # save results to project/name
- exist_ok=False, # existing project/name ok, do not increment
- line_thickness=3, # bounding box thickness (pixels)
- hide_labels=False, # hide labels
- hide_conf=False, # hide confidences
- half=False, # use FP16 half-precision inference
- dnn=False, # use OpenCV DNN for ONNX inference
- vid_stride=1, # video frame-rate stride
-):
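- # Programmatic usage sketch (paths and thresholds below are placeholders):
- #     run(weights='yolov5s.pt', source='data/images', conf_thres=0.4, save_txt=True)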
- source = str(source)
- save_img = not nosave and not source.endswith('.txt') # save inference images
- is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
- is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
- webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
- screenshot = source.lower().startswith('screen')
- if is_url and is_file:
- source = check_file(source) # download
-
- # Directories
- save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
- (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
-
- # Load model
- device = select_device(device)
- model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
- stride, names, pt = model.stride, model.names, model.pt
- imgsz = check_img_size(imgsz, s=stride) # check image size
-
- # Dataloader
- bs = 1 # batch_size
- if webcam:
- view_img = check_imshow(warn=True)
- dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
- bs = len(dataset)
- elif screenshot:
- dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
- else:
- dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
- vid_path, vid_writer = [None] * bs, [None] * bs
-
- # Run inference
- model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup
- seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
- for path, im, im0s, vid_cap, s in dataset:
- with dt[0]:
- im = torch.from_numpy(im).to(model.device)
- im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
- im /= 255 # 0 - 255 to 0.0 - 1.0
- if len(im.shape) == 3:
- im = im[None] # expand for batch dim
-
- # Inference
- with dt[1]:
- visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
- pred = model(im, augment=augment, visualize=visualize)
-
- # NMS
- with dt[2]:
- pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
-
- # Second-stage classifier (optional)
- # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
-
- # Process predictions
- for i, det in enumerate(pred): # per image
- seen += 1
- if webcam: # batch_size >= 1
- p, im0, frame = path[i], im0s[i].copy(), dataset.count
- s += f'{i}: '
- else:
- p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
-
- p = Path(p) # to Path
- save_path = str(save_dir / p.name) # im.jpg
- txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt
- s += '%gx%g ' % im.shape[2:] # print string
- gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
- imc = im0.copy() if save_crop else im0 # for save_crop
- annotator = Annotator(im0, line_width=line_thickness, example=str(names))
- if len(det):
- # Rescale boxes from img_size to im0 size
- det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()
-
- # Print results
- for c in det[:, 5].unique():
- n = (det[:, 5] == c).sum() # detections per class
- s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
-
- # Write results
- for *xyxy, conf, cls in reversed(det):
- if save_txt: # Write to file
- xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
- with open(f'{txt_path}.txt', 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- if save_img or save_crop or view_img: # Add bbox to image
- c = int(cls) # integer class
- label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
- annotator.box_label(xyxy, label, color=colors(c, True))
- if save_crop:
- save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
-
- # Stream results
- im0 = annotator.result()
- if view_img:
- if platform.system() == 'Linux' and p not in windows:
- windows.append(p)
- cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux)
- cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
- cv2.imshow(str(p), im0)
- cv2.waitKey(1) # 1 millisecond
-
- # Save results (image with detections)
- if save_img:
- if dataset.mode == 'image':
- cv2.imwrite(save_path, im0)
- else: # 'video' or 'stream'
- if vid_path[i] != save_path: # new video
- vid_path[i] = save_path
- if isinstance(vid_writer[i], cv2.VideoWriter):
- vid_writer[i].release() # release previous video writer
- if vid_cap: # video
- fps = vid_cap.get(cv2.CAP_PROP_FPS)
- w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- else: # stream
- fps, w, h = 30, im0.shape[1], im0.shape[0]
- save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos
- vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
- vid_writer[i].write(im0)
-
- # Print time (inference-only)
- LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
-
- # Print results
- t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image
- LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
- if save_txt or save_img:
- s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
- LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
- if update:
- strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning)
-
-
-def parse_opt():
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path or triton URL')
- parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)')
- parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
- parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
- parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
- parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--view-img', action='store_true', help='show results')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
- parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
- parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
- parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
- parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
- parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
- parser.add_argument('--augment', action='store_true', help='augmented inference')
- parser.add_argument('--visualize', action='store_true', help='visualize features')
- parser.add_argument('--update', action='store_true', help='update all models')
- parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')
- parser.add_argument('--name', default='exp', help='save results to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
- parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
- parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
- parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
- parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
- parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride')
- opt = parser.parse_args()
- opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
- print_args(vars(opt))
- return opt
-
-
-def main(opt):
- check_requirements(exclude=('tensorboard', 'thop'))
- run(**vars(opt))
-
-
-if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/attentions.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
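-            # descriptive note: keep only a band of width block_length around the diagonal, so each position attends locally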
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the last (column) dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Intoval/privateChatGPT/modules/llama_func.py b/spaces/Intoval/privateChatGPT/modules/llama_func.py
deleted file mode 100644
index aec202a851c8ec51d1a96ce23320919af0d22a95..0000000000000000000000000000000000000000
--- a/spaces/Intoval/privateChatGPT/modules/llama_func.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import os
-import logging
-
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_index_name(file_src):
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
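-    # the index name is the MD5 hash of all uploaded file contents, so identical uploads reuse the same cached index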
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-
-def block_split(text):
- blocks = []
- while len(text) > 0:
- blocks.append(Document(text[:1000]))
- text = text[1000:]
- return blocks
-
-
-def get_documents(file_src):
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filepath)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
- except:
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- text_raw = pdftext
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- for elem in text_list:
- documents.append(Document(elem))
- continue
- else:
- logging.debug("Loading text file...")
- with open(filepath, "r", encoding="utf-8") as f:
- text_raw = f.read()
-        except Exception as e:
-            logging.error(f"Error loading file {filename}: {e}")
-            continue  # skip this file; text_raw would be undefined below
- text = add_space(text_raw)
- # text = block_split(text)
- # documents += text
- documents += [Document(text)]
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
-        # Due to a silly design in one of the dependencies, an API key must always be present here
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- prompt_helper = PromptHelper(
- max_input_size=max_input_size,
- num_output=num_outputs,
- max_chunk_overlap=max_chunk_overlap,
- embedding_limit=embedding_limit,
-        chunk_size_limit=chunk_size_limit,
- separator=separator,
- )
- index_name = get_index_name(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
- logging.info("找到了缓存的索引文件,加载中……")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
- documents = get_documents(file_src)
- if local_embedding:
- embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
- else:
- embed_model = OpenAIEmbedding()
- logging.info("构建索引中……")
- with retrieve_proxy():
- service_context = ServiceContext.from_defaults(
- prompt_helper=prompt_helper,
- chunk_size_limit=chunk_size_limit,
- embed_model=embed_model,
- )
- index = GPTSimpleVectorIndex.from_documents(
- documents, service_context=service_context
- )
- logging.debug("索引构建完成!")
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
- logging.debug("索引已保存至本地!")
- return index
-
- except Exception as e:
- logging.error("索引构建失败!", e)
- print(e)
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
diff --git a/spaces/Izumazu/ProxyTest/README.md b/spaces/Izumazu/ProxyTest/README.md
deleted file mode 100644
index 26e7de60e5441bd41cd2353833d9615b6924f913..0000000000000000000000000000000000000000
--- a/spaces/Izumazu/ProxyTest/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: ProxyTest
-emoji: 📉
-colorFrom: blue
-colorTo: red
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/utils.py b/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/utils.py
deleted file mode 100644
index 741ccfe4d0d778c3199c586d368edc2882d4fff8..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/utils.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import torch
-import torch.nn.functional as F
-import numpy as np
-from scipy import interpolate
-
-
-class InputPadder:
- """ Pads images such that dimensions are divisible by 8 """
- def __init__(self, dims, mode='sintel'):
- self.ht, self.wd = dims[-2:]
- pad_ht = (((self.ht // 8) + 1) * 8 - self.ht) % 8
- pad_wd = (((self.wd // 8) + 1) * 8 - self.wd) % 8
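-        # 'sintel' mode pads height symmetrically; otherwise all height padding is added at the bottom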
- if mode == 'sintel':
- self._pad = [pad_wd//2, pad_wd - pad_wd//2, pad_ht//2, pad_ht - pad_ht//2]
- else:
- self._pad = [pad_wd//2, pad_wd - pad_wd//2, 0, pad_ht]
-
- def pad(self, *inputs):
- return [F.pad(x, self._pad, mode='replicate') for x in inputs]
-
- def unpad(self,x):
- ht, wd = x.shape[-2:]
- c = [self._pad[2], ht-self._pad[3], self._pad[0], wd-self._pad[1]]
- return x[..., c[0]:c[1], c[2]:c[3]]
-
-def forward_interpolate(flow):
- flow = flow.detach().cpu().numpy()
- dx, dy = flow[0], flow[1]
-
- ht, wd = dx.shape
- x0, y0 = np.meshgrid(np.arange(wd), np.arange(ht))
-
- x1 = x0 + dx
- y1 = y0 + dy
-
- x1 = x1.reshape(-1)
- y1 = y1.reshape(-1)
- dx = dx.reshape(-1)
- dy = dy.reshape(-1)
-
- valid = (x1 > 0) & (x1 < wd) & (y1 > 0) & (y1 < ht)
- x1 = x1[valid]
- y1 = y1[valid]
- dx = dx[valid]
- dy = dy[valid]
-
- flow_x = interpolate.griddata(
- (x1, y1), dx, (x0, y0), method='nearest', fill_value=0)
-
- flow_y = interpolate.griddata(
- (x1, y1), dy, (x0, y0), method='nearest', fill_value=0)
-
- flow = np.stack([flow_x, flow_y], axis=0)
- return torch.from_numpy(flow).float()
-
-
-def bilinear_sampler(img, coords, mode='bilinear', mask=False):
- """ Wrapper for grid_sample, uses pixel coordinates """
- H, W = img.shape[-2:]
- xgrid, ygrid = coords.split([1,1], dim=-1)
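-    # map pixel coordinates to the [-1, 1] range expected by F.grid_sample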
- xgrid = 2*xgrid/(W-1) - 1
- ygrid = 2*ygrid/(H-1) - 1
-
- grid = torch.cat([xgrid, ygrid], dim=-1)
- img = F.grid_sample(img, grid, align_corners=True)
-
- if mask:
- mask = (xgrid > -1) & (ygrid > -1) & (xgrid < 1) & (ygrid < 1)
- return img, mask.float()
-
- return img
-
-
-def coords_grid(batch, ht, wd, device):
- coords = torch.meshgrid(torch.arange(ht, device=device), torch.arange(wd, device=device))
- coords = torch.stack(coords[::-1], dim=0).float()
- return coords[None].repeat(batch, 1, 1, 1)
-
-
-def upflow8(flow, mode='bilinear'):
- new_size = (8 * flow.shape[2], 8 * flow.shape[3])
- return 8 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True)
diff --git a/spaces/JasonData/MathGenerator/app.py b/spaces/JasonData/MathGenerator/app.py
deleted file mode 100644
index 8b655f1a8b8f34f0fa2fc7a26ef8787f394b3898..0000000000000000000000000000000000000000
--- a/spaces/JasonData/MathGenerator/app.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import openai
-import gradio as gr
-import os
-
-STARTING_PROMPT = [{"role": "user", "content": """You are a math question generator. For each question, I will provide you with 4 things:
-    1. the main topic to be tested, 2. the question type, 3. the difficulty level, and 4. the required skillsets to solve the question.
-    You will then reply with an appropriate math question as well as the step-by-step solution for the question. Reply in four parts.
- 1. Question Information:
- Topic(s) Tested: ...
- Question Type: ...
- Difficulty Level: ...
- Skills required: ...
- Case Study: True/False
-
- 2. Question: ....
-
- 3. Step by Step Solution: ...
-
- 4. Final answer(s): ..."""},
- {"role": "assistant", "content": f"OK"}]
-
-openai.api_key = os.environ['OPENAI']
-
-
-def predict(input, msg_history=STARTING_PROMPT):
- msg_history.append({"role": "user", "content": f"{input}"})
- print(msg_history)
-
- completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msg_history, temperature=0.8)
- response = completion.choices[0].message.content
- msg_history.append({"role": "assistant", "content": f"{response}"})
-
- return [response, msg_history]
-
-
-def prompt_builder_predict(questionType=None, difficulty=0, topic=None, prerequisites=None, caseStudy=False, additionalPrompt=None, msg_history=STARTING_PROMPT, latex=False):
-
- level = ['Very Easy', 'Easy', 'Medium', 'Difficult', 'Extremely Difficult']
-    prompt = 'randomly generate a math question '
- if topic:
- prompt = prompt + f'on the topic of {topic}. '
- if difficulty:
- prompt = prompt + f'The difficulty level of the question should be: {level[difficulty-1]}, which means that it must require at least {difficulty} steps to solve. '
- if questionType:
- prompt = prompt + f'The question type should be in {questionType} format. '
- if prerequisites:
- prompt = prompt + f"This question will require to use the following methods to solve: {' and '.join(prerequisites)}. "
- if caseStudy:
-        prompt = prompt + 'This question must be in the form of a case study that tests the application of the topic in a real-life scenario. '
- if latex:
-        prompt = prompt + 'Display all mathematical parts of the question in LaTeX format. '
- if additionalPrompt:
- prompt = prompt + f"In addition, {additionalPrompt}."
-
- return predict(prompt, msg_history)
-
-
-with gr.Blocks() as demo:
-
- msg_history = gr.State(STARTING_PROMPT)
-
- gr.Markdown(
- """
- # Math Question Generator
-    This web app demonstrates an API plugin that can be used with LearningANTs to generate questions. The response will contain three parts: [Question, Step by Step Solution, Final answer].
- """)
-
- with gr.Row():
- questionType = gr.Radio(["MCQ", "True or False", "Short Response"], value='Short Response', label="Question Type")
-        difficulty = gr.Slider(1, 5, value=3, step=1, label="Difficulty Level", info="Choose between 1 and 5")
- with gr.Row():
- topic = gr.Dropdown(["Simultaneous Equation", "Linear Equation", "Derivatives", "Integrals", "Optimization"], value='Simultaneous Equation', label="Main Testing Topic")
- prerequisites = gr.Dropdown(["Elimination", "Subsitution", "Linear Equation", "Algebra", "Geometry", "Trigonometry", "Logarithms", "Power Rule", "Sum Rule", 'Difference Rule', "Product Rule", "Quotient Rule", 'Reciprocal Rule', "Chain Rule", "Implicit Differentiation", "Logarithmic Differentiation"], multiselect=True, interactive=True, label="Prerequisite Topics")
-
- with gr.Row():
- caseStudy = gr.Checkbox(label="Case Study", info="Does this question test the application of theory in real life scenarios?")
- latex = gr.Checkbox(label="LaTeX", value=True, info="Display all equations in LaTeX format?")
-
- additionalInfo = gr.Textbox(label="Additional information (prompt)", placeholder="Give a scenario where Jim and John are working in a garden....")
-
- gen_btn = gr.Button("Generate A New Question")
-
- with gr.Row():
- question = gr.TextArea(label="Generated Question")
-
- gen_btn.click(fn=prompt_builder_predict, inputs = [questionType, difficulty, topic, prerequisites, caseStudy, additionalInfo, msg_history, latex], outputs= [question, msg_history])
-
- with gr.Row():
-        prompt = gr.Textbox(label='Additional Prompt', info='Not satisfied with the result? Enter instructions to modify the question.', placeholder='Include the case study of....', visible=False)
-
- with gr.Row():
- modify_btn = gr.Button('Modify Question', visible=False)
- modify_btn.click(fn=predict, inputs = [prompt, msg_history], outputs= [question, msg_history])
-
-
- # restart_btn = gr.Button("Generate Another Question", visible=False)
-
-
- def show_display():
- return gr.update(visible=True)
- def hide_display():
- return gr.update(visible=False)
- def clear_value():
- return gr.update(value='')
-
- question.change(fn=show_display, outputs=prompt)
- question.change(fn=show_display, outputs=modify_btn)
-
-demo.launch( share=False)
\ No newline at end of file
diff --git a/spaces/Jdnsn/Alexander/README.md b/spaces/Jdnsn/Alexander/README.md
deleted file mode 100644
index 88afff7444a3655c34e6a9375a6aba9118f755d1..0000000000000000000000000000000000000000
--- a/spaces/Jdnsn/Alexander/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Alexander
-emoji: 👀
-colorFrom: blue
-colorTo: yellow
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/README.md b/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/README.md
deleted file mode 100644
index 002f78c8c984c65b9bbf95a2eb2a8df9536aad56..0000000000000000000000000000000000000000
--- a/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Multilingual Automatic Speech Recognition-56lang
-emoji: ⚡
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_33966KB.py
deleted file mode 100644
index b8986f968dc5383e65d35aac6e4367299de3378b..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_33966KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_33966KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16, 32)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 16)
- self.stg1_high_band_net = BaseASPPNet(2, 16)
-
- self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(8, 16)
-
- self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(16, 32)
-
- self.out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(16, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(16, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
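-                # make the mask more suppressive by raising it to a power > 1; bins below split_bin get a gentler exponent than those above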
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/KenjieDec/GPEN/retinaface/facemodels/net.py b/spaces/KenjieDec/GPEN/retinaface/facemodels/net.py
deleted file mode 100644
index beb6040b24258f8b96020c1c9fc2610819718017..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/retinaface/facemodels/net.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import time
-import torch
-import torch.nn as nn
-import torchvision.models._utils as _utils
-import torchvision.models as models
-import torch.nn.functional as F
-from torch.autograd import Variable
-
-def conv_bn(inp, oup, stride = 1, leaky = 0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True)
- )
-
-def conv_bn_no_relu(inp, oup, stride):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
- nn.BatchNorm2d(oup),
- )
-
-def conv_bn1X1(inp, oup, stride, leaky=0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True)
- )
-
-def conv_dw(inp, oup, stride, leaky=0.1):
- return nn.Sequential(
- nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
- nn.BatchNorm2d(inp),
- nn.LeakyReLU(negative_slope= leaky,inplace=True),
-
- nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope= leaky,inplace=True),
- )
-
-class SSH(nn.Module):
- def __init__(self, in_channel, out_channel):
- super(SSH, self).__init__()
- assert out_channel % 4 == 0
- leaky = 0
- if (out_channel <= 64):
- leaky = 0.1
- self.conv3X3 = conv_bn_no_relu(in_channel, out_channel//2, stride=1)
-
- self.conv5X5_1 = conv_bn(in_channel, out_channel//4, stride=1, leaky = leaky)
- self.conv5X5_2 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1)
-
- self.conv7X7_2 = conv_bn(out_channel//4, out_channel//4, stride=1, leaky = leaky)
- self.conv7x7_3 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1)
-
- def forward(self, input):
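-        # three parallel context branches (a 3x3 conv, plus 5x5 and 7x7 receptive fields emulated with stacked 3x3 convs), concatenated along channels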
- conv3X3 = self.conv3X3(input)
-
- conv5X5_1 = self.conv5X5_1(input)
- conv5X5 = self.conv5X5_2(conv5X5_1)
-
- conv7X7_2 = self.conv7X7_2(conv5X5_1)
- conv7X7 = self.conv7x7_3(conv7X7_2)
-
- out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1)
- out = F.relu(out)
- return out
-
-class FPN(nn.Module):
- def __init__(self,in_channels_list,out_channels):
- super(FPN,self).__init__()
- leaky = 0
- if (out_channels <= 64):
- leaky = 0.1
- self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride = 1, leaky = leaky)
- self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride = 1, leaky = leaky)
- self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride = 1, leaky = leaky)
-
- self.merge1 = conv_bn(out_channels, out_channels, leaky = leaky)
- self.merge2 = conv_bn(out_channels, out_channels, leaky = leaky)
-
- def forward(self, input):
- # names = list(input.keys())
- input = list(input.values())
-
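-        # lateral 1x1 convs on each pyramid level, then a top-down upsample-and-add pass followed by 3x3 merge convs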
- output1 = self.output1(input[0])
- output2 = self.output2(input[1])
- output3 = self.output3(input[2])
-
- up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode="nearest")
- output2 = output2 + up3
- output2 = self.merge2(output2)
-
- up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode="nearest")
- output1 = output1 + up2
- output1 = self.merge1(output1)
-
- out = [output1, output2, output3]
- return out
-
-
-
-class MobileNetV1(nn.Module):
- def __init__(self):
- super(MobileNetV1, self).__init__()
- self.stage1 = nn.Sequential(
- conv_bn(3, 8, 2, leaky = 0.1), # 3
- conv_dw(8, 16, 1), # 7
- conv_dw(16, 32, 2), # 11
- conv_dw(32, 32, 1), # 19
- conv_dw(32, 64, 2), # 27
- conv_dw(64, 64, 1), # 43
- )
- self.stage2 = nn.Sequential(
- conv_dw(64, 128, 2), # 43 + 16 = 59
- conv_dw(128, 128, 1), # 59 + 32 = 91
- conv_dw(128, 128, 1), # 91 + 32 = 123
- conv_dw(128, 128, 1), # 123 + 32 = 155
- conv_dw(128, 128, 1), # 155 + 32 = 187
- conv_dw(128, 128, 1), # 187 + 32 = 219
- )
- self.stage3 = nn.Sequential(
- conv_dw(128, 256, 2), # 219 +3 2 = 241
- conv_dw(256, 256, 1), # 241 + 64 = 301
- )
- self.avg = nn.AdaptiveAvgPool2d((1,1))
- self.fc = nn.Linear(256, 1000)
-
- def forward(self, x):
- x = self.stage1(x)
- x = self.stage2(x)
- x = self.stage3(x)
- x = self.avg(x)
- # x = self.model(x)
- x = x.view(-1, 256)
- x = self.fc(x)
- return x
-
diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/train.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/train.py
deleted file mode 100644
index d8cc170c415a6f56703dfee23f89a3c9d06511fa..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/train.py
+++ /dev/null
@@ -1,258 +0,0 @@
-from datetime import datetime
-from functools import partial
-from pathlib import Path
-
-import torch
-import torch.nn.functional as F
-from torch import optim
-from torch.utils.data import DataLoader
-
-from synthesizer import audio
-from synthesizer.models.tacotron import Tacotron
-from synthesizer.synthesizer_dataset import SynthesizerDataset, collate_synthesizer
-from synthesizer.utils import ValueWindow, data_parallel_workaround
-from synthesizer.utils.plot import plot_spectrogram
-from synthesizer.utils.symbols import symbols
-from synthesizer.utils.text import sequence_to_text
-from vocoder.display import *
-
-
-def np_now(x: torch.Tensor): return x.detach().cpu().numpy()
-
-
-def time_string():
- return datetime.now().strftime("%Y-%m-%d %H:%M")
-
-
-def train(run_id: str, syn_dir: Path, models_dir: Path, save_every: int, backup_every: int, force_restart: bool,
- hparams):
- models_dir.mkdir(exist_ok=True)
-
- model_dir = models_dir.joinpath(run_id)
- plot_dir = model_dir.joinpath("plots")
- wav_dir = model_dir.joinpath("wavs")
- mel_output_dir = model_dir.joinpath("mel-spectrograms")
- meta_folder = model_dir.joinpath("metas")
- model_dir.mkdir(exist_ok=True)
- plot_dir.mkdir(exist_ok=True)
- wav_dir.mkdir(exist_ok=True)
- mel_output_dir.mkdir(exist_ok=True)
- meta_folder.mkdir(exist_ok=True)
-
- weights_fpath = model_dir / f"synthesizer.pt"
- metadata_fpath = syn_dir.joinpath("train.txt")
-
- print("Checkpoint path: {}".format(weights_fpath))
- print("Loading training data from: {}".format(metadata_fpath))
- print("Using model: Tacotron")
-
- # Bookkeeping
- time_window = ValueWindow(100)
- loss_window = ValueWindow(100)
-
- # From WaveRNN/train_tacotron.py
- if torch.cuda.is_available():
- device = torch.device("cuda")
-
- for session in hparams.tts_schedule:
- _, _, _, batch_size = session
- if batch_size % torch.cuda.device_count() != 0:
- raise ValueError("`batch_size` must be evenly divisible by n_gpus!")
- else:
- device = torch.device("cpu")
- print("Using device:", device)
-
- # Instantiate Tacotron Model
- print("\nInitialising Tacotron Model...\n")
- model = Tacotron(embed_dims=hparams.tts_embed_dims,
- num_chars=len(symbols),
- encoder_dims=hparams.tts_encoder_dims,
- decoder_dims=hparams.tts_decoder_dims,
- n_mels=hparams.num_mels,
- fft_bins=hparams.num_mels,
- postnet_dims=hparams.tts_postnet_dims,
- encoder_K=hparams.tts_encoder_K,
- lstm_dims=hparams.tts_lstm_dims,
- postnet_K=hparams.tts_postnet_K,
- num_highways=hparams.tts_num_highways,
- dropout=hparams.tts_dropout,
- stop_threshold=hparams.tts_stop_threshold,
- speaker_embedding_size=hparams.speaker_embedding_size).to(device)
-
- # Initialize the optimizer
- optimizer = optim.Adam(model.parameters())
-
- # Load the weights
- if force_restart or not weights_fpath.exists():
- print("\nStarting the training of Tacotron from scratch\n")
- model.save(weights_fpath)
-
- # Embeddings metadata
- char_embedding_fpath = meta_folder.joinpath("CharacterEmbeddings.tsv")
- with open(char_embedding_fpath, "w", encoding="utf-8") as f:
- for symbol in symbols:
- if symbol == " ":
- symbol = "\\s" # For visual purposes, swap space with \s
-
- f.write("{}\n".format(symbol))
-
- else:
- print("\nLoading weights at %s" % weights_fpath)
- model.load(weights_fpath, optimizer)
- print("Tacotron weights loaded from step %d" % model.step)
-
- # Initialize the dataset
- metadata_fpath = syn_dir.joinpath("train.txt")
- mel_dir = syn_dir.joinpath("mels")
- embed_dir = syn_dir.joinpath("embeds")
- dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams)
-
- for i, session in enumerate(hparams.tts_schedule):
- current_step = model.get_step()
-
- r, lr, max_step, batch_size = session
-
- training_steps = max_step - current_step
-
- # Do we need to change to the next session?
- if current_step >= max_step:
- # Are there no further sessions than the current one?
- if i == len(hparams.tts_schedule) - 1:
- # We have completed training. Save the model and exit
- model.save(weights_fpath, optimizer)
- break
- else:
- # There is a following session, go to it
- continue
-
- model.r = r
-
- # Begin the training
- simple_table([(f"Steps with r={r}", str(training_steps // 1000) + "k Steps"),
- ("Batch Size", batch_size),
- ("Learning Rate", lr),
- ("Outputs/Step (r)", model.r)])
-
- for p in optimizer.param_groups:
- p["lr"] = lr
-
- collate_fn = partial(collate_synthesizer, r=r, hparams=hparams)
- data_loader = DataLoader(dataset, batch_size, shuffle=True, num_workers=2, collate_fn=collate_fn)
-
- total_iters = len(dataset)
- steps_per_epoch = np.ceil(total_iters / batch_size).astype(np.int32)
- epochs = np.ceil(training_steps / steps_per_epoch).astype(np.int32)
-
- for epoch in range(1, epochs+1):
- for i, (texts, mels, embeds, idx) in enumerate(data_loader, 1):
- start_time = time.time()
-
- # Generate stop tokens for training
- stop = torch.ones(mels.shape[0], mels.shape[2])
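-                # stop target is 0 before the final mel frame of each utterance (frame count taken from the metadata) and 1 from the final frame onwards, including padding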
- for j, k in enumerate(idx):
- stop[j, :int(dataset.metadata[k][4])-1] = 0
-
- texts = texts.to(device)
- mels = mels.to(device)
- embeds = embeds.to(device)
- stop = stop.to(device)
-
- # Forward pass
- # Parallelize model onto GPUS using workaround due to python bug
- if device.type == "cuda" and torch.cuda.device_count() > 1:
- m1_hat, m2_hat, attention, stop_pred = data_parallel_workaround(model, texts, mels, embeds)
- else:
- m1_hat, m2_hat, attention, stop_pred = model(texts, mels, embeds)
-
- # Backward pass
- m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels)
- m2_loss = F.mse_loss(m2_hat, mels)
- stop_loss = F.binary_cross_entropy(stop_pred, stop)
-
- loss = m1_loss + m2_loss + stop_loss
-
- optimizer.zero_grad()
- loss.backward()
-
- if hparams.tts_clip_grad_norm is not None:
- grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), hparams.tts_clip_grad_norm)
- if np.isnan(grad_norm.cpu()):
- print("grad_norm was NaN!")
-
- optimizer.step()
-
- time_window.append(time.time() - start_time)
- loss_window.append(loss.item())
-
- step = model.get_step()
- k = step // 1000
-
- msg = f"| Epoch: {epoch}/{epochs} ({i}/{steps_per_epoch}) | Loss: {loss_window.average:#.4} | " \
- f"{1./time_window.average:#.2} steps/s | Step: {k}k | "
- stream(msg)
-
- # Backup or save model as appropriate
- if backup_every != 0 and step % backup_every == 0 :
- backup_fpath = weights_fpath.parent / f"synthesizer_{k:06d}.pt"
- model.save(backup_fpath, optimizer)
-
- if save_every != 0 and step % save_every == 0 :
- # Must save latest optimizer state to ensure that resuming training
- # doesn't produce artifacts
- model.save(weights_fpath, optimizer)
-
- # Evaluate model to generate samples
- epoch_eval = hparams.tts_eval_interval == -1 and i == steps_per_epoch # If epoch is done
- step_eval = hparams.tts_eval_interval > 0 and step % hparams.tts_eval_interval == 0 # Every N steps
- if epoch_eval or step_eval:
- for sample_idx in range(hparams.tts_eval_num_samples):
- # At most, generate samples equal to number in the batch
- if sample_idx + 1 <= len(texts):
- # Remove padding from mels using frame length in metadata
- mel_length = int(dataset.metadata[idx[sample_idx]][4])
- mel_prediction = np_now(m2_hat[sample_idx]).T[:mel_length]
- target_spectrogram = np_now(mels[sample_idx]).T[:mel_length]
- attention_len = mel_length // model.r
-
- eval_model(attention=np_now(attention[sample_idx][:, :attention_len]),
- mel_prediction=mel_prediction,
- target_spectrogram=target_spectrogram,
- input_seq=np_now(texts[sample_idx]),
- step=step,
- plot_dir=plot_dir,
- mel_output_dir=mel_output_dir,
- wav_dir=wav_dir,
- sample_num=sample_idx + 1,
- loss=loss,
- hparams=hparams)
-
- # Break out of loop to update training schedule
- if step >= max_step:
- break
-
- # Add line break after every epoch
- print("")
-
-
-def eval_model(attention, mel_prediction, target_spectrogram, input_seq, step,
- plot_dir, mel_output_dir, wav_dir, sample_num, loss, hparams):
- # Save some results for evaluation
- attention_path = str(plot_dir.joinpath("attention_step_{}_sample_{}".format(step, sample_num)))
- save_attention(attention, attention_path)
-
- # save predicted mel spectrogram to disk (debug)
- mel_output_fpath = mel_output_dir.joinpath("mel-prediction-step-{}_sample_{}.npy".format(step, sample_num))
- np.save(str(mel_output_fpath), mel_prediction, allow_pickle=False)
-
- # save griffin lim inverted wav for debug (mel -> wav)
- wav = audio.inv_mel_spectrogram(mel_prediction.T, hparams)
- wav_fpath = wav_dir.joinpath("step-{}-wave-from-mel_sample_{}.wav".format(step, sample_num))
- audio.save_wav(wav, str(wav_fpath), sr=hparams.sample_rate)
-
- # save real and predicted mel-spectrogram plot to disk (control purposes)
- spec_fpath = plot_dir.joinpath("step-{}-mel-spectrogram_sample_{}.png".format(step, sample_num))
- title_str = "{}, {}, step={}, loss={:.5f}".format("Tacotron", time_string(), step, loss)
- plot_spectrogram(mel_prediction, str(spec_fpath), title=title_str,
- target_spectrogram=target_spectrogram,
- max_len=target_spectrogram.size // hparams.num_mels)
- print("Input at step {}: {}".format(step, sequence_to_text(input_seq)))
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/condinst.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/condinst.py
deleted file mode 100644
index ed2dc99eea3faf7b03a3970d46a372d28eb89fe1..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/condinst.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmdet.registry import MODELS
-from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig
-from .single_stage_instance_seg import SingleStageInstanceSegmentor
-
-
-@MODELS.register_module()
-class CondInst(SingleStageInstanceSegmentor):
- """Implementation of `CondInst `_"""
-
- def __init__(self,
- backbone: ConfigType,
- neck: ConfigType,
- bbox_head: ConfigType,
- mask_head: ConfigType,
- train_cfg: OptConfigType = None,
- test_cfg: OptConfigType = None,
- data_preprocessor: OptConfigType = None,
- init_cfg: OptMultiConfig = None) -> None:
- super().__init__(
- backbone=backbone,
- neck=neck,
- bbox_head=bbox_head,
- mask_head=mask_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- data_preprocessor=data_preprocessor,
- init_cfg=init_cfg)
diff --git a/spaces/KyanChen/RSPrompter/mmpl/__init__.py b/spaces/KyanChen/RSPrompter/mmpl/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/LanguageBind/LanguageBind/model/process_clip.py b/spaces/LanguageBind/LanguageBind/model/process_clip.py
deleted file mode 100644
index a4956a852ccbfc705a322c15f1950cf2dceb86a5..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/model/process_clip.py
+++ /dev/null
@@ -1,639 +0,0 @@
-import logging
-import math
-from typing import Optional, Tuple
-from einops import rearrange
-from peft import LoraConfig, get_peft_model
-from transformers import CLIPConfig
-from transformers.models.clip.modeling_clip import CLIPEncoderLayer as SpatialCLIPEncoderLayer, CLIPAttention, CLIPMLP
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from training.distributed import is_master
-
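-# module-level settings (number of frames, patch dropout) read and written through the helper functions below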
-aaa = {'NUM_FRAMES': 1, 'PATCH_DROPOUT': 0.0}
-
-def set_global_value(k, v):
- global aaa
- aaa[k] = v
-
-def get_global_value():
- global aaa
- return aaa
-
-# @dataclass
-# class CLIPVisionCfg:
-# layers: Union[Tuple[int, int, int, int], int] = 12
-# width: int = 768
-# head_width: int = 64
-# mlp_ratio: float = 4.0
-# patch_size: int = 16
-# image_size: Union[Tuple[int, int], int] = 224
-# cast_dtype: str = None
-# num_frames: int = 2
-#
-# ls_init_value: Optional[float] = None # layer scale initial value
-# patch_dropout: float = 0. # what fraction of patches to dropout during training (0 would mean disabled and no patches dropped) - 0.5 to 0.75 recommended in the paper for optimal results
-# input_patchnorm: bool = False # whether to use dual patchnorm - would only apply the input layernorm on each patch, as post-layernorm already exist in original clip vit design
-# global_average_pool: bool = False # whether to global average pool the last embedding layer, instead of using CLS token (https://arxiv.org/abs/2205.01580)
-# attentional_pool: bool = False # whether to use attentional pooler in the last embedding layer
-# n_queries: int = 256 # n_queries for attentional pooler
-# attn_pooler_heads: int = 8 # n heads for attentional_pooling
-# output_tokens: bool = False
-#
-# timm_model_name: str = None # a valid model name overrides layers, width, patch_size
-# timm_model_pretrained: bool = False # use (imagenet) pretrained weights for named model
-# timm_pool: str = 'avg' # feature pooling for timm model ('abs_attn', 'rot_attn', 'avg', '')
-# timm_proj: str = 'linear' # linear projection for timm model output ('linear', 'mlp', '')
-# timm_proj_bias: bool = False # enable bias final projection
-# timm_drop: float = 0. # head dropout
-# timm_drop_path: Optional[float] = None # backbone stochastic depth
-
-# class Video_VisionTransformer(nn.Module):
-# output_tokens: torch.jit.Final[bool]
-#
-# def __init__(
-# self,
-# num_frames: int,
-# image_size: int,
-# patch_size: int,
-# width: int,
-# layers: int,
-# heads: int,
-# mlp_ratio: float,
-# ls_init_value: float = None,
-# global_average_pool: bool = False,
-# attentional_pool: bool = False,
-# n_queries: int = 256,
-# attn_pooler_heads: int = 8,
-# output_dim: int = 512,
-# patch_dropout: float = 0.,
-# input_patchnorm: bool = False,
-# act_layer: Callable = nn.GELU,
-# norm_layer: Callable = LayerNorm,
-# output_tokens: bool = False
-# ):
-# super().__init__()
-# self.output_tokens = output_tokens
-# image_height, image_width = self.image_size = to_2tuple(image_size)
-# patch_height, patch_width = self.patch_size = to_2tuple(patch_size)
-# self.grid_size = (image_height // patch_height, image_width // patch_width)
-# self.output_dim = output_dim
-#
-# # whether to layernorm each patch, as done in dual patchnorm paper - https://arxiv.org/abs/2302.01327v1
-# self.input_patchnorm = input_patchnorm
-#
-# if input_patchnorm:
-# patch_input_dim = patch_height * patch_width * 3
-# self.patchnorm_pre_ln = LayerNorm(patch_input_dim)
-# self.conv1 = nn.Linear(patch_input_dim, width)
-# else:
-# self.patchnorm_pre_ln = nn.Identity()
-# self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size,
-# bias=False)
-#
-# # class embeddings and positional embeddings
-# self.scale = scale = width ** -0.5
-# self.class_embedding = nn.Parameter(scale * torch.randn(width))
-# self.positional_embedding = nn.Parameter(scale * torch.randn(self.grid_size[0] * self.grid_size[1] + 1, width))
-#
-# self.temporal_embedding = nn.Parameter(torch.zeros(1, num_frames, width))
-# # setting a patch_dropout of 0. would mean it is disabled and this function would be the identity fn
-# self.patch_dropout = PatchDropout(patch_dropout) if patch_dropout > 0. else nn.Identity()
-#
-# self.ln_pre = norm_layer(width)
-# self.transformer = Transformer(
-# width,
-# layers,
-# heads,
-# mlp_ratio,
-# ls_init_value=ls_init_value,
-# act_layer=act_layer,
-# norm_layer=norm_layer,
-# )
-#
-# self.global_average_pool = global_average_pool
-# if attentional_pool:
-# self.attn_pool = AttentionalPooler(output_dim, width, n_head=attn_pooler_heads, n_queries=n_queries)
-# self.ln_post = norm_layer(output_dim)
-# self.proj = nn.Parameter(scale * torch.randn(output_dim, output_dim))
-# else:
-# self.attn_pool = None
-# self.ln_post = norm_layer(width)
-# self.proj = nn.Parameter(scale * torch.randn(width, output_dim))
-#
-# self.init_parameters()
-#
-#
-# def lock(self, unlocked_groups=0, freeze_bn_stats=False):
-# for param in self.parameters():
-# param.requires_grad = False
-#
-# if unlocked_groups != 0:
-# groups = [
-# [
-# self.conv1,
-# self.positional_embedding,
-# self.ln_pre,
-# ],
-# *zip(self.transformer.resblocks[:-1], [self.class_embedding for i in range(len(self.transformer.resblocks[:-1]))]),
-# [
-# self.class_embedding,
-# self.transformer.resblocks[-1],
-# self.ln_post,
-# ],
-# [self.proj, self.temporal_embedding]
-# ]
-#
-# def _unlock(x):
-# if isinstance(x, Sequence):
-# for g in x:
-# _unlock(g)
-# else:
-# if isinstance(x, torch.nn.Parameter):
-# x.requires_grad = True
-# else:
-# for p in x.parameters():
-# p.requires_grad = True
-#
-# _unlock(groups[-unlocked_groups:])
-#
-# def init_parameters(self):
-# # FIXME OpenAI CLIP did not define an init for the VisualTransformer
-# # TODO experiment if default PyTorch init, below, or alternate init is best.
-#
-# nn.init.normal_(self.temporal_embedding, std=self.scale)
-# # nn.init.normal_(self.class_embedding, std=self.scale)
-# # nn.init.normal_(self.positional_embedding, std=self.scale)
-# #
-# # proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
-# # attn_std = self.transformer.width ** -0.5
-# # fc_std = (2 * self.transformer.width) ** -0.5
-# # for block in self.transformer.resblocks:
-# # nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
-# # nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
-# # nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
-# # nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)
-# #
-# # if self.text_projection is not None:
-# # nn.init.normal_(self.text_projection, std=self.scale)
-# # pass
-#
-# @torch.jit.ignore
-# def set_grad_checkpointing(self, enable=True):
-# self.transformer.grad_checkpointing = enable
-#
-# def _global_pool(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
-# if self.global_average_pool:
-# return x.mean(dim=1), x
-# else:
-# return x[:, 0], x[:, 1:]
-#
-# def forward(self, x: torch.Tensor):
-# # print('input img', x.shape)
-# B, _, T, _, _ = x.shape
-# x = rearrange(x, 'b c t h w -> (b t) c h w')
-# # to patches - whether to use dual patchnorm - https://arxiv.org/abs/2302.01327v1
-# if self.input_patchnorm:
-# # einops - rearrange(x, 'b c (h p1) (w p2) -> b (h w) (c p1 p2)')
-# x = x.reshape(x.shape[0], x.shape[1], self.grid_size[0], self.patch_size[0], self.grid_size[1],
-# self.patch_size[1])
-# x = x.permute(0, 2, 4, 1, 3, 5)
-# x = x.reshape(x.shape[0], self.grid_size[0] * self.grid_size[1], -1)
-# x = self.patchnorm_pre_ln(x)
-# x = self.conv1(x)
-# else:
-# x = self.conv1(x) # shape = [*, width, grid, grid]
-# x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
-# x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
-#
-# # print('embed img', x.shape)
-# # class embeddings and positional embeddings
-# x = torch.cat(
-# [self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device),
-# x], dim=1) # shape = [*, grid ** 2 + 1, width]
-# x = x + self.positional_embedding.to(x.dtype)
-#
-# n = x.shape[1]
-# x = rearrange(x, '(b t) n d -> (b n) t d', t=T)
-# x = x + self.temporal_embedding[:, :T, :]
-# x = rearrange(x, '(b n) t d -> (b t) n d', n=n)
-#
-# # a patch_dropout of 0. would mean it is disabled and this function would do nothing but return what was passed in
-# x = self.patch_dropout(x)
-# x = self.ln_pre(x)
-#
-# # print('patch_dropout img', x.shape)
-# x = x.permute(1, 0, 2) # NLD -> LND
-# # print('permute img', x.shape)
-# x = self.transformer(x)
-# x = x.permute(1, 0, 2) # LND -> NLD
-#
-# if self.attn_pool is not None:
-# x = self.attn_pool(x)
-# x = self.ln_post(x)
-# pooled, tokens = self._global_pool(x)
-# else:
-# pooled, tokens = self._global_pool(x)
-# pooled = self.ln_post(pooled) # bt, d
-#
-# pooled = pooled.reshape(B, T, -1).mean(1)
-# if self.proj is not None:
-# pooled = pooled @ self.proj
-#
-# if self.output_tokens:
-# return pooled, tokens
-#
-# return pooled
-#
-# def _build_vision_tower(
-# embed_dim: int,
-# vision_cfg: CLIPVisionCfg,
-# quick_gelu: bool = False,
-# cast_dtype: Optional[torch.dtype] = None
-# ):
-# if isinstance(vision_cfg, dict):
-# vision_cfg = CLIPVisionCfg(**vision_cfg)
-#
-# # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more
-# # memory efficient in recent PyTorch releases (>= 1.10).
-# # NOTE: timm models always use native GELU regardless of quick_gelu flag.
-# act_layer = QuickGELU if quick_gelu else nn.GELU
-#
-# vision_heads = vision_cfg.width // vision_cfg.head_width
-# norm_layer = LayerNormFp32 if cast_dtype in (torch.float16, torch.bfloat16) else LayerNorm
-# visual = Video_VisionTransformer(
-# num_frames=vision_cfg.num_frames,
-# image_size=vision_cfg.image_size,
-# patch_size=vision_cfg.patch_size,
-# width=vision_cfg.width,
-# layers=vision_cfg.layers,
-# heads=vision_heads,
-# mlp_ratio=vision_cfg.mlp_ratio,
-# ls_init_value=vision_cfg.ls_init_value,
-# patch_dropout=vision_cfg.patch_dropout,
-# input_patchnorm=vision_cfg.input_patchnorm,
-# global_average_pool=vision_cfg.global_average_pool,
-# attentional_pool=vision_cfg.attentional_pool,
-# n_queries=vision_cfg.n_queries,
-# attn_pooler_heads=vision_cfg.attn_pooler_heads,
-# output_tokens=vision_cfg.output_tokens,
-# output_dim=embed_dim,
-# act_layer=act_layer,
-# norm_layer=norm_layer,
-# )
-#
-# return visual
-
-
-
-
-class CLIPEncoderLayer(SpatialCLIPEncoderLayer):
- def __init__(self, config: CLIPConfig):
- super().__init__(config)
- self.temporal_embedding = nn.Parameter(torch.zeros(1, config.num_frames, config.hidden_size))
- nn.init.normal_(self.temporal_embedding, std=config.hidden_size ** -0.5)
-
- self.embed_dim = config.hidden_size
- self.temporal_attn = CLIPAttention(config)
- self.temporal_mlp = CLIPMLP(config)
- # self.t_attn_gate = nn.Parameter(torch.tensor([-20.]))
- # self.t_ffn_gate = nn.Parameter(torch.tensor([-20.]))
- self.temporal_layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
- self.temporal_layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: torch.Tensor,
- causal_attention_mask: torch.Tensor,
- output_attentions: Optional[bool] = False,
- ) -> Tuple[torch.FloatTensor]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
-            attention_mask (`torch.FloatTensor`): attention mask of size
-                `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            causal_attention_mask (`torch.FloatTensor`): causal attention mask of size
-                `(batch, 1, tgt_len, src_len)`, or `None` when no causal masking is applied.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- """
-
-
- bt, n, d = hidden_states.shape
- t = get_global_value()['NUM_FRAMES']
-
-
- # time embed
- if t != 1:
- n = hidden_states.shape[1]
- hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t)
- hidden_states = hidden_states + self.temporal_embedding[:, :t, :]
- hidden_states = rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n)
-
- # time attn
- residual = hidden_states
- hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t)
- # hidden_states = self.layer_norm1(hidden_states) # share layernorm
- hidden_states = self.temporal_layer_norm1(hidden_states)
- hidden_states, attn_weights = self.temporal_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- )
- hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n)
-
- residual = hidden_states
- hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t)
- # hidden_states = self.layer_norm2(hidden_states) # share layernorm
- hidden_states = self.temporal_layer_norm2(hidden_states)
- hidden_states = self.temporal_mlp(hidden_states)
- hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n)
-
- # spatial attn
- residual = hidden_states
-
- hidden_states = self.layer_norm1(hidden_states)
- hidden_states, attn_weights = self.self_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- causal_attention_mask=causal_attention_mask,
- output_attentions=output_attentions,
- )
- hidden_states = residual + hidden_states
-
- residual = hidden_states
- hidden_states = self.layer_norm2(hidden_states)
- hidden_states = self.mlp(hidden_states)
- hidden_states = residual + hidden_states
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (attn_weights,)
-
- return outputs
-
-
-
-
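-# --- Illustrative sketch (not part of the original file) ---
-# The layer above folds the patch axis into the batch axis for temporal
-# attention and the time axis into the batch axis for spatial attention. The
-# helper below only demonstrates that reshaping round-trip on a dummy tensor;
-# the sizes (b=2, t=4, n=50, d=16) are arbitrary assumptions.
-def _demo_time_space_rearrange():
-    from einops import rearrange  # local import keeps the sketch self-contained
-    b, t, n, d = 2, 4, 50, 16
-    x = torch.randn(b * t, n, d)                      # (b t) n d, as seen by the layer
-    xt = rearrange(x, '(b t) n d -> (b n) t d', t=t)  # tokens now attend across time
-    assert xt.shape == (b * n, t, d)
-    x_back = rearrange(xt, '(b n) t d -> (b t) n d', n=n)
-    assert torch.equal(x, x_back)                     # the fold/unfold is lossless
-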
-# class ResidualAttentionBlock(SpatialResidualAttentionBlock):
-# def __init__(self,
-# num_frames: int,
-# d_model: int,
-# n_head: int,
-# mlp_ratio: float = 4.0,
-# ls_init_value: float = None,
-# act_layer: Callable = nn.GELU,
-# norm_layer: Callable = LayerNorm,
-# is_cross_attention: bool = False,):
-# super().__init__(d_model, n_head, mlp_ratio, ls_init_value, act_layer, norm_layer, is_cross_attention)
-#
-# self.num_frames = num_frames
-# self.time_ln_1 = norm_layer(d_model)
-# self.time_attn = nn.MultiheadAttention(d_model, n_head)
-# self.time_ls_1 = LayerScale(d_model, ls_init_value) if ls_init_value is not None else nn.Identity()
-#
-# def time_attention(
-# self,
-# q_x: torch.Tensor,
-# k_x: Optional[torch.Tensor] = None,
-# v_x: Optional[torch.Tensor] = None,
-# attn_mask: Optional[torch.Tensor] = None,
-# ):
-# k_x = k_x if k_x is not None else q_x
-# v_x = v_x if v_x is not None else q_x
-#
-# attn_mask = attn_mask.to(q_x.dtype) if attn_mask is not None else None
-# return self.time_attn(
-# q_x, k_x, v_x, need_weights=True, attn_mask=attn_mask
-# )[0]
-#
-# def forward(
-# self,
-# q_x: torch.Tensor,
-# k_x: Optional[torch.Tensor] = None,
-# v_x: Optional[torch.Tensor] = None,
-# attn_mask: Optional[torch.Tensor] = None,
-# ):
-# k_x = self.ln_1_kv(k_x) if hasattr(self, "ln_1_kv") and k_x is not None else None
-# v_x = self.ln_1_kv(v_x) if hasattr(self, "ln_1_kv") and v_x is not None else None
-#
-# n, bt, d = q_x.shape
-# t = get_global_value()['NUM_FRAMES']
-#
-# # time attn
-# # print('q_x', q_x.shape)
-# xt = rearrange(q_x, 'n (b t) d -> t (b n) d', t=t)
-# # print('xt', xt.shape)
-# xt = self.time_ls_1(self.time_attention(q_x=self.time_ln_1(xt), k_x=None, v_x=None, attn_mask=None))
-# # print('time_attention xt', xt.shape)
-# q_x = q_x + rearrange(xt, 't (b n) d -> n (b t) d', n=n)
-# # print('time_attention q_x', xt.shape)
-#
-# # spatial attn
-# x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
-#
-# x = x + self.ls_2(self.mlp(self.ln_2(x)))
-# return x
-
-def print_trainable_parameters(model, msg=''):
- """
- Prints the number of trainable parameters in the model.
- """
- trainable_params = 0
- all_param = 0
- for _, param in model.named_parameters():
- all_param += param.numel()
- if param.requires_grad:
- trainable_params += param.numel()
- logging.info(f"{msg} Trainable params: {trainable_params} || all params: {all_param} || "
- f"trainable: {100 * trainable_params / all_param:.2f}%")
-
-def convert_model_to_lora(args, model):
- if args.clip_type == 'vl' and args.add_time_attn:
- target_modules = ["temporal_attn.k_proj", "temporal_attn.v_proj",
- "temporal_attn.q_proj", "temporal_attn.out_proj",
- "temporal_mlp.fc1", "temporal_mlp.fc2"]
- else:
- target_modules = ["k_proj", "v_proj", "q_proj", "out_proj"]
- config = LoraConfig(
- r=args.lora_r, # 16
- lora_alpha=args.lora_alpha, # 16
- target_modules=target_modules, # self_attn.out_proj
- lora_dropout=args.lora_dropout, # 0.1
- bias="none",
- modules_to_save=[],
- )
- model.vision_model.encoder.is_gradient_checkpointing = False
- model.vision_model.encoder = get_peft_model(model.vision_model.encoder, config)
- if is_master(args):
- print_trainable_parameters(model.vision_model.encoder, msg='The model.vision_model.encoder: ')
- # model.text_model.encoder.is_gradient_checkpointing = False
- # model.text_model.encoder = get_peft_model(model.text_model.encoder, config)
- # if is_master(args):
- # print_trainable_parameters(model.text_model.encoder, msg='The model.text_model.encoder: ')
-
-
-
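-# --- Illustrative sketch (not part of the original training script) ---
-# `convert_model_to_lora` expects an `args` namespace carrying the LoRA
-# hyper-parameters and a HuggingFace CLIP-style model exposing
-# `model.vision_model.encoder`. Every value below is an assumption used for
-# illustration only (r=16, alpha=16, dropout=0.1 mirror the inline comments
-# above), and `build_model` is a hypothetical factory standing in for however
-# the surrounding code constructs the model.
-def _demo_convert_model_to_lora(build_model):
-    from argparse import Namespace
-    args = Namespace(clip_type='vl', add_time_attn=True,
-                     lora_r=16, lora_alpha=16, lora_dropout=0.1,
-                     rank=0)                  # rank=0 assumed so is_master(args) holds
-    model = build_model()                     # hypothetical factory, see note above
-    convert_model_to_lora(args, model)        # wraps model.vision_model.encoder with PEFT LoRA
-    return model
-
-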
-def add_time_attn_block(m: nn.ModuleList, device):
- config = m.config
- for i, sub_m in enumerate(m.layers):
- if isinstance(sub_m, SpatialCLIPEncoderLayer):
- oup = CLIPEncoderLayer(config).to(device)
- state_dict = sub_m.state_dict()
-
- new_state_dict = {}
- for k, v in state_dict.items():
- if 'self_attn' in k:
- new_state_dict[k] = v
- # if 'out_proj' in k:
- # v = torch.zeros_like(v, dtype=v.dtype, device=v.device)
- new_k = 'temporal_attn.' + '.'.join(k.split('.')[1:])
- new_state_dict[new_k] = v
- elif 'mlp' in k:
- new_state_dict[k] = v
- # if 'out_proj' in k:
- # v = torch.zeros_like(v, dtype=v.dtype, device=v.device)
- new_k = 'temporal_mlp.' + '.'.join(k.split('.')[1:])
- new_state_dict[new_k] = v
- elif 'layer_norm1' in k:
- new_state_dict[k] = v
- new_k = 'temporal_layer_norm1.' + '.'.join(k.split('.')[1:])
- new_state_dict[new_k] = v
- elif 'layer_norm2' in k:
- new_state_dict[k] = v
- new_k = 'temporal_layer_norm2.' + '.'.join(k.split('.')[1:])
- new_state_dict[new_k] = v
- else:
- new_state_dict[k] = v
-
- missing_keys, unexpected_keys = oup.load_state_dict(new_state_dict, strict=False)
- # assert missing_keys == ["t_attn_gate", "t_ffn_gate"]
- assert missing_keys == ['temporal_embedding']
- assert unexpected_keys == []
- m.layers[i] = oup
-
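-# --- Illustrative sketch (not part of the original file) ---
-# `add_time_attn_block` initialises each new temporal module from the
-# pretrained spatial weights of the same layer by duplicating state_dict
-# entries under a renamed key. The helper below reproduces just that renaming
-# rule on one example key; the key name is illustrative.
-def _demo_temporal_key_remap():
-    k = 'self_attn.q_proj.weight'
-    new_k = 'temporal_attn.' + '.'.join(k.split('.')[1:])
-    assert new_k == 'temporal_attn.q_proj.weight'
-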
-def resize_pos(m: nn.Module, args):
- # convert embedding
- if args.clip_type == 'al':
- m.image_size = [args.num_mel_bins, args.target_length]
- m.config.image_size = [m.image_size, m.image_size] if isinstance(m.image_size, int) else m.image_size
-
- # m.config.num_channels = 1
- # new_patch_embedding = nn.Conv2d(
- # in_channels=m.config.num_channels,
- # out_channels=m.embed_dim,
- # kernel_size=m.patch_size,
- # stride=m.patch_size,
- # bias=False,
- # )
- # state_dict = m.patch_embedding.state_dict()
- # for k, v in state_dict.items():
- # state_dict[k] = torch.mean(v, dim=1, keepdim=True).to(v.dtype)
- # m.patch_embedding = new_patch_embedding
- # m.patch_embedding.load_state_dict(state_dict)
-
- # pos resize
- old_pos_embed_state_dict = m.position_embedding.state_dict()
- old_pos_embed = old_pos_embed_state_dict['weight']
- dtype = old_pos_embed.dtype
- grid_size = [m.config.image_size[0] // m.patch_size, m.config.image_size[1] // m.patch_size]
- extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more)
- new_seq_len = grid_size[0] * grid_size[1] + extra_tokens
- if new_seq_len == old_pos_embed.shape[0]:
- m.to(args.device)
- return
-
- m.num_patches = grid_size[0] * grid_size[1]
- m.num_positions = m.num_patches + 1
- m.register_buffer("position_ids", torch.arange(m.num_positions).expand((1, -1)))
- new_position_embedding = nn.Embedding(m.num_positions, m.embed_dim)
-
- if extra_tokens:
- pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:]
- else:
- pos_emb_tok, pos_emb_img = None, old_pos_embed
- old_grid_size = [int(math.sqrt(len(pos_emb_img)))]*2
-
- if is_master(args):
- logging.info('Resizing position embedding grid-size from %s to %s', old_grid_size, grid_size)
- pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2)
- pos_emb_img = F.interpolate(
- pos_emb_img,
- size=grid_size,
- mode='bicubic',
- antialias=True,
- align_corners=False,
- )
- pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0]
- if pos_emb_tok is not None:
- new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0)
- else:
- new_pos_embed = pos_emb_img
- old_pos_embed_state_dict['weight'] = new_pos_embed.to(dtype)
- m.position_embedding = new_position_embedding
- m.position_embedding.load_state_dict(old_pos_embed_state_dict)
-
- m.to(args.device)
-
-
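-# --- Illustrative sketch (not part of the original file) ---
-# `resize_pos` above splits off the class token, reshapes the remaining patch
-# embeddings into their 2D grid, bicubically resizes that grid, and stitches
-# the class token back on. The demo below applies the same recipe to a dummy
-# embedding; the grid sizes (7x7 -> 12x8) and the width of 32 are arbitrary
-# assumptions.
-def _demo_resize_pos_embed():
-    old_grid, new_grid, width = (7, 7), (12, 8), 32
-    pos = torch.randn(old_grid[0] * old_grid[1] + 1, width)   # +1 for the class token
-    tok, img = pos[:1], pos[1:]
-    img = img.reshape(1, old_grid[0], old_grid[1], -1).permute(0, 3, 1, 2)
-    img = F.interpolate(img, size=new_grid, mode='bicubic',
-                        antialias=True, align_corners=False)
-    img = img.permute(0, 2, 3, 1).reshape(new_grid[0] * new_grid[1], -1)
-    new_pos = torch.cat([tok, img], dim=0)
-    assert new_pos.shape == (new_grid[0] * new_grid[1] + 1, width)
-    return new_pos
-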
-# def i2v_linear_resize_pos_embed(state_dict, model, interpolation: str = 'linear', antialias: bool = True):
-# # Rescale the grid of position embeddings when loading from state_dict
-# old_pos_embed = state_dict.get('visual.positional_embedding', None)
-# if old_pos_embed is None or not hasattr(model.visual, 'grid_size'):
-# return
-# # grid_size = to_2tuple(model.visual.grid_size)
-# grid_size = model.visual.grid_size
-# extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more)
-# # new_seq_len = grid_size[0] * grid_size[1] + extra_tokens
-# new_seq_len = grid_size[0] * grid_size[1] * grid_size[2] + extra_tokens
-# if new_seq_len == old_pos_embed.shape[0]:
-# return
-#
-# if extra_tokens:
-# pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:]
-# else:
-# pos_emb_tok, pos_emb_img = None, old_pos_embed
-# # old_grid_size = to_2tuple(int(math.sqrt(len(pos_emb_img))))
-#
-# logging.info('Resizing position embedding grid-size from %s to %s', old_pos_embed.shape[0], new_seq_len)
-# # pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2)
-# pos_emb_img = pos_emb_img.unsqueeze(0).permute(0, 2, 1)
-# pos_emb_img = F.interpolate(
-# pos_emb_img,
-# # size=grid_size,
-# size=new_seq_len - extra_tokens,
-# mode=interpolation,
-# # antialias=antialias,
-# # align_corners=False,
-# )
-# # pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0]
-# pos_emb_img = pos_emb_img.permute(0, 2, 1)[0]
-# if pos_emb_tok is not None:
-# new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0)
-# else:
-# new_pos_embed = pos_emb_img
-# state_dict['visual.positional_embedding'] = new_pos_embed
-#
-# def inflate_patch_embed(state_dict, model):
-# old_patch_embed_shape = model.visual.conv1.weight.shape
-# new_patch_embed_shape = state_dict['visual.conv1.weight'].shape
-# if old_patch_embed_shape == new_patch_embed_shape:
-# return
-# expanded_weight = state_dict['visual.conv1.weight'].unsqueeze(2).repeat(1, 1, 2, 1, 1)
-# state_dict['visual.conv1.weight'] = expanded_weight
-#
-#
-# def load_checkpoint(model, pretrained, strict=True):
-# state_dict = load_state_dict(pretrained)
-# # detect old format and make compatible with new format
-# if 'positional_embedding' in state_dict and not hasattr(model, 'positional_embedding'):
-# state_dict = convert_to_custom_text_state_dict(state_dict)
-# i2v_linear_resize_pos_embed(state_dict, model)
-# inflate_patch_embed(state_dict, model)
-# incompatible_keys = model.load_state_dict(state_dict, strict=strict)
-# return incompatible_keys
-
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets_123821KB.py
deleted file mode 100644
index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets_123821KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
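-
-
-# --- Illustrative sketch (not part of the original file) ---
-# `predict` expects a magnitude spectrogram shaped [B, 2, n_fft // 2 + 1, T]
-# and an optional `aggressiveness` dict that sharpens the mask above a chosen
-# frequency bin. All shapes and values below are assumptions for illustration.
-def _demo_predict():
-    n_fft, frames = 2048, 272     # frames must exceed 2 * offset (256) to survive the final crop
-    net = CascadedASPPNet(n_fft).eval()
-    x_mag = torch.rand(1, 2, n_fft // 2 + 1, frames)
-    aggressiveness = {'split_bin': 256, 'value': 0.3}   # hypothetical settings
-    with torch.no_grad():
-        out = net.predict(x_mag, aggressiveness)        # masked magnitude, time edges cropped
-    return out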
diff --git a/spaces/LeoDog896/yolov8n-asl/app.py b/spaces/LeoDog896/yolov8n-asl/app.py
deleted file mode 100644
index 605652d9fe091bea928683d874705467ec2894c1..0000000000000000000000000000000000000000
--- a/spaces/LeoDog896/yolov8n-asl/app.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import gradio as gr
-import cv2
-
-from ultralytics import YOLO
-
-model = YOLO('best.pt')
-
-def show_preds_image(image_path):
- image = cv2.imread(image_path)
- outputs = model.predict(source=image_path)
- results = outputs[0].cpu().numpy()
- for i, det in enumerate(results.boxes.xyxy):
-        cls_id = int(results.boxes.cls[i])  # cast the float class id to a plain int for the name lookup
-        name = model.names[cls_id]
-
- #draw box around name
- cv2.rectangle(
- image,
- (int(det[0]), int(det[1])),
- (int(det[0]) + len(name) * 20, int(det[1]) - 30),
- color=(0, 0, 255),
- thickness=-1,
- lineType=cv2.LINE_AA
- )
-
- # draw name
- cv2.putText(
- image,
- str(name),
- (int(det[0]), int(det[1]) - 5),
- cv2.FONT_HERSHEY_SIMPLEX,
- 1,
- (255, 255, 255),
- 2,
- cv2.LINE_AA
- )
-
- # draw box
- cv2.rectangle(
- image,
- (int(det[0]), int(det[1])),
- (int(det[2]), int(det[3])),
- color=(0, 0, 255),
- thickness=2,
- lineType=cv2.LINE_AA
- )
- return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
-
-inputs_image = [
- gr.components.Image(type="filepath", label="Input Image"),
-]
-outputs_image = [
- gr.components.Image(type="numpy", label="Output Image"),
-]
-interface_image = gr.Interface(
- fn=show_preds_image,
- inputs=inputs_image,
- outputs=outputs_image,
- title="ASL detector",
- cache_examples=False,
-)
-
-def show_preds_video(video_path):
- cap = cv2.VideoCapture(video_path)
- while(cap.isOpened()):
- ret, frame = cap.read()
- if ret:
- frame_copy = frame.copy()
- outputs = model.predict(source=frame)
- results = outputs[0].cpu().numpy()
- for i, det in enumerate(results.boxes.xyxy):
- cv2.rectangle(
- frame_copy,
- (int(det[0]), int(det[1])),
- (int(det[2]), int(det[3])),
- color=(0, 0, 255),
- thickness=2,
- lineType=cv2.LINE_AA
- )
-            yield cv2.cvtColor(frame_copy, cv2.COLOR_BGR2RGB)
-        else:
-            break  # the video is exhausted; without this the loop never terminates
-    cap.release()
-
-inputs_video = [
- gr.components.Video(type="filepath", label="Input Video"),
-
-]
-outputs_video = [
- gr.components.Image(type="numpy", label="Output Image"),
-]
-interface_video = gr.Interface(
- fn=show_preds_video,
- inputs=inputs_video,
- outputs=outputs_video,
- title="ASL detector",
- cache_examples=False,
-)
-
-gr.TabbedInterface(
- [interface_image, interface_video],
- tab_names=['Image inference', 'Video inference']
-).queue().launch()
\ No newline at end of file
diff --git a/spaces/Lianguangluowuyan/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/Lianguangluowuyan/QQsign/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000
--- a/spaces/Lianguangluowuyan/QQsign/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/LittleYuan/My-Real-Bot/realesrgan/models/realesrgan_model.py b/spaces/LittleYuan/My-Real-Bot/realesrgan/models/realesrgan_model.py
deleted file mode 100644
index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000
--- a/spaces/LittleYuan/My-Real-Bot/realesrgan/models/realesrgan_model.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import numpy as np
-import random
-import torch
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.srgan_model import SRGANModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-from collections import OrderedDict
-from torch.nn import functional as F
-
-
-@MODEL_REGISTRY.register()
-class RealESRGANModel(SRGANModel):
- """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
- 2. optimize the networks with GAN training.
- """
-
- def __init__(self, opt):
- super(RealESRGANModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
- batch could not have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt_usm, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size,
- self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
- self.gt_usm = self.usm_sharpener(self.gt)
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
-
- def optimize_parameters(self, current_iter):
- # usm sharpening
- l1_gt = self.gt_usm
- percep_gt = self.gt_usm
- gan_gt = self.gt_usm
- if self.opt['l1_gt_usm'] is False:
- l1_gt = self.gt
- if self.opt['percep_gt_usm'] is False:
- percep_gt = self.gt
- if self.opt['gan_gt_usm'] is False:
- gan_gt = self.gt
-
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, l1_gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt)
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output)
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- real_d_pred = self.net_d(gan_gt)
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
diff --git a/spaces/Luccadraw24/Amelia/README.md b/spaces/Luccadraw24/Amelia/README.md
deleted file mode 100644
index e477242cc9fbfdc03697bdf4e65c8d6620b1bbb5..0000000000000000000000000000000000000000
--- a/spaces/Luccadraw24/Amelia/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Amelia
-emoji: 📚
-colorFrom: green
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Luelll/ChuanhuChatGPT/README.md b/spaces/Luelll/ChuanhuChatGPT/README.md
deleted file mode 100644
index fb163c90d56e9cf816c2d11dbd43871e776a9fc3..0000000000000000000000000000000000000000
--- a/spaces/Luelll/ChuanhuChatGPT/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChuanhuChatGPT
-emoji: 🐯
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.28.0
-app_file: ChuanhuChatbot.py
-pinned: false
-license: gpl-3.0
-duplicated_from: JohnSmith9982/ChuanhuChatGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Lykon/NeverEnding-Dream-webui/app.py b/spaces/Lykon/NeverEnding-Dream-webui/app.py
deleted file mode 100644
index c4b5de0d1ac307c8c03ee4c48b4a3760fad264cf..0000000000000000000000000000000000000000
--- a/spaces/Lykon/NeverEnding-Dream-webui/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-from subprocess import getoutput
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
-elif("T4" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-
-os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
-os.chdir("/home/user/app/stable-diffusion-webui")
-
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
-os.system(f"sed -i '$a fastapi==0.90.0' /home/user/app/stable-diffusion-webui/requirements_versions.txt")
-os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
-os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
-os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-
-
-if "IS_SHARED_UI" in os.environ:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- # os.system(f"wget -q https://huggingface.co/ckpt/anything-v3-vae-swapped/resolve/main/anything-v3-vae-swapped.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/anything-v3-vae-swapped.ckpt")
- # os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- # os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- # os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
- os.system(f"wget -q https://huggingface.co/Lykon/DreamShaper/resolve/main/DreamShaper_3.3_baked_vae.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/DreamShaper_3.3_baked_vae.safetensors")
- os.system(f"wget -q https://huggingface.co/Lykon/DreamShaper/resolve/main/Dreamshaper_3.32_baked_vae_clip_fix.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Dreamshaper_3.32_baked_vae_clip_fix.safetensors")
-
- os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
-else:
- # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
- os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")
-
- # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
- #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
- os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
- os.system(f"git clone https://github.com/camenduru/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")
-
- # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt")
- #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
- #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
- #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
- #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
- #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
-
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")
-
- #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
- #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
-
- os.system(f"wget -q https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/NeverEndingDream_1.22_BakedVae_fp16.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/NeverEndingDream_1.22_BakedVae_fp16.safetensors")
- os.system(f"wget -q https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/NeverEndingDream_ft_mse.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/NeverEndingDream_ft_mse.safetensors")
-
- os.system(f"python launch.py --precision full --no-half --use-cpu SD BSRGAN ESRGAN SCUNet CodeFormer --all --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test")
-
\ No newline at end of file
diff --git a/spaces/MJ/AI-ChatBot/README.md b/spaces/MJ/AI-ChatBot/README.md
deleted file mode 100644
index 263cdb1e57769f469d043974ca68b3c418bf08b1..0000000000000000000000000000000000000000
--- a/spaces/MJ/AI-ChatBot/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI ChatBot
-emoji: 🏆
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Manjushri/SDXL-1.0/README.md b/spaces/Manjushri/SDXL-1.0/README.md
deleted file mode 100644
index a6e9553078c41b0c222816b76e44ae522ee883c5..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/SDXL-1.0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: SDXL-1.0
-emoji: ⚡
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Manmay/tortoise-tts/tortoise/models/hifigan_decoder.py b/spaces/Manmay/tortoise-tts/tortoise/models/hifigan_decoder.py
deleted file mode 100644
index 17bdf890b5bf398743a96eaf77dec90fb6a33edd..0000000000000000000000000000000000000000
--- a/spaces/Manmay/tortoise-tts/tortoise/models/hifigan_decoder.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# adopted from https://github.com/jik876/hifi-gan/blob/master/models.py
-import torch
-from torch import nn
-from torch.nn import Conv1d, ConvTranspose1d
-from torch.nn import functional as F
-from torch.nn.utils import remove_weight_norm, weight_norm
-
-LRELU_SLOPE = 0.1
-
-
-def get_padding(k, d):
- return int((k * d - d) / 2)
-
-
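-# Illustrative sketch (not part of the original file): for stride-1 convolutions
-# with an odd kernel size, this padding keeps the sequence length unchanged at
-# any dilation, which is what the residual blocks below rely on.
-def _demo_get_padding():
-    assert get_padding(3, 1) == 1
-    assert get_padding(3, 5) == 5
-    assert get_padding(7, 1) == 3
-
-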
-class ResBlock1(torch.nn.Module):
- """Residual Block Type 1. It has 3 convolutional layers in each convolutional block.
-
- Network::
-
- x -> lrelu -> conv1_1 -> conv1_2 -> conv1_3 -> z -> lrelu -> conv2_1 -> conv2_2 -> conv2_3 -> o -> + -> o
- |--------------------------------------------------------------------------------------------------|
-
-
- Args:
- channels (int): number of hidden channels for the convolutional layers.
- kernel_size (int): size of the convolution filter in each layer.
- dilations (list): list of dilation value for each conv layer in a block.
- """
-
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super().__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1))
- ),
- weight_norm(
- Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1))
- ),
- weight_norm(
- Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1))
- ),
- ]
- )
-
- def forward(self, x):
- """
- Args:
- x (Tensor): input tensor.
- Returns:
- Tensor: output tensor.
- Shapes:
- x: [B, C, T]
- """
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- """Residual Block Type 2. It has 1 convolutional layers in each convolutional block.
-
- Network::
-
- x -> lrelu -> conv1-> -> z -> lrelu -> conv2-> o -> + -> o
- |---------------------------------------------------|
-
-
- Args:
- channels (int): number of hidden channels for the convolutional layers.
- kernel_size (int): size of the convolution filter in each layer.
- dilations (list): list of dilation value for each conv layer in a block.
- """
-
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super().__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class HifiganGenerator(torch.nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- resblock_type,
- resblock_dilation_sizes,
- resblock_kernel_sizes,
- upsample_kernel_sizes,
- upsample_initial_channel,
- upsample_factors,
- inference_padding=5,
- cond_channels=0,
- conv_pre_weight_norm=True,
- conv_post_weight_norm=True,
- conv_post_bias=True,
- ):
- r"""HiFiGAN Generator with Multi-Receptive Field Fusion (MRF)
-
- Network:
- x -> lrelu -> upsampling_layer -> resblock1_k1x1 -> z1 -> + -> z_sum / #resblocks -> lrelu -> conv_post_7x1 -> tanh -> o
- .. -> zI ---|
- resblockN_kNx1 -> zN ---'
-
- Args:
- in_channels (int): number of input tensor channels.
- out_channels (int): number of output tensor channels.
- resblock_type (str): type of the `ResBlock`. '1' or '2'.
- resblock_dilation_sizes (List[List[int]]): list of dilation values in each layer of a `ResBlock`.
- resblock_kernel_sizes (List[int]): list of kernel sizes for each `ResBlock`.
- upsample_kernel_sizes (List[int]): list of kernel sizes for each transposed convolution.
- upsample_initial_channel (int): number of channels for the first upsampling layer. This is divided by 2
- for each consecutive upsampling layer.
- upsample_factors (List[int]): upsampling factors (stride) for each upsampling layer.
- inference_padding (int): constant padding applied to the input at inference time. Defaults to 5.
- """
- super().__init__()
- self.inference_padding = inference_padding
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_factors)
- # initial upsampling layers
- self.conv_pre = weight_norm(Conv1d(in_channels, upsample_initial_channel, 7, 1, padding=3))
- resblock = ResBlock1 if resblock_type == "1" else ResBlock2
- # upsampling layers
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_factors, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- # MRF blocks
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for _, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
- # post convolution layer
- self.conv_post = weight_norm(Conv1d(ch, out_channels, 7, 1, padding=3, bias=conv_post_bias))
- if cond_channels > 0:
- self.cond_layer = nn.Conv1d(cond_channels, upsample_initial_channel, 1)
-
- if not conv_pre_weight_norm:
- remove_weight_norm(self.conv_pre)
-
- if not conv_post_weight_norm:
- remove_weight_norm(self.conv_post)
-
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- if torch.backends.mps.is_available():
- self.device = torch.device("mps")
-
- def forward(self, x, g=None):
- """
- Args:
- x (Tensor): feature input tensor.
- g (Tensor): global conditioning input tensor.
-
- Returns:
- Tensor: output waveform.
-
- Shapes:
- x: [B, C, T]
- Tensor: [B, 1, T]
- """
- o = self.conv_pre(x)
- if hasattr(self, "cond_layer"):
- o = o + self.cond_layer(g)
- for i in range(self.num_upsamples):
- o = F.leaky_relu(o, LRELU_SLOPE)
- o = self.ups[i](o)
- z_sum = None
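- # Multi-Receptive Field Fusion: average the outputs of the `num_kernels`
- # residual blocks attached to this upsampling stage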
- for j in range(self.num_kernels):
- if z_sum is None:
- z_sum = self.resblocks[i * self.num_kernels + j](o)
- else:
- z_sum += self.resblocks[i * self.num_kernels + j](o)
- o = z_sum / self.num_kernels
- o = F.leaky_relu(o)
- o = self.conv_post(o)
- o = torch.tanh(o)
- return o
-
- @torch.no_grad()
- def inference(self, c, g=None):
- """
- Args:
- c (Tensor): conditioning input tensor.
-
- Returns:
- Tensor: output waveform.
-
- Shapes:
- c: [B, C, T]
- Tensor: [B, 1, T]
- """
- # c = c.to(self.conv_pre.weight.device)
- # c = torch.nn.functional.pad(c, (self.inference_padding, self.inference_padding), "replicate")
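- # interpolate the conditioning features along the time axis, first by a factor of 4
- # (1024 / 256) and then by 24000 / 22050; these ratios presumably bridge the hop-length
- # and sampling-rate mismatch between the upstream feature extractor and this vocoder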
- up_1 = torch.nn.functional.interpolate(
- c.transpose(1,2),
- scale_factor=[1024 / 256],
- mode="linear",
- )
- up_2 = torch.nn.functional.interpolate(
- up_1,
- scale_factor=[24000 / 22050],
- mode="linear",
- )
- g = g.unsqueeze(0)
- return self.forward(up_2.to(self.device), g.transpose(1,2))
-
- def remove_weight_norm(self):
- print("Removing weight norm...")
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
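-
-
- # Minimal usage sketch (illustrative only; the hyper-parameters below are example values,
- # not taken from any particular released checkpoint):
- #
- #   model = HifiganGenerator(
- #       in_channels=80,
- #       out_channels=1,
- #       resblock_type="1",
- #       resblock_dilation_sizes=[[1, 3, 5]] * 3,
- #       resblock_kernel_sizes=[3, 7, 11],
- #       upsample_kernel_sizes=[16, 16, 4, 4],
- #       upsample_initial_channel=512,
- #       upsample_factors=[8, 8, 2, 2],
- #   )
- #   wav = model(torch.randn(1, 80, 50))  # [1, 1, 50 * 8 * 8 * 2 * 2] = [1, 1, 12800]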
diff --git a/spaces/Margaret/mazzuma-sentiment-engine/app.py b/spaces/Margaret/mazzuma-sentiment-engine/app.py
deleted file mode 100644
index bacb6b6a7efa14cdf7bac075a43f50a5090a1055..0000000000000000000000000000000000000000
--- a/spaces/Margaret/mazzuma-sentiment-engine/app.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import gradio as gr
-
-from transformers import pipeline
-
-pipe = pipeline("sentiment-analysis", model="cardiffnlp/twitter-roberta-base-sentiment-latest")
-
-def get_sentiment(input_text):
- return pipe(input_text)[0]["label"]
-
- iface = gr.Interface(fn=get_sentiment,
- inputs="text",
- outputs="text",
- title="Sentiment Analysis",
- description="Get the sentiment (Negative/Positive/Neutral) of the given input text")
-
-iface.launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/Menna2211/Text-Image/README.md b/spaces/Menna2211/Text-Image/README.md
deleted file mode 100644
index 6a2154fd6e571473d1d0e828c759e86201e445fd..0000000000000000000000000000000000000000
--- a/spaces/Menna2211/Text-Image/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Image
-emoji: 🚀
-colorFrom: pink
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: Home.py
-pinned: false
----
-
-# TxT-Img
diff --git a/spaces/MikoProduction/PneumoniaDetector/app.py b/spaces/MikoProduction/PneumoniaDetector/app.py
deleted file mode 100644
index 8c3c17a5f7388f2a2f8ef1e8490c8da39a982d06..0000000000000000000000000000000000000000
--- a/spaces/MikoProduction/PneumoniaDetector/app.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# 1. Imports and class names setup #
-import gradio as gr
-import os
-import torch
-from PIL import Image
-
-from model import ResNet101
-from timeit import default_timer as timer
-from typing import Tuple, Dict
-
-# setup class names
-class_names = ["normal", "pneumonia"]
-
-# 2. Model and transforms preparation #
-model = ResNet101()
-
- # Load saved weights
-model.load_state_dict(torch.load(f="resnet101_pneumonia.pt",
- map_location=torch.device("cpu")))
-model_transforms = model.transforms()
-
-
-# 3. Predict function #
-
-# Create predict function
-
-def predict(img) -> Tuple[Dict, float]:
- """
- Transforms and performs a prediction on img and returns prediction and time taken.
- :param img: PIL image
- :return: prediction and time taken
- """
- # start the timer
- start_time = timer()
-
- # transform target image and add batch dimension
- img = model_transforms(img.convert("RGB")).unsqueeze(0)
-
- # put model into evaluation mode and turn on inference mode
- model.eval()
- with torch.inference_mode():
- # pass the transformed image through the model
- # and turn the prediction logits into prediction probabilities
- pred_probs = torch.sigmoid(model(img))
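- # the model outputs a single logit, so after the sigmoid pred_probs[0] is read as
- # P(pneumonia) and 1 - pred_probs[0] as P(normal) in the dictionary below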
-
- # create a prediction label and prediction probability for each class
- pred_labels_and_probs = {class_names[0]: round(1 - float(pred_probs[0]), 4),
- class_names[1]: round(float(pred_probs[0]), 4)}
-
- # calculate the prediction time
- pred_time = round(timer() - start_time, 5)
-
- # return the prediction dictionary and prediction time
- return pred_labels_and_probs, pred_time
-
-
-# 4. Gradio app #
-
-# Create title, description and article strings
-title = "PneumoniaDetector 👁"
- description = "A ResNet101 feature extractor computer vision model to detect pneumonia from chest X-ray images"
- article = "Please upload a chest X-ray image"
-
-# create examples list from "examples/" directory
-example_list = [["examples/" + example] for example in os.listdir("examples")]
-
-# create the Gradio demo
-demo = gr.Interface(fn=predict,
- inputs=gr.Image(type="pil"),
- outputs=[gr.Label(num_top_classes=1, label="Predictions"),
- gr.Number(label="Prediction time (s)")],
- examples=example_list,
- title=title,
- description=description,
- article=article)
-
-demo.launch()
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/panet.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/panet.py
deleted file mode 100644
index 135ee1e9af33e8207286d4990bd513dfd441176e..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/panet.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmocr.registry import MODELS
-from .single_stage_text_detector import SingleStageTextDetector
-
-
-@MODELS.register_module()
-class PANet(SingleStageTextDetector):
- """The class for implementing PANet text detector:
-
- Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel
- Aggregation Network [https://arxiv.org/abs/1908.05900].
- """
diff --git a/spaces/MrVicente/RA-BART/custom_bart/bart_generation_mixin.py b/spaces/MrVicente/RA-BART/custom_bart/bart_generation_mixin.py
deleted file mode 100644
index 2a8d26ab1edc8ab3827ad10764bab3593c6d763c..0000000000000000000000000000000000000000
--- a/spaces/MrVicente/RA-BART/custom_bart/bart_generation_mixin.py
+++ /dev/null
@@ -1,3272 +0,0 @@
-import inspect
-import warnings
-from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union
-
-import torch
-import torch.distributed as dist
-from torch import nn
-
-from transformers.generation_beam_constraints import Constraint, DisjunctiveConstraint, PhrasalConstraint
-from transformers.generation_beam_search import BeamScorer, BeamSearchScorer, ConstrainedBeamSearchScorer
-from transformers.generation_logits_process import (
- EncoderNoRepeatNGramLogitsProcessor,
- ExponentialDecayLengthPenalty,
- ForcedBOSTokenLogitsProcessor,
- ForcedEOSTokenLogitsProcessor,
- HammingDiversityLogitsProcessor,
- InfNanRemoveLogitsProcessor,
- LogitNormalization,
- LogitsProcessorList,
- MinLengthLogitsProcessor,
- NoBadWordsLogitsProcessor,
- NoRepeatNGramLogitsProcessor,
- PrefixConstrainedLogitsProcessor,
- RepetitionPenaltyLogitsProcessor,
- TemperatureLogitsWarper,
- TopKLogitsWarper,
- TopPLogitsWarper,
- TypicalLogitsWarper,
-)
-from transformers.generation_stopping_criteria import (
- MaxLengthCriteria,
- MaxTimeCriteria,
- StoppingCriteria,
- StoppingCriteriaList,
- validate_stopping_criteria,
-)
-from transformers.pytorch_utils import torch_int_div
-from transformers.utils import ModelOutput
-
-from transformers.generation_utils import (
- SampleOutput,
- BeamSearchOutput,
- BeamSampleOutput,
- GreedySearchOutput, GreedySearchDecoderOnlyOutput, SampleDecoderOnlyOutput, GreedySearchEncoderDecoderOutput,
- BeamSearchDecoderOnlyOutput, BeamSearchEncoderDecoderOutput, BeamSampleDecoderOnlyOutput,
- BeamSampleEncoderDecoderOutput, SampleEncoderDecoderOutput,
-)
-from utils import get_jump_chunks
-from torch.nn.utils.rnn import pad_sequence
-
-class GenerationMixin:
- """
- A class containing all functions for auto-regressive text generation, to be used as a mixin in [`PreTrainedModel`].
-
- The class exposes [`~generation_utils.GenerationMixin.generate`], which can be used for:
- - *greedy decoding* by calling [`~generation_utils.GenerationMixin.greedy_search`] if `num_beams=1` and
- `do_sample=False`.
- - *multinomial sampling* by calling [`~generation_utils.GenerationMixin.sample`] if `num_beams=1` and
- `do_sample=True`.
- - *beam-search decoding* by calling [`~generation_utils.GenerationMixin.beam_search`] if `num_beams>1` and
- `do_sample=False`.
- - *beam-search multinomial sampling* by calling [`~generation_utils.GenerationMixin.beam_sample`] if
- `num_beams>1` and `do_sample=True`.
- - *diverse beam-search decoding* by calling [`~generation_utils.GenerationMixin.group_beam_search`], if
- `num_beams>1` and `num_beam_groups>1`.
- - *constrained beam-search decoding* by calling [`~generation_utils.GenerationMixin.constrained_beam_search`],
- if `constraints!=None` or `force_words_ids!=None`.
- """
-
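- # Quick reference for how `generate` dispatches to a decoding strategy
- # (see the class docstring above):
- #   num_beams == 1 and do_sample is False       -> greedy_search
- #   num_beams == 1 and do_sample is True        -> sample
- #   num_beams > 1 and do_sample is False        -> beam_search
- #   num_beams > 1 and do_sample is True         -> beam_sample
- #   num_beams > 1 and num_beam_groups > 1       -> group_beam_search
- #   constraints or force_words_ids is not None  -> constrained_beam_search
-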
- def _prepare_model_inputs(
- self,
- inputs: Optional[torch.Tensor] = None,
- bos_token_id: Optional[int] = None,
- model_kwargs: Optional[Dict[str, torch.Tensor]] = None,
- ) -> Tuple[torch.Tensor, Optional[str], Dict[str, torch.Tensor]]:
- """
- This function extracts the model-specific `inputs` for generation.
- """
- # 1. retrieve all kwargs that are non-None or non-model input related.
- # some encoder-decoder models have different names for model and encoder
- if (
- self.config.is_encoder_decoder
- and hasattr(self, "encoder")
- and self.encoder.main_input_name != self.main_input_name
- ):
- input_name = self.encoder.main_input_name
- else:
- input_name = self.main_input_name
-
- model_kwargs = {k: v for k, v in model_kwargs.items() if v is not None or k != input_name}
-
- # 2. check whether model_input_name is passed as kwarg
- # if yes and `inputs` is None use kwarg inputs
- inputs_kwarg = model_kwargs.pop(input_name, None)
- if inputs_kwarg is not None and inputs is not None:
- raise ValueError(
- f"`inputs`: {inputs} were passed alongside "
- f"{input_name} which is not allowed. "
- f"Make sure to either pass {inputs} or {input_name}=..."
- )
- elif inputs_kwarg is not None:
- inputs = inputs_kwarg
-
- # 3. models with `input_ids` can also make use of `inputs_embeds`
- if self._can_retrieve_inputs_from_name(inputs, "inputs_embeds", model_kwargs):
- inputs, input_name = model_kwargs["inputs_embeds"], "inputs_embeds"
-
- # 4. Only encoder-decoder models can have non `input_ids` input format
- if not self.config.is_encoder_decoder and input_name != "input_ids":
- raise ValueError(
- f"If {input_name} is passed as model-specific keyword "
- "input then model has to be an encoder-decoder and not a "
- f"{self.__class__.__name__}."
- )
-
- # 5. if `inputs` is still None, try to create `input_ids` from BOS token
- if inputs is None:
- inputs = self._prepare_input_ids_for_generation(bos_token_id, model_kwargs.get("encoder_outputs"))
-
- return inputs, input_name, model_kwargs
-
- def _can_retrieve_inputs_from_name(
- self, inputs: Optional[torch.Tensor], name: str, model_kwargs: Dict[str, torch.Tensor]
- ) -> torch.Tensor:
- """
- If `inputs` is None and `name` is in both forward function and keyword arguments, then inputs can be retrieved
- from name
- """
- can_retrieve_inputs = model_kwargs.get(name, None) is not None and name in set(
- inspect.signature(self.forward).parameters.keys()
- )
-
- if can_retrieve_inputs and inputs is not None:
- raise ValueError(f"Cannot only pass one of {name} and {self.main_input_name}")
-
- return can_retrieve_inputs
-
- def prepare_inputs_for_generation(self, input_ids: torch.LongTensor, **kwargs) -> Dict[str, Any]:
- """
- Implement in subclasses of [`PreTrainedModel`] for custom behavior to prepare inputs in the generate method.
- """
- return {"input_ids": input_ids}
-
- def adjust_logits_during_generation(self, logits: torch.FloatTensor, **kwargs) -> torch.FloatTensor:
- """
- Implement in subclasses of [`PreTrainedModel`] for custom behavior to adjust the logits in the generate method.
- """
- return logits
-
- def _prepare_input_ids_for_generation(
- self, bos_token_id: Optional[int], encoder_outputs: Optional[ModelOutput]
- ) -> torch.LongTensor:
- if self.config.is_encoder_decoder and encoder_outputs is not None:
- # make dummy input_ids with value -100, as a sanity check ensuring that they won't be used for encoding
- shape = encoder_outputs.last_hidden_state.size()[:-1]
- return torch.ones(shape, dtype=torch.long, device=self.device) * -100
-
- if bos_token_id is None:
- raise ValueError("`bos_token_id` has to be defined when no `input_ids` are provided.")
- return torch.ones((1, 1), dtype=torch.long, device=self.device) * bos_token_id
-
- def _prepare_attention_mask_for_generation(
- self,
- inputs: torch.Tensor,
- pad_token_id: int,
- eos_token_id: int,
- ) -> torch.LongTensor:
- is_input_ids = len(inputs.shape) == 2 and inputs.dtype in [torch.int, torch.long]
- is_pad_token_in_inputs = (pad_token_id is not None) and (pad_token_id in inputs)
- is_pad_token_not_equal_to_eos_token_id = (eos_token_id is None) or (
- (eos_token_id is not None) and (pad_token_id != eos_token_id)
- )
- # Check if input is input_ids and padded -> only then is attention_mask defined
- if is_input_ids and is_pad_token_in_inputs and is_pad_token_not_equal_to_eos_token_id:
- return inputs.ne(pad_token_id).long()
- else:
- return torch.ones(inputs.shape[:2], dtype=torch.long, device=inputs.device)
-
- def _prepare_encoder_decoder_kwargs_for_generation(
- self, inputs_tensor: torch.Tensor, model_kwargs, model_input_name: Optional[str] = None
- ) -> Dict[str, Any]:
- # 1. get encoder
- encoder = self.get_encoder()
-
- # 2. prepare encoder args and encoder kwargs from model kwargs
- irrelevant_prefix = ["decoder_", "cross_attn", "use_cache"]
- encoder_kwargs = {
- argument: value
- for argument, value in model_kwargs.items()
- if not any(argument.startswith(p) for p in irrelevant_prefix)
- }
- print('encoder_kwargs:', encoder_kwargs)
-
- # 3. make sure that encoder returns `ModelOutput`
- model_input_name = model_input_name if model_input_name is not None else self.main_input_name
- encoder_kwargs["return_dict"] = True
- encoder_kwargs[model_input_name] = inputs_tensor
- model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
-
- return model_kwargs
-
- def _prepare_decoder_input_ids_for_generation(
- self,
- batch_size: int,
- decoder_start_token_id: int = None,
- bos_token_id: int = None,
- model_kwargs: Optional[Dict[str, torch.Tensor]] = None,
- device: torch.device = None,
- ) -> torch.LongTensor:
-
- if model_kwargs is not None and "decoder_input_ids" in model_kwargs:
- return model_kwargs.pop("decoder_input_ids")
- else:
- decoder_start_token_id = self._get_decoder_start_token_id(decoder_start_token_id, bos_token_id)
- if device is None:
- device = self.device
- return torch.ones((batch_size, 1), dtype=torch.long, device=device) * decoder_start_token_id
-
- def _get_decoder_start_token_id(self, decoder_start_token_id: int = None, bos_token_id: int = None) -> int:
- decoder_start_token_id = (
- decoder_start_token_id if decoder_start_token_id is not None else self.config.decoder_start_token_id
- )
- bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id
-
- if decoder_start_token_id is not None:
- return decoder_start_token_id
- elif (
- hasattr(self.config, "decoder")
- and hasattr(self.config.decoder, "decoder_start_token_id")
- and self.config.decoder.decoder_start_token_id is not None
- ):
- return self.config.decoder.decoder_start_token_id
- elif bos_token_id is not None:
- return bos_token_id
- elif (
- hasattr(self.config, "decoder")
- and hasattr(self.config.decoder, "bos_token_id")
- and self.config.decoder.bos_token_id is not None
- ):
- return self.config.decoder.bos_token_id
- raise ValueError(
- "`decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation."
- )
-
- @staticmethod
- def _expand_inputs_for_generation(
- input_ids: torch.LongTensor,
- expand_size: int = 1,
- is_encoder_decoder: bool = False,
- attention_mask: Optional[torch.LongTensor] = None,
- encoder_outputs: Optional[ModelOutput] = None,
- **model_kwargs,
- ) -> Tuple[torch.LongTensor, Dict[str, Any]]:
- expanded_return_idx = (
- torch.arange(input_ids.shape[0]).view(-1, 1).repeat(1, expand_size).view(-1).to(input_ids.device)
- )
- input_ids = input_ids.index_select(0, expanded_return_idx)
-
- if "token_type_ids" in model_kwargs:
- token_type_ids = model_kwargs["token_type_ids"]
- model_kwargs["token_type_ids"] = token_type_ids.index_select(0, expanded_return_idx)
-
- if attention_mask is not None:
- model_kwargs["attention_mask"] = attention_mask.index_select(0, expanded_return_idx)
-
- if is_encoder_decoder:
- if encoder_outputs is None:
- raise ValueError("If `is_encoder_decoder` is True, make sure that `encoder_outputs` is defined.")
- encoder_outputs["last_hidden_state"] = encoder_outputs.last_hidden_state.index_select(
- 0, expanded_return_idx.to(encoder_outputs.last_hidden_state.device)
- )
- model_kwargs["encoder_outputs"] = encoder_outputs
- return input_ids, model_kwargs
-
- @staticmethod
- def _update_model_kwargs_for_generation(
- outputs: ModelOutput, model_kwargs: Dict[str, Any], is_encoder_decoder: bool = False
- ) -> Dict[str, Any]:
- # update past
- if "past_key_values" in outputs:
- model_kwargs["past"] = outputs.past_key_values
- elif "mems" in outputs:
- model_kwargs["past"] = outputs.mems
- elif "past_buckets_states" in outputs:
- model_kwargs["past"] = outputs.past_buckets_states
- else:
- model_kwargs["past"] = None
-
- # update token_type_ids with last value
- if "token_type_ids" in model_kwargs:
- token_type_ids = model_kwargs["token_type_ids"]
- model_kwargs["token_type_ids"] = torch.cat([token_type_ids, token_type_ids[:, -1].unsqueeze(-1)], dim=-1)
-
- # update attention mask
- if not is_encoder_decoder:
- if "attention_mask" in model_kwargs:
- attention_mask = model_kwargs["attention_mask"]
- model_kwargs["attention_mask"] = torch.cat(
- [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
- )
-
- return model_kwargs
-
- def _reorder_cache(self, past, beam_idx):
- raise NotImplementedError(
- f"Make sure that a `_reorder_cache` function is correctly implemented in {self.__class__.__module__} to enable beam search for {self.__class__}"
- )
-
- def _get_logits_warper(
- self,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- typical_p: Optional[float] = None,
- temperature: Optional[float] = None,
- num_beams: Optional[int] = None,
- renormalize_logits: Optional[bool] = None,
- ) -> LogitsProcessorList:
- """
- This class returns a [`LogitsProcessorList`] list object that contains all relevant [`LogitsWarper`] instances
- used for multinomial sampling.
- """
-
- # init warp parameters
- top_k = top_k if top_k is not None else self.config.top_k
- top_p = top_p if top_p is not None else self.config.top_p
- typical_p = typical_p if typical_p is not None else self.config.typical_p
- temperature = temperature if temperature is not None else self.config.temperature
- # instantiate warpers list
- warpers = LogitsProcessorList()
-
- # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files
- # all samplers can be found in `generation_utils_samplers.py`
- if temperature is not None and temperature != 1.0:
- warpers.append(TemperatureLogitsWarper(temperature))
- if top_k is not None and top_k != 0:
- warpers.append(TopKLogitsWarper(top_k=top_k, min_tokens_to_keep=(2 if num_beams > 1 else 1)))
- if top_p is not None and top_p < 1.0:
- warpers.append(TopPLogitsWarper(top_p=top_p, min_tokens_to_keep=(2 if num_beams > 1 else 1)))
- if typical_p is not None and typical_p < 1.0:
- warpers.append(TypicalLogitsWarper(mass=typical_p, min_tokens_to_keep=(2 if num_beams > 1 else 1)))
- # `LogitNormalization` should always be the last logit processor, when present
- if renormalize_logits is True:
- warpers.append(LogitNormalization())
- return warpers
-
- def _get_logits_processor(
- self,
- repetition_penalty: float,
- no_repeat_ngram_size: int,
- encoder_no_repeat_ngram_size: int,
- input_ids_seq_length: int,
- encoder_input_ids: torch.LongTensor,
- bad_words_ids: List[List[int]],
- min_length: int,
- max_length: int,
- eos_token_id: int,
- forced_bos_token_id: int,
- forced_eos_token_id: int,
- prefix_allowed_tokens_fn: Callable[[int, torch.Tensor], List[int]],
- num_beams: int,
- num_beam_groups: int,
- diversity_penalty: float,
- remove_invalid_values: bool,
- exponential_decay_length_penalty: Tuple,
- logits_processor: Optional[LogitsProcessorList],
- renormalize_logits: Optional[bool],
- ) -> LogitsProcessorList:
- """
- This class returns a [`LogitsProcessorList`] list object that contains all relevant [`LogitsProcessor`]
- instances used to modify the scores of the language model head.
- """
- processors = LogitsProcessorList()
-
- # init warp parameters
- repetition_penalty = repetition_penalty if repetition_penalty is not None else self.config.repetition_penalty
- no_repeat_ngram_size = (
- no_repeat_ngram_size if no_repeat_ngram_size is not None else self.config.no_repeat_ngram_size
- )
- encoder_no_repeat_ngram_size = (
- encoder_no_repeat_ngram_size
- if encoder_no_repeat_ngram_size is not None
- else self.config.encoder_no_repeat_ngram_size
- )
- bad_words_ids = bad_words_ids if bad_words_ids is not None else self.config.bad_words_ids
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- diversity_penalty = diversity_penalty if diversity_penalty is not None else self.config.diversity_penalty
- forced_bos_token_id = (
- forced_bos_token_id if forced_bos_token_id is not None else self.config.forced_bos_token_id
- )
- forced_eos_token_id = (
- forced_eos_token_id if forced_eos_token_id is not None else self.config.forced_eos_token_id
- )
- remove_invalid_values = (
- remove_invalid_values if remove_invalid_values is not None else self.config.remove_invalid_values
- )
- exponential_decay_length_penalty = (
- exponential_decay_length_penalty
- if exponential_decay_length_penalty is not None
- else self.config.exponential_decay_length_penalty
- )
- # instantiate processors list
-
- # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files
- # all samplers can be found in `generation_utils_samplers.py`
- if diversity_penalty is not None and diversity_penalty > 0.0:
- processors.append(
- HammingDiversityLogitsProcessor(
- diversity_penalty=diversity_penalty, num_beams=num_beams, num_beam_groups=num_beam_groups
- )
- )
- if repetition_penalty is not None and repetition_penalty != 1.0:
- processors.append(RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty))
- if no_repeat_ngram_size is not None and no_repeat_ngram_size > 0:
- processors.append(NoRepeatNGramLogitsProcessor(no_repeat_ngram_size))
- if encoder_no_repeat_ngram_size is not None and encoder_no_repeat_ngram_size > 0:
- if self.config.is_encoder_decoder:
- processors.append(EncoderNoRepeatNGramLogitsProcessor(encoder_no_repeat_ngram_size, encoder_input_ids))
- else:
- raise ValueError(
- "It's impossible to use `encoder_no_repeat_ngram_size` with decoder-only architecture"
- )
- if bad_words_ids is not None:
- processors.append(NoBadWordsLogitsProcessor(bad_words_ids, eos_token_id))
- if min_length is not None and eos_token_id is not None and min_length > 0:
- processors.append(MinLengthLogitsProcessor(min_length, eos_token_id))
- if prefix_allowed_tokens_fn is not None:
- processors.append(PrefixConstrainedLogitsProcessor(prefix_allowed_tokens_fn, num_beams // num_beam_groups))
- if forced_bos_token_id is not None:
- processors.append(ForcedBOSTokenLogitsProcessor(forced_bos_token_id))
- if forced_eos_token_id is not None:
- processors.append(ForcedEOSTokenLogitsProcessor(max_length, forced_eos_token_id))
- if remove_invalid_values is True:
- processors.append(InfNanRemoveLogitsProcessor())
- if exponential_decay_length_penalty is not None:
- processors.append(
- ExponentialDecayLengthPenalty(exponential_decay_length_penalty, eos_token_id, input_ids_seq_length)
- )
- processors = self._merge_criteria_processor_list(processors, logits_processor)
- # `LogitNormalization` should always be the last logit processor, when present
- if renormalize_logits is True:
- processors.append(LogitNormalization())
- return processors
-
- def _get_stopping_criteria(
- self, max_length: Optional[int], max_time: Optional[float], stopping_criteria: Optional[StoppingCriteriaList]
- ) -> StoppingCriteriaList:
- criteria = StoppingCriteriaList()
- if max_length is not None:
- criteria.append(MaxLengthCriteria(max_length=max_length))
- if max_time is not None:
- criteria.append(MaxTimeCriteria(max_time=max_time))
- criteria = self._merge_criteria_processor_list(criteria, stopping_criteria)
- return criteria
-
- def _merge_criteria_processor_list(
- self,
- default_list: Union[LogitsProcessorList, StoppingCriteriaList],
- custom_list: Union[LogitsProcessorList, StoppingCriteriaList],
- ) -> Union[LogitsProcessorList, StoppingCriteriaList]:
- if len(custom_list) == 0:
- return default_list
- for default in default_list:
- for custom in custom_list:
- if type(custom) is type(default):
- object_type = "stopping criteria" if isinstance(custom, StoppingCriteria) else "logits processor"
- raise ValueError(
- f"A custom {object_type} of type {type(custom)} with values {custom} has been passed to `generate`, "
- f"but it has already been created with the values {default}. {default} has been created by passing the "
- "corresponding arguments to generate or by the model's config default values. "
- f"If you just want to change the default values of {object_type} consider passing them as arguments "
- f"to `generate` instead of using a custom {object_type}."
- )
- default_list.extend(custom_list)
- return default_list
-
- def compute_transition_beam_scores(
- self,
- sequences: torch.Tensor,
- scores: Tuple[torch.Tensor],
- beam_indices: torch.Tensor,
- eos_token_id: int = None,
- ):
- """compute the transition probabilities of sequences given generation
- scores and beam indices"""
-
- # reshape scores as [batch_size * num_beams * vocab_size, # generation steps]
- # with # generation steps being seq_len - input_length
- scores = torch.stack(scores).reshape(len(scores), -1).transpose(0, 1)
-
- # start of generated tokens
- cut_idx = sequences.shape[-1] - scores.shape[-1]
- # adjust for beam indices
- beam_sequence_indices = torch.tensor(beam_indices, device=sequences.device) * self.config.vocab_size
- # compute real indices
- indices = sequences[:, cut_idx:] + beam_sequence_indices
- # gather scores and run
- transition_scores = scores.gather(0, indices)
- # if an EOS token was generated before the end of the sequence (`sequences.shape[-1]`),
- # find its first occurrence and zero out every transition score after it
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
-
- if eos_token_id is not None:
- is_eos_token_id = sequences[:, cut_idx:] == eos_token_id
- # make sure first eos token still contributes to transition probs
- is_eos_token_id[:, -1] = False
- is_eos_token_id = is_eos_token_id.roll(1, -1)
- # all indices after eos should be masked
- zero_transition_prob_mask = is_eos_token_id.cumsum(-1).bool()
- # zero out padded probs
- transition_scores.masked_fill_(zero_transition_prob_mask, 0.0)
-
- return transition_scores
-
- # ADDED FRED
- def remove_subsets(self, l):
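- """Drop every list in `l` that is a subset of another (different) list in `l`."""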
- #l = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]
- l2 = l[:]
- for m in l:
- for n in l:
- if set(m).issubset(set(n)) and m != n:
- l2.remove(m)
- break
- return l2
-
- # ADDED FRED
- @torch.no_grad()
- def cs_generate(
- self,
- inputs: Optional[torch.Tensor] = None,
- contexts:List[str]=None, #input data
- model_input:Dict=None,
- max_length: Optional[int] = None,
- min_length: Optional[int] = None,
- do_sample: Optional[bool] = None,
- early_stopping: Optional[bool] = None,
- num_beams: Optional[int] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- typical_p: Optional[float] = None,
- repetition_penalty: Optional[float] = None,
- bad_words_ids: Optional[Iterable[int]] = None,
- force_words_ids: Optional[Union[Iterable[int], Iterable[Iterable[int]]]] = None,
- bos_token_id: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- length_penalty: Optional[float] = None,
- no_repeat_ngram_size: Optional[int] = None,
- encoder_no_repeat_ngram_size: Optional[int] = None,
- num_return_sequences: Optional[int] = None,
- max_time: Optional[float] = None,
- max_new_tokens: Optional[int] = None,
- decoder_start_token_id: Optional[int] = None,
- use_cache: Optional[bool] = None,
- num_beam_groups: Optional[int] = None,
- diversity_penalty: Optional[float] = None,
- prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
- logits_processor: Optional[LogitsProcessorList] = LogitsProcessorList(),
- renormalize_logits: Optional[bool] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = StoppingCriteriaList(),
- constraints: Optional[List[Constraint]] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- forced_bos_token_id: Optional[int] = None,
- forced_eos_token_id: Optional[int] = None,
- remove_invalid_values: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- exponential_decay_length_penalty: Optional[Tuple[Union[int, float]]] = None,
- use_kg:bool=False, #added
- relation_mapper_builder=None,
- tokenizer=None,
- max_neig_per_concept=1, #it slows down quite a lot
- **model_kwargs,
- ) -> Union[GreedySearchOutput, SampleOutput, BeamSearchOutput, BeamSampleOutput, torch.LongTensor]:
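- """Knowledge-guided constrained generation (wrapper around `self.generate`).
-
- For every context string, concepts are extracted with `relation_mapper_builder` and their
- knowledge-graph neighbours are looked up; each group of neighbour words (excluding the input
- concepts) is wrapped in a `DisjunctiveConstraint`. All constraints are then handed to
- `self.generate`, steering the outputs to include at least one surface form from each group.
- """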
- # print(model_input)
- input_ids = model_input['input_ids']
- if "input_commonsense_relations" in model_input:
- # print(model_input['input_commonsense_relations'].sum())
- model_kwargs["relation_inputs"] = model_input.get("input_commonsense_relations").to(input_ids.device)
- if use_kg:
- all_constraints = []
- print('contexts:', contexts[:3])
- for context in contexts:
- constraints = []
- print('+++++++')
- concepts_from_context = relation_mapper_builder.get_concepts_from_context(context=context,
- clear_common_wds=True, alignment=1)
- print('concepts_from_context:', concepts_from_context)
- useful_concepts = [relation_mapper_builder.swow_knowledge.get_related_concepts(concept) for concept in
- concepts_from_context]
- if not useful_concepts:
- useful_concepts = [relation_mapper_builder.knowledge.get_related_concepts(concept) for concept in concepts_from_context]
- useful_concepts = [[f' {phrase}' for phrase in concepts] for concepts in useful_concepts] # add a leading space so the tokenizer treats each phrase as a word start
- # useful_concepts = [[phrase for phrase in concepts if len(phrase.split(' ')) == 1] for concepts in useful_concepts]
- # useful_concepts = list(itertools.chain.from_iterable(useful_concepts))
- # print('useful_concepts:', useful_concepts[:5])
- print('-------')
- print('useful_concepts:', useful_concepts)
- if concepts_from_context and useful_concepts:
- for context_concept, neighbour_concepts in zip(concepts_from_context, useful_concepts):
- print('neighbour:', neighbour_concepts[:5])
- # flexible_words = self.most_similar_words(context_concept, neighbour_concepts) # limit the upperbound
- # flexible_words = [word for word in flexible_words if word not in context_concept] # remove input concepts
- flexible_words = [word for word in neighbour_concepts if
- word not in context_concept] # remove input concepts
- print('flexible_words:', flexible_words[:5])
- if not flexible_words:
- continue
- flexible_words_ids: List[List[int]] = tokenizer(flexible_words, add_special_tokens=False).input_ids #add_prefix_space=True,
- flexible_words_ids = self.remove_subsets(flexible_words_ids)
- # add_prefix_space=True
- # flexible_words_ids = [x for x in flexible_words_ids if len(x) == 1] # problem with subsets
- flexible_words_ids = flexible_words_ids[:max_neig_per_concept]
- #print('flexible_words_ids:', flexible_words_ids[:3])
- constraint = DisjunctiveConstraint(flexible_words_ids)
- constraints.append(constraint)
- all_constraints.extend(constraints)
- else:
- all_constraints = None
-
- generated_answers_encoded = self.generate(input_ids=input_ids,
- #attention_mask=model_input["attention_mask"].to(input_ids.device),
- constraints=all_constraints,
- min_length=min_length,
- #max_length=max_length,
- do_sample=do_sample,
- early_stopping=early_stopping,
- #num_beams=num_beams,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- # eos_token_id=tokenizer.eos_token_id,
- no_repeat_ngram_size=no_repeat_ngram_size,
- num_return_sequences=num_return_sequences,
- return_dict_in_generate=return_dict_in_generate,
- output_attentions=output_attentions,
- output_scores=output_scores,
- **model_kwargs,
- )
- return generated_answers_encoded
-
- # ADDED FRED
- @torch.no_grad()
- def cs_simple_generate(
- self,
- inputs: Optional[torch.Tensor] = None,
- neighbours_contexts:List[List[str]]=None, #input data
- model_input:Dict=None,
- max_length: Optional[int] = None,
- min_length: Optional[int] = None,
- do_sample: Optional[bool] = None,
- early_stopping: Optional[bool] = None,
- num_beams: Optional[int] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- typical_p: Optional[float] = None,
- repetition_penalty: Optional[float] = None,
- bad_words_ids: Optional[Iterable[int]] = None,
- force_words_ids: Optional[Union[Iterable[int], Iterable[Iterable[int]]]] = None,
- bos_token_id: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- length_penalty: Optional[float] = None,
- no_repeat_ngram_size: Optional[int] = None,
- encoder_no_repeat_ngram_size: Optional[int] = None,
- num_return_sequences: Optional[int] = None,
- max_time: Optional[float] = None,
- max_new_tokens: Optional[int] = None,
- decoder_start_token_id: Optional[int] = None,
- use_cache: Optional[bool] = None,
- num_beam_groups: Optional[int] = None,
- diversity_penalty: Optional[float] = None,
- prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
- logits_processor: Optional[LogitsProcessorList] = LogitsProcessorList(),
- renormalize_logits: Optional[bool] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = StoppingCriteriaList(),
- constraints: Optional[List[Constraint]] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- forced_bos_token_id: Optional[int] = None,
- forced_eos_token_id: Optional[int] = None,
- remove_invalid_values: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- exponential_decay_length_penalty: Optional[Tuple[Union[int, float]]] = None,
- use_kg:bool=False, #added
- relation_mapper_builder=None,
- tokenizer=None,
- max_concepts=2, #it slows down quite a lot
- **model_kwargs,
- ) -> Union[GreedySearchOutput, SampleOutput, BeamSearchOutput, BeamSampleOutput, torch.LongTensor]:
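- """Constrained generation from pre-computed neighbour concepts, one example at a time.
-
- Each example's neighbour concepts are split into at most `max_concepts` chunks; every chunk
- becomes a `DisjunctiveConstraint` built from the first sub-token of each concept. `generate`
- is then called separately per example with its own constraints, and the resulting sequences
- are padded into a single batch tensor.
- """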
- # print(model_input)
- input_ids = model_input['input_ids']
- if use_kg:
- all_constraints = []
- for context_neighbours in neighbours_contexts:
- # context_neighbours is a collection of concepts
- # let's create sub-collections of concepts
- context_neighbours = [f' {concept}' for concept in context_neighbours if len(concept) > 3]
- n_size_chunks = len(context_neighbours) // max_concepts
- n_size_chunks = n_size_chunks if n_size_chunks > 0 else 1
- sub_concepts_collection = list(get_jump_chunks(context_neighbours, jump=n_size_chunks))
- constraints = []
- for sub_concepts in sub_concepts_collection[:max_concepts]:
- flexible_words_ids: List[List[int]] = tokenizer(sub_concepts, add_special_tokens=False).input_ids #add_prefix_space=True,
- #flexible_words_ids = self.remove_subsets(flexible_words_ids)
- flexible_words_ids = [[word_ids[0]] for word_ids in flexible_words_ids]
- disjunctive_set = list(map(list, set(map(frozenset, flexible_words_ids))))
-
- # add_prefix_space=True
- # flexible_words_ids = [x for x in flexible_words_ids if len(x) == 1] # problem with subsets
- #flexible_words_ids = flexible_words_ids[:max_neig_per_concept]
- #print('flexible_words_ids:', flexible_words_ids[:3])
- if not any(disjunctive_set):
- continue
- constraint = DisjunctiveConstraint(disjunctive_set)
- constraints.append(constraint)
- if not any(constraints):
- constraints=None
- all_constraints.append(constraints)
- else:
- all_constraints = None
- if not all_constraints:
- # fall back to unconstrained generation for every example in the batch
- all_constraints = [None] * input_ids.shape[0]
-
- generated_answers_encoded = []
- #print('all_constraints:', all_constraints)
- for i, example_constraints in enumerate(all_constraints):
- #print('example_constraints.token_ids:', [x.token_ids for x in example_constraints])
- if "input_commonsense_relations" in model_input:
- # print(model_input['input_commonsense_relations'].sum())
- model_kwargs["relation_inputs"] = model_input.get("input_commonsense_relations")[i].unsqueeze(0).to(input_ids.device)
- #print('model_kwargs.get("attention_mask"):', model_kwargs.get("attention_mask"))
- model_kwargs["attention_mask"] = model_input.get("attention_mask")[i].unsqueeze(0).to(input_ids.device)
- gen = self.generate(input_ids=input_ids[i].unsqueeze(0),
- constraints=example_constraints,
- min_length=min_length,
- #max_length=max_length,
- do_sample=do_sample,
- early_stopping=early_stopping,
- #num_beams=num_beams,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- # eos_token_id=tokenizer.eos_token_id,
- no_repeat_ngram_size=no_repeat_ngram_size,
- num_return_sequences=num_return_sequences,
- return_dict_in_generate=return_dict_in_generate,
- output_attentions=output_attentions,
- output_scores=output_scores,
- **model_kwargs)
- #print('[gen]:', gen)
- #print(tokenizer.batch_decode(gen))
- generated_answers_encoded.append(gen[0].detach().cpu())
- #torch.LongTensor(generated_answers_encoded)
- #print('generated_answers_encoded:', generated_answers_encoded)
- return pad_sequence(generated_answers_encoded, batch_first=True, padding_value=tokenizer.pad_token_id).to(input_ids.device)
-
- @torch.no_grad()
- def generate(
- self,
- inputs: Optional[torch.Tensor] = None,
- max_length: Optional[int] = None,
- min_length: Optional[int] = None,
- do_sample: Optional[bool] = None,
- early_stopping: Optional[bool] = None,
- num_beams: Optional[int] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- typical_p: Optional[float] = None,
- repetition_penalty: Optional[float] = None,
- bad_words_ids: Optional[Iterable[int]] = None,
- force_words_ids: Optional[Union[Iterable[int], Iterable[Iterable[int]]]] = None,
- bos_token_id: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- length_penalty: Optional[float] = None,
- no_repeat_ngram_size: Optional[int] = None,
- encoder_no_repeat_ngram_size: Optional[int] = None,
- num_return_sequences: Optional[int] = None,
- max_time: Optional[float] = None,
- max_new_tokens: Optional[int] = None,
- decoder_start_token_id: Optional[int] = None,
- use_cache: Optional[bool] = None,
- num_beam_groups: Optional[int] = None,
- diversity_penalty: Optional[float] = None,
- prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
- logits_processor: Optional[LogitsProcessorList] = LogitsProcessorList(),
- renormalize_logits: Optional[bool] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = StoppingCriteriaList(),
- constraints: Optional[List[Constraint]] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- forced_bos_token_id: Optional[int] = None,
- forced_eos_token_id: Optional[int] = None,
- remove_invalid_values: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- exponential_decay_length_penalty: Optional[Tuple[Union[int, float]]] = None,
- **model_kwargs,
- ) -> Union[GreedySearchOutput, SampleOutput, BeamSearchOutput, BeamSampleOutput, torch.LongTensor]:
- r"""
-
- Generates sequences of token ids for models with a language modeling head. The method supports the following
- generation methods for text-decoder, text-to-text, speech-to-text, and vision-to-text models:
-
- - *greedy decoding* by calling [`~generation_utils.GenerationMixin.greedy_search`] if `num_beams=1` and
- `do_sample=False`.
- - *multinomial sampling* by calling [`~generation_utils.GenerationMixin.sample`] if `num_beams=1` and
- `do_sample=True`.
- - *beam-search decoding* by calling [`~generation_utils.GenerationMixin.beam_search`] if `num_beams>1` and
- `do_sample=False`.
- - *beam-search multinomial sampling* by calling [`~generation_utils.GenerationMixin.beam_sample`] if
- `num_beams>1` and `do_sample=True`.
- - *diverse beam-search decoding* by calling [`~generation_utils.GenerationMixin.group_beam_search`], if
- `num_beams>1` and `num_beam_groups>1`.
- - *constrained beam-search decoding* by calling
- [`~generation_utils.GenerationMixin.constrained_beam_search`], if `constraints!=None` or
- `force_words_ids!=None`.
-
-
-
- Apart from `inputs`, all the arguments below will default to the value of the attribute of the same name as
- defined in the model's config (`config.json`) which in turn defaults to the
- [`~modeling_utils.PretrainedConfig`] of the model.
-
-
-
- Most of these parameters are explained in more detail in [this blog
- post](https://huggingface.co/blog/how-to-generate).
-
- Parameters:
- inputs (`torch.Tensor` of varying shape depending on the modality, *optional*):
- The sequence used as a prompt for the generation or as model inputs to the encoder. If `None` the
- method initializes it with `bos_token_id` and a batch size of 1. For decoder-only models `inputs`
- should be in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of
- `input_ids`, `input_values`, `input_features`, or `pixel_values`.
- max_length (`int`, *optional*, defaults to `model.config.max_length`):
- The maximum length of the sequence to be generated.
- max_new_tokens (`int`, *optional*, defaults to None):
- The maximum number of tokens to generate, ignoring the number of tokens in the prompt. Use either
- `max_new_tokens` or `max_length`, but not both; they serve the same purpose.
- min_length (`int`, *optional*, defaults to 10):
- The minimum length of the sequence to be generated.
- do_sample (`bool`, *optional*, defaults to `False`):
- Whether or not to use sampling; use greedy decoding otherwise.
- early_stopping (`bool`, *optional*, defaults to `False`):
- Whether to stop the beam search when at least `num_beams` sentences are finished per batch or not.
- num_beams (`int`, *optional*, defaults to 1):
- Number of beams for beam search. 1 means no beam search.
- temperature (`float`, *optional*, defaults to 1.0):
- The value used to module the next token probabilities.
- top_k (`int`, *optional*, defaults to 50):
- The number of highest probability vocabulary tokens to keep for top-k-filtering.
- top_p (`float`, *optional*, defaults to 1.0):
- If set to float < 1, only the most probable tokens with probabilities that add up to `top_p` or higher
- are kept for generation.
- repetition_penalty (`float`, *optional*, defaults to 1.0):
- The parameter for repetition penalty. 1.0 means no penalty. See [this
- paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- bos_token_id (`int`, *optional*):
- The id of the *beginning-of-sequence* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- length_penalty (`float`, *optional*, defaults to 1.0):
- Exponential penalty to the length that is used with beam-based generation. It is applied as an
- exponent to the sequence length, which in turn is used to divide the score of the sequence. Since
- the score is the log likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes
- longer sequences, while `length_penalty` < 0.0 encourages shorter sequences.
- no_repeat_ngram_size (`int`, *optional*, defaults to 0):
- If set to int > 0, all ngrams of that size can only occur once.
- encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
- If set to int > 0, all ngrams of that size that occur in the `encoder_input_ids` cannot occur in the
- `decoder_input_ids`.
- bad_words_ids(`List[List[int]]`, *optional*):
- List of token ids that are not allowed to be generated. In order to get the token ids of the words that
- should not appear in the generated text, use `tokenizer(bad_words, add_prefix_space=True,
- add_special_tokens=False).input_ids`.
- force_words_ids(`List[List[int]]` or `List[List[List[int]]]`, *optional*):
- List of token ids that must be generated. If given a `List[List[int]]`, this is treated as a simple
- list of words that must be included, the opposite to `bad_words_ids`. If given `List[List[List[int]]]`,
- this triggers a [disjunctive constraint](https://github.com/huggingface/transformers/issues/14081),
- where one can allow different forms of each word.
- num_return_sequences(`int`, *optional*, defaults to 1):
- The number of independently computed returned sequences for each element in the batch.
- max_time(`float`, *optional*, defaults to None):
- The maximum amount of time you allow the computation to run for in seconds. Generation will still
- finish the current pass after the allocated time has passed.
- attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values are in `[0, 1]`, 1 for tokens
- that are not masked, and 0 for masked tokens. If not provided, will default to a tensor the same shape
- as `input_ids` that masks the pad token. [What are attention masks?](../glossary#attention-mask)
- decoder_start_token_id (`int`, *optional*):
- If an encoder-decoder model starts decoding with a different token than *bos*, the id of that token.
- use_cache: (`bool`, *optional*, defaults to `True`):
- Whether or not the model should use the past last key/values attentions (if applicable to the model) to
- speed up decoding.
- num_beam_groups (`int`, *optional*, defaults to 1):
- Number of groups to divide `num_beams` into in order to ensure diversity among different groups of
- beams. See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details.
- diversity_penalty (`float`, *optional*, defaults to 0.0):
- This value is subtracted from a beam's score if it generates the same token as any beam from another
- group at a particular time step. Note that `diversity_penalty` is only effective if `group beam search` is
- enabled.
- prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], List[int]]`, *optional*):
- If provided, this function constrains the beam search to allowed tokens only at each step. If not
- provided no constraint is applied. This function takes 2 arguments: the batch ID `batch_id` and
- `input_ids`. It has to return a list with the allowed tokens for the next generation step conditioned
- on the batch ID `batch_id` and the previously generated tokens `input_ids`. This argument is useful
- for constrained generation conditioned on the prefix, as described in [Autoregressive Entity
- Retrieval](https://arxiv.org/abs/2010.00904).
- logits_processor (`LogitsProcessorList`, *optional*):
- Custom logits processors that complement the default logits processors built from arguments and a
- model's config. If a logit processor is passed that is already created with the arguments or a model's
- config an error is thrown. This feature is intended for advanced users.
- renormalize_logits: (`bool`, *optional*, defaults to `False`):
- Whether to renormalize the logits after applying all the logits processors or warpers (including the
- custom ones). It's highly recommended to set this flag to `True` as the search algorithms suppose the
- score logits are normalized but some logit processors or warpers break the normalization.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- Custom stopping criteria that complement the default stopping criteria built from arguments and a
- model's config. If a stopping criteria is passed that is already created with the arguments or a
- model's config an error is thrown. This feature is intended for advanced users.
- constraints (`List[Constraint]`, *optional*):
- Custom constraints that can be added to the generation to ensure that the output will contain the use
- of certain tokens as defined by `Constraint` objects, in the most sensible way possible.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- forced_bos_token_id (`int`, *optional*):
- The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful
- for multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be
- the target language token.
- forced_eos_token_id (`int`, *optional*):
- The id of the token to force as the last generated token when `max_length` is reached.
- remove_invalid_values (`bool`, *optional*):
- Whether to remove possible *nan* and *inf* outputs of the model to prevent the generation method
- from crashing. Note that using `remove_invalid_values` can slow down generation.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- exponential_decay_length_penalty (`tuple(int, float)`, *optional*):
- This Tuple adds an exponentially increasing length penalty, after a certain amount of tokens have been
- generated. The tuple shall consist of: `(start_index, decay_factor)` where `start_index` indicates
- where penalty starts and `decay_factor` represents the factor of exponential decay
-
- model_kwargs:
- Additional model specific kwargs will be forwarded to the `forward` function of the model. If the model
- is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs
- should be prefixed with *decoder_*.
-
- Return:
- [`~utils.ModelOutput`] or `torch.LongTensor`: A [`~utils.ModelOutput`] (if `return_dict_in_generate=True`
- or when `config.return_dict_in_generate=True`) or a `torch.LongTensor`.
-
- If the model is *not* an encoder-decoder model (`model.config.is_encoder_decoder=False`), the possible
- [`~utils.ModelOutput`] types are:
-
- - [`~generation_utils.GreedySearchDecoderOnlyOutput`],
- - [`~generation_utils.SampleDecoderOnlyOutput`],
- - [`~generation_utils.BeamSearchDecoderOnlyOutput`],
- - [`~generation_utils.BeamSampleDecoderOnlyOutput`]
-
- If the model is an encoder-decoder model (`model.config.is_encoder_decoder=True`), the possible
- [`~utils.ModelOutput`] types are:
-
- - [`~generation_utils.GreedySearchEncoderDecoderOutput`],
- - [`~generation_utils.SampleEncoderDecoderOutput`],
- - [`~generation_utils.BeamSearchEncoderDecoderOutput`],
- - [`~generation_utils.BeamSampleEncoderDecoderOutput`]
-
- Examples:
-
- Greedy Decoding:
-
- ```python
- >>> from transformers import AutoTokenizer, AutoModelForCausalLM
-
- >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
- >>> model = AutoModelForCausalLM.from_pretrained("gpt2")
-
- >>> prompt = "Today I believe we can finally"
- >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
-
- >>> # generate up to 30 tokens
- >>> outputs = model.generate(input_ids, do_sample=False, max_length=30)
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Today I believe we can finally get to the point where we can make a difference in the lives of the people of the United States of America.\n']
- ```
-
- Multinomial Sampling:
-
- ```python
- >>> from transformers import AutoTokenizer, AutoModelForCausalLM
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
- >>> model = AutoModelForCausalLM.from_pretrained("gpt2")
-
- >>> prompt = "Today I believe we can finally"
- >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
-
- >>> # sample up to 30 tokens
- >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
- >>> outputs = model.generate(input_ids, do_sample=True, max_length=30)
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Today I believe we can finally get rid of discrimination," said Rep. Mark Pocan (D-Wis.).\n\n"Just look at the']
- ```
-
- Beam-search decoding:
-
- ```python
- >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
- >>> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
- >>> model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
-
- >>> sentence = "Paris is one of the densest populated areas in Europe."
- >>> input_ids = tokenizer(sentence, return_tensors="pt").input_ids
-
- >>> outputs = model.generate(input_ids)
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Paris ist eines der dichtesten besiedelten Gebiete Europas.']
- ```"""
- # 1. Set generation parameters if not already defined
- bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id
- num_beams = num_beams if num_beams is not None else self.config.num_beams
- length_penalty = length_penalty if length_penalty is not None else self.config.length_penalty
- early_stopping = early_stopping if early_stopping is not None else self.config.early_stopping
- num_beam_groups = num_beam_groups if num_beam_groups is not None else self.config.num_beam_groups
- do_sample = do_sample if do_sample is not None else self.config.do_sample
- num_return_sequences = (
- num_return_sequences if num_return_sequences is not None else self.config.num_return_sequences
- )
-
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
-
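- # some encoder-decoder models keep their special token ids on the nested decoder config; fall back to it for the EOS id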
- if eos_token_id is None and hasattr(self.config, "decoder"):
- eos_token_id = self.config.decoder.eos_token_id
-
- if pad_token_id is None and eos_token_id is not None:
- # special case if pad_token_id is not defined
- print(f"Setting `pad_token_id` to `eos_token_id`:{eos_token_id} for open-end generation.")
- pad_token_id = eos_token_id
-
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- # 2. Define model inputs
- # inputs_tensor has to be defined
- # model_input_name is defined if model-specific keyword input is passed
- # otherwise model_input_name is None
- # all model-specific keyword inputs are removed from `model_kwargs`
- inputs_tensor, model_input_name, model_kwargs = self._prepare_model_inputs(inputs, bos_token_id, model_kwargs)
- batch_size = inputs_tensor.shape[0]
-
- # 3. Define other model kwargs
- model_kwargs["output_attentions"] = output_attentions
- model_kwargs["output_hidden_states"] = output_hidden_states
- model_kwargs["use_cache"] = use_cache
-
- accepts_attention_mask = "attention_mask" in set(inspect.signature(self.forward).parameters.keys())
- requires_attention_mask = "encoder_outputs" not in model_kwargs
-
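- # only build a default attention mask when the model's `forward` accepts one and no precomputed `encoder_outputs` were passed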
- if model_kwargs.get("attention_mask", None) is None and requires_attention_mask and accepts_attention_mask:
- model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
- inputs_tensor, pad_token_id, eos_token_id
- )
-
- if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
- # if model is encoder decoder encoder_outputs are created
- # and added to `model_kwargs`
- model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
- inputs_tensor, model_kwargs, model_input_name
- )
-
- # 4. Prepare `input_ids` which will be used for auto-regressive generation
- if self.config.is_encoder_decoder:
- input_ids = self._prepare_decoder_input_ids_for_generation(
- batch_size,
- decoder_start_token_id=decoder_start_token_id,
- bos_token_id=bos_token_id,
- model_kwargs=model_kwargs,
- device=inputs_tensor.device,
- )
- else:
- # if decoder-only then inputs_tensor has to be `input_ids`
- input_ids = inputs_tensor
-
- input_ids_seq_length = input_ids.shape[-1]
-
- # 5. Prepare `max_length` depending on other stopping criteria
- # if `max_new_tokens` is passed, but not `max_length` -> set `max_length = max_new_tokens`
- if max_length is None and max_new_tokens is not None:
- max_length = max_new_tokens + input_ids_seq_length
- elif max_length is not None and max_new_tokens is not None:
- # Both are set, this is odd, raise a warning
- warnings.warn(
- "Both `max_length` and `max_new_tokens` have been set "
- f"but they serve the same purpose. `max_length` {max_length} "
- f"will take priority over `max_new_tokens` {max_new_tokens}.",
- UserWarning,
- )
- # default to config if still None
- max_length = max_length if max_length is not None else self.config.max_length
- min_length = min_length if min_length is not None else self.config.min_length
-
- if min_length is not None and min_length > max_length:
- raise ValueError(
- f"Unfeasible length constraints: the minimum length ({min_length}) is larger than the maximum "
- f"length ({max_length})"
- )
- if input_ids_seq_length >= max_length:
- input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
- print(
- f"Input length of {input_ids_string} is {input_ids_seq_length}, but ``max_length`` is set to {max_length}. "
- "This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``."
- )
-
- # 6. determine generation mode
- is_constraint_gen_mode = constraints is not None or force_words_ids is not None
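- # constrained generation takes precedence: every other mode below additionally requires `not is_constraint_gen_mode`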
- is_greedy_gen_mode = (
- (num_beams == 1) and (num_beam_groups == 1) and do_sample is False and not is_constraint_gen_mode
- )
- is_sample_gen_mode = (
- (num_beams == 1) and (num_beam_groups == 1) and do_sample is True and not is_constraint_gen_mode
- )
- is_beam_gen_mode = (
- (num_beams > 1) and (num_beam_groups == 1) and do_sample is False and not is_constraint_gen_mode
- )
- is_beam_sample_gen_mode = (
- (num_beams > 1) and (num_beam_groups == 1) and do_sample is True and not is_constraint_gen_mode
- )
- is_group_beam_gen_mode = (num_beams > 1) and (num_beam_groups > 1) and not is_constraint_gen_mode
-
- if num_beam_groups > num_beams:
- raise ValueError("`num_beam_groups` has to be smaller or equal to `num_beams`")
- if is_group_beam_gen_mode and do_sample is True:
- raise ValueError(
- "Diverse beam search cannot be used in sampling mode. Make sure that `do_sample` is set to `False`."
- )
-
- # 7. prepare distribution pre_processing samplers
- logits_processor = self._get_logits_processor(
- repetition_penalty=repetition_penalty,
- no_repeat_ngram_size=no_repeat_ngram_size,
- encoder_no_repeat_ngram_size=encoder_no_repeat_ngram_size,
- input_ids_seq_length=input_ids_seq_length,
- encoder_input_ids=inputs_tensor,
- bad_words_ids=bad_words_ids,
- min_length=min_length,
- max_length=max_length,
- eos_token_id=eos_token_id,
- forced_bos_token_id=forced_bos_token_id,
- forced_eos_token_id=forced_eos_token_id,
- prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
- num_beams=num_beams,
- num_beam_groups=num_beam_groups,
- diversity_penalty=diversity_penalty,
- remove_invalid_values=remove_invalid_values,
- exponential_decay_length_penalty=exponential_decay_length_penalty,
- logits_processor=logits_processor,
- renormalize_logits=renormalize_logits,
- )
-
- # 8. prepare stopping criteria
- stopping_criteria = self._get_stopping_criteria(
- max_length=max_length, max_time=max_time, stopping_criteria=stopping_criteria
- )
-
- # 9. go into different generation modes
- if is_greedy_gen_mode:
- if num_return_sequences > 1:
- raise ValueError(
- f"num_return_sequences has to be 1, but is {num_return_sequences} when doing greedy search."
- )
-
- # 10. run greedy search
- return self.greedy_search(
- input_ids,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- output_scores=output_scores,
- return_dict_in_generate=return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_sample_gen_mode:
- # 10. prepare logits warper
- logits_warper = self._get_logits_warper(
- top_k=top_k,
- top_p=top_p,
- typical_p=typical_p,
- temperature=temperature,
- num_beams=num_beams,
- renormalize_logits=renormalize_logits,
- )
-
- # 11. expand input_ids with `num_return_sequences` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids,
- expand_size=num_return_sequences,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
-
- # 12. run sample
- return self.sample(
- input_ids,
- logits_processor=logits_processor,
- logits_warper=logits_warper,
- stopping_criteria=stopping_criteria,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- output_scores=output_scores,
- return_dict_in_generate=return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_beam_gen_mode:
- if num_return_sequences > num_beams:
- raise ValueError("`num_return_sequences` has to be smaller or equal to `num_beams`.")
-
- if stopping_criteria.max_length is None:
- raise ValueError("`max_length` needs to be a stopping_criteria for now.")
-
- # 10. prepare beam search scorer
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size,
- num_beams=num_beams,
- device=inputs_tensor.device,
- length_penalty=length_penalty,
- do_early_stopping=early_stopping,
- num_beam_hyps_to_keep=num_return_sequences,
- )
- # 11. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs
- )
- # 12. run beam search
- return self.beam_search(
- input_ids,
- beam_scorer,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- output_scores=output_scores,
- return_dict_in_generate=return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_beam_sample_gen_mode:
- # 10. prepare logits warper
- logits_warper = self._get_logits_warper(
- top_k=top_k,
- top_p=top_p,
- typical_p=typical_p,
- temperature=temperature,
- num_beams=num_beams,
- renormalize_logits=renormalize_logits,
- )
-
- if stopping_criteria.max_length is None:
- raise ValueError("`max_length` needs to be a stopping_criteria for now.")
- # 11. prepare beam search scorer
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size * num_return_sequences,
- num_beams=num_beams,
- device=inputs_tensor.device,
- length_penalty=length_penalty,
- do_early_stopping=early_stopping,
- )
-
- # 12. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids,
- expand_size=num_beams * num_return_sequences,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
-
- # 13. run beam sample
- return self.beam_sample(
- input_ids,
- beam_scorer,
- logits_processor=logits_processor,
- logits_warper=logits_warper,
- stopping_criteria=stopping_criteria,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- output_scores=output_scores,
- return_dict_in_generate=return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_group_beam_gen_mode:
- if num_return_sequences > num_beams:
- raise ValueError("`num_return_sequences` has to be smaller or equal to `num_beams`.")
-
- if num_beams % num_beam_groups != 0:
- raise ValueError("`num_beams` should be divisible by `num_beam_groups` for group beam search.")
-
- if stopping_criteria.max_length is None:
- raise ValueError("`max_length` needs to be a stopping_criteria for now.")
-
- # 10. prepare beam search scorer
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size,
- num_beams=num_beams,
- max_length=stopping_criteria.max_length,
- device=inputs_tensor.device,
- length_penalty=length_penalty,
- do_early_stopping=early_stopping,
- num_beam_hyps_to_keep=num_return_sequences,
- num_beam_groups=num_beam_groups,
- )
- # 11. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs
- )
- # 12. run beam search
- return self.group_beam_search(
- input_ids,
- beam_scorer,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- output_scores=output_scores,
- return_dict_in_generate=return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- elif is_constraint_gen_mode:
- if num_return_sequences > num_beams:
- raise ValueError("`num_return_sequences` has to be smaller or equal to `num_beams`.")
-
- if stopping_criteria.max_length is None:
- raise ValueError("`max_length` needs to be a stopping_criteria for now.")
-
- if num_beams <= 1:
- raise ValueError("`num_beams` needs to be greater than 1 for constrained generation.")
-
- if do_sample:
- raise ValueError("`do_sample` needs to be false for constrained generation.")
-
- if num_beam_groups is not None and num_beam_groups > 1:
- raise ValueError("`num_beam_groups` not supported yet for constrained generation.")
-
- final_constraints = []
- if constraints is not None:
- final_constraints = constraints
-
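- # turn `force_words_ids` into constraint objects: nested lists become disjunctive constraints, flat lists phrasal constraints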
- if force_words_ids is not None:
-
- def typeerror():
- raise ValueError(
- "`force_words_ids` has to either be a `List[List[List[int]]]` or `List[List[int]]` "
- f"of positive integers, but is {force_words_ids}."
- )
-
- if not isinstance(force_words_ids, list) or len(force_words_ids) == 0:
- typeerror()
-
- for word_ids in force_words_ids:
- if isinstance(word_ids[0], list):
- if not isinstance(word_ids, list) or len(word_ids) == 0:
- typeerror()
- if any(not isinstance(token_ids, list) for token_ids in word_ids):
- typeerror()
- if any(
- any((not isinstance(token_id, int) or token_id < 0) for token_id in token_ids)
- for token_ids in word_ids
- ):
- typeerror()
-
- constraint = DisjunctiveConstraint(word_ids)
- else:
- if not isinstance(word_ids, list) or len(word_ids) == 0:
- typeerror()
- if any((not isinstance(token_id, int) or token_id < 0) for token_id in word_ids):
- typeerror()
-
- constraint = PhrasalConstraint(word_ids)
- final_constraints.append(constraint)
-
- # 10. prepare beam search scorer
- constrained_beam_scorer = ConstrainedBeamSearchScorer(
- constraints=final_constraints,
- batch_size=batch_size,
- num_beams=num_beams,
- device=inputs_tensor.device,
- length_penalty=length_penalty,
- do_early_stopping=early_stopping,
- num_beam_hyps_to_keep=num_return_sequences,
- )
- # 11. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs
- )
- # 12. run beam search
- return self.constrained_beam_search(
- input_ids,
- constrained_beam_scorer=constrained_beam_scorer,
- logits_processor=logits_processor,
- stopping_criteria=stopping_criteria,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- output_scores=output_scores,
- return_dict_in_generate=return_dict_in_generate,
- synced_gpus=synced_gpus,
- **model_kwargs,
- )
-
- def greedy_search(
- self,
- input_ids: torch.LongTensor,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- max_length: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- **model_kwargs,
- ) -> Union[GreedySearchOutput, torch.LongTensor]:
- r"""
- Generates sequences of token ids for models with a language modeling head using **greedy decoding** and can be
- used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
-
- Parameters:
-
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- The sequence used as a prompt for the generation.
- logits_processor (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
- used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`]
- used to tell if the generation loop should stop.
-
- max_length (`int`, *optional*, defaults to 20):
- **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated
- tokens. The maximum length of the sequence to be generated.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- model_kwargs:
- Additional model specific keyword arguments will be forwarded to the `forward` function of the model.
- If model is an encoder-decoder model the kwargs should include `encoder_outputs`.
-
- Return:
- [`~generation_utils.GreedySearchDecoderOnlyOutput`], [`~generation_utils.GreedySearchEncoderDecoderOutput`]
- or `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
- [`~generation_utils.GreedySearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
- `return_dict_in_generate=True` or a [`~generation_utils.GreedySearchEncoderDecoderOutput`] if
- `model.config.is_encoder_decoder=True`.
-
- Examples:
-
- ```python
- >>> from transformers import (
- ... AutoTokenizer,
- ... AutoModelForCausalLM,
- ... LogitsProcessorList,
- ... MinLengthLogitsProcessor,
- ... StoppingCriteriaList,
- ... MaxLengthCriteria,
- ... )
-
- >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
- >>> model = AutoModelForCausalLM.from_pretrained("gpt2")
-
- >>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
- >>> model.config.pad_token_id = model.config.eos_token_id
-
- >>> input_prompt = "It might be possible to"
- >>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
-
- >>> # instantiate logits processors
- >>> logits_processor = LogitsProcessorList(
- ... [
- ... MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id),
- ... ]
- ... )
- >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
-
- >>> outputs = model.greedy_search(
- ... input_ids, logits_processor=logits_processor, stopping_criteria=stopping_criteria
- ... )
-
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ["It might be possible to get a better understanding of the nature of the problem, but it's not"]
- ```"""
- # init values
- logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
- stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
- if max_length is not None:
- warnings.warn(
- "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=max_length)])` instead.",
- UserWarning,
- )
- stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length)
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- decoder_attentions = () if (return_dict_in_generate and output_attentions) else None
- cross_attentions = () if (return_dict_in_generate and output_attentions) else None
- decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None
-
- # if model is an encoder-decoder, retrieve encoder attention weights and hidden states
- if return_dict_in_generate and self.config.is_encoder_decoder:
- encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None
- encoder_hidden_states = (
- model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None
- )
-
- # keep track of which sequences are already finished
- unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
- cur_len = input_ids.shape[-1]
-
- this_peer_finished = False # used by synced_gpus only
- while True:
-
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- # prepare model inputs
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-
- # forward pass to get next token
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (next_token_logits,)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
- )
- if self.config.is_encoder_decoder:
- cross_attentions += (outputs.cross_attentions,)
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- # pre-process distribution
- next_tokens_scores = logits_processor(input_ids, next_token_logits)
-
- # argmax
- next_tokens = torch.argmax(next_tokens_scores, dim=-1)
-
- # finished sentences should have their next token be a padding token
- if eos_token_id is not None:
- if pad_token_id is None:
- raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.")
- next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)
-
- # update generated ids, model inputs, and length for next step
- input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- cur_len = cur_len + 1
-
- # if eos_token was found in one sentence, set sentence to finished
- if eos_token_id is not None:
- unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long())
-
- # stop when each sentence is finished, or if we exceed the maximum length
- if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- if return_dict_in_generate:
- if self.config.is_encoder_decoder:
- return GreedySearchEncoderDecoderOutput(
- sequences=input_ids,
- scores=scores,
- encoder_attentions=encoder_attentions,
- encoder_hidden_states=encoder_hidden_states,
- decoder_attentions=decoder_attentions,
- cross_attentions=cross_attentions,
- decoder_hidden_states=decoder_hidden_states,
- )
- else:
- return GreedySearchDecoderOnlyOutput(
- sequences=input_ids,
- scores=scores,
- attentions=decoder_attentions,
- hidden_states=decoder_hidden_states,
- )
- else:
- return input_ids
-
- def sample(
- self,
- input_ids: torch.LongTensor,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- logits_warper: Optional[LogitsProcessorList] = None,
- max_length: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- **model_kwargs,
- ) -> Union[SampleOutput, torch.LongTensor]:
- r"""
- Generates sequences of token ids for models with a language modeling head using **multinomial sampling** and
- can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
-
- Parameters:
-
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- The sequence used as a prompt for the generation.
- logits_processor (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
- used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`]
- used to tell if the generation loop should stop.
- logits_warper (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsWarper`] used
- to warp the prediction score distribution of the language modeling head applied before multinomial
- sampling at each generation step.
- max_length (`int`, *optional*, defaults to 20):
- **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated
- tokens. The maximum length of the sequence to be generated.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- model_kwargs:
- Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is
- an encoder-decoder model the kwargs should include `encoder_outputs`.
-
- Return:
- [`~generation_utils.SampleDecoderOnlyOutput`], [`~generation_utils.SampleEncoderDecoderOutput`] or
- `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
- [`~generation_utils.SampleDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
- `return_dict_in_generate=True` or a [`~generation_utils.SampleEncoderDecoderOutput`] if
- `model.config.is_encoder_decoder=True`.
-
- Examples:
-
- ```python
- >>> from transformers import (
- ... AutoTokenizer,
- ... AutoModelForCausalLM,
- ... LogitsProcessorList,
- ... MinLengthLogitsProcessor,
- ... TopKLogitsWarper,
- ... TemperatureLogitsWarper,
- ... StoppingCriteriaList,
- ... MaxLengthCriteria,
- ... )
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
- >>> model = AutoModelForCausalLM.from_pretrained("gpt2")
-
- >>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
- >>> model.config.pad_token_id = model.config.eos_token_id
-
- >>> input_prompt = "Today is a beautiful day, and"
- >>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
-
- >>> # instantiate logits processors
- >>> logits_processor = LogitsProcessorList(
- ... [
- ... MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
- ... ]
- ... )
- >>> # instantiate logits warpers
- >>> logits_warper = LogitsProcessorList(
- ... [
- ... TopKLogitsWarper(50),
- ... TemperatureLogitsWarper(0.7),
- ... ]
- ... )
-
- >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
-
- >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
- >>> outputs = model.sample(
- ... input_ids,
- ... logits_processor=logits_processor,
- ... logits_warper=logits_warper,
- ... stopping_criteria=stopping_criteria,
- ... )
-
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Today is a beautiful day, and a wonderful day.\n\nI was lucky enough to meet the']
- ```"""
-
- # init values
- logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
- stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
- if max_length is not None:
- warnings.warn(
- "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=max_length)])` instead.",
- UserWarning,
- )
- stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length)
- logits_warper = logits_warper if logits_warper is not None else LogitsProcessorList()
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- decoder_attentions = () if (return_dict_in_generate and output_attentions) else None
- cross_attentions = () if (return_dict_in_generate and output_attentions) else None
- decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None
-
- # if model is an encoder-decoder, retrieve encoder attention weights and hidden states
- if return_dict_in_generate and self.config.is_encoder_decoder:
- encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None
- encoder_hidden_states = (
- model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None
- )
-
- # keep track of which sequences are already finished
- unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
- cur_len = input_ids.shape[-1]
-
- this_peer_finished = False # used by synced_gpus only
- # auto-regressive generation
- while True:
-
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- # prepare model inputs
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-
- # forward pass to get next token
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
-
- # pre-process distribution
- next_token_scores = logits_processor(input_ids, next_token_logits)
- next_token_scores = logits_warper(input_ids, next_token_scores)
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (next_token_scores,)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
- )
- if self.config.is_encoder_decoder:
- cross_attentions += (outputs.cross_attentions,)
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- # sample
- probs = nn.functional.softmax(next_token_scores, dim=-1)
- next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
-
- # finished sentences should have their next token be a padding token
- if eos_token_id is not None:
- if pad_token_id is None:
- raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.")
- next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)
-
- # update generated ids, model inputs, and length for next step
- input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- cur_len = cur_len + 1
-
- # if eos_token was found in one sentence, set sentence to finished
- if eos_token_id is not None:
- unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long())
-
- # stop when each sentence is finished, or if we exceed the maximum length
- if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- if return_dict_in_generate:
- if self.config.is_encoder_decoder:
- return SampleEncoderDecoderOutput(
- sequences=input_ids,
- scores=scores,
- encoder_attentions=encoder_attentions,
- encoder_hidden_states=encoder_hidden_states,
- decoder_attentions=decoder_attentions,
- cross_attentions=cross_attentions,
- decoder_hidden_states=decoder_hidden_states,
- )
- else:
- return SampleDecoderOnlyOutput(
- sequences=input_ids,
- scores=scores,
- attentions=decoder_attentions,
- hidden_states=decoder_hidden_states,
- )
- else:
- return input_ids
-
- def beam_search(
- self,
- input_ids: torch.LongTensor,
- beam_scorer: BeamScorer,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- max_length: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- **model_kwargs,
- ) -> Union[BeamSearchOutput, torch.LongTensor]:
- r"""
- Generates sequences of token ids for models with a language modeling head using **beam search decoding** and
- can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
-
- Parameters:
-
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- The sequence used as a prompt for the generation.
- beam_scorer (`BeamScorer`):
- A derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and
- sorted during generation. For more information, the documentation of [`BeamScorer`] should be read.
- logits_processor (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
- used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`]
- used to tell if the generation loop should stop.
- max_length (`int`, *optional*, defaults to 20):
- **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated
- tokens. The maximum length of the sequence to be generated.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- model_kwargs:
- Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is
- an encoder-decoder model the kwargs should include `encoder_outputs`.
-
- Return:
- [`~generation_utils.BeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or
- `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
- [`~generation_utils.BeamSearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
- `return_dict_in_generate=True` or a [`~generation_utils.BeamSearchEncoderDecoderOutput`] if
- `model.config.is_encoder_decoder=True`.
-
-
- Examples:
-
- ```python
- >>> from transformers import (
- ... AutoTokenizer,
- ... AutoModelForSeq2SeqLM,
- ... LogitsProcessorList,
- ... MinLengthLogitsProcessor,
- ... BeamSearchScorer,
- ... )
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
- >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
-
- >>> encoder_input_str = "translate English to German: How old are you?"
- >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
-
-
- >>> # let's run beam search using 3 beams
- >>> num_beams = 3
- >>> # define decoder start token ids
- >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
- >>> input_ids = input_ids * model.config.decoder_start_token_id
-
- >>> # add encoder_outputs to model keyword arguments
- >>> model_kwargs = {
- ... "encoder_outputs": model.get_encoder()(
- ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
- ... )
- ... }
-
- >>> # instantiate beam scorer
- >>> beam_scorer = BeamSearchScorer(
- ... batch_size=1,
- ... num_beams=num_beams,
- ... device=model.device,
- ... )
-
- >>> # instantiate logits processors
- >>> logits_processor = LogitsProcessorList(
- ... [
- ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
- ... ]
- ... )
-
- >>> outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
-
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Wie alt bist du?']
- ```"""
- # init values
- logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
- stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
- if max_length is not None:
- warnings.warn(
- "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=max_length)])` instead.",
- UserWarning,
- )
- stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length)
- if len(stopping_criteria) == 0:
- warnings.warn("You have not defined any stopping_criteria; this will likely loop forever.", UserWarning)
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- batch_size = len(beam_scorer._beam_hyps)
- num_beams = beam_scorer.num_beams
-
- batch_beam_size, cur_len = input_ids.shape
-
- if num_beams * batch_size != batch_beam_size:
- raise ValueError(
- f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}."
- )
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- beam_indices = (
- tuple(() for _ in range(batch_beam_size)) if (return_dict_in_generate and output_scores) else None
- )
- decoder_attentions = () if (return_dict_in_generate and output_attentions) else None
- cross_attentions = () if (return_dict_in_generate and output_attentions) else None
- decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None
-
- # if model is an encoder-decoder, retrieve encoder attention weights and hidden states
- if return_dict_in_generate and self.config.is_encoder_decoder:
- encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None
- encoder_hidden_states = (
- model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None
- )
-
- beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
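- # all beams start from the same prompt, so mask out every beam but the first per batch to avoid selecting num_beams identical hypotheses at the first step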
- beam_scores[:, 1:] = -1e9
- beam_scores = beam_scores.view((batch_size * num_beams,))
-
- this_peer_finished = False # used by synced_gpus only
- while True:
-
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
- # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id`
- # cannot be generated both before and after the `nn.functional.log_softmax` operation.
- next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len)
- next_token_scores = nn.functional.log_softmax(
- next_token_logits, dim=-1
- ) # (batch_size * num_beams, vocab_size)
-
- # Normal execution
- next_token_scores_processed = logits_processor(input_ids, next_token_scores)
- next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (next_token_scores_processed,)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
- )
- if self.config.is_encoder_decoder:
- cross_attentions += (outputs.cross_attentions,)
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- # reshape for beam search
- vocab_size = next_token_scores.shape[-1]
- next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)
-
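- # keep the top 2 * num_beams candidates so enough unfinished continuations remain even if some of them end in EOS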
- next_token_scores, next_tokens = torch.topk(
- next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True
- )
-
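- # recover the beam index and the token id from the flattened (num_beams * vocab_size) dimension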
- next_indices = torch_int_div(next_tokens, vocab_size)
- next_tokens = next_tokens % vocab_size
-
- # stateless
- beam_outputs = beam_scorer.process(
- input_ids,
- next_token_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- )
-
- beam_scores = beam_outputs["next_beam_scores"]
- beam_next_tokens = beam_outputs["next_beam_tokens"]
- beam_idx = beam_outputs["next_beam_indices"]
-
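- # reorder the running sequences according to the selected beams and append the chosen tokens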
- input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
-
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
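- # the cached key/value states have to follow the beam reordering as well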
- if model_kwargs["past"] is not None:
- model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx)
-
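- # record which beam each step's token came from so per-token beam indices can be returned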
- if return_dict_in_generate and output_scores:
- beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices))))
-
- # increase cur_len
- cur_len = cur_len + 1
-
- if beam_scorer.is_done or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- sequence_outputs = beam_scorer.finalize(
- input_ids,
- beam_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- max_length=stopping_criteria.max_length,
- )
-
- if return_dict_in_generate:
- if not output_scores:
- sequence_outputs["sequence_scores"] = None
- else:
- num_return_sequences = beam_scorer.num_beam_hyps_to_keep
- # return only as many indices as sequences
- beam_indices = tuple(
- (beam_indices[i * num_beams : i * num_beams + num_return_sequences] for i in range(batch_size))
- )
- beam_indices = sum(beam_indices, ())
-
- if self.config.is_encoder_decoder:
- return BeamSearchEncoderDecoderOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- beam_indices=beam_indices,
- encoder_attentions=encoder_attentions,
- encoder_hidden_states=encoder_hidden_states,
- decoder_attentions=decoder_attentions,
- cross_attentions=cross_attentions,
- decoder_hidden_states=decoder_hidden_states,
- )
- else:
- return BeamSearchDecoderOnlyOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- beam_indices=beam_indices,
- attentions=decoder_attentions,
- hidden_states=decoder_hidden_states,
- )
- else:
- return sequence_outputs["sequences"]
-
- def beam_sample(
- self,
- input_ids: torch.LongTensor,
- beam_scorer: BeamScorer,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- logits_warper: Optional[LogitsProcessorList] = None,
- max_length: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- **model_kwargs,
- ) -> Union[BeamSampleOutput, torch.LongTensor]:
- r"""
- Generates sequences of token ids for models with a language modeling head using **beam search multinomial
- sampling** and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
-
- Parameters:
-
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- The sequence used as a prompt for the generation.
- beam_scorer (`BeamScorer`):
- A derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and
- sorted during generation. For more information, the documentation of [`BeamScorer`] should be read.
- logits_processor (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
- used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`]
- used to tell if the generation loop should stop.
- logits_warper (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsWarper`] used
- to warp the prediction score distribution of the language modeling head applied before multinomial
- sampling at each generation step.
- max_length (`int`, *optional*, defaults to 20):
- **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated
- tokens. The maximum length of the sequence to be generated.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- model_kwargs:
- Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is
- an encoder-decoder model the kwargs should include `encoder_outputs`.
-
- Return:
- [`~generation_utils.BeamSampleDecoderOnlyOutput`], [`~generation_utils.BeamSampleEncoderDecoderOutput`] or
- `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
- [`~generation_utils.BeamSampleDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
- `return_dict_in_generate=True` or a [`~generation_utils.BeamSampleEncoderDecoderOutput`] if
- `model.config.is_encoder_decoder=True`.
-
- Examples:
-
- ```python
- >>> from transformers import (
- ... AutoTokenizer,
- ... AutoModelForSeq2SeqLM,
- ... LogitsProcessorList,
- ... MinLengthLogitsProcessor,
- ... TopKLogitsWarper,
- ... TemperatureLogitsWarper,
- ... BeamSearchScorer,
- ... )
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
- >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
-
- >>> encoder_input_str = "translate English to German: How old are you?"
- >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
-
- >>> # let's run beam search using 3 beams
- >>> num_beams = 3
- >>> # define decoder start token ids
- >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
- >>> input_ids = input_ids * model.config.decoder_start_token_id
-
- >>> # add encoder_outputs to model keyword arguments
- >>> model_kwargs = {
- ... "encoder_outputs": model.get_encoder()(
- ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
- ... )
- ... }
-
- >>> # instantiate beam scorer
- >>> beam_scorer = BeamSearchScorer(
- ... batch_size=1,
- ... max_length=model.config.max_length,
- ... num_beams=num_beams,
- ... device=model.device,
- ... )
-
- >>> # instantiate logits processors
- >>> logits_processor = LogitsProcessorList(
- ... [MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id)]
- ... )
- >>> # instantiate logits warpers
- >>> logits_warper = LogitsProcessorList(
- ... [
- ... TopKLogitsWarper(50),
- ... TemperatureLogitsWarper(0.7),
- ... ]
- ... )
-
- >>> outputs = model.beam_sample(
- ... input_ids, beam_scorer, logits_processor=logits_processor, logits_warper=logits_warper, **model_kwargs
- ... )
-
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Wie alt bist du?']
- ```"""
- # init values
- logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
- stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
- if max_length is not None:
- warnings.warn(
- "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=max_length)])` instead.",
- UserWarning,
- )
- stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length)
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- batch_size = len(beam_scorer._beam_hyps)
- num_beams = beam_scorer.num_beams
-
- batch_beam_size, cur_len = input_ids.shape
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- beam_indices = (
- tuple(() for _ in range(batch_beam_size)) if (return_dict_in_generate and output_scores) else None
- )
- decoder_attentions = () if (return_dict_in_generate and output_attentions) else None
- cross_attentions = () if (return_dict_in_generate and output_attentions) else None
- decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None
-
- # if model is an encoder-decoder, retrieve encoder attention weights and hidden states
- if return_dict_in_generate and self.config.is_encoder_decoder:
- encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None
- encoder_hidden_states = (
- model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None
- )
-
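- # unlike plain beam search, every beam keeps a zero initial score here; multinomial sampling (rather than a deterministic top-k) is what differentiates the hypotheses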
- beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
- beam_scores = beam_scores.view((batch_size * num_beams,))
-
- this_peer_finished = False # used by synced_gpus only
- while True:
-
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
-
- # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id`
- # cannot be generated both before and after the `nn.functional.log_softmax` operation.
- next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len)
- next_token_scores = nn.functional.log_softmax(
- next_token_logits, dim=-1
- ) # (batch_size * num_beams, vocab_size)
-
- next_token_scores_processed = logits_processor(input_ids, next_token_scores)
- next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)
- next_token_scores = logits_warper(input_ids, next_token_scores)
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (logits_warper(input_ids, next_token_scores_processed),)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
- )
- if self.config.is_encoder_decoder:
- cross_attentions += (outputs.cross_attentions,)
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- # reshape for beam search
- vocab_size = next_token_scores.shape[-1]
- next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)
-
- probs = nn.functional.softmax(next_token_scores, dim=-1)
-
- next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)
- next_token_scores = torch.gather(next_token_scores, -1, next_tokens)
-
- next_token_scores, _indices = torch.sort(next_token_scores, descending=True, dim=1)
- next_tokens = torch.gather(next_tokens, -1, _indices)
-
- next_indices = torch_int_div(next_tokens, vocab_size)
- next_tokens = next_tokens % vocab_size
-
- # stateless
- beam_outputs = beam_scorer.process(
- input_ids,
- next_token_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- )
- beam_scores = beam_outputs["next_beam_scores"]
- beam_next_tokens = beam_outputs["next_beam_tokens"]
- beam_idx = beam_outputs["next_beam_indices"]
-
- input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
-
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- if model_kwargs["past"] is not None:
- model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx)
-
- if return_dict_in_generate and output_scores:
- beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices))))
-
- # increase cur_len
- cur_len = cur_len + 1
-
- if beam_scorer.is_done or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- sequence_outputs = beam_scorer.finalize(
- input_ids,
- beam_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- max_length=stopping_criteria.max_length,
- )
-
- if return_dict_in_generate:
- if not output_scores:
- sequence_outputs["sequence_scores"] = None
- else:
- num_return_sequences = beam_scorer.num_beam_hyps_to_keep
- # return only as many indices as sequences
- beam_indices = tuple(
- (beam_indices[i * num_beams : i * num_beams + num_return_sequences] for i in range(batch_size))
- )
- beam_indices = sum(beam_indices, ())
-
- if self.config.is_encoder_decoder:
- return BeamSampleEncoderDecoderOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- beam_indices=beam_indices,
- encoder_attentions=encoder_attentions,
- encoder_hidden_states=encoder_hidden_states,
- decoder_attentions=decoder_attentions,
- cross_attentions=cross_attentions,
- decoder_hidden_states=decoder_hidden_states,
- )
- else:
- return BeamSampleDecoderOnlyOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- beam_indices=beam_indices,
- attentions=decoder_attentions,
- hidden_states=decoder_hidden_states,
- )
- else:
- return sequence_outputs["sequences"]
-
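For reference, a minimal standalone sketch of the candidate-selection step that `beam_sample` performs at every decoding step: sampling `2 * num_beams` candidate tokens per batch item and keeping them sorted by score. Shapes and values below are illustrative placeholders, not the library's defaults:

```python
import torch
import torch.nn.functional as F

# Toy sizes; a real run uses the model vocabulary and the configured beam count.
batch_size, num_beams, vocab_size = 2, 3, 11
beam_scores = torch.zeros(batch_size * num_beams)         # running log-prob per beam
logits = torch.randn(batch_size * num_beams, vocab_size)  # stand-in next-token logits

# add the running beam scores, then flatten beams and vocab into one axis per batch item
next_token_scores = F.log_softmax(logits, dim=-1) + beam_scores[:, None]
next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)

# sample 2 * num_beams candidates and keep them sorted by score, as the loop above does
probs = F.softmax(next_token_scores, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)
next_token_scores = torch.gather(next_token_scores, -1, next_tokens)
next_token_scores, order = torch.sort(next_token_scores, descending=True, dim=1)
next_tokens = torch.gather(next_tokens, -1, order)

# recover which beam each candidate came from and the actual token id
next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor")
next_tokens = next_tokens % vocab_size
print(next_indices.shape, next_tokens.shape)  # both (batch_size, 2 * num_beams)
```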
- def group_beam_search(
- self,
- input_ids: torch.LongTensor,
- beam_scorer: BeamScorer,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- max_length: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- synced_gpus: Optional[bool] = False,
- **model_kwargs,
- ):
- r"""
- Generates sequences of token ids for models with a language modeling head using **diverse beam search
- decoding** and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
-
- Parameters:
-
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- The sequence used as a prompt for the generation.
- beam_scorer (`BeamScorer`):
-                A derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and
- sorted during generation. For more information, the documentation of [`BeamScorer`] should be read.
- logits_processor (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
- used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`]
- used to tell if the generation loop should stop.
- max_length (`int`, *optional*, defaults to 20):
- **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated
- tokens. The maximum length of the sequence to be generated.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
-
- model_kwargs:
- Additional model specific kwargs that will be forwarded to the `forward` function of the model. If
- model is an encoder-decoder model the kwargs should include `encoder_outputs`.
-
- Return:
- [`~generation_utils.BeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or
- `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
-            [`~generation_utils.BeamSearchDecoderOnlyOutput`] if
- `model.config.is_encoder_decoder=False` and `return_dict_in_generate=True` or a
- [`~generation_utils.BeamSearchEncoderDecoderOutput`] if `model.config.is_encoder_decoder=True`.
-
- Examples:
-
- ```python
- >>> from transformers import (
- ... AutoTokenizer,
- ... AutoModelForSeq2SeqLM,
- ... LogitsProcessorList,
- ... MinLengthLogitsProcessor,
- ... HammingDiversityLogitsProcessor,
- ... BeamSearchScorer,
- ... )
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
- >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
-
- >>> encoder_input_str = "translate English to German: How old are you?"
- >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
-
-
-        >>> # let's run diverse beam search using 6 beams
- >>> num_beams = 6
- >>> # define decoder start token ids
- >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
- >>> input_ids = input_ids * model.config.decoder_start_token_id
-
- >>> # add encoder_outputs to model keyword arguments
- >>> model_kwargs = {
- ... "encoder_outputs": model.get_encoder()(
- ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
- ... )
- ... }
-
- >>> # instantiate beam scorer
- >>> beam_scorer = BeamSearchScorer(
- ... batch_size=1,
- ... max_length=model.config.max_length,
- ... num_beams=num_beams,
- ... device=model.device,
- ... num_beam_groups=3,
- ... )
-
- >>> # instantiate logits processors
- >>> logits_processor = LogitsProcessorList(
- ... [
- ... HammingDiversityLogitsProcessor(5.5, num_beams=6, num_beam_groups=3),
- ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
- ... ]
- ... )
-
- >>> outputs = model.group_beam_search(
- ... input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs
- ... )
-
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Wie alt bist du?']
- ```"""
- # init values
- logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
- stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
- if max_length is not None:
- warnings.warn(
- "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.",
- UserWarning,
- )
- stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length)
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- batch_size = len(beam_scorer._beam_hyps)
- num_beams = beam_scorer.num_beams
- num_beam_groups = beam_scorer.num_beam_groups
- num_sub_beams = num_beams // num_beam_groups
- device = input_ids.device
-
- batch_beam_size, cur_len = input_ids.shape
-
- if return_dict_in_generate and output_scores:
- beam_indices = [tuple(() for _ in range(num_sub_beams * batch_size)) for _ in range(num_beam_groups)]
- else:
- beam_indices = None
-
- if num_beams * batch_size != batch_beam_size:
- raise ValueError(
- f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}."
- )
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- decoder_attentions = () if (return_dict_in_generate and output_attentions) else None
- cross_attentions = () if (return_dict_in_generate and output_attentions) else None
- decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None
-
- # if model is an encoder-decoder, retrieve encoder attention weights and hidden states
- if return_dict_in_generate and self.config.is_encoder_decoder:
- encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None
- encoder_hidden_states = (
- model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None
- )
-
- beam_scores = torch.full((batch_size, num_beams), -1e9, dtype=torch.float, device=device)
-        # initialise the score of the first beam of each group with 0 and the rest with -1e9. This ensures that the
-        # beams in the same group don't all produce the same tokens every time.
- beam_scores[:, ::num_sub_beams] = 0
- beam_scores = beam_scores.view((batch_size * num_beams,))
-
- this_peer_finished = False # used by synced_gpus only
- while True:
-
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- # predicted tokens in cur_len step
- current_tokens = torch.zeros(batch_size * num_beams, dtype=input_ids.dtype, device=device)
-
- # indices which will form the beams in the next time step
- reordering_indices = torch.zeros(batch_size * num_beams, dtype=torch.long, device=device)
-
- # do one decoder step on all beams of all sentences in batch
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- if output_scores:
- processed_score = torch.zeros_like(outputs.logits[:, -1, :])
-
- for beam_group_idx in range(num_beam_groups):
- group_start_idx = beam_group_idx * num_sub_beams
- group_end_idx = min(group_start_idx + num_sub_beams, num_beams)
- group_size = group_end_idx - group_start_idx
-
- # indices of beams of current group among all sentences in batch
- batch_group_indices = []
-
- for batch_idx in range(batch_size):
- batch_group_indices.extend(
- [batch_idx * num_beams + idx for idx in range(group_start_idx, group_end_idx)]
- )
- group_input_ids = input_ids[batch_group_indices]
-
- # select outputs of beams of current group only
- next_token_logits = outputs.logits[batch_group_indices, -1, :]
-
- # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id`
- # cannot be generated both before and after the `nn.functional.log_softmax` operation.
- next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len)
- next_token_scores = nn.functional.log_softmax(
- next_token_logits, dim=-1
- ) # (batch_size * group_size, vocab_size)
- vocab_size = next_token_scores.shape[-1]
-
- next_token_scores_processed = logits_processor(
- group_input_ids, next_token_scores, current_tokens=current_tokens, beam_group_idx=beam_group_idx
- )
- next_token_scores = next_token_scores_processed + beam_scores[batch_group_indices].unsqueeze(-1)
- next_token_scores = next_token_scores.expand_as(next_token_scores_processed)
-
- if output_scores:
- processed_score[batch_group_indices] = next_token_scores_processed
-
- # reshape for beam search
- next_token_scores = next_token_scores.view(batch_size, group_size * vocab_size)
-
- next_token_scores, next_tokens = torch.topk(
- next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True
- )
-
- next_indices = torch_int_div(next_tokens, vocab_size)
- next_tokens = next_tokens % vocab_size
-
- # stateless
- beam_outputs = beam_scorer.process(
- group_input_ids,
- next_token_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- )
- beam_scores[batch_group_indices] = beam_outputs["next_beam_scores"]
- beam_next_tokens = beam_outputs["next_beam_tokens"]
- beam_idx = beam_outputs["next_beam_indices"]
-
- if return_dict_in_generate and output_scores:
- beam_indices[beam_group_idx] = tuple(
- beam_indices[beam_group_idx][beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices[0]))
- )
-
- input_ids[batch_group_indices] = group_input_ids[beam_idx]
- group_input_ids = torch.cat([group_input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
- current_tokens[batch_group_indices] = group_input_ids[:, -1]
-
- # (beam_idx // group_size) -> batch_idx
- # (beam_idx % group_size) -> offset of idx inside the group
- reordering_indices[batch_group_indices] = (
- num_beams * torch_int_div(beam_idx, group_size) + group_start_idx + (beam_idx % group_size)
- )
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (processed_score,)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
- )
- if self.config.is_encoder_decoder:
- cross_attentions += (outputs.cross_attentions,)
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- input_ids = torch.cat([input_ids, current_tokens.unsqueeze(-1)], dim=-1)
-
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- if model_kwargs["past"] is not None:
- model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], reordering_indices)
-
- # increase cur_len
- cur_len = cur_len + 1
-
- if beam_scorer.is_done or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- sequence_outputs = beam_scorer.finalize(
- input_ids,
- beam_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- max_length=stopping_criteria.max_length,
- )
-
- if return_dict_in_generate:
- if not output_scores:
- sequence_outputs["sequence_scores"] = None
- else:
- beam_indices = sum(beam_indices, ())
- num_return_sequences = beam_scorer.num_beam_hyps_to_keep
- # return only as many indices as sequences
- beam_indices = tuple(
- (beam_indices[i * num_beams : i * num_beams + num_return_sequences] for i in range(batch_size))
- )
- beam_indices = sum(beam_indices, ())
-
- if self.config.is_encoder_decoder:
- return BeamSearchEncoderDecoderOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- beam_indices=beam_indices,
- encoder_attentions=encoder_attentions,
- encoder_hidden_states=encoder_hidden_states,
- decoder_attentions=decoder_attentions,
- cross_attentions=cross_attentions,
- decoder_hidden_states=decoder_hidden_states,
- )
- else:
- return BeamSearchDecoderOnlyOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- attentions=decoder_attentions,
- hidden_states=decoder_hidden_states,
- )
- else:
- return sequence_outputs["sequences"]
-
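The score initialisation and index bookkeeping that keep the beam groups distinct in `group_beam_search` can be illustrated in isolation; the sketch below uses assumed toy sizes and is not the library implementation:

```python
import torch

batch_size, num_beams, num_beam_groups = 2, 6, 3
num_sub_beams = num_beams // num_beam_groups

# Only the first beam of each group starts at score 0; the rest start at -1e9 so a
# single hypothesis dominates each group on the very first step.
beam_scores = torch.full((batch_size, num_beams), -1e9)
beam_scores[:, ::num_sub_beams] = 0
beam_scores = beam_scores.view(batch_size * num_beams)

# Flat indices (into the batch_size * num_beams axis) of the beams belonging to group 1.
beam_group_idx = 1
group_start = beam_group_idx * num_sub_beams
group_end = group_start + num_sub_beams
batch_group_indices = [
    b * num_beams + i for b in range(batch_size) for i in range(group_start, group_end)
]
print(batch_group_indices)  # [2, 3, 8, 9]
```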
- def constrained_beam_search(
- self,
- input_ids: torch.LongTensor,
- constrained_beam_scorer: ConstrainedBeamSearchScorer,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- max_length: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_scores: Optional[bool] = None,
- return_dict_in_generate: Optional[bool] = None,
- synced_gpus: Optional[bool] = None,
- **model_kwargs,
- ) -> Union[BeamSearchOutput, torch.LongTensor]:
-
- r"""
- Generates sequences of token ids for models with a language modeling head using **constrained beam search
- decoding** and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
-
- Parameters:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- The sequence used as a prompt for the generation.
- constrained_beam_scorer (`ConstrainedBeamSearchScorer`):
- A derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and
- sorted during generation, while satisfying a list of positive constraints. For more information, the
- documentation of [`ConstrainedBeamSearchScorer`] should be read.
- logits_processor (`LogitsProcessorList`, *optional*):
- An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
- used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (`StoppingCriteriaList`, *optional*):
- An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`]
- used to tell if the generation loop should stop.
- max_length (`int`, *optional*, defaults to 20):
- **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated
- tokens. The maximum length of the sequence to be generated.
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- eos_token_id (`int`, *optional*):
- The id of the *end-of-sequence* token.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- synced_gpus (`bool`, *optional*, defaults to `False`):
- Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
- model_kwargs:
- Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is
- an encoder-decoder model the kwargs should include `encoder_outputs`.
-
- Return:
-            [`~generation_utils.BeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or
- `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
- [`~generation_utils.BeamSearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
- `return_dict_in_generate=True` or a [`~generation_utils.BeamSearchEncoderDecoderOutput`] if
- `model.config.is_encoder_decoder=True`.
-
-
- Examples:
-
- ```python
- >>> from transformers import (
- ... AutoTokenizer,
- ... AutoModelForSeq2SeqLM,
- ... LogitsProcessorList,
- ... MinLengthLogitsProcessor,
- ... ConstrainedBeamSearchScorer,
- ... PhrasalConstraint,
- ... )
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
- >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
-
- >>> encoder_input_str = "translate English to German: How old are you?"
- >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
-
-
-        >>> # let's run beam search using 3 beams
- >>> num_beams = 3
- >>> # define decoder start token ids
- >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
- >>> input_ids = input_ids * model.config.decoder_start_token_id
-
- >>> # add encoder_outputs to model keyword arguments
- >>> model_kwargs = {
- ... "encoder_outputs": model.get_encoder()(
- ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
- ... )
- ... }
-
- >>> constraint_str = "Sie"
- >>> constraint_token_ids = tokenizer.encode(constraint_str)[:-1] # slice to remove eos token
- >>> constraints = [PhrasalConstraint(token_ids=constraint_token_ids)]
-
-
- >>> # instantiate beam scorer
- >>> beam_scorer = ConstrainedBeamSearchScorer(
- ... batch_size=1, num_beams=num_beams, device=model.device, constraints=constraints
- ... )
-
- >>> # instantiate logits processors
- >>> logits_processor = LogitsProcessorList(
- ... [
- ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
- ... ]
- ... )
-
- >>> outputs = model.constrained_beam_search(
- ... input_ids, beam_scorer, constraints=constraints, logits_processor=logits_processor, **model_kwargs
- ... )
-
- >>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ['Wie alt sind Sie?']
- ```"""
- # init values
- logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
- stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
- if max_length is not None:
- warnings.warn(
- "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.",
- UserWarning,
- )
- stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length)
- if len(stopping_criteria) == 0:
-            warnings.warn("You have not defined any stopping_criteria; this will likely loop forever.", UserWarning)
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- output_scores = output_scores if output_scores is not None else self.config.output_scores
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict_in_generate = (
- return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate
- )
-
- # init attention / hidden states / scores tuples
- scores = () if (return_dict_in_generate and output_scores) else None
- decoder_attentions = () if (return_dict_in_generate and output_attentions) else None
- cross_attentions = () if (return_dict_in_generate and output_attentions) else None
- decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None
-
- # if model is an encoder-decoder, retrieve encoder attention weights and hidden states
- if return_dict_in_generate and self.config.is_encoder_decoder:
- encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None
- encoder_hidden_states = (
- model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None
- )
-
- batch_size = len(constrained_beam_scorer._beam_hyps)
- num_beams = constrained_beam_scorer.num_beams
-
- batch_beam_size, cur_len = input_ids.shape
-
- if num_beams * batch_size != batch_beam_size:
- raise ValueError(
- f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}."
- )
-
- beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
- beam_scores[:, 1:] = -1e9
- beam_scores = beam_scores.view((batch_size * num_beams,))
-
- this_peer_finished = False # used by synced_gpus only
- while True:
-
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
-
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
- # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id`
- # cannot be generated both before and after the `nn.functional.log_softmax` operation.
- next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len)
- next_token_scores = nn.functional.log_softmax(
- next_token_logits, dim=-1
- ) # (batch_size * num_beams, vocab_size)
-
- next_token_scores_processed = logits_processor(input_ids, next_token_scores)
-
- scores_for_all_vocab = next_token_scores_processed.clone()
-
- next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)
-
- # Store scores, attentions and hidden_states when required
- if return_dict_in_generate:
- if output_scores:
- scores += (next_token_scores,)
- if output_attentions:
- decoder_attentions += (
- (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
- )
- if self.config.is_encoder_decoder:
- cross_attentions += (outputs.cross_attentions,)
-
- if output_hidden_states:
- decoder_hidden_states += (
- (outputs.decoder_hidden_states,)
- if self.config.is_encoder_decoder
- else (outputs.hidden_states,)
- )
-
- # reshape for beam search
- vocab_size = next_token_scores.shape[-1]
- next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)
-
- next_token_scores, next_tokens = torch.topk(
- next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True
- )
-
- next_indices = (next_tokens / vocab_size).long()
- next_tokens = next_tokens % vocab_size
-
- # stateless
- beam_outputs = constrained_beam_scorer.process(
- input_ids,
- next_token_scores,
- next_tokens,
- next_indices,
- scores_for_all_vocab,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- )
- beam_scores = beam_outputs["next_beam_scores"]
- beam_next_tokens = beam_outputs["next_beam_tokens"]
- beam_idx = beam_outputs["next_beam_indices"]
-
- input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- if model_kwargs["past"] is not None:
- model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx)
-
- # increase cur_len
- cur_len = cur_len + 1
-
- if constrained_beam_scorer.is_done or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- sequence_outputs = constrained_beam_scorer.finalize(
- input_ids,
- beam_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- max_length=stopping_criteria.max_length,
- )
-
- if return_dict_in_generate:
- if not output_scores:
- sequence_outputs["sequence_scores"] = None
- if self.config.is_encoder_decoder:
- return BeamSearchEncoderDecoderOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- encoder_attentions=encoder_attentions,
- encoder_hidden_states=encoder_hidden_states,
- decoder_attentions=decoder_attentions,
- cross_attentions=cross_attentions,
- decoder_hidden_states=decoder_hidden_states,
- )
- else:
- return BeamSearchDecoderOnlyOutput(
- sequences=sequence_outputs["sequences"],
- sequences_scores=sequence_outputs["sequence_scores"],
- scores=scores,
- attentions=decoder_attentions,
- hidden_states=decoder_hidden_states,
- )
- else:
- return sequence_outputs["sequences"]
-
-
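In practice `constrained_beam_search` is reached through `generate`; a hedged usage sketch, assuming a transformers version whose `generate` accepts `constraints` (as the method above implies), reusing the model and constraint from the docstring example:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

input_ids = tokenizer("translate English to German: How old are you?", return_tensors="pt").input_ids

# force the polite pronoun "Sie" to appear somewhere in the generated translation
constraint_ids = tokenizer("Sie", add_special_tokens=False).input_ids
outputs = model.generate(
    input_ids,
    num_beams=3,
    constraints=[PhrasalConstraint(constraint_ids)],
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```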
-def top_k_top_p_filtering(
- logits: torch.FloatTensor,
- top_k: int = 0,
- top_p: float = 1.0,
- filter_value: float = -float("Inf"),
- min_tokens_to_keep: int = 1,
-) -> torch.FloatTensor:
- """
- Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
-
- Args:
- logits: logits distribution shape (batch size, vocabulary size)
- top_k (`int`, *optional*, defaults to 0):
- If > 0, only keep the top k tokens with highest probability (top-k filtering)
- top_p (`float`, *optional*, defaults to 1.0):
- If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus
- filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
- min_tokens_to_keep (`int`, *optional*, defaults to 1):
-            Minimum number of tokens we keep per batch example in the output.
-
- From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
- """
- if top_k > 0:
- logits = TopKLogitsWarper(top_k=top_k, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(
- None, logits
- )
-
- if 0 <= top_p <= 1.0:
- logits = TopPLogitsWarper(top_p=top_p, min_tokens_to_keep=min_tokens_to_keep)(None, logits)
-
- return logits
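A minimal sketch of how `top_k_top_p_filtering` is typically combined with multinomial sampling (toy logits, assuming the helper above is in scope):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 50)                         # (batch, vocab) stand-in logits
filtered = top_k_top_p_filtering(logits, top_k=10, top_p=0.9)
probs = F.softmax(filtered, dim=-1)                 # filtered entries get probability 0
next_token = torch.multinomial(probs, num_samples=1)
print(next_token)
```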
diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/adapt_mfa_align.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/adapt_mfa_align.py
deleted file mode 100644
index cadb6cbb502f852279248c98566b4616f32b1311..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/adapt_mfa_align.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import utils.commons.single_thread_env # NOQA
-import os
-import subprocess
-from utils.commons.hparams import hparams, set_hparams
-
-
-def adapt_mfa_align():
- CORPUS = hparams['processed_data_dir'].split("/")[-1]
- print(f"| Run MFA for {CORPUS}.")
- NUM_JOB = int(os.getenv('N_PROC', os.cpu_count()))
- subprocess.check_call(
- f'CORPUS={CORPUS} NUM_JOB={NUM_JOB} bash scripts/run_mfa_adapt.sh',
- shell=True)
-
-
-if __name__ == '__main__':
- set_hparams(print_hparams=False)
- adapt_mfa_align()
diff --git a/spaces/NCTCMumbai/NCTC/app.py b/spaces/NCTCMumbai/NCTC/app.py
deleted file mode 100644
index ac564b14ded3946b3c2a08e0e36c12935802f86f..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/app.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import pandas as pd
-import numpy as np
-import tensorflow as tf
-import tensorflow_hub as hub
-import sys
-import random
-sys.path.append('models')
-from official.nlp.data import classifier_data_lib
-from official.nlp.bert import tokenization
-from official.nlp import optimization
-tf.get_logger().setLevel('ERROR')
-
-import math
-
-import gradio as gr
-
-config = tf.compat.v1.ConfigProto(
- device_count = {'cpu': 0}
- )
-sess = tf.compat.v1.Session(config=config)
-num_warmup_steps=1
-num_train_steps=1
-init_lr = 3e-5
-optimizer = optimization.create_optimizer(init_lr=init_lr,
- num_train_steps=num_train_steps,
- num_warmup_steps=num_warmup_steps,
- optimizer_type='adamw')
-
-### Load Model
-checkpoint_filepath=r'./Checkpoint'
-model = tf.keras.models.load_model(checkpoint_filepath, custom_objects={'KerasLayer':hub.KerasLayer , 'AdamWeightDecay': optimizer})
-
-
-
-df_report = pd.read_csv('./CTH_Description.csv')
-df_report['CTH Code'] = df_report['CTH Code'].astype(str).str.zfill(8)
-
-df_report_DUTY = pd.read_csv('./CTH_WISE_DUTY_RATE.csv')
-df_report_DUTY['CTH'] = df_report_DUTY['CTH'].astype(str).str.zfill(8)
-
-#print(df_report_DUTY)
-
-df = pd.read_csv("./CTH_CODE_MAP.csv")
-df['CTH'] = df['CTH'].astype(str).str.zfill(8)
-df = df[['CTH', 'code']]
-
-class_names=df[['CTH','code']].drop_duplicates(subset='CTH').sort_values(by='code',ignore_index=True)['CTH'].values.tolist()
-label_list=list(range(0,len(class_names)))
-max_seq_length = 200 # maximum length of (token) input sequences . it can be any number
-train_batch_size = 32 # batch size (chosen to avoid out-of-memory errors)
-
-# Get BERT layer and tokenizer:
-# More details here: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4
-bert_layer = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4" , trainable = True)
-vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy()
-do_lower_case = bert_layer.resolved_object.do_lower_case.numpy()
-tokenizer = tokenization.FullTokenizer(vocab_file , do_lower_case)
-
-# This provides a function to convert each row to input features and label ( as required by BERT)
-
-max_seq_length = 200 # maximum length of (token) input sequences . it can be any number
-def to_feature(text, label, label_list=label_list, max_seq_length=max_seq_length, tokenizer=tokenizer):
- example = classifier_data_lib.InputExample(guid = None,
- text_a = text.numpy(),
- text_b = None,
- label = label.numpy())
- feature = classifier_data_lib.convert_single_example(0 , example , label_list , max_seq_length , tokenizer)
-
- return (feature.input_ids , feature.input_mask , feature.segment_ids , feature.label_id)
-
-
-def to_feature_map(text, label):
- input_ids , input_mask , segment_ids , label_id = tf.py_function(to_feature , inp = [text , label],
- Tout = [tf.int32 , tf.int32 , tf.int32 , tf.int32])
-
- input_ids.set_shape([max_seq_length])
- input_mask.set_shape([max_seq_length])
- segment_ids.set_shape([max_seq_length])
- label_id.set_shape([])
-
- x = {
- "input_word_ids": input_ids,
- "input_mask": input_mask,
- "input_type_ids": segment_ids
- }
-
- return(x,label_id)
-
-
-
-def print3largest(arr, arr_size):
- third = first = second = -sys.maxsize
- for i in range(0, arr_size):
-
- if (arr[i] > first):
- third = second
- second = first
- first = arr[i]
- elif (arr[i] > second):
- third = second
- second = arr[i]
- elif (arr[i] > third):
- third = arr[i]
- pred_value_max_three=[first, second, third]
- return pred_value_max_three
-
-def count_special_character(string):
- special_char= 0
- for i in range(len(string)):
- ch = string[i]
- if (string[i].isalpha()):
- continue
- else:
- special_char += 1
-
- if len(string)==special_char:
- return False
- else:
- return True
-
-def predict_CTH(txt):
- print('Desc: ',txt)
- if (txt!='') and len(txt)>=3 and (count_special_character(txt)):
-        valid_data = tf.data.Dataset.from_tensor_slices(([txt] , [1])) # dummy label; only the text is used at inference time
- valid_data = (valid_data.map(to_feature_map).batch(1))
- preds = model.predict(valid_data)
- predicted_values = tf.nn.softmax(preds)
- arr = predicted_values.numpy().tolist()[0]
- n = len(arr)
- pred_value_max_three=print3largest(arr, n)
-
-
-
- sum_all = pred_value_max_three[0] + pred_value_max_three[1] + pred_value_max_three[2]
-
- val_1 = pred_value_max_three[0]/sum_all
- val_2 = pred_value_max_three[1]/sum_all
- val_3 = pred_value_max_three[2]/sum_all
-
- #val_1= 97 #random.randrange(95, 99, 1)
- #val_2=(pred_value_max_three[1]/pred_value_max_three[0])*val_1
- #val_3=(pred_value_max_three[2]/pred_value_max_three[0])*val_1
-
- if pred_value_max_three[0]<=0.000131:
- Var_CTH=[]
- Var_desc=[]
- Var_duty=[]
- pred_duty=''
- pred_desc=''
- pred_CTH=''
-
-            return {'Not an adequate description': float(1.0)}
- else:
- Var_CTH=[]
- Var_desc=[]
- Var_duty=[]
- pred_duty=''
- pred_desc=''
- pred_CTH=''
-
-
- for i in pred_value_max_three:
- #i=pred_value_max_three[0]
- predicted_code=np.where(predicted_values.numpy()==i)[1][0]
- pred_CTH=df[df['code'] == predicted_code]['CTH'].iloc[0]
-
- try:
- pred_duty=df_report_DUTY[df_report_DUTY['CTH']==str(pred_CTH)]['DUTY_RATE'].iloc[0]
- pred_desc=df_report[df_report['CTH Code']==str(pred_CTH)]['Concat Description'].iloc[0]
- except:
- pass
-
- Var_CTH.append(pred_CTH)
- Var_desc.append(pred_desc)
- Var_duty.append(pred_duty)
-
- P1 ='CTH: '+str(Var_CTH[0])+' Duty Rate(%): '+ str(Var_duty[0])
- P2 ='CTH: '+str(Var_CTH[1])+' Duty Rate(%): '+ str(Var_duty[1])
- P3 ='CTH: '+str(Var_CTH[2])+' Duty Rate(%): '+ str(Var_duty[2])
-
-
- Q1='Desc: '+str(Var_desc[0])
- Q2='Desc: '+str(Var_desc[1])
- Q3='Desc: '+str(Var_desc[2])
-
-
- return {str(P1):float(val_1),str(Q1):float(val_1),
- str(P2):float(val_2),str(Q2):float(val_2),
- str(P3):float(val_3),str(Q3):float(val_3),}
- else:
- return{'Enter Correct Description':float(1.0)}
-
-
-input_txt=gr.Textbox(
-    label='Enter Your Product Description',
- lines=3,
- )
-description="AdvaitBERT is a modified version of BERT (Bidirectional Encoder Representations from Transformers), \
-finetuned on the text corpus of Indian Customs declarations. It is trained for performing \
-downstream tasks like automating the tariff classification and validation process of Customs \
-declarations in realtime. This model may help Customs administrations to efficiently use AI-assisted \
-NLP in realtime Customs processes like Assessment and Post Clearance Audit, thereby highlighting classification \
-inconsistencies and helping in revenue augmentation."
-
-title="AdvaitBERT"
-article="Powered by NCTC"
-
-#css=".gradio-container {background-color: papayawhip}",
-
-gr.Interface(
- predict_CTH,
- inputs=input_txt,
- outputs="label",
- interpretation="default",
- description=description,
- #live=True,
- examples = ['200 SI/SI/SI LPO ALUMINIUM LIDS (QTY: 8820000 PCS/PRICE: 21.'],
- title=title,
- article=article,
-).launch()
\ No newline at end of file
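The hand-rolled top-3 selection in `predict_CTH` can be expressed more compactly; a sketch of the same idea (pick the three largest softmax scores, then normalise them so the displayed confidences sum to 1), using numpy with illustrative values:

```python
import numpy as np

arr = np.array([0.01, 0.72, 0.05, 0.15, 0.07])   # stand-in softmax scores
top3_idx = np.argsort(arr)[::-1][:3]             # class indices of the three largest scores
top3_val = arr[top3_idx]
top3_conf = top3_val / top3_val.sum()            # normalise so the three shares sum to 1
print(list(zip(top3_idx.tolist(), np.round(top3_conf, 3).tolist())))
# [(1, 0.766), (3, 0.16), (4, 0.074)]
```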
diff --git a/spaces/NarendraC/MyAIChatBot/app.py b/spaces/NarendraC/MyAIChatBot/app.py
deleted file mode 100644
index 9ede0bd38a0bf7b5a72db19bf134e66df1d9d1cc..0000000000000000000000000000000000000000
--- a/spaces/NarendraC/MyAIChatBot/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
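To see what the chain actually sends to the model, the template can be rendered directly; a small illustrative check using the `prompt` defined above (example values are made up):

```python
# Render the prompt with example values for its two input variables.
print(prompt.format(chat_history="User: hi\nChatbot: hello!", user_message="How are you today?"))
```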
diff --git a/spaces/OAOA/DifFace/basicsr/data/prefetch_dataloader.py b/spaces/OAOA/DifFace/basicsr/data/prefetch_dataloader.py
deleted file mode 100644
index 332abd32fcb004e6892d12dc69848a4454e3c503..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/data/prefetch_dataloader.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import queue as Queue
-import threading
-import torch
-from torch.utils.data import DataLoader
-
-
-class PrefetchGenerator(threading.Thread):
- """A general prefetch generator.
-
- Reference: https://stackoverflow.com/questions/7323664/python-generator-pre-fetch
-
- Args:
- generator: Python generator.
- num_prefetch_queue (int): Number of prefetch queue.
- """
-
- def __init__(self, generator, num_prefetch_queue):
- threading.Thread.__init__(self)
- self.queue = Queue.Queue(num_prefetch_queue)
- self.generator = generator
- self.daemon = True
- self.start()
-
- def run(self):
- for item in self.generator:
- self.queue.put(item)
- self.queue.put(None)
-
- def __next__(self):
- next_item = self.queue.get()
- if next_item is None:
- raise StopIteration
- return next_item
-
- def __iter__(self):
- return self
-
-
-class PrefetchDataLoader(DataLoader):
- """Prefetch version of dataloader.
-
- Reference: https://github.com/IgorSusmelj/pytorch-styleguide/issues/5#
-
- TODO:
- Need to test on single gpu and ddp (multi-gpu). There is a known issue in
- ddp.
-
- Args:
- num_prefetch_queue (int): Number of prefetch queue.
- kwargs (dict): Other arguments for dataloader.
- """
-
- def __init__(self, num_prefetch_queue, **kwargs):
- self.num_prefetch_queue = num_prefetch_queue
- super(PrefetchDataLoader, self).__init__(**kwargs)
-
- def __iter__(self):
- return PrefetchGenerator(super().__iter__(), self.num_prefetch_queue)
-
-
-class CPUPrefetcher():
- """CPU prefetcher.
-
- Args:
- loader: Dataloader.
- """
-
- def __init__(self, loader):
- self.ori_loader = loader
- self.loader = iter(loader)
-
- def next(self):
- try:
- return next(self.loader)
- except StopIteration:
- return None
-
- def reset(self):
- self.loader = iter(self.ori_loader)
-
-
-class CUDAPrefetcher():
- """CUDA prefetcher.
-
- Reference: https://github.com/NVIDIA/apex/issues/304#
-
- It may consume more GPU memory.
-
- Args:
- loader: Dataloader.
- opt (dict): Options.
- """
-
- def __init__(self, loader, opt):
- self.ori_loader = loader
- self.loader = iter(loader)
- self.opt = opt
- self.stream = torch.cuda.Stream()
- self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu')
- self.preload()
-
- def preload(self):
- try:
- self.batch = next(self.loader) # self.batch is a dict
- except StopIteration:
- self.batch = None
- return None
- # put tensors to gpu
- with torch.cuda.stream(self.stream):
- for k, v in self.batch.items():
- if torch.is_tensor(v):
- self.batch[k] = self.batch[k].to(device=self.device, non_blocking=True)
-
- def next(self):
- torch.cuda.current_stream().wait_stream(self.stream)
- batch = self.batch
- self.preload()
- return batch
-
- def reset(self):
- self.loader = iter(self.ori_loader)
- self.preload()
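A hedged usage sketch for the prefetchers above, using a stand-in PyTorch dataloader (BasicSR's real dataloaders yield dict batches, which `CUDAPrefetcher` expects; the CPU variant shown here works with any loader):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

loader = DataLoader(TensorDataset(torch.randn(64, 3)), batch_size=8)
prefetcher = CPUPrefetcher(loader)

batch = prefetcher.next()
while batch is not None:
    # ... run the training step on `batch` here ...
    batch = prefetcher.next()

prefetcher.reset()  # rewind the underlying loader before the next epoch
```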
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py
deleted file mode 100644
index 44f7989bd863329f763aa62b78df2eb42b3084ea..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch.nn as nn
-from fairseq.models.transformer import TransformerEncoder
-
-from .linformer_sentence_encoder_layer import LinformerTransformerEncoderLayer
-
-
-class LinformerTransformerEncoder(TransformerEncoder):
- """
- Implementation for a Bi-directional Linformer based Sentence Encoder used
- in BERT/XLM style pre-trained models.
-
- This first computes the token embedding using the token embedding matrix,
- position embeddings (if specified) and segment embeddings
- (if specified). After applying the specified number of
- LinformerEncoderLayers, it outputs all the internal states of the
- encoder as well as the final representation associated with the first
- token (usually CLS token).
-
- Input:
- - tokens: B x T matrix representing sentences
- - segment_labels: B x T matrix representing segment label for tokens
-
- Output:
- - a tuple of the following:
- - a list of internal model states used to compute the
- predictions where each tensor has shape T x B x C
- - sentence representation associated with first input token
- in format B x C.
- """
-
- def __init__(self, args, dictionary, embed_tokens):
- self.compress_layer = None
- super().__init__(args, dictionary, embed_tokens)
-
- def build_encoder_layer(self, args):
- if self.args.shared_layer_kv_compressed == 1 and self.compress_layer is None:
- compress_layer = nn.Linear(
- self.args.max_positions,
- self.args.max_positions // self.args.compressed,
- )
-            # initialize parameters for the compressed layer
- nn.init.xavier_uniform_(compress_layer.weight, gain=1 / math.sqrt(2))
- if self.args.freeze_compress == 1:
- compress_layer.weight.requires_grad = False
- self.compress_layer = compress_layer
-
- return LinformerTransformerEncoderLayer(args, self.compress_layer)
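The shared `compress_layer` built above projects keys/values along the sequence axis; a toy sketch of that length compression with assumed sizes:

```python
import math
import torch
import torch.nn as nn

max_positions, compressed, dim = 128, 4, 16
compress_layer = nn.Linear(max_positions, max_positions // compressed)
nn.init.xavier_uniform_(compress_layer.weight, gain=1 / math.sqrt(2))

x = torch.randn(2, max_positions, dim)                        # (batch, seq, channels)
x_compressed = compress_layer(x.transpose(1, 2)).transpose(1, 2)
print(x_compressed.shape)                                     # torch.Size([2, 32, 16])
```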
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/backtranslation_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/backtranslation_dataset.py
deleted file mode 100644
index 8f70c90df3d237077537993e125d366c95292f1a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/backtranslation_dataset.py
+++ /dev/null
@@ -1,165 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq import utils
-
-from . import FairseqDataset
-
-
-def backtranslate_samples(samples, collate_fn, generate_fn, cuda=True):
- """Backtranslate a list of samples.
-
- Given an input (*samples*) of the form:
-
- [{'id': 1, 'source': 'hallo welt'}]
-
- this will return:
-
- [{'id': 1, 'source': 'hello world', 'target': 'hallo welt'}]
-
- Args:
- samples (List[dict]): samples to backtranslate. Individual samples are
- expected to have a 'source' key, which will become the 'target'
- after backtranslation.
- collate_fn (callable): function to collate samples into a mini-batch
- generate_fn (callable): function to generate backtranslations
- cuda (bool): use GPU for generation (default: ``True``)
-
- Returns:
- List[dict]: an updated list of samples with a backtranslated source
- """
- collated_samples = collate_fn(samples)
- s = utils.move_to_cuda(collated_samples) if cuda else collated_samples
- generated_sources = generate_fn(s)
-
- id_to_src = {sample["id"]: sample["source"] for sample in samples}
-
- # Go through each tgt sentence in batch and its corresponding best
- # generated hypothesis and create a backtranslation data pair
- # {id: id, source: generated backtranslation, target: original tgt}
- return [
- {
- "id": id.item(),
- "target": id_to_src[id.item()],
- "source": hypos[0]["tokens"].cpu(),
- }
- for id, hypos in zip(collated_samples["id"], generated_sources)
- ]
-
-
-class BacktranslationDataset(FairseqDataset):
- """
- Sets up a backtranslation dataset which takes a tgt batch, generates
- a src using a tgt-src backtranslation function (*backtranslation_fn*),
- and returns the corresponding `{generated src, input tgt}` batch.
-
- Args:
- tgt_dataset (~fairseq.data.FairseqDataset): the dataset to be
- backtranslated. Only the source side of this dataset will be used.
- After backtranslation, the source sentences in this dataset will be
- returned as the targets.
- src_dict (~fairseq.data.Dictionary): the dictionary of backtranslated
- sentences.
- tgt_dict (~fairseq.data.Dictionary, optional): the dictionary of
- sentences to be backtranslated.
- backtranslation_fn (callable, optional): function to call to generate
- backtranslations. This is typically the `generate` method of a
- :class:`~fairseq.sequence_generator.SequenceGenerator` object.
- Pass in None when it is not available at initialization time, and
- use set_backtranslation_fn function to set it when available.
- output_collater (callable, optional): function to call on the
- backtranslated samples to create the final batch
- (default: ``tgt_dataset.collater``).
- cuda: use GPU for generation
- """
-
- def __init__(
- self,
- tgt_dataset,
- src_dict,
- tgt_dict=None,
- backtranslation_fn=None,
- output_collater=None,
- cuda=True,
- **kwargs
- ):
- self.tgt_dataset = tgt_dataset
- self.backtranslation_fn = backtranslation_fn
- self.output_collater = (
- output_collater if output_collater is not None else tgt_dataset.collater
- )
- self.cuda = cuda if torch.cuda.is_available() else False
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
-
- def __getitem__(self, index):
- """
- Returns a single sample from *tgt_dataset*. Note that backtranslation is
- not applied in this step; use :func:`collater` instead to backtranslate
- a batch of samples.
- """
- return self.tgt_dataset[index]
-
- def __len__(self):
- return len(self.tgt_dataset)
-
- def set_backtranslation_fn(self, backtranslation_fn):
- self.backtranslation_fn = backtranslation_fn
-
- def collater(self, samples):
- """Merge and backtranslate a list of samples to form a mini-batch.
-
- Using the samples from *tgt_dataset*, load a collated target sample to
- feed to the backtranslation model. Then take the backtranslation with
- the best score as the source and the original input as the target.
-
- Note: we expect *tgt_dataset* to provide a function `collater()` that
- will collate samples into the format expected by *backtranslation_fn*.
- After backtranslation, we will feed the new list of samples (i.e., the
- `(backtranslated source, original source)` pairs) to *output_collater*
- and return the result.
-
- Args:
- samples (List[dict]): samples to backtranslate and collate
-
- Returns:
- dict: a mini-batch with keys coming from *output_collater*
- """
- if samples[0].get("is_dummy", False):
- return samples
- samples = backtranslate_samples(
- samples=samples,
- collate_fn=self.tgt_dataset.collater,
- generate_fn=(lambda net_input: self.backtranslation_fn(net_input)),
- cuda=self.cuda,
- )
- return self.output_collater(samples)
-
- def num_tokens(self, index):
- """Just use the tgt dataset num_tokens"""
- return self.tgt_dataset.num_tokens(index)
-
- def ordered_indices(self):
- """Just use the tgt dataset ordered_indices"""
- return self.tgt_dataset.ordered_indices()
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used
- when filtering a dataset with ``--max-positions``.
-
- Note: we use *tgt_dataset* to approximate the length of the source
- sentence, since we do not know the actual length until after
- backtranslation.
- """
- tgt_size = self.tgt_dataset.size(index)[0]
- return (tgt_size, tgt_size)
-
- @property
- def supports_prefetch(self):
- return getattr(self.tgt_dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- return self.tgt_dataset.prefetch(indices)
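A toy demonstration of `backtranslate_samples` with stub collate/generate functions (the real ones come from the target dataset's `collater` and a `SequenceGenerator`); the token ids below are made up:

```python
import torch

samples = [{"id": 1, "source": torch.tensor([4, 5, 6])}]      # e.g. "hallo welt" as token ids

def collate_fn(batch):
    return {"id": torch.tensor([s["id"] for s in batch])}

def generate_fn(collated):
    # pretend the backtranslation model produced one hypothesis per sample
    return [[{"tokens": torch.tensor([7, 8])}]]               # e.g. "hello world" as token ids

print(backtranslate_samples(samples, collate_fn, generate_fn, cuda=False))
# [{'id': 1, 'target': tensor([4, 5, 6]), 'source': tensor([7, 8])}]
```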
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/gpu/test_binaries_gpu.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/gpu/test_binaries_gpu.py
deleted file mode 100644
index de8c2426134089035c6e0e5da223647bab6f3dba..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/gpu/test_binaries_gpu.py
+++ /dev/null
@@ -1,449 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import contextlib
-import logging
-import json
-import os
-import tempfile
-import unittest
-from io import StringIO
-
-import torch
-from fairseq import options
-from fairseq_cli import train
-from tests.utils import (
- create_dummy_data,
- generate_main,
- preprocess_lm_data,
- preprocess_translation_data,
- train_translation_model,
-)
-
-
-@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
-class TestTranslationGPU(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_fp16_multigpu(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fp16") as data_dir:
- log = os.path.join(data_dir, "train.log")
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "fconv_iwslt_de_en",
- ["--fp16", "--log-file", log],
- world_size=min(torch.cuda.device_count(), 2),
- )
- generate_main(data_dir)
- assert os.path.exists(log)
-
- @staticmethod
- def parse_logs(logfile):
- logs = []
- for ln in open(logfile, "r").readlines():
- try:
- logs.append(json.loads(ln))
- except json.JSONDecodeError:
- continue
- return logs
-
- def test_resume_training_fsdp(self):
- self._test_resume_training(["--ddp-backend", "fully_sharded"])
-
- def test_resume_training_fsdp_sharded_state(self):
- self._test_resume_training(["--ddp-backend", "fully_sharded", "--use-sharded-state"])
-
- def test_resume_training_noc10d(self):
- self._test_resume_training([])
-
- def _test_resume_training(self, extra_clargs, arch="fconv_iwslt_de_en"):
- flags = [
- "--fp16",
- "--log-format",
- "json",
- "--max-update",
- "10",
- "--save-interval-updates",
- "2",
- "--log-interval",
- "1",
- ] + extra_clargs
- world_size = min(torch.cuda.device_count(), 2)
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fp16") as data_dir:
- log = os.path.join(data_dir, "train.log")
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir, arch, flags + ["--log-file", log], world_size=world_size,
- )
- log2 = os.path.join(data_dir, "resume.log")
- restore_file = os.path.join(data_dir, "checkpoint_1_2.pt")
- train_translation_model(
- data_dir,
- arch,
- flags + ["--log-file", log2, "--restore-file", restore_file],
- world_size=world_size,
- )
-
- l1 = self.parse_logs(log)
- l2 = self.parse_logs(log2)
- assert int(l2[0]["num_updates"]) == 3, f"{l1}\n\n {l2}"
- for k in [
- "train_loss",
- "train_num_updates",
- "train_ppl",
- "train_gnorm",
- ]:
- from_scratch, resumed = l1[-1][k], l2[-1][k]
- assert (
- from_scratch == resumed
- ), f"difference at {k} {from_scratch} != {resumed}"
-
- def test_memory_efficient_fp16(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_memory_efficient_fp16") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir, "fconv_iwslt_de_en", ["--memory-efficient-fp16"]
- )
- generate_main(data_dir)
-
- def test_transformer_fp16(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "transformer_iwslt_de_en",
- [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "64",
- "--decoder-embed-dim",
- "64",
- "--fp16",
- ],
- run_validation=True,
- )
- generate_main(data_dir)
-
- @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
- def test_amp(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_amp") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(data_dir, "fconv_iwslt_de_en", ["--amp"])
- generate_main(data_dir)
-
- @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
- def test_transformer_amp(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "transformer_iwslt_de_en",
- [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "64",
- "--decoder-embed-dim",
- "64",
- "--amp",
- ],
- run_validation=True,
- )
- generate_main(data_dir)
-
- @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
- def test_levenshtein_transformer(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_levenshtein_transformer"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, ["--joined-dictionary"])
- train_translation_model(
- data_dir,
- "levenshtein_transformer",
- [
- "--apply-bert-init",
- "--early-exit",
- "6,6,6",
- "--criterion",
- "nat_loss",
- ],
- task="translation_lev",
- )
- gen_config = [
- "--task",
- "translation_lev",
- "--iter-decode-max-iter",
- "9",
- "--iter-decode-eos-penalty",
- "0",
- "--print-step",
- ]
- # non-ensemble generation
- generate_main(data_dir, gen_config)
- # ensemble generation
- generate_main(
- data_dir,
- gen_config,
- path=os.pathsep.join(
- [
- os.path.join(data_dir, "checkpoint_last.pt"),
- os.path.join(data_dir, "checkpoint_last.pt"),
- ]
- ),
- )
-
- def test_fsdp_checkpoint_generate(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fsdp_sharded") as data_dir:
- log = os.path.join(data_dir, "train.log")
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- world_size = min(torch.cuda.device_count(), 2)
- train_translation_model(
- data_dir,
- "fconv_iwslt_de_en",
- ["--log-file", log, "--ddp-backend", "fully_sharded"],
- world_size=world_size,
- )
- generate_main(data_dir)
- assert os.path.exists(log)
-
- def test_fsdp_sharded_checkpoint_generate(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fsdp_sharded") as data_dir:
- log = os.path.join(data_dir, "train.log")
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- world_size = min(torch.cuda.device_count(), 2)
- train_translation_model(
- data_dir,
- "fconv_iwslt_de_en",
- ["--log-file", log, "--ddp-backend", "fully_sharded", "--use-sharded-state"],
- world_size=world_size,
- )
- generate_main(data_dir, ["--checkpoint-shard-count", str(world_size)])
- assert os.path.exists(log)
-
-
-def _quantize_language_model(data_dir, arch, extra_flags=None, run_validation=False):
- train_parser = options.get_training_parser()
- train_args = options.parse_args_and_arch(
- train_parser,
- [
- "--task",
- "language_modeling",
- data_dir,
- "--arch",
- arch,
- "--optimizer",
- "adam",
- "--lr",
- "0.0001",
- "--criterion",
- "adaptive_loss",
- "--adaptive-softmax-cutoff",
- "5,10,15",
- "--max-tokens",
- "500",
- "--tokens-per-sample",
- "500",
- "--save-dir",
- data_dir,
- "--max-epoch",
- "1",
- "--no-progress-bar",
- "--distributed-world-size",
- "1",
- "--ddp-backend",
- "no_c10d",
- "--num-workers",
- "0",
- ]
- + (extra_flags or []),
- )
- train.main(train_args)
-
- # try scalar quantization
- scalar_quant_train_parser = options.get_training_parser()
- scalar_quant_train_args = options.parse_args_and_arch(
- scalar_quant_train_parser,
- [
- "--task",
- "language_modeling",
- data_dir,
- "--arch",
- arch,
- "--optimizer",
- "adam",
- "--lr",
- "0.0001",
- "--criterion",
- "adaptive_loss",
- "--adaptive-softmax-cutoff",
- "5,10,15",
- "--max-tokens",
- "500",
- "--tokens-per-sample",
- "500",
- "--save-dir",
- data_dir,
- "--max-update",
- "3",
- "--no-progress-bar",
- "--distributed-world-size",
- "1",
- "--ddp-backend",
- "no_c10d",
- "--num-workers",
- "0",
- "--quant-noise-scalar",
- "0.5",
- ]
- + (extra_flags or []),
- )
- train.main(scalar_quant_train_args)
-
- # try iterative PQ quantization
- quantize_parser = options.get_training_parser()
- quantize_args = options.parse_args_and_arch(
- quantize_parser,
- [
- "--task",
- "language_modeling",
- data_dir,
- "--arch",
- arch,
- "--optimizer",
- "adam",
- "--lr",
- "0.0001",
- "--criterion",
- "adaptive_loss",
- "--adaptive-softmax-cutoff",
- "5,10,15",
- "--max-tokens",
- "50",
- "--tokens-per-sample",
- "50",
- "--max-update",
- "6",
- "--no-progress-bar",
- "--distributed-world-size",
- "1",
- "--ddp-backend",
- "no_c10d",
- "--num-workers",
- "0",
- "--restore-file",
- os.path.join(data_dir, "checkpoint_last.pt"),
- "--reset-optimizer",
- "--quantization-config-path",
- os.path.join(
- os.path.dirname(__file__), "transformer_quantization_config.yaml"
- ),
- ]
- + (extra_flags or []),
- )
- train.main(quantize_args)
-
-
-@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
-class TestQuantization(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_quantization(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_quantization") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- # tests both scalar and iterative PQ quantization
- _quantize_language_model(data_dir, "transformer_lm")
-
-
-@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
-class TestOptimizersGPU(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_flat_grads(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_flat_grads") as data_dir:
- # Use just a bit of data and tiny model to keep this test runtime reasonable
- create_dummy_data(data_dir, num_examples=10, maxlen=5)
- preprocess_translation_data(data_dir)
- with self.assertRaises(RuntimeError):
- # adafactor isn't compatible with flat grads, which
- # are used by default with --fp16
- train_translation_model(
- data_dir,
- "lstm",
- [
- "--required-batch-size-multiple",
- "1",
- "--encoder-layers",
- "1",
- "--encoder-hidden-size",
- "32",
- "--decoder-layers",
- "1",
- "--optimizer",
- "adafactor",
- "--fp16",
- ],
- )
- # but it should pass once we set --fp16-no-flatten-grads
- train_translation_model(
- data_dir,
- "lstm",
- [
- "--required-batch-size-multiple",
- "1",
- "--encoder-layers",
- "1",
- "--encoder-hidden-size",
- "32",
- "--decoder-layers",
- "1",
- "--optimizer",
- "adafactor",
- "--fp16",
- "--fp16-no-flatten-grads",
- ],
- )
-
-
-if __name__ == "__main__":
- unittest.main()
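
The resume test above compares the last JSON record of a from-scratch training log against that of a resumed run. The stand-alone restatement below is a hypothetical helper written for this sketch (parse_json_logs and assert_resume_matches are not part of the suite), showing the same parse-then-compare idea.

import json


def parse_json_logs(path):
    records = []
    with open(path) as f:
        for line in f:
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # skip non-JSON lines such as warnings or progress output
    return records


def assert_resume_matches(scratch_log, resumed_log, keys=("train_loss", "train_ppl")):
    scratch, resumed = parse_json_logs(scratch_log)[-1], parse_json_logs(resumed_log)[-1]
    for k in keys:
        assert scratch[k] == resumed[k], f"difference at {k}: {scratch[k]} != {resumed[k]}"
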
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/rxf/rxf_src/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/rxf/rxf_src/__init__.py
deleted file mode 100644
index 306e232d6f386b26153864601114e162080dcee4..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/rxf/rxf_src/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import label_smoothed_cross_entropy_r3f, sentence_prediction_r3f # noqa
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_binaries.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_binaries.py
deleted file mode 100644
index 4e207742625427f108f78bcd24d487a081b6ccf7..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_binaries.py
+++ /dev/null
@@ -1,1874 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import contextlib
-import logging
-import json
-import os
-import random
-import sys
-import tempfile
-import unittest
-from io import StringIO
-from typing import List, Dict
-import torch
-from fairseq import options
-from fairseq_cli import eval_lm, train
-from tests.utils import (
- create_dummy_data,
- generate_main,
- preprocess_lm_data,
- preprocess_summarization_data,
- preprocess_translation_data,
- create_laser_data_and_config_json,
- train_translation_model,
- train_language_model,
-)
-
-
-try:
- import transformers # noqa
-
- has_hf_transformers = True
-except ImportError:
- has_hf_transformers = False
-
-
-class TestTranslation(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_fconv(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fconv") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(data_dir, "fconv_iwslt_de_en")
- generate_main(data_dir)
-
- def test_raw(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fconv_raw") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, ["--dataset-impl", "raw"])
- train_translation_model(
- data_dir, "fconv_iwslt_de_en", ["--dataset-impl", "raw"]
- )
- generate_main(data_dir, ["--dataset-impl", "raw"])
-
- def test_update_freq(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_update_freq") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir, "fconv_iwslt_de_en", ["--update-freq", "3"]
- )
- generate_main(data_dir)
-
- def test_max_positions(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_max_positions") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- with self.assertRaises(Exception) as context:
- train_translation_model(
- data_dir,
- "fconv_iwslt_de_en",
- ["--max-target-positions", "5"],
- )
- self.assertTrue(
- "skip this example with --skip-invalid-size-inputs-valid-test"
- in str(context.exception)
- )
- train_translation_model(
- data_dir,
- "fconv_iwslt_de_en",
- [
- "--max-target-positions",
- "5",
- "--skip-invalid-size-inputs-valid-test",
- ],
- )
- with self.assertRaises(Exception) as context:
- generate_main(data_dir)
- generate_main(data_dir, ["--skip-invalid-size-inputs-valid-test"])
-
- def test_generation(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_sampling") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(data_dir, "fconv_iwslt_de_en")
- generate_main(
- data_dir,
- [
- "--sampling",
- "--temperature",
- "2",
- "--beam",
- "2",
- "--nbest",
- "2",
- ],
- )
- generate_main(
- data_dir,
- [
- "--sampling",
- "--sampling-topk",
- "3",
- "--beam",
- "2",
- "--nbest",
- "2",
- ],
- )
- generate_main(
- data_dir,
- [
- "--sampling",
- "--sampling-topp",
- "0.2",
- "--beam",
- "2",
- "--nbest",
- "2",
- ],
- )
- generate_main(
- data_dir,
- [
- "--diversity-rate",
- "0.5",
- "--beam",
- "6",
- ],
- )
- with self.assertRaises(ValueError):
- generate_main(
- data_dir,
- [
- "--diverse-beam-groups",
- "4",
- "--match-source-len",
- ],
- )
- generate_main(data_dir, ["--prefix-size", "2"])
- generate_main(data_dir, ["--retain-dropout"])
-
- def test_eval_bleu(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_eval_bleu") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "fconv_iwslt_de_en",
- [
- "--eval-bleu",
- "--eval-bleu-print-samples",
- "--eval-bleu-remove-bpe",
- "--eval-bleu-detok",
- "space",
- "--eval-bleu-args",
- '{"beam": 4, "min_len": 10}',
- ],
- )
-
- def test_lstm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_lstm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "lstm_wiseman_iwslt_de_en",
- [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--decoder-out-embed-dim",
- "8",
- ],
- )
- generate_main(data_dir)
-
- def test_lstm_bidirectional(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_lstm_bidirectional") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "lstm",
- [
- "--encoder-layers",
- "2",
- "--encoder-bidirectional",
- "--encoder-hidden-size",
- "16",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--decoder-out-embed-dim",
- "8",
- "--decoder-layers",
- "2",
- ],
- )
- generate_main(data_dir)
-
- def test_transformer(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "transformer_iwslt_de_en",
- [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- ],
- run_validation=True,
- )
- generate_main(data_dir)
-
- def test_multilingual_transformer(self):
- # test with all combinations of encoder/decoder lang tokens
- encoder_langtok_flags = [
- [],
- ["--encoder-langtok", "src"],
- ["--encoder-langtok", "tgt"],
- ]
- decoder_langtok_flags = [[], ["--decoder-langtok"]]
- with contextlib.redirect_stdout(StringIO()):
- for i in range(len(encoder_langtok_flags)):
- for j in range(len(decoder_langtok_flags)):
- enc_ltok_flag = encoder_langtok_flags[i]
- dec_ltok_flag = decoder_langtok_flags[j]
- with tempfile.TemporaryDirectory(
- f"test_multilingual_transformer_{i}_{j}"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- arch="multilingual_transformer",
- task="multilingual_translation",
- extra_flags=[
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- lang_flags=["--lang-pairs", "in-out,out-in"],
- run_validation=True,
- extra_valid_flags=enc_ltok_flag + dec_ltok_flag,
- )
- generate_main(
- data_dir,
- extra_flags=[
- "--task",
- "multilingual_translation",
- "--lang-pairs",
- "in-out,out-in",
- "--source-lang",
- "in",
- "--target-lang",
- "out",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- )
-
- @unittest.skipIf(
- sys.platform.lower() == "darwin", "skip latent depth test on MacOS"
- )
- def test_multilingual_translation_latent_depth(self):
- # test with latent depth in encoder, decoder, or both
- encoder_latent_layer = [[], ["--encoder-latent-layer"]]
- decoder_latent_layer = [[], ["--decoder-latent-layer"]]
- with contextlib.redirect_stdout(StringIO()):
- for i in range(len(encoder_latent_layer)):
- for j in range(len(decoder_latent_layer)):
- if i == 0 and j == 0:
- continue
- enc_ll_flag = encoder_latent_layer[i]
- dec_ll_flag = decoder_latent_layer[j]
- with tempfile.TemporaryDirectory(
- f"test_multilingual_translation_latent_depth_{i}_{j}"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(
- data_dir, extra_flags=["--joined-dictionary"]
- )
- train_translation_model(
- data_dir,
- arch="latent_multilingual_transformer",
- task="multilingual_translation_latent_depth",
- extra_flags=[
- "--user-dir",
- "examples/latent_depth/latent_depth_src",
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--share-encoders",
- "--share-decoders",
- "--sparsity-weight",
- "0.1",
- ]
- + enc_ll_flag
- + dec_ll_flag,
- lang_flags=["--lang-pairs", "in-out,out-in"],
- run_validation=True,
- extra_valid_flags=[
- "--user-dir",
- "examples/latent_depth/latent_depth_src",
- ]
- + enc_ll_flag
- + dec_ll_flag,
- )
- generate_main(
- data_dir,
- extra_flags=[
- "--user-dir",
- "examples/latent_depth/latent_depth_src",
- "--task",
- "multilingual_translation_latent_depth",
- "--lang-pairs",
- "in-out,out-in",
- "--source-lang",
- "in",
- "--target-lang",
- "out",
- ]
- + enc_ll_flag
- + dec_ll_flag,
- )
-
- def test_translation_multi_simple_epoch(self):
- # test with all combinations of encoder/decoder lang tokens
- encoder_langtok_flags = [
- [],
- ["--encoder-langtok", "src"],
- ["--encoder-langtok", "tgt"],
- ]
- decoder_langtok_flags = [[], ["--decoder-langtok"]]
- with contextlib.redirect_stdout(StringIO()):
- for i in range(len(encoder_langtok_flags)):
- for j in range(len(decoder_langtok_flags)):
- enc_ltok_flag = encoder_langtok_flags[i]
- dec_ltok_flag = decoder_langtok_flags[j]
- with tempfile.TemporaryDirectory(
- f"test_translation_multi_simple_epoch_{i}_{j}"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(
- data_dir, extra_flags=["--joined-dictionary"]
- )
- train_translation_model(
- data_dir,
- arch="transformer",
- task="translation_multi_simple_epoch",
- extra_flags=[
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--sampling-method",
- "temperature",
- "--sampling-temperature",
- "1.5",
- "--virtual-epoch-size",
- "1000",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- lang_flags=["--lang-pairs", "in-out,out-in"],
- run_validation=True,
- extra_valid_flags=enc_ltok_flag + dec_ltok_flag,
- )
- generate_main(
- data_dir,
- extra_flags=[
- "--task",
- "translation_multi_simple_epoch",
- "--lang-pairs",
- "in-out,out-in",
- "--source-lang",
- "in",
- "--target-lang",
- "out",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- )
-
- def test_translation_multi_simple_epoch_no_vepoch(self):
-        # test a single encoder/decoder lang-token setup without --virtual-epoch-size
- with contextlib.redirect_stdout(StringIO()):
- enc_ltok_flag = ["--encoder-langtok", "src"]
- dec_ltok_flag = ["--decoder-langtok"]
- with tempfile.TemporaryDirectory(
- "test_translation_multi_simple_epoch_dict"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, extra_flags=[])
- train_translation_model(
- data_dir,
- arch="transformer",
- task="translation_multi_simple_epoch",
- extra_flags=[
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--sampling-method",
- "temperature",
- "--sampling-temperature",
- "1.5",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- lang_flags=["--lang-pairs", "in-out"],
- run_validation=True,
- extra_valid_flags=enc_ltok_flag + dec_ltok_flag,
- )
- generate_main(
- data_dir,
- extra_flags=[
- "--task",
- "translation_multi_simple_epoch",
- "--lang-pairs",
- "in-out",
- "--source-lang",
- "in",
- "--target-lang",
- "out",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- )
-
- def test_translation_multi_simple_epoch_dicts(self):
-        # test a single encoder/decoder lang-token setup with separate source/target dictionaries
- with contextlib.redirect_stdout(StringIO()):
- enc_ltok_flag = ["--encoder-langtok", "src"]
- dec_ltok_flag = ["--decoder-langtok"]
- with tempfile.TemporaryDirectory(
- "test_translation_multi_simple_epoch_dict"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, extra_flags=[])
- train_translation_model(
- data_dir,
- arch="transformer",
- task="translation_multi_simple_epoch",
- extra_flags=[
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--sampling-method",
- "temperature",
- "--sampling-temperature",
- "1.5",
- "--virtual-epoch-size",
- "1000",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- lang_flags=["--lang-pairs", "in-out"],
- run_validation=True,
- extra_valid_flags=enc_ltok_flag + dec_ltok_flag,
- )
- generate_main(
- data_dir,
- extra_flags=[
- "--task",
- "translation_multi_simple_epoch",
- "--lang-pairs",
- "in-out",
- "--source-lang",
- "in",
- "--target-lang",
- "out",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- )
-
- def test_translation_multi_simple_epoch_src_tgt_dict_spec(self):
-        # test the specification of explicit --source-dict and --target-dict
- with contextlib.redirect_stdout(StringIO()):
- enc_ltok_flag = ["--encoder-langtok", "src"]
- dec_ltok_flag = ["--decoder-langtok"]
- with tempfile.TemporaryDirectory(
- "test_translation_multi_simple_epoch_dict"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, extra_flags=[])
- train_translation_model(
- data_dir,
- arch="transformer",
- task="translation_multi_simple_epoch",
- extra_flags=[
- "--source-dict",
- f"{data_dir}/dict.in.txt",
- "--target-dict",
- f"{data_dir}/dict.out.txt",
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--sampling-method",
- "temperature",
- "--sampling-temperature",
- "1.5",
- "--virtual-epoch-size",
- "1000",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- lang_flags=["--lang-pairs", "in-out"],
- run_validation=True,
- extra_valid_flags=enc_ltok_flag + dec_ltok_flag,
- )
- generate_main(
- data_dir,
- extra_flags=[
- "--task",
- "translation_multi_simple_epoch",
- "--lang-pairs",
- "in-out",
- "--source-lang",
- "in",
- "--target-lang",
- "out",
- ]
- + enc_ltok_flag
- + dec_ltok_flag,
- )
-
- def test_transformer_cross_self_attention(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_transformer_cross_self_attention"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "transformer_iwslt_de_en",
- [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--no-cross-attention",
- "--cross-self-attention",
- ],
- run_validation=True,
- )
- generate_main(data_dir, extra_flags=[])
-
- def test_transformer_pointer_generator(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_transformer_pointer_generator"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_summarization_data(data_dir)
- train_translation_model(
- data_dir,
- "transformer_pointer_generator",
- extra_flags=[
- "--user-dir",
- "examples/pointer_generator/pointer_generator_src",
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--alignment-layer",
- "-1",
- "--alignment-heads",
- "1",
- "--source-position-markers",
- "0",
- ],
- run_validation=True,
- extra_valid_flags=[
- "--user-dir",
- "examples/pointer_generator/pointer_generator_src",
- ],
- )
- generate_main(
- data_dir,
- extra_flags=[
- "--user-dir",
- "examples/pointer_generator/pointer_generator_src",
- ],
- )
-
- def test_lightconv(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_lightconv") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "lightconv_iwslt_de_en",
- [
- "--encoder-conv-type",
- "lightweight",
- "--decoder-conv-type",
- "lightweight",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- ],
- )
- generate_main(data_dir)
-
- def test_dynamicconv(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_dynamicconv") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "lightconv_iwslt_de_en",
- [
- "--encoder-conv-type",
- "dynamic",
- "--decoder-conv-type",
- "dynamic",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- ],
- )
- generate_main(data_dir)
-
- def test_cmlm_transformer(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_cmlm_transformer") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, ["--joined-dictionary"])
- train_translation_model(
- data_dir,
- "cmlm_transformer",
- [
- "--apply-bert-init",
- "--criterion",
- "nat_loss",
- "--noise",
- "full_mask",
- "--pred-length-offset",
- "--length-loss-factor",
- "0.1",
- ],
- task="translation_lev",
- )
- generate_main(
- data_dir,
- [
- "--task",
- "translation_lev",
- "--iter-decode-max-iter",
- "9",
- "--iter-decode-eos-penalty",
- "0",
- "--print-step",
- ],
- )
-
- def test_nonautoregressive_transformer(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_nonautoregressive_transformer"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, ["--joined-dictionary"])
- train_translation_model(
- data_dir,
- "nonautoregressive_transformer",
- [
- "--apply-bert-init",
- "--src-embedding-copy",
- "--criterion",
- "nat_loss",
- "--noise",
- "full_mask",
- "--pred-length-offset",
- "--length-loss-factor",
- "0.1",
- ],
- task="translation_lev",
- )
- generate_main(
- data_dir,
- [
- "--task",
- "translation_lev",
- "--iter-decode-max-iter",
- "0",
- "--iter-decode-eos-penalty",
- "0",
- "--print-step",
- ],
- )
-
- # def test_nat_crf_transformer(self):
- # with contextlib.redirect_stdout(StringIO()):
- # with tempfile.TemporaryDirectory('test_nat_crf_transformer') as data_dir:
- # create_dummy_data(data_dir)
- # preprocess_translation_data(data_dir, ['--joined-dictionary'])
- # train_translation_model(data_dir, 'nacrf_transformer', [
- # '--apply-bert-init', '--criterion',
- # 'nat_loss', '--noise', 'full_mask', '--pred-length-offset',
- # '--length-loss-factor', '0.1',
- # '--word-ins-loss-factor', '0.5',
- # '--crf-lowrank-approx', '1',
- # '--crf-beam-approx', '1'
- # ], task='translation_lev')
- # generate_main(data_dir, [
- # '--task', 'translation_lev',
- # '--iter-decode-max-iter', '0',
- # '--iter-decode-eos-penalty', '0',
- # '--print-step',
- # ])
-
- def test_iterative_nonautoregressive_transformer(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_iterative_nonautoregressive_transformer"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, ["--joined-dictionary"])
- train_translation_model(
- data_dir,
- "iterative_nonautoregressive_transformer",
- [
- "--apply-bert-init",
- "--src-embedding-copy",
- "--criterion",
- "nat_loss",
- "--noise",
- "full_mask",
- "--stochastic-approx",
- "--dae-ratio",
- "0.5",
- "--train-step",
- "3",
- ],
- task="translation_lev",
- )
- generate_main(
- data_dir,
- [
- "--task",
- "translation_lev",
- "--iter-decode-max-iter",
- "9",
- "--iter-decode-eos-penalty",
- "0",
- "--print-step",
- ],
- )
-
- def test_insertion_transformer(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_insertion_transformer") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir, ["--joined-dictionary"])
- train_translation_model(
- data_dir,
- "insertion_transformer",
- [
- "--apply-bert-init",
- "--criterion",
- "nat_loss",
- "--noise",
- "random_mask",
- ],
- task="translation_lev",
- )
- generate_main(
- data_dir,
- [
- "--task",
- "translation_lev",
- "--iter-decode-max-iter",
- "9",
- "--iter-decode-eos-penalty",
- "0",
- "--print-step",
- ],
- )
-
- def test_mixture_of_experts(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_moe") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "transformer_iwslt_de_en",
- [
- "--task",
- "translation_moe",
- "--user-dir",
- "examples/translation_moe/translation_moe_src",
- "--method",
- "hMoElp",
- "--mean-pool-gating-network",
- "--num-experts",
- "3",
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- ],
- )
- generate_main(
- data_dir,
- [
- "--task",
- "translation_moe",
- "--user-dir",
- "examples/translation_moe/translation_moe_src",
- "--method",
- "hMoElp",
- "--mean-pool-gating-network",
- "--num-experts",
- "3",
- "--gen-expert",
- "0",
- ],
- )
-
- def test_alignment(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_alignment") as data_dir:
- create_dummy_data(data_dir, alignment=True)
- preprocess_translation_data(data_dir, ["--align-suffix", "align"])
- train_translation_model(
- data_dir,
- "transformer_align",
- [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--load-alignments",
- "--alignment-layer",
- "1",
- "--criterion",
- "label_smoothed_cross_entropy_with_alignment",
- ],
- run_validation=True,
- )
- generate_main(data_dir)
-
- def test_laser_lstm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_laser_lstm") as data_dir:
- laser_config_file = create_laser_data_and_config_json(data_dir)
- train_translation_model(
- laser_config_file.name,
- "laser_lstm",
- [
- "--user-dir",
- "examples/laser/laser_src",
- "--weighting-alpha",
- "0.3",
- "--encoder-bidirectional",
- "--encoder-hidden-size",
- "512",
- "--encoder-layers",
- "5",
- "--decoder-layers",
- "1",
- "--encoder-embed-dim",
- "320",
- "--decoder-embed-dim",
- "320",
- "--decoder-lang-embed-dim",
- "32",
- "--save-dir",
- data_dir,
- "--disable-validation",
- ],
- task="laser",
- lang_flags=[],
- )
-
- def test_laser_transformer(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_laser_transformer") as data_dir:
- laser_config_file = create_laser_data_and_config_json(data_dir)
- train_translation_model(
- laser_config_file.name,
- "laser_transformer",
- [
- "--user-dir",
- "examples/laser/laser_src",
- "--weighting-alpha",
- "0.3",
- "--encoder-embed-dim",
- "320",
- "--decoder-embed-dim",
- "320",
- "--decoder-lang-embed-dim",
- "32",
- "--save-dir",
- data_dir,
- "--disable-validation",
- ],
- task="laser",
- lang_flags=[],
- )
-
- def test_alignment_full_context(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_alignment") as data_dir:
- create_dummy_data(data_dir, alignment=True)
- preprocess_translation_data(data_dir, ["--align-suffix", "align"])
- train_translation_model(
- data_dir,
- "transformer_align",
- [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--load-alignments",
- "--alignment-layer",
- "1",
- "--criterion",
- "label_smoothed_cross_entropy_with_alignment",
- "--full-context-alignment",
- ],
- run_validation=True,
- )
- generate_main(data_dir)
-
- def test_transformer_layerdrop(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer_layerdrop") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- train_translation_model(
- data_dir,
- "transformer_iwslt_de_en",
- [
- "--encoder-layers",
- "3",
- "--decoder-layers",
- "3",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--encoder-layerdrop",
- "0.01",
- "--decoder-layerdrop",
- "0.01",
- ],
- )
- generate_main(data_dir)
- generate_main(
- data_dir,
- [
- "--model-overrides",
- "{'encoder_layers_to_keep':'0,2','decoder_layers_to_keep':'1'}",
- ],
- )
-
-
-class TestStories(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_fconv_self_att_wp(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fconv_self_att_wp") as data_dir:
- create_dummy_data(data_dir)
- preprocess_translation_data(data_dir)
- config = [
- "--encoder-layers",
- "[(128, 3)] * 2",
- "--decoder-layers",
- "[(128, 3)] * 2",
- "--decoder-attention",
- "True",
- "--encoder-attention",
- "False",
- "--gated-attention",
- "True",
- "--self-attention",
- "True",
- "--project-input",
- "True",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--decoder-out-embed-dim",
- "8",
- "--multihead-self-attention-nheads",
- "2",
- ]
- train_translation_model(data_dir, "fconv_self_att_wp", config)
- generate_main(data_dir)
-
- # fusion model
- os.rename(
- os.path.join(data_dir, "checkpoint_last.pt"),
- os.path.join(data_dir, "pretrained.pt"),
- )
- config.extend(
- [
- "--pretrained",
- "True",
- "--pretrained-checkpoint",
- os.path.join(data_dir, "pretrained.pt"),
- "--save-dir",
- os.path.join(data_dir, "fusion_model"),
- ]
- )
- train_translation_model(data_dir, "fconv_self_att_wp", config)
-
-
-class TestLanguageModeling(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_fconv_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_fconv_lm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "fconv_lm",
- [
- "--decoder-layers",
- "[(850, 3)] * 2 + [(1024,4)]",
- "--decoder-embed-dim",
- "280",
- "--optimizer",
- "nag",
- "--lr",
- "0.1",
- ],
- )
- eval_lm_main(data_dir)
- generate_main(
- data_dir,
- [
- "--task",
- "language_modeling",
- "--sample-break-mode",
- "eos",
- "--tokens-per-sample",
- "500",
- ],
- )
-
- def test_transformer_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer_lm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "transformer_lm",
- ["--add-bos-token", '--nval', '1'],
- run_validation=True,
- )
- eval_lm_main(data_dir)
- eval_lm_main(data_dir, extra_flags=["--context-window", "25"])
- generate_main(
- data_dir,
- [
- "--task",
- "language_modeling",
- "--sample-break-mode",
- "eos",
- "--tokens-per-sample",
- "500",
- ],
- )
-
- def test_transformer_lm_with_adaptive_softmax(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_transformer_lm_with_adaptive_softmax"
- ) as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "transformer_lm",
- [
- "--add-bos-token",
- "--criterion",
- "adaptive_loss",
- "--adaptive-softmax-cutoff",
- "5,10,15",
- ],
- run_validation=True,
- )
- eval_lm_main(data_dir)
- generate_main(
- data_dir,
- [
- "--task",
- "language_modeling",
- "--sample-break-mode",
- "eos",
- "--tokens-per-sample",
- "500",
- ],
- )
-
- def test_lightconv_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_lightconv_lm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "lightconv_lm",
- ["--add-bos-token"],
- run_validation=True,
- )
- eval_lm_main(data_dir)
- generate_main(
- data_dir,
- [
- "--task",
- "language_modeling",
- "--sample-break-mode",
- "eos",
- "--tokens-per-sample",
- "500",
- ],
- )
-
- def test_lstm_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_lstm_lm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "lstm_lm",
- ["--add-bos-token"],
- run_validation=True,
- )
- eval_lm_main(data_dir)
- generate_main(
- data_dir,
- [
- "--task",
- "language_modeling",
- "--sample-break-mode",
- "eos",
- "--tokens-per-sample",
- "500",
- ],
- )
-
- def test_lstm_lm_residuals(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_lstm_lm_residuals") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "lstm_lm",
- ["--add-bos-token", "--residuals"],
- run_validation=True,
- )
- eval_lm_main(data_dir)
- generate_main(
- data_dir,
- [
- "--task",
- "language_modeling",
- "--sample-break-mode",
- "eos",
- "--tokens-per-sample",
- "500",
- ],
- )
-
- @unittest.skipIf(not has_hf_transformers, "skip test if transformers is missing")
- def test_transformer_xl_bptt_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer_xl_bptt_lm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- task_flags = [
- "--user-dir",
- "examples/truncated_bptt",
- "--task",
- "truncated_bptt_lm",
- "--batch-size",
- "2",
- "--tokens-per-sample",
- "50",
- ]
- train_language_model(
- data_dir=data_dir,
- arch="transformer_xl",
- extra_flags=task_flags
- + [
- "--n-layer",
- "2",
- ],
- task="truncated_bptt_lm",
- run_validation=True,
- extra_valid_flags=task_flags,
- )
- eval_lm_main(data_dir, extra_flags=task_flags)
- # Train with activation offloading
- train_language_model(
- data_dir=data_dir,
- arch="transformer_xl",
- extra_flags=task_flags
- + [
- "--n-layer",
- "2",
- "--offload-activations",
- ],
- task="truncated_bptt_lm",
- run_validation=True,
- extra_valid_flags=task_flags,
- )
-
-
-class TestMaskedLanguageModel(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_legacy_masked_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_legacy_mlm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_legacy_masked_language_model(data_dir, "masked_lm")
-
- def test_roberta_masked_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_roberta_mlm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_masked_lm(
- data_dir, "roberta_base", extra_flags=["--encoder-layers", "2"]
- )
-
- def test_roberta_sentence_prediction(self):
- num_classes = 3
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_roberta_head") as data_dir:
- create_dummy_roberta_head_data(data_dir, num_classes=num_classes)
- preprocess_lm_data(os.path.join(data_dir, "input0"))
- preprocess_lm_data(os.path.join(data_dir, "label"))
- train_roberta_head(data_dir, "roberta_base", num_classes=num_classes)
-
- def test_roberta_regression_single(self):
- num_classes = 1
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_roberta_regression_single"
- ) as data_dir:
- create_dummy_roberta_head_data(
- data_dir, num_classes=num_classes, regression=True
- )
- preprocess_lm_data(os.path.join(data_dir, "input0"))
- train_roberta_head(
- data_dir,
- "roberta_base",
- num_classes=num_classes,
- extra_flags=["--regression-target"],
- )
-
- def test_roberta_regression_multiple(self):
- num_classes = 3
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_roberta_regression_multiple"
- ) as data_dir:
- create_dummy_roberta_head_data(
- data_dir, num_classes=num_classes, regression=True
- )
- preprocess_lm_data(os.path.join(data_dir, "input0"))
- train_roberta_head(
- data_dir,
- "roberta_base",
- num_classes=num_classes,
- extra_flags=["--regression-target"],
- )
-
- def test_linformer_roberta_masked_lm(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_linformer_roberta_mlm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_masked_lm(
- data_dir,
- "linformer_roberta_base",
- extra_flags=[
- "--user-dir",
- "examples/linformer/linformer_src",
- "--encoder-layers",
- "2",
- ],
- )
-
- def test_linformer_roberta_sentence_prediction(self):
- num_classes = 3
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_linformer_roberta_head") as data_dir:
- create_dummy_roberta_head_data(data_dir, num_classes=num_classes)
- preprocess_lm_data(os.path.join(data_dir, "input0"))
- preprocess_lm_data(os.path.join(data_dir, "label"))
- train_roberta_head(
- data_dir,
- "linformer_roberta_base",
- num_classes=num_classes,
- extra_flags=["--user-dir", "examples/linformer/linformer_src"],
- )
-
- def test_linformer_roberta_regression_single(self):
- num_classes = 1
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_linformer_roberta_regression_single"
- ) as data_dir:
- create_dummy_roberta_head_data(
- data_dir, num_classes=num_classes, regression=True
- )
- preprocess_lm_data(os.path.join(data_dir, "input0"))
- train_roberta_head(
- data_dir,
- "linformer_roberta_base",
- num_classes=num_classes,
- extra_flags=[
- "--regression-target",
- "--user-dir",
- "examples/linformer/linformer_src",
- ],
- )
-
- def test_linformer_roberta_regression_multiple(self):
- num_classes = 3
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory(
- "test_linformer_roberta_regression_multiple"
- ) as data_dir:
- create_dummy_roberta_head_data(
- data_dir, num_classes=num_classes, regression=True
- )
- preprocess_lm_data(os.path.join(data_dir, "input0"))
- train_roberta_head(
- data_dir,
- "linformer_roberta_base",
- num_classes=num_classes,
- extra_flags=[
- "--regression-target",
- "--user-dir",
- "examples/linformer/linformer_src",
- ],
- )
-
- def _test_pretrained_masked_lm_for_translation(self, learned_pos_emb, encoder_only):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_mlm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_legacy_masked_language_model(
- data_dir,
- arch="masked_lm",
- extra_args=("--encoder-learned-pos",) if learned_pos_emb else (),
- )
- with tempfile.TemporaryDirectory(
- "test_mlm_translation"
- ) as translation_dir:
- create_dummy_data(translation_dir)
- preprocess_translation_data(
- translation_dir, extra_flags=["--joined-dictionary"]
- )
- # Train transformer with data_dir/checkpoint_last.pt
- train_translation_model(
- translation_dir,
- arch="transformer_from_pretrained_xlm",
- extra_flags=[
- "--decoder-layers",
- "1",
- "--decoder-embed-dim",
- "32",
- "--decoder-attention-heads",
- "1",
- "--decoder-ffn-embed-dim",
- "32",
- "--encoder-layers",
- "1",
- "--encoder-embed-dim",
- "32",
- "--encoder-attention-heads",
- "1",
- "--encoder-ffn-embed-dim",
- "32",
- "--pretrained-xlm-checkpoint",
- "{}/checkpoint_last.pt".format(data_dir),
- "--activation-fn",
- "gelu",
- "--max-source-positions",
- "500",
- "--max-target-positions",
- "500",
- ]
- + (
- ["--encoder-learned-pos", "--decoder-learned-pos"]
- if learned_pos_emb
- else []
- )
- + (["--init-encoder-only"] if encoder_only else []),
- task="translation_from_pretrained_xlm",
- )
-
- def test_pretrained_masked_lm_for_translation_learned_pos_emb(self):
- self._test_pretrained_masked_lm_for_translation(True, False)
-
- def test_pretrained_masked_lm_for_translation_sinusoidal_pos_emb(self):
- self._test_pretrained_masked_lm_for_translation(False, False)
-
- def test_pretrained_masked_lm_for_translation_encoder_only(self):
- self._test_pretrained_masked_lm_for_translation(True, True)
-
- def test_r4f_roberta(self):
- num_classes = 3
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_r4f_roberta_head") as data_dir:
- create_dummy_roberta_head_data(data_dir, num_classes=num_classes)
- preprocess_lm_data(os.path.join(data_dir, "input0"))
- preprocess_lm_data(os.path.join(data_dir, "label"))
- train_roberta_head(
- data_dir,
- "roberta_base",
- num_classes=num_classes,
- extra_flags=[
- "--user-dir",
- "examples/rxf/rxf_src",
- "--criterion",
- "sentence_prediction_r3f",
- "--spectral-norm-classification-head",
- ],
- )
-
-
-def train_legacy_masked_language_model(data_dir, arch, extra_args=()):
- train_parser = options.get_training_parser()
- # TODO: langs should be in and out right?
- train_args = options.parse_args_and_arch(
- train_parser,
- [
- "--task",
- "cross_lingual_lm",
- data_dir,
- "--arch",
- arch,
- # Optimizer args
- "--optimizer",
- "adam",
- "--lr-scheduler",
- "reduce_lr_on_plateau",
- "--lr-shrink",
- "0.5",
- "--lr",
- "0.0001",
- "--stop-min-lr",
- "1e-09",
- # dropout, attention args
- "--dropout",
- "0.1",
- "--attention-dropout",
- "0.1",
- # MLM args
- "--criterion",
- "legacy_masked_lm_loss",
- "--masked-lm-only",
- "--monolingual-langs",
- "in,out",
- "--num-segment",
- "5",
- # Transformer args: use a small transformer model for fast training
- "--encoder-layers",
- "1",
- "--encoder-embed-dim",
- "32",
- "--encoder-attention-heads",
- "1",
- "--encoder-ffn-embed-dim",
- "32",
- # Other training args
- "--max-tokens",
- "500",
- "--tokens-per-sample",
- "500",
- "--save-dir",
- data_dir,
- "--max-epoch",
- "1",
- "--no-progress-bar",
- "--distributed-world-size",
- "1",
- "--dataset-impl",
- "raw",
- "--num-workers",
- "0",
- ]
- + list(extra_args),
- )
- train.main(train_args)
-
-
-class TestOptimizers(unittest.TestCase):
- def setUp(self):
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def test_optimizers(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_optimizers") as data_dir:
- # Use just a bit of data and tiny model to keep this test runtime reasonable
- create_dummy_data(data_dir, num_examples=10, maxlen=5)
- preprocess_translation_data(data_dir)
- optimizers = ["adafactor", "adam", "nag", "adagrad", "sgd", "adadelta"]
- last_checkpoint = os.path.join(data_dir, "checkpoint_last.pt")
- for optimizer in optimizers:
- if os.path.exists(last_checkpoint):
- os.remove(last_checkpoint)
- train_translation_model(
- data_dir,
- "lstm",
- [
- "--required-batch-size-multiple",
- "1",
- "--encoder-layers",
- "1",
- "--encoder-hidden-size",
- "32",
- "--decoder-layers",
- "1",
- "--optimizer",
- optimizer,
- ],
- )
- generate_main(data_dir)
-
-
-def read_last_log_entry(
- logs: List[logging.LogRecord], logger_name: str
-) -> Dict[str, float]:
- for x in reversed(logs):
- if x.name == logger_name:
- return json.loads(x.message)
- raise ValueError(f"No entries from {logger_name} found in captured logs")
-
-
-class TestActivationCheckpointing(unittest.TestCase):
- base_flags = [
- "--encoder-layers",
- "2",
- "--decoder-layers",
- "2",
- "--encoder-embed-dim",
- "8",
- "--decoder-embed-dim",
- "8",
- "--restore-file",
- "x.pt",
- "--log-format",
- "json",
- "--log-interval",
- "1",
- "--max-update",
- "2",
- ]
-
- def _train(self, data_dir, extra_flags):
- with self.assertLogs() as logs:
- train_translation_model(
- data_dir,
- "transformer_iwslt_de_en",
- self.base_flags + extra_flags,
- run_validation=True,
- extra_valid_flags=["--log-format", "json"],
- )
- return logs.records
-
- def test_activation_offloading_does_not_change_metrics(self):
- """Neither ----checkpoint-activations nor --offload-activations should change loss"""
- with tempfile.TemporaryDirectory("test_transformer_with_act_cpt") as data_dir:
-
- with self.assertLogs():
- create_dummy_data(data_dir, num_examples=20)
- preprocess_translation_data(data_dir)
- offload_logs = self._train(data_dir, ["--offload-activations"])
- baseline_logs = self._train(data_dir, [])
-
- assert len(baseline_logs) == len(offload_logs)
-
- baseline_valid_stats = read_last_log_entry(baseline_logs, "valid")
- offload_valid_stats = read_last_log_entry(offload_logs, "valid")
- baseline_train_stats = read_last_log_entry(baseline_logs, "train")
- offload_train_stats = read_last_log_entry(offload_logs, "train")
-
- assert (
- baseline_train_stats["train_loss"] == offload_train_stats["train_loss"]
- )
- assert (
- baseline_valid_stats["valid_loss"] == offload_valid_stats["valid_loss"]
- )
-
- def test_activation_checkpointing_does_not_change_metrics(self):
- """--checkpoint-activations should not change loss"""
-
- with tempfile.TemporaryDirectory("test_transformer_with_act_cpt") as data_dir:
- with self.assertLogs():
- create_dummy_data(data_dir, num_examples=20)
- preprocess_translation_data(data_dir)
- ckpt_logs = self._train(data_dir, ["--checkpoint-activations"])
- baseline_logs = self._train(data_dir, [])
- assert len(baseline_logs) == len(ckpt_logs)
-
- baseline_train_stats = read_last_log_entry(baseline_logs, "train")
- ckpt_train_stats = read_last_log_entry(ckpt_logs, "train")
- assert baseline_train_stats["train_loss"] == ckpt_train_stats["train_loss"]
-
- baseline_valid_stats = read_last_log_entry(baseline_logs, "valid")
- ckpt_valid_stats = read_last_log_entry(ckpt_logs, "valid")
- assert baseline_valid_stats["valid_loss"] == ckpt_valid_stats["valid_loss"]
-
-
-def create_dummy_roberta_head_data(
- data_dir, num_examples=100, maxlen=10, num_classes=2, regression=False
-):
- input_dir = "input0"
-
- def _create_dummy_data(filename):
- random_data = torch.rand(num_examples * maxlen)
- input_data = 97 + torch.floor(26 * random_data).int()
- if regression:
- output_data = torch.rand((num_examples, num_classes))
- else:
- output_data = 1 + torch.floor(num_classes * torch.rand(num_examples)).int()
- with open(os.path.join(data_dir, input_dir, filename + ".out"), "w") as f_in:
- label_filename = filename + ".label" if regression else filename + ".out"
- with open(os.path.join(data_dir, "label", label_filename), "w") as f_out:
- offset = 0
- for i in range(num_examples):
- # write example input
- ex_len = random.randint(1, maxlen)
- ex_str = " ".join(map(chr, input_data[offset : offset + ex_len]))
- print(ex_str, file=f_in)
- # write example label
- if regression:
- class_str = " ".join(map(str, output_data[i].numpy()))
- print(class_str, file=f_out)
- else:
- class_str = "class{}".format(output_data[i])
- print(class_str, file=f_out)
- offset += ex_len
-
- os.mkdir(os.path.join(data_dir, input_dir))
- os.mkdir(os.path.join(data_dir, "label"))
- _create_dummy_data("train")
- _create_dummy_data("valid")
- _create_dummy_data("test")
-
-
-def train_masked_lm(data_dir, arch, extra_flags=None):
- train_parser = options.get_training_parser()
- train_args = options.parse_args_and_arch(
- train_parser,
- [
- "--task",
- "masked_lm",
- data_dir,
- "--arch",
- arch,
- "--optimizer",
- "adam",
- "--lr",
- "0.0001",
- "--criterion",
- "masked_lm",
- "--batch-size",
- "500",
- "--save-dir",
- data_dir,
- "--max-epoch",
- "1",
- "--no-progress-bar",
- "--distributed-world-size",
- "1",
- "--ddp-backend",
- "no_c10d",
- "--num-workers",
- "0",
- ]
- + (extra_flags or []),
- )
- train.main(train_args)
-
-
-def train_roberta_head(data_dir, arch, num_classes=2, extra_flags=None):
- train_parser = options.get_training_parser()
- train_args = options.parse_args_and_arch(
- train_parser,
- [
- "--task",
- "sentence_prediction",
- data_dir,
- "--arch",
- arch,
- "--encoder-layers",
- "2",
- "--num-classes",
- str(num_classes),
- "--optimizer",
- "adam",
- "--lr",
- "0.0001",
- "--criterion",
- "sentence_prediction",
- "--max-tokens",
- "500",
- "--max-positions",
- "500",
- "--batch-size",
- "500",
- "--save-dir",
- data_dir,
- "--max-epoch",
- "1",
- "--no-progress-bar",
- "--distributed-world-size",
- "1",
- "--ddp-backend",
- "no_c10d",
- "--num-workers",
- "0",
- ]
- + (extra_flags or []),
- )
- train.main(train_args)
-
-
-def eval_lm_main(data_dir, extra_flags=None):
- eval_lm_parser = options.get_eval_lm_parser()
- eval_lm_args = options.parse_args_and_arch(
- eval_lm_parser,
- [
- data_dir,
- "--path",
- os.path.join(data_dir, "checkpoint_last.pt"),
- "--no-progress-bar",
- "--num-workers",
- "0",
- ]
- + (extra_flags or []),
- )
- eval_lm.main(eval_lm_args)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/prepend_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/prepend_dataset.py
deleted file mode 100644
index ad74784d2d7920e4a6225282d95543ce16ea50d9..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/prepend_dataset.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import BaseWrapperDataset
-
-
-class PrependDataset(BaseWrapperDataset):
- def __init__(self, dataset, prepend_getter, ensure_first_token_is=None):
- super().__init__(dataset)
- self.prepend_getter = prepend_getter
- self.ensure_first_token = ensure_first_token_is
-
- def __getitem__(self, idx):
- item = self.dataset[idx]
- is_tuple = isinstance(item, tuple)
- src = item[0] if is_tuple else item
-
- assert self.ensure_first_token is None or src[0] == self.ensure_first_token
- prepend_idx = self.prepend_getter(self.dataset, idx)
- assert isinstance(prepend_idx, int)
- src[0] = prepend_idx
- item = tuple((src,) + item[1:]) if is_tuple else src
- return item
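
The hedged usage sketch below (ToyPrepend, lang_id_getter, and the toy items are invented for illustration, not repository code) shows how a wrapper like PrependDataset is typically driven: a getter maps each index to the token that should overwrite position 0, for example a language-ID symbol. ToyPrepend restates the wrapper's __getitem__ logic, with a clone added so the toy items are not mutated in place.

import torch

base = [torch.tensor([0, 11, 12, 2]), torch.tensor([0, 21, 2])]  # index 0 is a placeholder token


def lang_id_getter(dataset, idx):
    # pretend even items are language A (id 99) and odd items language B (id 98)
    return 99 if idx % 2 == 0 else 98


class ToyPrepend:
    def __init__(self, dataset, prepend_getter, ensure_first_token_is=None):
        self.dataset = dataset
        self.prepend_getter = prepend_getter
        self.ensure_first_token = ensure_first_token_is

    def __getitem__(self, idx):
        src = self.dataset[idx].clone()
        assert self.ensure_first_token is None or src[0] == self.ensure_first_token
        src[0] = self.prepend_getter(self.dataset, idx)
        return src


wrapped = ToyPrepend(base, lang_id_getter, ensure_first_token_is=0)
print(wrapped[0], wrapped[1])  # first token replaced by the language id
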
diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN/llmriddles/questions/executor.py b/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN/llmriddles/questions/executor.py
deleted file mode 100644
index 61dafa769808626ef0f179fed4f6bf45979e8252..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN/llmriddles/questions/executor.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from typing import Tuple
-
-from .question import Question
-from ..llms import get_llm_fn
-
-
-class QuestionExecutor:
- def __init__(self, question: Question, lang: str = 'cn', llm: str = 'chatgpt', llm_cfgs=None):
- self.question = question
- self.lang = lang
- self.llm = llm
- self.llm_cfgs = dict(llm_cfgs or {})
-
- @property
- def question_text(self):
- return self.question.texts[self.lang]
-
- @property
- def question_name(self):
- return self.question.names[self.lang]
-
- def check(self, qs_text: str) -> Tuple[str, bool, str]:
- answer_text = get_llm_fn(self.llm)(qs_text, **self.llm_cfgs)
- correct, explanation = self.check_answer(qs_text, answer_text)
- return answer_text, correct, explanation
-
- def check_answer(self, user_text: str, answer_text: str) -> Tuple[bool, str]:
- correct, explanation = self.question.checker(self.question_text, user_text, answer_text, self.lang)
- if explanation is None:
- if correct:
- explanation = 'LLM的回答满足要求' if self.lang == 'cn' else 'Correct Answer From LLM'
- else:
- explanation = 'LLM的回答不满足要求' if self.lang == 'cn' else 'Wrong Answer From LLM'
-
- return correct, explanation
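
As a rough illustration of the check() flow above, the sketch below swaps in a fake LLM and a fake checker for the project's get_llm_fn and Question.checker; fake_llm, fake_checker, and check are stand-ins invented here, not the project's API.

def fake_llm(prompt, **llm_cfgs):
    # stand-in for get_llm_fn(self.llm): just echo the prompt back
    return "echo: " + prompt


def fake_checker(question_text, user_text, answer_text, lang):
    correct = user_text in answer_text
    return correct, None  # None -> the executor supplies a default explanation


def check(question_text, user_text, lang="en"):
    answer_text = fake_llm(user_text)
    correct, explanation = fake_checker(question_text, user_text, answer_text, lang)
    if explanation is None:
        explanation = "Correct Answer From LLM" if correct else "Wrong Answer From LLM"
    return answer_text, correct, explanation


print(check("Make the model repeat your input verbatim.", "hello"))
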
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/base_module.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/base_module.py
deleted file mode 100644
index 617fad9bb89f10a9a0911d962dfb3bc8f3a3628c..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/base_module.py
+++ /dev/null
@@ -1,195 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import warnings
-from abc import ABCMeta
-from collections import defaultdict
-from logging import FileHandler
-
-import torch.nn as nn
-
-from annotator.uniformer.mmcv.runner.dist_utils import master_only
-from annotator.uniformer.mmcv.utils.logging import get_logger, logger_initialized, print_log
-
-
-class BaseModule(nn.Module, metaclass=ABCMeta):
- """Base module for all modules in openmmlab.
-
- ``BaseModule`` is a wrapper of ``torch.nn.Module`` with additional
- functionality of parameter initialization. Compared with
- ``torch.nn.Module``, ``BaseModule`` mainly adds three attributes.
-
- - ``init_cfg``: the config to control the initialization.
- - ``init_weights``: The function of parameter
- initialization and recording initialization
- information.
- - ``_params_init_info``: Used to track the parameter
- initialization information. This attribute only
- exists during executing the ``init_weights``.
-
- Args:
- init_cfg (dict, optional): Initialization config dict.
- """
-
- def __init__(self, init_cfg=None):
- """Initialize BaseModule, inherited from `torch.nn.Module`"""
-
-        # NOTE: init_cfg can be defined at different levels, but init_cfg
-        # at lower levels takes higher priority.
-
- super(BaseModule, self).__init__()
- # define default value of init_cfg instead of hard code
- # in init_weights() function
- self._is_init = False
-
- self.init_cfg = copy.deepcopy(init_cfg)
-
- # Backward compatibility in derived classes
- # if pretrained is not None:
- # warnings.warn('DeprecationWarning: pretrained is a deprecated \
- # key, please consider using init_cfg')
- # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)
-
- @property
- def is_init(self):
- return self._is_init
-
- def init_weights(self):
- """Initialize the weights."""
-
- is_top_level_module = False
- # check if it is top-level module
- if not hasattr(self, '_params_init_info'):
- # The `_params_init_info` is used to record the initialization
- # information of the parameters
- # the key should be the obj:`nn.Parameter` of model and the value
- # should be a dict containing
- # - init_info (str): The string that describes the initialization.
- # - tmp_mean_value (FloatTensor): The mean of the parameter,
- # which indicates whether the parameter has been modified.
-            # This attribute will be deleted after all parameters
-            # are initialized.
- self._params_init_info = defaultdict(dict)
- is_top_level_module = True
-
-            # Initialize `_params_init_info`.
-            # When the `tmp_mean_value` of a parameter is detected to have
-            # changed, the related initialization information is updated.
- for name, param in self.named_parameters():
- self._params_init_info[param][
- 'init_info'] = f'The value is the same before and ' \
- f'after calling `init_weights` ' \
- f'of {self.__class__.__name__} '
- self._params_init_info[param][
- 'tmp_mean_value'] = param.data.mean()
-
- # pass `params_init_info` to all submodules
- # All submodules share the same `params_init_info`,
- # so it will be updated when parameters are
- # modified at any level of the model.
- for sub_module in self.modules():
- sub_module._params_init_info = self._params_init_info
-
- # Get the initialized logger, if not exist,
- # create a logger named `mmcv`
- logger_names = list(logger_initialized.keys())
- logger_name = logger_names[0] if logger_names else 'mmcv'
-
- from ..cnn import initialize
- from ..cnn.utils.weight_init import update_init_info
- module_name = self.__class__.__name__
- if not self._is_init:
- if self.init_cfg:
- print_log(
- f'initialize {module_name} with init_cfg {self.init_cfg}',
- logger=logger_name)
- initialize(self, self.init_cfg)
- if isinstance(self.init_cfg, dict):
- # prevent the parameters of
- # the pre-trained model
- # from being overwritten by
- # the `init_weights`
- if self.init_cfg['type'] == 'Pretrained':
- return
-
- for m in self.children():
- if hasattr(m, 'init_weights'):
- m.init_weights()
- # users may overload the `init_weights`
- update_init_info(
- m,
- init_info=f'Initialized by '
- f'user-defined `init_weights`'
- f' in {m.__class__.__name__} ')
-
- self._is_init = True
- else:
- warnings.warn(f'init_weights of {self.__class__.__name__} has '
- f'been called more than once.')
-
- if is_top_level_module:
- self._dump_init_info(logger_name)
-
- for sub_module in self.modules():
- del sub_module._params_init_info
-
- @master_only
- def _dump_init_info(self, logger_name):
- """Dump the initialization information to a file named
- `initialization.log.json` in workdir.
-
- Args:
- logger_name (str): The name of logger.
- """
-
- logger = get_logger(logger_name)
-
- with_file_handler = False
- # dump the information to the logger file if there is a `FileHandler`
- for handler in logger.handlers:
- if isinstance(handler, FileHandler):
- handler.stream.write(
- 'Name of parameter - Initialization information\n')
- for name, param in self.named_parameters():
- handler.stream.write(
- f'\n{name} - {param.shape}: '
- f"\n{self._params_init_info[param]['init_info']} \n")
- handler.stream.flush()
- with_file_handler = True
- if not with_file_handler:
- for name, param in self.named_parameters():
- print_log(
- f'\n{name} - {param.shape}: '
- f"\n{self._params_init_info[param]['init_info']} \n ",
- logger=logger_name)
-
- def __repr__(self):
- s = super().__repr__()
- if self.init_cfg:
- s += f'\ninit_cfg={self.init_cfg}'
- return s
-
-
-class Sequential(BaseModule, nn.Sequential):
- """Sequential module in openmmlab.
-
- Args:
- init_cfg (dict, optional): Initialization config dict.
- """
-
- def __init__(self, *args, init_cfg=None):
- BaseModule.__init__(self, init_cfg)
- nn.Sequential.__init__(self, *args)
-
-
-class ModuleList(BaseModule, nn.ModuleList):
- """ModuleList in openmmlab.
-
- Args:
- modules (iterable, optional): an iterable of modules to add.
- init_cfg (dict, optional): Initialization config dict.
- """
-
- def __init__(self, modules=None, init_cfg=None):
- BaseModule.__init__(self, init_cfg)
- nn.ModuleList.__init__(self, modules)
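
To make the init_cfg mechanism described in the docstring above concrete, here is a minimal sketch (the module name, the import path, and the chosen init_cfg are assumptions, not taken from this repo): a subclass passes `init_cfg` to `BaseModule` and triggers the configured initializer by calling `init_weights()`.

```python
import torch.nn as nn
# Assumed import path for the vendored copy above; adjust to your checkout.
from annotator.uniformer.mmcv.runner.base_module import BaseModule


class TinyHead(BaseModule):
    """Illustrative module: one conv layer initialized via init_cfg."""

    def __init__(self, init_cfg=dict(type='Normal', std=0.01, layer='Conv2d')):
        super().__init__(init_cfg=init_cfg)
        self.conv = nn.Conv2d(16, 4, kernel_size=1)


head = TinyHead()
head.init_weights()   # applies init_cfg and records per-parameter init info
print(head)           # __repr__ appends "init_cfg=..." as defined above
```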
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py
deleted file mode 100644
index 9b9d3d5b3fe80247642d962edd6fb787537d01d6..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .fpn import FPN
-from .multilevel_neck import MultiLevelNeck
-
-__all__ = ['FPN', 'MultiLevelNeck']
diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/losses/dsd_loss.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/losses/dsd_loss.py
deleted file mode 100644
index 9cf4660dc5f3d088bcf926866914ca0790348c5e..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/losses/dsd_loss.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import torch
-from models.dsd.bicubic import BicubicDownSample
-from models.kernel_encoding.kernel_wizard import KernelWizard
-from models.losses.ssim_loss import SSIM
-
-
-class LossBuilder(torch.nn.Module):
- def __init__(self, ref_im, opt):
- super(LossBuilder, self).__init__()
- assert ref_im.shape[2] == ref_im.shape[3]
- self.ref_im = ref_im
- loss_str = opt["loss_str"]
- self.parsed_loss = [loss_term.split("*") for loss_term in loss_str.split("+")]
- self.eps = opt["eps"]
-
- self.ssim = SSIM().cuda()
-
- self.D = KernelWizard(opt["KernelWizard"]).cuda()
- self.D.load_state_dict(torch.load(opt["KernelWizard"]["pretrained"]))
- for v in self.D.parameters():
- v.requires_grad = False
-
- # Takes a list of tensors, flattens them, and concatenates them into a vector
-    # Used to calculate the Euclidean distance between lists of tensors
- def flatcat(self, l):
- l = l if (isinstance(l, list)) else [l]
- return torch.cat([x.flatten() for x in l], dim=0)
-
- def _loss_l2(self, gen_im_lr, ref_im, **kwargs):
- return (gen_im_lr - ref_im).pow(2).mean((1, 2, 3)).clamp(min=self.eps).sum()
-
- def _loss_l1(self, gen_im_lr, ref_im, **kwargs):
- return 10 * ((gen_im_lr - ref_im).abs().mean((1, 2, 3)).clamp(min=self.eps).sum())
-
-    # Geodesic distance on the sphere; the computation is provided by the StyleGAN-specific subclasses below
-    def _loss_geocross(self, latent, **kwargs):
-        pass
-
-
-class LossBuilderStyleGAN(LossBuilder):
- def __init__(self, ref_im, opt):
- super(LossBuilderStyleGAN, self).__init__(ref_im, opt)
- im_size = ref_im.shape[2]
- factor = opt["output_size"] // im_size
- assert im_size * factor == opt["output_size"]
- self.bicub = BicubicDownSample(factor=factor)
-
- # Uses geodesic distance on sphere to sum pairwise distances of the 18 vectors
- def _loss_geocross(self, latent, **kwargs):
- if latent.shape[1] == 1:
- return 0
- else:
- X = latent.view(-1, 1, 18, 512)
- Y = latent.view(-1, 18, 1, 512)
- A = ((X - Y).pow(2).sum(-1) + 1e-9).sqrt()
- B = ((X + Y).pow(2).sum(-1) + 1e-9).sqrt()
- D = 2 * torch.atan2(A, B)
- D = ((D.pow(2) * 512).mean((1, 2)) / 8.0).sum()
- return D
-
- def forward(self, latent, gen_im, kernel, step):
- var_dict = {
- "latent": latent,
- "gen_im_lr": self.D.adaptKernel(self.bicub(gen_im), kernel),
- "ref_im": self.ref_im,
- }
- loss = 0
- loss_fun_dict = {
- "L2": self._loss_l2,
- "L1": self._loss_l1,
- "GEOCROSS": self._loss_geocross,
- }
- losses = {}
-
- for weight, loss_type in self.parsed_loss:
- tmp_loss = loss_fun_dict[loss_type](**var_dict)
- losses[loss_type] = tmp_loss
- loss += float(weight) * tmp_loss
- loss += 5e-5 * torch.norm(kernel)
- losses["Norm"] = torch.norm(kernel)
-
- return loss, losses
-
- def get_blur_img(self, sharp_img, kernel):
- return self.D.adaptKernel(self.bicub(sharp_img), kernel).cpu().detach().clamp(0, 1)
-
-
-class LossBuilderStyleGAN2(LossBuilder):
- def __init__(self, ref_im, opt):
- super(LossBuilderStyleGAN2, self).__init__(ref_im, opt)
-
-    # Uses geodesic distance on sphere to sum pairwise distances of the 14 vectors
- def _loss_geocross(self, latent, **kwargs):
- if latent.shape[1] == 1:
- return 0
- else:
- X = latent.view(-1, 1, 14, 512)
- Y = latent.view(-1, 14, 1, 512)
- A = ((X - Y).pow(2).sum(-1) + 1e-9).sqrt()
- B = ((X + Y).pow(2).sum(-1) + 1e-9).sqrt()
- D = 2 * torch.atan2(A, B)
- D = ((D.pow(2) * 512).mean((1, 2)) / 6.0).sum()
- return D
-
- def forward(self, latent, gen_im, kernel, step):
- var_dict = {
- "latent": latent,
- "gen_im_lr": self.D.adaptKernel(gen_im, kernel),
- "ref_im": self.ref_im,
- }
- loss = 0
- loss_fun_dict = {
- "L2": self._loss_l2,
- "L1": self._loss_l1,
- "GEOCROSS": self._loss_geocross,
- }
- losses = {}
-
- for weight, loss_type in self.parsed_loss:
- tmp_loss = loss_fun_dict[loss_type](**var_dict)
- losses[loss_type] = tmp_loss
- loss += float(weight) * tmp_loss
- loss += 1e-4 * torch.norm(kernel)
- losses["Norm"] = torch.norm(kernel)
-
- return loss, losses
-
- def get_blur_img(self, sharp_img, kernel):
- return self.D.adaptKernel(sharp_img, kernel).cpu().detach().clamp(0, 1)
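
For reference, the `loss_str` option parsed in `LossBuilder.__init__` is a small weight*NAME mini-format. The sketch below (with made-up weights; the real string comes from `opt["loss_str"]`) shows what `parsed_loss` ends up containing and how `forward` consumes it.

```python
# Hypothetical config value; the real one is read from opt["loss_str"].
loss_str = "100*L2+0.05*GEOCROSS"

parsed_loss = [term.split("*") for term in loss_str.split("+")]
print(parsed_loss)  # [['100', 'L2'], ['0.05', 'GEOCROSS']]

# In forward(), each NAME is looked up in loss_fun_dict and its result is
# scaled by float(weight) before being added to the total loss.
```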
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/lists.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/lists.go
deleted file mode 100644
index 53c499db4fe5fa929ba045fad76e16e7b0d4058e..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/lists.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/fold.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/fold.go
deleted file mode 100644
index 140b3302098c40c7ac52770487268c6c507410f9..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/fold.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/autochange.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/autochange.go
deleted file mode 100644
index 0021ca197157e06caa801783259475d1483f9b4d..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/autochange.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/AutoGPT/BULLETIN.md b/spaces/PeepDaSlan9/AutoGPT/BULLETIN.md
deleted file mode 100644
index 735048ddc87a914987c6bd70ccdb231a80242ae3..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/BULLETIN.md
+++ /dev/null
@@ -1,2 +0,0 @@
-Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here.
-If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
\ No newline at end of file
diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/brian.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/brian.py
deleted file mode 100644
index 821fdf2f482a9cfa928e5c9680152ad6766d8326..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/brian.py
+++ /dev/null
@@ -1,40 +0,0 @@
-""" Brian speech module for autogpt """
-import os
-
-import requests
-from playsound import playsound
-
-from autogpt.speech.base import VoiceBase
-
-
-class BrianSpeech(VoiceBase):
- """Brian speech module for autogpt"""
-
- def _setup(self) -> None:
- """Setup the voices, API key, etc."""
- pass
-
- def _speech(self, text: str, _: int = 0) -> bool:
- """Speak text using Brian with the streamelements API
-
- Args:
- text (str): The text to speak
-
- Returns:
- bool: True if the request was successful, False otherwise
- """
- tts_url = (
- f"https://api.streamelements.com/kappa/v2/speech?voice=Brian&text={text}"
- )
- response = requests.get(tts_url)
-
- if response.status_code == 200:
- with open("speech.mp3", "wb") as f:
- f.write(response.content)
- playsound("speech.mp3")
- os.remove("speech.mp3")
- return True
- else:
- print("Request failed with status code:", response.status_code)
- print("Response content:", response.content)
- return False
diff --git a/spaces/QuanLingZ/ChatReviewer/app.py b/spaces/QuanLingZ/ChatReviewer/app.py
deleted file mode 100644
index 083a1fa40a17e4950c79a5c0934f3d74036eb445..0000000000000000000000000000000000000000
--- a/spaces/QuanLingZ/ChatReviewer/app.py
+++ /dev/null
@@ -1,218 +0,0 @@
-import numpy as np
-import os
-import re
-import jieba
-from io import BytesIO
-import datetime
-import time
-import openai, tenacity
-import argparse
-import configparser
-import json
-import tiktoken
-import PyPDF2
-import gradio
-
-
-def contains_chinese(text):
- for ch in text:
- if u'\u4e00' <= ch <= u'\u9fff':
- return True
- return False
-
-def insert_sentence(text, sentence, interval):
- lines = text.split('\n')
- new_lines = []
-
- for line in lines:
- if contains_chinese(line):
- words = list(jieba.cut(line))
- separator = ''
- else:
- words = line.split()
- separator = ' '
-
- new_words = []
- count = 0
-
- for word in words:
- new_words.append(word)
- count += 1
-
- if count % interval == 0:
- new_words.append(sentence)
-
- new_lines.append(separator.join(new_words))
-
- return '\n'.join(new_lines)
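
A quick worked example of `insert_sentence` (text and marker below are illustrative): with an interval of 3, the marker is appended after every third word of each line, using jieba word boundaries for Chinese lines and whitespace for other lines.

```python
text = "one two three four five six"
print(insert_sentence(text, "[MARK]", 3))
# -> "one two three [MARK] four five six [MARK]"
```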
-
-# Define the Reviewer class
-class Reviewer:
-    # Initialization: set up the attributes
- def __init__(self, api, review_format, paper_pdf, language):
- self.api = api
- self.review_format = review_format
-
- self.language = language
- self.paper_pdf = paper_pdf
- self.max_token_num = 12000
- self.encoding = tiktoken.get_encoding("gpt2")
-
-
- def review_by_chatgpt(self, paper_list):
- text = self.extract_chapter(self.paper_pdf)
- chat_review_text, total_token_used = self.chat_review(text=text)
- return chat_review_text, total_token_used
-
-
-
- @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
- stop=tenacity.stop_after_attempt(5),
- reraise=True)
- def chat_review(self, text):
-        openai.api_key = self.api   # set the API key
- review_prompt_token = 1000
- try:
- text_token = len(self.encoding.encode(text))
- except:
- text_token = 13000
- input_text_index = int(len(text)*(self.max_token_num-review_prompt_token)/(text_token+1))
- input_text = "This is the paper for your review:" + text[:input_text_index]
- messages=[
- {"role": "system", "content": "You are a professional reviewer. Now I will give you a paper. You need to give a complete review opinion according to the following requirements and format:"+ self.review_format + "Be sure to use {} answers".format(self.language)} ,
- {"role": "user", "content": input_text + " Translate the output into {}.".format(self.language)},
- ]
- try:
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo-16k",
- messages=messages,
- temperature=0.7
- )
- result = ''
- for choice in response.choices:
- result += choice.message.content
- result = insert_sentence(result, '**Generated by ChatGPT, no copying allowed!**', 50)
- result += "\n\n⚠伦理声明/Ethics statement:\n--禁止直接复制生成的评论用于任何论文审稿工作!\n--Direct copying of generated comments for any paper review work is prohibited!"
- usage = response.usage.total_tokens
- except Exception as e:
-            # handle any other exception
-            result = "⚠:非常抱歉>_<,发生了一个错误:"+ str(e)
- usage = 'xxxxx'
- print("********"*10)
- print(result)
- print("********"*10)
- return result, usage
-
-
-
-
-
- def extract_chapter(self, pdf_path):
- file_object = BytesIO(pdf_path)
- pdf_reader = PyPDF2.PdfReader(file_object)
-        # get the total number of pages in the PDF
- num_pages = len(pdf_reader.pages)
-        # initialize the extraction state and the extracted text
- extraction_started = False
- extracted_text = ""
-        # iterate over every page of the PDF
- for page_number in range(num_pages):
- page = pdf_reader.pages[page_number]
- page_text = page.extract_text()
-
-            # start extraction
- extraction_started = True
- page_number_start = page_number
-            # if extraction has started, append the page text to the extracted text
- if extraction_started:
- extracted_text += page_text
-            # stop extraction
- if page_number_start + 1 < page_number:
- break
- return extracted_text
-
-def main(api, review_format, paper_pdf, language):
- start_time = time.time()
- comments = ''
- output2 = ''
- if not api or not review_format or not paper_pdf:
- comments = "⚠:API-key或审稿要求或论文pdf未输入!请检测!"
- output2 = "⚠:API-key或审稿要求或论文pdf未输入!请检测!"
-    # otherwise, process the PDF file
- else:
-        # create a Reviewer object
- reviewer1 = Reviewer(api, review_format, paper_pdf, language)
-        # determine whether the input is a path or a file:
- comments, total_token_used = reviewer1.review_by_chatgpt(paper_list=paper_pdf)
- time_used = time.time() - start_time
- output2 ="使用token数:"+ str(total_token_used)+"\n花费时间:"+ str(round(time_used, 2)) +"秒"
- return comments, output2
-
-
-
-########################################################################################################
-# Title
-title = "🤖ChatReviewer🤖"
-# Description
-
-description = '''
"
-
-examples = [
- ["turtle.jpg"],
- ["lions.jpg"]
-]
-
-gr.Interface(depth, inputs, outputs, title=title, description=description, article=article, examples=examples, analytics_enabled=False).launch(enable_queue=True,cache_examples=True)
\ No newline at end of file
diff --git a/spaces/Salesforce/EDICT/my_diffusers/hub_utils.py b/spaces/Salesforce/EDICT/my_diffusers/hub_utils.py
deleted file mode 100644
index c07329e36fe7a8826b0f1fb22396819b220e1b58..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_diffusers/hub_utils.py
+++ /dev/null
@@ -1,197 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import os
-import shutil
-from pathlib import Path
-from typing import Optional
-
-from huggingface_hub import HfFolder, Repository, whoami
-
-from .pipeline_utils import DiffusionPipeline
-from .utils import is_modelcards_available, logging
-
-
-if is_modelcards_available():
- from modelcards import CardData, ModelCard
-
-
-logger = logging.get_logger(__name__)
-
-
-MODEL_CARD_TEMPLATE_PATH = Path(__file__).parent / "utils" / "model_card_template.md"
-
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-
-def init_git_repo(args, at_init: bool = False):
- """
-    Initializes a git repo in `args.hub_model_id`.
-    Args:
- at_init (`bool`, *optional*, defaults to `False`):
- Whether this function is called before any training or not. If `self.args.overwrite_output_dir` is `True`
- and `at_init` is `True`, the path to the repo (which is `self.args.output_dir`) might be wiped out.
- """
- if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]:
- return
- hub_token = args.hub_token if hasattr(args, "hub_token") else None
- use_auth_token = True if hub_token is None else hub_token
- if not hasattr(args, "hub_model_id") or args.hub_model_id is None:
- repo_name = Path(args.output_dir).absolute().name
- else:
- repo_name = args.hub_model_id
- if "/" not in repo_name:
- repo_name = get_full_repo_name(repo_name, token=hub_token)
-
- try:
- repo = Repository(
- args.output_dir,
- clone_from=repo_name,
- use_auth_token=use_auth_token,
- private=args.hub_private_repo,
- )
- except EnvironmentError:
- if args.overwrite_output_dir and at_init:
- # Try again after wiping output_dir
- shutil.rmtree(args.output_dir)
- repo = Repository(
- args.output_dir,
- clone_from=repo_name,
- use_auth_token=use_auth_token,
- )
- else:
- raise
-
- repo.git_pull()
-
- # By default, ignore the checkpoint folders
- if not os.path.exists(os.path.join(args.output_dir, ".gitignore")):
- with open(os.path.join(args.output_dir, ".gitignore"), "w", encoding="utf-8") as writer:
- writer.writelines(["checkpoint-*/"])
-
- return repo
-
-
-def push_to_hub(
- args,
- pipeline: DiffusionPipeline,
- repo: Repository,
- commit_message: Optional[str] = "End of training",
- blocking: bool = True,
- **kwargs,
-) -> str:
- """
-    Upload *self.model* and *self.tokenizer* to the 🤗 model hub on the repo *self.args.hub_model_id*.
-    Parameters:
- commit_message (`str`, *optional*, defaults to `"End of training"`):
- Message to commit while pushing.
- blocking (`bool`, *optional*, defaults to `True`):
- Whether the function should return only when the `git push` has finished.
- kwargs:
- Additional keyword arguments passed along to [`create_model_card`].
- Returns:
- The url of the commit of your model in the given repository if `blocking=False`, a tuple with the url of the
- commit and an object to track the progress of the commit if `blocking=True`
- """
-
- if not hasattr(args, "hub_model_id") or args.hub_model_id is None:
- model_name = Path(args.output_dir).name
- else:
- model_name = args.hub_model_id.split("/")[-1]
-
- output_dir = args.output_dir
- os.makedirs(output_dir, exist_ok=True)
- logger.info(f"Saving pipeline checkpoint to {output_dir}")
- pipeline.save_pretrained(output_dir)
-
- # Only push from one node.
- if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]:
- return
-
- # Cancel any async push in progress if blocking=True. The commits will all be pushed together.
- if (
- blocking
- and len(repo.command_queue) > 0
- and repo.command_queue[-1] is not None
- and not repo.command_queue[-1].is_done
- ):
- repo.command_queue[-1]._process.kill()
-
- git_head_commit_url = repo.push_to_hub(commit_message=commit_message, blocking=blocking, auto_lfs_prune=True)
- # push separately the model card to be independent from the rest of the model
- create_model_card(args, model_name=model_name)
- try:
- repo.push_to_hub(commit_message="update model card README.md", blocking=blocking, auto_lfs_prune=True)
- except EnvironmentError as exc:
- logger.error(f"Error pushing update to the model card. Please read logs and retry.\n${exc}")
-
- return git_head_commit_url
-
-
-def create_model_card(args, model_name):
-    if not is_modelcards_available():
- raise ValueError(
- "Please make sure to have `modelcards` installed when using the `create_model_card` function. You can"
- " install the package with `pip install modelcards`."
- )
-
- if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]:
- return
-
- hub_token = args.hub_token if hasattr(args, "hub_token") else None
- repo_name = get_full_repo_name(model_name, token=hub_token)
-
- model_card = ModelCard.from_template(
- card_data=CardData( # Card metadata object that will be converted to YAML block
- language="en",
- license="apache-2.0",
- library_name="diffusers",
- tags=[],
- datasets=args.dataset_name,
- metrics=[],
- ),
- template_path=MODEL_CARD_TEMPLATE_PATH,
- model_name=model_name,
- repo_name=repo_name,
- dataset_name=args.dataset_name if hasattr(args, "dataset_name") else None,
- learning_rate=args.learning_rate,
- train_batch_size=args.train_batch_size,
- eval_batch_size=args.eval_batch_size,
- gradient_accumulation_steps=args.gradient_accumulation_steps
- if hasattr(args, "gradient_accumulation_steps")
- else None,
- adam_beta1=args.adam_beta1 if hasattr(args, "adam_beta1") else None,
- adam_beta2=args.adam_beta2 if hasattr(args, "adam_beta2") else None,
- adam_weight_decay=args.adam_weight_decay if hasattr(args, "adam_weight_decay") else None,
- adam_epsilon=args.adam_epsilon if hasattr(args, "adam_epsilon") else None,
- lr_scheduler=args.lr_scheduler if hasattr(args, "lr_scheduler") else None,
- lr_warmup_steps=args.lr_warmup_steps if hasattr(args, "lr_warmup_steps") else None,
- ema_inv_gamma=args.ema_inv_gamma if hasattr(args, "ema_inv_gamma") else None,
- ema_power=args.ema_power if hasattr(args, "ema_power") else None,
- ema_max_decay=args.ema_max_decay if hasattr(args, "ema_max_decay") else None,
- mixed_precision=args.mixed_precision,
- )
-
- card_path = os.path.join(args.output_dir, "README.md")
- model_card.save(card_path)
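
Putting the helpers above together, a rough sketch of the intended call sequence might look like the following (the attribute names mirror what the functions read from `args`, the values are placeholders, and the network-touching calls are left commented out, so treat this as an outline rather than a working recipe).

```python
from types import SimpleNamespace

args = SimpleNamespace(
    output_dir="my-model",
    hub_model_id=None,            # falls back to the output_dir name
    hub_token=None,               # falls back to the cached HfFolder token
    hub_private_repo=True,
    overwrite_output_dir=False,
    dataset_name="my-dataset",
    learning_rate=1e-4,
    train_batch_size=8,
    eval_batch_size=8,
    gradient_accumulation_steps=1,
    mixed_precision="no",
)

# repo = init_git_repo(args, at_init=True)   # clones/creates the target repo
# ... train and build a DiffusionPipeline `pipeline` ...
# push_to_hub(args, pipeline, repo)          # saves the pipeline, pushes it plus a model card
```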
diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/contagious ecthyma (orf).md b/spaces/SarthakSidhant/Go-Cattle/diseases/contagious ecthyma (orf).md
deleted file mode 100644
index 13d67dbcc79c9908c63c8103443b48f6852a4c9d..0000000000000000000000000000000000000000
--- a/spaces/SarthakSidhant/Go-Cattle/diseases/contagious ecthyma (orf).md
+++ /dev/null
@@ -1,39 +0,0 @@
-## Contagious ecthyma (orf)
-
-**Information** : Contagious ecthyma (orf) is a highly contagious viral disease of cattle that causes raised, crusty lesions on the skin. The virus is spread through direct contact with infected animals or their secretions.
-[Image of Contagious ecthyma (orf) in cattle]
-
-**Symptoms**
-
-The symptoms of contagious ecthyma typically appear within 2-5 days of infection and include:
-
-* Raised, crusty lesions on the lips, tongue, muzzle, teats, and coronary bands of the hooves
-* Painful eating and drinking
-* Drooling
-* Fever
-* Swelling of the lymph nodes in the head and neck
-
-**Remedies**
-
-There is no specific treatment for contagious ecthyma. Treatment is usually supportive and may include:
-
-* Providing pain relief
-* Administering fluids and electrolytes
-* Treating secondary bacterial infections
-
-**Causes**
-
-Contagious ecthyma (orf) is caused by the orf virus, which is a member of the poxvirus family. The virus is spread through direct contact with infected animals or their secretions. The virus can also be spread through contact with contaminated objects, such as feed, water, or equipment.
-
-**Prevention**
-
-There is no vaccine available for contagious ecthyma. However, there are a number of preventive measures that can be taken to reduce the risk of infection, such as:
-
-* Practicing good biosecurity measures
-* Isolating infected animals from healthy animals
-* Cleaning and disinfecting contaminated areas
-* Vaccinating cattle against other diseases that can weaken the immune system, such as bovine viral diarrhea virus (BVDV) and rotavirus
-
-**Differential diagnosis**
-
-Contagious ecthyma can be difficult to distinguish from other diseases that cause mouth lesions, such as foot-and-mouth disease, bovine papular stomatitis, and vesicular stomatitis. A veterinarian can diagnose contagious ecthyma by testing a sample of the lesions for the presence of the orf virus.
diff --git a/spaces/Senpaisora6/dreambooth-training/app.py b/spaces/Senpaisora6/dreambooth-training/app.py
deleted file mode 100644
index 25728e55803278642ca68a4f8da27d72745667aa..0000000000000000000000000000000000000000
--- a/spaces/Senpaisora6/dreambooth-training/app.py
+++ /dev/null
@@ -1,340 +0,0 @@
-import gradio as gr
-import os
-from pathlib import Path
-import argparse
-import shutil
-from train_dreambooth import run_training
-from convertosd import convert
-from PIL import Image
-from slugify import slugify
-import requests
-import torch
-import zipfile
-from diffusers import StableDiffusionPipeline
-
-css = '''
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
- #component-4, #component-3, #component-10{min-height: 0}
-'''
-model_to_load = "multimodalart/sd-fine-tunable"
-maximum_concepts = 3
-#Pre download the files even if we don't use it here
-StableDiffusionPipeline.from_pretrained(model_to_load)
-
-def zipdir(path, ziph):
- # ziph is zipfile handle
- for root, dirs, files in os.walk(path):
- for file in files:
- ziph.write(os.path.join(root, file),
- os.path.relpath(os.path.join(root, file),
- os.path.join(path, '..')))
-
-def swap_text(option):
- mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:"
- if(option == "object"):
- instance_prompt_example = "cttoy"
- freeze_for = 50
- return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for]
- elif(option == "person"):
- instance_prompt_example = "julcto"
- freeze_for = 100
- return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name the files with a unique word that represent your concept (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for]
- elif(option == "style"):
- instance_prompt_example = "trsldamrl"
- freeze_for = 10
- return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. Name the files with the words you would like {mandatory_liability}:", '''''', f"You should name your files with a unique word that represent your concept (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for]
-
-def count_files(*inputs):
- file_counter = 0
- concept_counter = 0
- for i, input in enumerate(inputs):
- if(i < maximum_concepts-1):
- files = inputs[i]
- if(files):
- concept_counter+=1
- file_counter+=len(files)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- else:
- if(type_of_thing == "person"):
- Training_Steps = file_counter*200*2
- else:
- Training_Steps = file_counter*200
- return(gr.update(visible=True, value=f"You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. This should take around {round(Training_Steps/1.5, 2)} seconds, or {round((Training_Steps/1.5)/3600, 2)} hours. As a reminder, the T4 GPU costs US$0.60 for 1h. Once training is over, don't forget to swap the hardware back to CPU."))
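
As a quick sanity check of the arithmetic in `count_files` above (numbers purely illustrative): uploading 10 images for a `person` concept yields 10 x 200 x 2 = 4000 steps, which at the assumed 1.5 steps per second is about 2667 seconds, or roughly 0.74 hours.

```python
images = 10                       # images uploaded for one "person" concept
steps = images * 200 * 2          # 4000
seconds = steps / 1.5             # ~2666.7
hours = round(seconds / 3600, 2)  # ~0.74
```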
-
-def train(*inputs):
- if "IS_SHARED_UI" in os.environ:
- raise gr.Error("This Space only works in duplicated instances")
- if os.path.exists("output_model"): shutil.rmtree('output_model')
- if os.path.exists("instance_images"): shutil.rmtree('instance_images')
- if os.path.exists("diffusers_model.zip"): os.remove("diffusers_model.zip")
- if os.path.exists("model.ckpt"): os.remove("model.ckpt")
- file_counter = 0
- for i, input in enumerate(inputs):
- if(i < maximum_concepts-1):
- if(input):
- os.makedirs('instance_images',exist_ok=True)
- files = inputs[i+(maximum_concepts*2)]
- prompt = inputs[i+maximum_concepts]
- if(prompt == "" or prompt == None):
- raise gr.Error("You forgot to define your concept prompt")
- for j, file_temp in enumerate(files):
- file = Image.open(file_temp.name)
- width, height = file.size
- side_length = min(width, height)
- left = (width - side_length)/2
- top = (height - side_length)/2
- right = (width + side_length)/2
- bottom = (height + side_length)/2
- image = file.crop((left, top, right, bottom))
- image = image.resize((512, 512))
- extension = file_temp.name.split(".")[1]
- image = image.convert('RGB')
- image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100)
- file_counter += 1
-
- os.makedirs('output_model',exist_ok=True)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- Train_text_encoder_for = int(inputs[-2])
- else:
- Training_Steps = file_counter*200
- if(type_of_thing == "object"):
- Train_text_encoder_for=30
- elif(type_of_thing == "person"):
- Train_text_encoder_for=60
- elif(type_of_thing == "style"):
- Train_text_encoder_for=15
-
- class_data_dir = None
- stptxt = int((Training_Steps*Train_text_encoder_for)/100)
- args_general = argparse.Namespace(
- image_captions_filename = True,
- train_text_encoder = True,
- stop_text_encoder_training = stptxt,
- save_n_steps = 0,
- pretrained_model_name_or_path = model_to_load,
- instance_data_dir="instance_images",
- class_data_dir=class_data_dir,
- output_dir="output_model",
- instance_prompt="",
- seed=42,
- resolution=512,
- mixed_precision="fp16",
- train_batch_size=1,
- gradient_accumulation_steps=1,
- use_8bit_adam=True,
- learning_rate=2e-6,
- lr_scheduler="polynomial",
- lr_warmup_steps = 0,
- max_train_steps=Training_Steps,
- )
- run_training(args_general)
- torch.cuda.empty_cache()
- #convert("output_model", "model.ckpt")
- #shutil.rmtree('instance_images')
- #shutil.make_archive("diffusers_model", 'zip', "output_model")
- with zipfile.ZipFile('diffusers_model.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:
- zipdir('output_model/', zipf)
- torch.cuda.empty_cache()
- return [gr.update(visible=True, value=["diffusers_model.zip"]), gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)]
-
-def generate(prompt):
- from diffusers import StableDiffusionPipeline
-
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
- image = pipe(prompt).images[0]
- return(image)
-
-def push(model_name, where_to_upload, hf_token):
- if(not os.path.exists("model.ckpt")):
- convert("output_model", "model.ckpt")
- from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
- from huggingface_hub import create_repo
- model_name_slug = slugify(model_name)
- if(where_to_upload == "My personal profile"):
- api = HfApi()
- your_username = api.whoami(token=hf_token)["name"]
- model_id = f"{your_username}/{model_name_slug}"
- else:
- model_id = f"sd-dreambooth-library/{model_name_slug}"
- headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"}
- response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
-
- images_upload = os.listdir("instance_images")
- image_string = ""
- instance_prompt_list = []
- previous_instance_prompt = ''
- for i, image in enumerate(images_upload):
- instance_prompt = image.split("_")[0]
- if(instance_prompt != previous_instance_prompt):
- title_instance_prompt_string = instance_prompt
- instance_prompt_list.append(instance_prompt)
- else:
- title_instance_prompt_string = ''
- previous_instance_prompt = instance_prompt
- image_string = f'''{title_instance_prompt_string}
-{image_string}'''
- readme_text = f'''---
-license: creativeml-openrail-m
-tags:
-- text-to-image
----
-### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training)
-
-You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
-
-Sample pictures of this concept:
-{image_string}
-'''
- #Save the readme to a file
- readme_file = open("README.md", "w")
- readme_file.write(readme_text)
- readme_file.close()
- #Save the token identifier to a file
- text_file = open("token_identifier.txt", "w")
- text_file.write(', '.join(instance_prompt_list))
- text_file.close()
- create_repo(model_id,private=True, token=hf_token)
- operations = [
- CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
- CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="README.md"),
- CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
- ]
- api.create_commit(
- repo_id=model_id,
- operations=operations,
- commit_message=f"Upload the model {model_name}",
- token=hf_token
- )
- api.upload_folder(
- folder_path="output_model",
- repo_id=model_id,
- token=hf_token
- )
- api.upload_folder(
- folder_path="instance_images",
- path_in_repo="concept_images",
- repo_id=model_id,
- token=hf_token
- )
- return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.zip", "model.ckpt"])]
-
-def convert_to_ckpt():
- convert("output_model", "model.ckpt")
- return gr.update(visible=True, value=["diffusers_model.zip", "model.ckpt"])
-
-with gr.Blocks(css=css) as demo:
- with gr.Box():
- if "IS_SHARED_UI" in os.environ:
- gr.HTML('''
-
-
Attention - This Space doesn't work in this shared UI
-
For it to work, you have to duplicate the Space and run it on your own profile where a (paid) private GPU will be attributed to it during runtime. As each T4 costs US$0,60/h, it should cost < US$1 to train a model with less than 100 images on default settings!
-
-
-
- ''')
- else:
- gr.HTML('''
-
-
You have successfully cloned the Dreambooth Training Space
-
If you haven't already, attribute a T4 GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until when you turn it off.
-
- ''')
- gr.Markdown("# Dreambooth training")
- gr.Markdown("Customize Stable Diffusion by giving it with few-shot examples")
- with gr.Row():
- type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
-
- with gr.Row():
- with gr.Column():
- thing_description = gr.Markdown("You are going to train an `object`, upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example:")
- thing_image_example = gr.HTML('''''')
- things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.")
- with gr.Column():
- file_collection = []
- concept_collection = []
- buttons_collection = []
- delete_collection = []
- is_visible = []
-
- row = [None] * maximum_concepts
- for x in range(maximum_concepts):
- ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
- if(x == 0):
- visible = True
- is_visible.append(gr.State(value=True))
- else:
- visible = False
- is_visible.append(gr.State(value=False))
-
- file_collection.append(gr.File(label=f"Upload the images for your {ordinal(x+1)} concept", file_count="multiple", interactive=True, visible=visible))
- with gr.Column(visible=visible) as row[x]:
- concept_collection.append(gr.Textbox(label=f"{ordinal(x+1)} concept prompt - use a unique, made up word to avoid collisions"))
- with gr.Row():
- if(x < maximum_concepts-1):
- buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
- if(x > 0):
- delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
-
- counter_add = 1
- for button in buttons_collection:
- if(counter_add < len(buttons_collection)):
- button.click(lambda:
- [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
- None,
- [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
- else:
- button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
- counter_add += 1
-
- counter_delete = 1
- for delete_button in delete_collection:
- if(counter_delete < len(delete_collection)+1):
- delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
- counter_delete += 1
-
-
-
- with gr.Accordion("Custom Settings", open=False):
- swap_auto_calculated = gr.Checkbox(label="Use custom settings")
- gr.Markdown("If not checked, the number of steps and % of frozen encoder will be tuned automatically according to the amount of images you upload and whether you are training an `object`, `person` or `style` as follows: The number of steps is calculated by number of images uploaded multiplied by 20. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and is fully trained for persons.")
- steps = gr.Number(label="How many steps", value=800)
- perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)
-
- type_of_thing.change(fn=swap_text, inputs=[type_of_thing], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder], queue=False)
- training_summary = gr.Textbox("", visible=False, label="Training Summary")
- steps.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False)
- perc_txt_encoder.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False)
- for file in file_collection:
- file.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False)
- train_btn = gr.Button("Start Training")
- with gr.Box(visible=False) as try_your_model:
- gr.Markdown("## Try your model")
- with gr.Row():
- prompt = gr.Textbox(label="Type your prompt")
- result_image = gr.Image()
- generate_button = gr.Button("Generate Image")
- with gr.Box(visible=False) as push_to_hub:
- gr.Markdown("## Push to Hugging Face Hub")
- model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
- where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
- gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
- hf_token = gr.Textbox(label="Hugging Face Write Token")
- push_button = gr.Button("Push to the Hub")
- result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
- success_message_upload = gr.Markdown(visible=False)
- convert_button = gr.Button("Convert to CKPT", visible=False)
-
- train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button])
- generate_button.click(fn=generate, inputs=prompt, outputs=result_image)
- push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token], outputs=[success_message_upload, result])
- convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models_onnx.py b/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models_onnx.py
deleted file mode 100644
index 3e99763bf3ed7988eb2ae33d9066f85d37adf119..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,824 +0,0 @@
-import math
-import logging
-
-logger = logging.getLogger(__name__)
-
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d
-from torch.nn import functional as F
-from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm
-
-from infer.lib.infer_pack import attentions, commons, modules
-from infer.lib.infer_pack.commons import get_padding, init_weights
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: whether this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # % 1 means the n_har product cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # a % 1 here would mean the following cumsum could no longer be optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
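
A small, illustrative driver for the class above (the shapes and the upsampling factor are assumptions chosen for the example, not values from any config): F0 comes in as a (batch, frames) tensor and the generator returns an upsampled sine excitation plus the U/V mask and noise.

```python
import torch

sine_gen = SineGen(samp_rate=40000, harmonic_num=0)

f0 = torch.full((1, 100), 220.0)   # 100 voiced frames at 220 Hz
upp = 240                          # samples generated per F0 frame

sine, uv, noise = sine_gen(f0, upp)
print(sine.shape, uv.shape)        # torch.Size([1, 24000, 1]) torch.Size([1, 24000, 1])
```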
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-        voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
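-        # np.prod(upsample_rates) is the total upsampling factor: each f0 frame
-        # is expanded to `upp` output audio samples by the source module above.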
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMsNSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- version,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- if version == "v1":
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- else:
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- self.speaker_map = None
-        logger.debug(
-            "gin_channels: %s, self.spk_embed_dim: %s", gin_channels, self.spk_embed_dim
-        )
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def construct_spkmixmap(self, n_speaker):
- self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
- for i in range(n_speaker):
- self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
- self.speaker_map = self.speaker_map.unsqueeze(0)
-
- def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
- if self.speaker_map is not None: # [N, S] * [S, B, 1, H]
- g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
- g = g * self.speaker_map # [N, S, B, 1, H]
- g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
- g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
- else:
- g = g.unsqueeze(0)
- g = self.emb_g(g).transpose(1, 2)
-
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
-        y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
-        y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
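-        # e.g. period=3, t=300: (b, c, 300) -> (b, c, 100, 3), so the (k, 1) convs
-        # below slide over samples that are `period` steps apart in time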
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/Sky5408er/vits-uma-genshin-honkai/mel_processing.py b/spaces/Sky5408er/vits-uma-genshin-honkai/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/Sky5408er/vits-uma-genshin-honkai/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
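-
-
-# Illustrative usage sketch (parameter values are assumptions, not from the original file):
-#   y = torch.rand(1, 22050) * 2 - 1   # one second of mono audio in [-1, 1]
-#   mel = mel_spectrogram_torch(y, n_fft=1024, num_mels=80, sampling_rate=22050,
-#                               hop_size=256, win_size=1024, fmin=0, fmax=None)
-#   mel.shape -> (1, 80, n_frames)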
diff --git a/spaces/Spectrez/Chest-Lung-Identification/app.py b/spaces/Spectrez/Chest-Lung-Identification/app.py
deleted file mode 100644
index 0da7294ff35bace59405f7e14d9bab2df67c0548..0000000000000000000000000000000000000000
--- a/spaces/Spectrez/Chest-Lung-Identification/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import streamlit as st
-import tensorflow as tf
-from PIL import Image
-import cv2 as cv
-import numpy as np
-
-
-model = tf.keras.models.load_model("LUNG-AI-5.h5")
-st.title("AI Lung Prediction")
-img = st.file_uploader("Upload a Chest X-ray", type=["jpg", "png"], accept_multiple_files=False, label_visibility="visible")
-# st.write(print(img))
-
-class_names = [
- 'Normal',
- 'Pneumonia'
-]
-
-
-if img is not None:
- st.image(img, width=300)
- img = Image.open(img)
- img = img.convert('RGB')
- # image_preprocess.load() # required for png.split()
- # img = Image.new("RGB", image_preprocess.size, (255, 255, 255))
- # img.paste(image_preprocess, mask=image_preprocess.split()[3]) # 3 is the alpha channel
-
-else:
-    st.header("Please Upload a Lung X-ray")
-    st.stop()  # halt this run so the prediction code below never sees img == None
-
-
-
-img = cv.resize(np.asarray(img), (100, 100))
-# if img != None:
-image_p = []
-image_p.append(cv.resize(img, (100, 100)))
-image_p = np.asanyarray(image_p)
-
-image_p = image_p / 255.0
-
-probability_model = tf.keras.Sequential([
- model,
- tf.keras.layers.Softmax()
-])
-
-
-predictions = probability_model.predict(image_p)
-image_class_predict = np.argmax(predictions)
-
-
-if image_class_predict == 0:
- st.subheader("Normal Lung")
-elif image_class_predict == 1:
- st.subheader("Pneumonia Lung")
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/historyapp.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/historyapp.py
deleted file mode 100644
index 01a55343f8a51f59b77da952a6e71088e0c4debf..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/historyapp.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# encoding: utf-8
-"""
-An application for managing IPython history.
-
-To be invoked as the `ipython history` subcommand.
-"""
-
-import sqlite3
-from pathlib import Path
-
-from traitlets.config.application import Application
-from .application import BaseIPythonApplication
-from traitlets import Bool, Int, Dict
-from ..utils.io import ask_yes_no
-
-trim_hist_help = """Trim the IPython history database to the last 1000 entries.
-
-This actually copies the last 1000 entries to a new database, and then replaces
-the old file with the new. Use the `--keep=` argument to specify a number
-other than 1000.
-"""
-
-clear_hist_help = """Clear the IPython history database, deleting all entries.
-
-Because this is a destructive operation, IPython will prompt the user if they
-really want to do this. Passing a `-f` flag will force clearing without a
-prompt.
-
-This is a handy alias for `ipython history trim --keep=0`.
-"""
-
-
-class HistoryTrim(BaseIPythonApplication):
- description = trim_hist_help
-
- backup = Bool(False,
- help="Keep the old history file as history.sqlite."
- ).tag(config=True)
-
- keep = Int(1000,
- help="Number of recent lines to keep in the database."
- ).tag(config=True)
-
- flags = Dict(dict(
- backup = ({'HistoryTrim' : {'backup' : True}},
- backup.help
- )
- ))
-
- aliases=Dict(dict(
- keep = 'HistoryTrim.keep'
- ))
-
- def start(self):
- profile_dir = Path(self.profile_dir.location)
- hist_file = profile_dir / "history.sqlite"
- con = sqlite3.connect(hist_file)
-
- # Grab the recent history from the current database.
- inputs = list(con.execute('SELECT session, line, source, source_raw FROM '
- 'history ORDER BY session DESC, line DESC LIMIT ?', (self.keep+1,)))
- if len(inputs) <= self.keep:
- print("There are already at most %d entries in the history database." % self.keep)
- print("Not doing anything. Use --keep= argument to keep fewer entries")
- return
-
- print("Trimming history to the most recent %d entries." % self.keep)
-
- inputs.pop() # Remove the extra element we got to check the length.
- inputs.reverse()
- if inputs:
- first_session = inputs[0][0]
- outputs = list(con.execute('SELECT session, line, output FROM '
- 'output_history WHERE session >= ?', (first_session,)))
- sessions = list(con.execute('SELECT session, start, end, num_cmds, remark FROM '
- 'sessions WHERE session >= ?', (first_session,)))
- con.close()
-
- # Create the new history database.
- new_hist_file = profile_dir / "history.sqlite.new"
- i = 0
- while new_hist_file.exists():
- # Make sure we don't interfere with an existing file.
- i += 1
- new_hist_file = profile_dir / ("history.sqlite.new" + str(i))
- new_db = sqlite3.connect(new_hist_file)
- new_db.execute("""CREATE TABLE IF NOT EXISTS sessions (session integer
- primary key autoincrement, start timestamp,
- end timestamp, num_cmds integer, remark text)""")
- new_db.execute("""CREATE TABLE IF NOT EXISTS history
- (session integer, line integer, source text, source_raw text,
- PRIMARY KEY (session, line))""")
- new_db.execute("""CREATE TABLE IF NOT EXISTS output_history
- (session integer, line integer, output text,
- PRIMARY KEY (session, line))""")
- new_db.commit()
-
-
- if inputs:
- with new_db:
- # Add the recent history into the new database.
- new_db.executemany('insert into sessions values (?,?,?,?,?)', sessions)
- new_db.executemany('insert into history values (?,?,?,?)', inputs)
- new_db.executemany('insert into output_history values (?,?,?)', outputs)
- new_db.close()
-
- if self.backup:
- i = 1
- backup_hist_file = profile_dir / ("history.sqlite.old.%d" % i)
- while backup_hist_file.exists():
- i += 1
- backup_hist_file = profile_dir / ("history.sqlite.old.%d" % i)
- hist_file.rename(backup_hist_file)
- print("Backed up longer history file to", backup_hist_file)
- else:
- hist_file.unlink()
-
- new_hist_file.rename(hist_file)
-
-class HistoryClear(HistoryTrim):
- description = clear_hist_help
- keep = Int(0,
- help="Number of recent lines to keep in the database.")
-
- force = Bool(False,
- help="Don't prompt user for confirmation"
- ).tag(config=True)
-
- flags = Dict(dict(
- force = ({'HistoryClear' : {'force' : True}},
- force.help),
- f = ({'HistoryTrim' : {'force' : True}},
- force.help
- )
- ))
- aliases = Dict()
-
- def start(self):
- if self.force or ask_yes_no("Really delete all ipython history? ",
- default="no", interrupt="no"):
- HistoryTrim.start(self)
-
-class HistoryApp(Application):
- name = u'ipython-history'
- description = "Manage the IPython history database."
-
- subcommands = Dict(dict(
- trim = (HistoryTrim, HistoryTrim.description.splitlines()[0]),
- clear = (HistoryClear, HistoryClear.description.splitlines()[0]),
- ))
-
- def start(self):
- if self.subapp is None:
- print("No subcommand specified. Must specify one of: %s" % \
- (self.subcommands.keys()))
- print()
- self.print_description()
- self.print_subcommands()
- self.exit(1)
- else:
- return self.subapp.start()
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_events.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_events.py
deleted file mode 100644
index cc9bf40fd6dc42e48e93ecce71c714706613afd3..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_events.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import unittest
-from unittest.mock import Mock
-
-from IPython.core import events
-import IPython.testing.tools as tt
-
-
-@events._define_event
-def ping_received():
- pass
-
-
-@events._define_event
-def event_with_argument(argument):
- pass
-
-
-class CallbackTests(unittest.TestCase):
- def setUp(self):
- self.em = events.EventManager(get_ipython(),
- {'ping_received': ping_received,
- 'event_with_argument': event_with_argument})
-
- def test_register_unregister(self):
- cb = Mock()
-
- self.em.register('ping_received', cb)
- self.em.trigger('ping_received')
- self.assertEqual(cb.call_count, 1)
-
- self.em.unregister('ping_received', cb)
- self.em.trigger('ping_received')
- self.assertEqual(cb.call_count, 1)
-
- def test_bare_function_missed_unregister(self):
- def cb1():
- ...
-
- def cb2():
- ...
-
- self.em.register("ping_received", cb1)
- self.assertRaises(ValueError, self.em.unregister, "ping_received", cb2)
- self.em.unregister("ping_received", cb1)
-
- def test_cb_error(self):
- cb = Mock(side_effect=ValueError)
- self.em.register('ping_received', cb)
- with tt.AssertPrints("Error in callback"):
- self.em.trigger('ping_received')
-
- def test_cb_keyboard_interrupt(self):
- cb = Mock(side_effect=KeyboardInterrupt)
- self.em.register('ping_received', cb)
- with tt.AssertPrints("Error in callback"):
- self.em.trigger('ping_received')
-
- def test_unregister_during_callback(self):
- invoked = [False] * 3
-
- def func1(*_):
- invoked[0] = True
- self.em.unregister('ping_received', func1)
- self.em.register('ping_received', func3)
-
- def func2(*_):
- invoked[1] = True
- self.em.unregister('ping_received', func2)
-
- def func3(*_):
- invoked[2] = True
-
- self.em.register('ping_received', func1)
- self.em.register('ping_received', func2)
-
- self.em.trigger('ping_received')
- self.assertEqual([True, True, False], invoked)
- self.assertEqual([func3], self.em.callbacks['ping_received'])
-
- def test_ignore_event_arguments_if_no_argument_required(self):
- call_count = [0]
- def event_with_no_argument():
- call_count[0] += 1
-
- self.em.register('event_with_argument', event_with_no_argument)
- self.em.trigger('event_with_argument', 'the argument')
- self.assertEqual(call_count[0], 1)
-
- self.em.unregister('event_with_argument', event_with_no_argument)
- self.em.trigger('ping_received')
- self.assertEqual(call_count[0], 1)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/test_utils.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/test_utils.py
deleted file mode 100644
index fcda2f3ddc045a381470012ba331c75299af4981..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/test_utils.py
+++ /dev/null
@@ -1,706 +0,0 @@
-"""Utilities shared by tests."""
-
-import asyncio
-import contextlib
-import gc
-import inspect
-import ipaddress
-import os
-import socket
-import sys
-import warnings
-from abc import ABC, abstractmethod
-from types import TracebackType
-from typing import (
- TYPE_CHECKING,
- Any,
- Callable,
- Iterator,
- List,
- Optional,
- Type,
- Union,
- cast,
-)
-from unittest import mock
-
-from aiosignal import Signal
-from multidict import CIMultiDict, CIMultiDictProxy
-from yarl import URL
-
-import aiohttp
-from aiohttp.client import _RequestContextManager, _WSRequestContextManager
-
-from . import ClientSession, hdrs
-from .abc import AbstractCookieJar
-from .client_reqrep import ClientResponse
-from .client_ws import ClientWebSocketResponse
-from .helpers import PY_38, sentinel
-from .http import HttpVersion, RawRequestMessage
-from .web import (
- Application,
- AppRunner,
- BaseRunner,
- Request,
- Server,
- ServerRunner,
- SockSite,
- UrlMappingMatchInfo,
-)
-from .web_protocol import _RequestHandler
-
-if TYPE_CHECKING: # pragma: no cover
- from ssl import SSLContext
-else:
- SSLContext = None
-
-if PY_38:
- from unittest import IsolatedAsyncioTestCase as TestCase
-else:
- from asynctest import TestCase # type: ignore[no-redef]
-
-REUSE_ADDRESS = os.name == "posix" and sys.platform != "cygwin"
-
-
-def get_unused_port_socket(
- host: str, family: socket.AddressFamily = socket.AF_INET
-) -> socket.socket:
- return get_port_socket(host, 0, family)
-
-
-def get_port_socket(
- host: str, port: int, family: socket.AddressFamily
-) -> socket.socket:
- s = socket.socket(family, socket.SOCK_STREAM)
- if REUSE_ADDRESS:
- # Windows has different semantics for SO_REUSEADDR,
- # so don't set it. Ref:
- # https://docs.microsoft.com/en-us/windows/win32/winsock/using-so-reuseaddr-and-so-exclusiveaddruse
- s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
- s.bind((host, port))
- return s
-
-
-def unused_port() -> int:
- """Return a port that is unused on the current host."""
- with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
- s.bind(("127.0.0.1", 0))
- return cast(int, s.getsockname()[1])
-
-
-class BaseTestServer(ABC):
- __test__ = False
-
- def __init__(
- self,
- *,
- scheme: Union[str, object] = sentinel,
- loop: Optional[asyncio.AbstractEventLoop] = None,
- host: str = "127.0.0.1",
- port: Optional[int] = None,
- skip_url_asserts: bool = False,
- socket_factory: Callable[
- [str, int, socket.AddressFamily], socket.socket
- ] = get_port_socket,
- **kwargs: Any,
- ) -> None:
- self._loop = loop
- self.runner: Optional[BaseRunner] = None
- self._root: Optional[URL] = None
- self.host = host
- self.port = port
- self._closed = False
- self.scheme = scheme
- self.skip_url_asserts = skip_url_asserts
- self.socket_factory = socket_factory
-
- async def start_server(
- self, loop: Optional[asyncio.AbstractEventLoop] = None, **kwargs: Any
- ) -> None:
- if self.runner:
- return
- self._loop = loop
- self._ssl = kwargs.pop("ssl", None)
- self.runner = await self._make_runner(**kwargs)
- await self.runner.setup()
- if not self.port:
- self.port = 0
- try:
- version = ipaddress.ip_address(self.host).version
- except ValueError:
- version = 4
- family = socket.AF_INET6 if version == 6 else socket.AF_INET
- _sock = self.socket_factory(self.host, self.port, family)
- self.host, self.port = _sock.getsockname()[:2]
- site = SockSite(self.runner, sock=_sock, ssl_context=self._ssl)
- await site.start()
- server = site._server
- assert server is not None
- sockets = server.sockets
- assert sockets is not None
- self.port = sockets[0].getsockname()[1]
- if self.scheme is sentinel:
- if self._ssl:
- scheme = "https"
- else:
- scheme = "http"
- self.scheme = scheme
- self._root = URL(f"{self.scheme}://{self.host}:{self.port}")
-
- @abstractmethod # pragma: no cover
- async def _make_runner(self, **kwargs: Any) -> BaseRunner:
- pass
-
- def make_url(self, path: str) -> URL:
- assert self._root is not None
- url = URL(path)
- if not self.skip_url_asserts:
- assert not url.is_absolute()
- return self._root.join(url)
- else:
- return URL(str(self._root) + path)
-
- @property
- def started(self) -> bool:
- return self.runner is not None
-
- @property
- def closed(self) -> bool:
- return self._closed
-
- @property
- def handler(self) -> Server:
- # for backward compatibility
- # web.Server instance
- runner = self.runner
- assert runner is not None
- assert runner.server is not None
- return runner.server
-
- async def close(self) -> None:
- """Close all fixtures created by the test client.
-
- After that point, the TestClient is no longer usable.
-
- This is an idempotent function: running close multiple times
- will not have any additional effects.
-
- close is also run when the object is garbage collected, and on
- exit when used as a context manager.
-
- """
- if self.started and not self.closed:
- assert self.runner is not None
- await self.runner.cleanup()
- self._root = None
- self.port = None
- self._closed = True
-
- def __enter__(self) -> None:
- raise TypeError("Use async with instead")
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_value: Optional[BaseException],
- traceback: Optional[TracebackType],
- ) -> None:
- # __exit__ should exist in pair with __enter__ but never executed
- pass # pragma: no cover
-
- async def __aenter__(self) -> "BaseTestServer":
- await self.start_server(loop=self._loop)
- return self
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_value: Optional[BaseException],
- traceback: Optional[TracebackType],
- ) -> None:
- await self.close()
-
-
-class TestServer(BaseTestServer):
- def __init__(
- self,
- app: Application,
- *,
- scheme: Union[str, object] = sentinel,
- host: str = "127.0.0.1",
- port: Optional[int] = None,
- **kwargs: Any,
- ):
- self.app = app
- super().__init__(scheme=scheme, host=host, port=port, **kwargs)
-
- async def _make_runner(self, **kwargs: Any) -> BaseRunner:
- return AppRunner(self.app, **kwargs)
-
-
-class RawTestServer(BaseTestServer):
- def __init__(
- self,
- handler: _RequestHandler,
- *,
- scheme: Union[str, object] = sentinel,
- host: str = "127.0.0.1",
- port: Optional[int] = None,
- **kwargs: Any,
- ) -> None:
- self._handler = handler
- super().__init__(scheme=scheme, host=host, port=port, **kwargs)
-
- async def _make_runner(self, debug: bool = True, **kwargs: Any) -> ServerRunner:
- srv = Server(self._handler, loop=self._loop, debug=debug, **kwargs)
- return ServerRunner(srv, debug=debug, **kwargs)
-
-
-class TestClient:
- """
- A test client implementation.
-
- To write functional tests for aiohttp based servers.
-
- """
-
- __test__ = False
-
- def __init__(
- self,
- server: BaseTestServer,
- *,
- cookie_jar: Optional[AbstractCookieJar] = None,
- loop: Optional[asyncio.AbstractEventLoop] = None,
- **kwargs: Any,
- ) -> None:
- if not isinstance(server, BaseTestServer):
- raise TypeError(
- "server must be TestServer " "instance, found type: %r" % type(server)
- )
- self._server = server
- self._loop = loop
- if cookie_jar is None:
- cookie_jar = aiohttp.CookieJar(unsafe=True, loop=loop)
- self._session = ClientSession(loop=loop, cookie_jar=cookie_jar, **kwargs)
- self._closed = False
- self._responses: List[ClientResponse] = []
- self._websockets: List[ClientWebSocketResponse] = []
-
- async def start_server(self) -> None:
- await self._server.start_server(loop=self._loop)
-
- @property
- def host(self) -> str:
- return self._server.host
-
- @property
- def port(self) -> Optional[int]:
- return self._server.port
-
- @property
- def server(self) -> BaseTestServer:
- return self._server
-
- @property
- def app(self) -> Optional[Application]:
- return cast(Optional[Application], getattr(self._server, "app", None))
-
- @property
- def session(self) -> ClientSession:
- """An internal aiohttp.ClientSession.
-
- Unlike the methods on the TestClient, client session requests
- do not automatically include the host in the url queried, and
- will require an absolute path to the resource.
-
- """
- return self._session
-
- def make_url(self, path: str) -> URL:
- return self._server.make_url(path)
-
- async def _request(self, method: str, path: str, **kwargs: Any) -> ClientResponse:
- resp = await self._session.request(method, self.make_url(path), **kwargs)
- # save it to close later
- self._responses.append(resp)
- return resp
-
- def request(self, method: str, path: str, **kwargs: Any) -> _RequestContextManager:
- """Routes a request to tested http server.
-
- The interface is identical to aiohttp.ClientSession.request,
- except the loop kwarg is overridden by the instance used by the
- test server.
-
- """
- return _RequestContextManager(self._request(method, path, **kwargs))
-
- def get(self, path: str, **kwargs: Any) -> _RequestContextManager:
- """Perform an HTTP GET request."""
- return _RequestContextManager(self._request(hdrs.METH_GET, path, **kwargs))
-
- def post(self, path: str, **kwargs: Any) -> _RequestContextManager:
- """Perform an HTTP POST request."""
- return _RequestContextManager(self._request(hdrs.METH_POST, path, **kwargs))
-
- def options(self, path: str, **kwargs: Any) -> _RequestContextManager:
- """Perform an HTTP OPTIONS request."""
- return _RequestContextManager(self._request(hdrs.METH_OPTIONS, path, **kwargs))
-
- def head(self, path: str, **kwargs: Any) -> _RequestContextManager:
- """Perform an HTTP HEAD request."""
- return _RequestContextManager(self._request(hdrs.METH_HEAD, path, **kwargs))
-
- def put(self, path: str, **kwargs: Any) -> _RequestContextManager:
- """Perform an HTTP PUT request."""
- return _RequestContextManager(self._request(hdrs.METH_PUT, path, **kwargs))
-
- def patch(self, path: str, **kwargs: Any) -> _RequestContextManager:
- """Perform an HTTP PATCH request."""
- return _RequestContextManager(self._request(hdrs.METH_PATCH, path, **kwargs))
-
- def delete(self, path: str, **kwargs: Any) -> _RequestContextManager:
- """Perform an HTTP PATCH request."""
- return _RequestContextManager(self._request(hdrs.METH_DELETE, path, **kwargs))
-
- def ws_connect(self, path: str, **kwargs: Any) -> _WSRequestContextManager:
- """Initiate websocket connection.
-
- The api corresponds to aiohttp.ClientSession.ws_connect.
-
- """
- return _WSRequestContextManager(self._ws_connect(path, **kwargs))
-
- async def _ws_connect(self, path: str, **kwargs: Any) -> ClientWebSocketResponse:
- ws = await self._session.ws_connect(self.make_url(path), **kwargs)
- self._websockets.append(ws)
- return ws
-
- async def close(self) -> None:
- """Close all fixtures created by the test client.
-
- After that point, the TestClient is no longer usable.
-
- This is an idempotent function: running close multiple times
- will not have any additional effects.
-
- close is also run on exit when used as a(n) (asynchronous)
- context manager.
-
- """
- if not self._closed:
- for resp in self._responses:
- resp.close()
- for ws in self._websockets:
- await ws.close()
- await self._session.close()
- await self._server.close()
- self._closed = True
-
- def __enter__(self) -> None:
- raise TypeError("Use async with instead")
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc: Optional[BaseException],
- tb: Optional[TracebackType],
- ) -> None:
- # __exit__ should exist in pair with __enter__ but never executed
- pass # pragma: no cover
-
- async def __aenter__(self) -> "TestClient":
- await self.start_server()
- return self
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc: Optional[BaseException],
- tb: Optional[TracebackType],
- ) -> None:
- await self.close()
-
-
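-# Illustrative usage sketch (not part of the original file; assumes an existing
-# aiohttp.web Application named `app`):
-#
-#     async with TestClient(TestServer(app)) as client:
-#         resp = await client.get("/")
-#         assert resp.status == 200
-
-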
-class AioHTTPTestCase(TestCase):
- """A base class to allow for unittest web applications using aiohttp.
-
- Provides the following:
-
- * self.client (aiohttp.test_utils.TestClient): an aiohttp test client.
- * self.loop (asyncio.BaseEventLoop): the event loop in which the
- application and server are running.
- * self.app (aiohttp.web.Application): the application returned by
- self.get_application()
-
-    Note that the TestClient's methods are asynchronous: calls on the test
-    client have to be awaited from asynchronous test methods.
- """
-
- async def get_application(self) -> Application:
- """Get application.
-
- This method should be overridden
- to return the aiohttp.web.Application
- object to test.
- """
- return self.get_app()
-
- def get_app(self) -> Application:
- """Obsolete method used to constructing web application.
-
- Use .get_application() coroutine instead.
- """
- raise RuntimeError("Did you forget to define get_application()?")
-
- def setUp(self) -> None:
- if not PY_38:
- asyncio.get_event_loop().run_until_complete(self.asyncSetUp())
-
- async def asyncSetUp(self) -> None:
- try:
- self.loop = asyncio.get_running_loop()
- except (AttributeError, RuntimeError): # AttributeError->py36
- self.loop = asyncio.get_event_loop_policy().get_event_loop()
-
- return await self.setUpAsync()
-
- async def setUpAsync(self) -> None:
- self.app = await self.get_application()
- self.server = await self.get_server(self.app)
- self.client = await self.get_client(self.server)
-
- await self.client.start_server()
-
- def tearDown(self) -> None:
- if not PY_38:
- self.loop.run_until_complete(self.asyncTearDown())
-
- async def asyncTearDown(self) -> None:
- return await self.tearDownAsync()
-
- async def tearDownAsync(self) -> None:
- await self.client.close()
-
- async def get_server(self, app: Application) -> TestServer:
- """Return a TestServer instance."""
- return TestServer(app, loop=self.loop)
-
- async def get_client(self, server: TestServer) -> TestClient:
- """Return a TestClient instance."""
- return TestClient(server, loop=self.loop)
-
-
-def unittest_run_loop(func: Any, *args: Any, **kwargs: Any) -> Any:
- """
-    A decorator for use with asynchronous AioHTTPTestCase test methods.
-
-    In aiohttp 3.8+, this does nothing.
- """
- warnings.warn(
- "Decorator `@unittest_run_loop` is no longer needed in aiohttp 3.8+",
- DeprecationWarning,
- stacklevel=2,
- )
- return func
-
-
-_LOOP_FACTORY = Callable[[], asyncio.AbstractEventLoop]
-
-
-@contextlib.contextmanager
-def loop_context(
- loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, fast: bool = False
-) -> Iterator[asyncio.AbstractEventLoop]:
- """A contextmanager that creates an event_loop, for test purposes.
-
- Handles the creation and cleanup of a test loop.
- """
- loop = setup_test_loop(loop_factory)
- yield loop
- teardown_test_loop(loop, fast=fast)
-
-
-def setup_test_loop(
- loop_factory: _LOOP_FACTORY = asyncio.new_event_loop,
-) -> asyncio.AbstractEventLoop:
- """Create and return an asyncio.BaseEventLoop instance.
-
- The caller should also call teardown_test_loop,
- once they are done with the loop.
- """
- loop = loop_factory()
- try:
- module = loop.__class__.__module__
- skip_watcher = "uvloop" in module
- except AttributeError: # pragma: no cover
- # Just in case
- skip_watcher = True
- asyncio.set_event_loop(loop)
- if sys.platform != "win32" and not skip_watcher:
- policy = asyncio.get_event_loop_policy()
- watcher: asyncio.AbstractChildWatcher
- try: # Python >= 3.8
- # Refs:
- # * https://github.com/pytest-dev/pytest-xdist/issues/620
- # * https://stackoverflow.com/a/58614689/595220
- # * https://bugs.python.org/issue35621
- # * https://github.com/python/cpython/pull/14344
- watcher = asyncio.ThreadedChildWatcher()
- except AttributeError: # Python < 3.8
- watcher = asyncio.SafeChildWatcher()
- watcher.attach_loop(loop)
- with contextlib.suppress(NotImplementedError):
- policy.set_child_watcher(watcher)
- return loop
-
-
-def teardown_test_loop(loop: asyncio.AbstractEventLoop, fast: bool = False) -> None:
- """Teardown and cleanup an event_loop created by setup_test_loop."""
- closed = loop.is_closed()
- if not closed:
- loop.call_soon(loop.stop)
- loop.run_forever()
- loop.close()
-
- if not fast:
- gc.collect()
-
- asyncio.set_event_loop(None)
-
-
-def _create_app_mock() -> mock.MagicMock:
- def get_dict(app: Any, key: str) -> Any:
- return app.__app_dict[key]
-
- def set_dict(app: Any, key: str, value: Any) -> None:
- app.__app_dict[key] = value
-
- app = mock.MagicMock(spec=Application)
- app.__app_dict = {}
- app.__getitem__ = get_dict
- app.__setitem__ = set_dict
-
- app._debug = False
- app.on_response_prepare = Signal(app)
- app.on_response_prepare.freeze()
- return app
-
-
-def _create_transport(sslcontext: Optional[SSLContext] = None) -> mock.Mock:
- transport = mock.Mock()
-
- def get_extra_info(key: str) -> Optional[SSLContext]:
- if key == "sslcontext":
- return sslcontext
- else:
- return None
-
- transport.get_extra_info.side_effect = get_extra_info
- return transport
-
-
-def make_mocked_request(
- method: str,
- path: str,
- headers: Any = None,
- *,
- match_info: Any = sentinel,
- version: HttpVersion = HttpVersion(1, 1),
- closing: bool = False,
- app: Any = None,
- writer: Any = sentinel,
- protocol: Any = sentinel,
- transport: Any = sentinel,
- payload: Any = sentinel,
- sslcontext: Optional[SSLContext] = None,
- client_max_size: int = 1024**2,
- loop: Any = ...,
-) -> Request:
- """Creates mocked web.Request testing purposes.
-
- Useful in unit tests, when spinning full web server is overkill or
- specific conditions and errors are hard to trigger.
- """
- task = mock.Mock()
- if loop is ...:
- loop = mock.Mock()
- loop.create_future.return_value = ()
-
- if version < HttpVersion(1, 1):
- closing = True
-
- if headers:
- headers = CIMultiDictProxy(CIMultiDict(headers))
- raw_hdrs = tuple(
- (k.encode("utf-8"), v.encode("utf-8")) for k, v in headers.items()
- )
- else:
- headers = CIMultiDictProxy(CIMultiDict())
- raw_hdrs = ()
-
- chunked = "chunked" in headers.get(hdrs.TRANSFER_ENCODING, "").lower()
-
- message = RawRequestMessage(
- method,
- path,
- version,
- headers,
- raw_hdrs,
- closing,
- None,
- False,
- chunked,
- URL(path),
- )
- if app is None:
- app = _create_app_mock()
-
- if transport is sentinel:
- transport = _create_transport(sslcontext)
-
- if protocol is sentinel:
- protocol = mock.Mock()
- protocol.transport = transport
-
- if writer is sentinel:
- writer = mock.Mock()
- writer.write_headers = make_mocked_coro(None)
- writer.write = make_mocked_coro(None)
- writer.write_eof = make_mocked_coro(None)
- writer.drain = make_mocked_coro(None)
- writer.transport = transport
-
- protocol.transport = transport
- protocol.writer = writer
-
- if payload is sentinel:
- payload = mock.Mock()
-
- req = Request(
- message, payload, protocol, writer, task, loop, client_max_size=client_max_size
- )
-
- match_info = UrlMappingMatchInfo(
- {} if match_info is sentinel else match_info, mock.Mock()
- )
- match_info.add_app(app)
- req._match_info = match_info
-
- return req
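-
-# Illustrative call (values are arbitrary):
-#   req = make_mocked_request("GET", "/", headers={"token": "x"})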
-
-
-def make_mocked_coro(
- return_value: Any = sentinel, raise_exception: Any = sentinel
-) -> Any:
- """Creates a coroutine mock."""
-
- async def mock_coro(*args: Any, **kwargs: Any) -> Any:
- if raise_exception is not sentinel:
- raise raise_exception
- if not inspect.isawaitable(return_value):
- return return_value
- await return_value
-
- return mock.Mock(wraps=mock_coro)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/__init__.py
deleted file mode 100644
index 975bec79b9f6bb55393b0931ca3a3dc50cc4ae54..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/__init__.py
+++ /dev/null
@@ -1,38 +0,0 @@
-# Copyright (c) Microsoft Corporation. All rights reserved.
-# Licensed under the MIT License. See LICENSE in the project root
-# for license information.
-
-"""An implementation of the Debug Adapter Protocol (DAP) for Python.
-
-https://microsoft.github.io/debug-adapter-protocol/
-"""
-
-# debugpy stable public API consists solely of members of this module that are
-# enumerated below.
-__all__ = [ # noqa
- "__version__",
- "breakpoint",
- "configure",
- "connect",
- "debug_this_thread",
- "is_client_connected",
- "listen",
- "log_to",
- "trace_this_thread",
- "wait_for_client",
-]
-
-import sys
-
-assert sys.version_info >= (3, 7), (
- "Python 3.6 and below is not supported by this version of debugpy; "
- "use debugpy 1.5.1 or earlier."
-)
-
-
-# Actual definitions are in a separate file to work around parsing issues causing
-# SyntaxError on Python 2 and preventing the above version check from executing.
-from debugpy.public_api import * # noqa
-from debugpy.public_api import __version__
-
-del sys
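-
-# Typical attach flow (illustrative sketch, using only the public API listed above):
-#   import debugpy
-#   debugpy.listen(5678)        # start listening for a DAP client on localhost:5678
-#   debugpy.wait_for_client()   # block until a debugger attaches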
diff --git a/spaces/Suniilkumaar/SwapMukham/face_parsing/resnet.py b/spaces/Suniilkumaar/SwapMukham/face_parsing/resnet.py
deleted file mode 100644
index aa2bf95130e9815ba378cb6f73207068b81a04b9..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/SwapMukham/face_parsing/resnet.py
+++ /dev/null
@@ -1,109 +0,0 @@
-#!/usr/bin/python
-# -*- encoding: utf-8 -*-
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.model_zoo as modelzoo
-
-# from modules.bn import InPlaceABNSync as BatchNorm2d
-
-resnet18_url = 'https://download.pytorch.org/models/resnet18-5c106cde.pth'
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- """3x3 convolution with padding"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- def __init__(self, in_chan, out_chan, stride=1):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(in_chan, out_chan, stride)
- self.bn1 = nn.BatchNorm2d(out_chan)
- self.conv2 = conv3x3(out_chan, out_chan)
- self.bn2 = nn.BatchNorm2d(out_chan)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = None
- if in_chan != out_chan or stride != 1:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_chan, out_chan,
- kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(out_chan),
- )
-
- def forward(self, x):
- residual = self.conv1(x)
- residual = F.relu(self.bn1(residual))
- residual = self.conv2(residual)
- residual = self.bn2(residual)
-
- shortcut = x
- if self.downsample is not None:
- shortcut = self.downsample(x)
-
- out = shortcut + residual
- out = self.relu(out)
- return out
-
-
-def create_layer_basic(in_chan, out_chan, bnum, stride=1):
- layers = [BasicBlock(in_chan, out_chan, stride=stride)]
- for i in range(bnum-1):
- layers.append(BasicBlock(out_chan, out_chan, stride=1))
- return nn.Sequential(*layers)
-
-
-class Resnet18(nn.Module):
- def __init__(self):
- super(Resnet18, self).__init__()
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
- bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1)
- self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2)
- self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2)
- self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2)
- self.init_weight()
-
- def forward(self, x):
- x = self.conv1(x)
- x = F.relu(self.bn1(x))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- feat8 = self.layer2(x) # 1/8
- feat16 = self.layer3(feat8) # 1/16
- feat32 = self.layer4(feat16) # 1/32
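-        # For a 224x224 input (as in the __main__ check below), the shapes are
-        # feat8: (N, 128, 28, 28), feat16: (N, 256, 14, 14), feat32: (N, 512, 7, 7).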
- return feat8, feat16, feat32
-
- def init_weight(self):
- state_dict = modelzoo.load_url(resnet18_url)
- self_state_dict = self.state_dict()
- for k, v in state_dict.items():
- if 'fc' in k: continue
- self_state_dict.update({k: v})
- self.load_state_dict(self_state_dict)
-
- def get_params(self):
- wd_params, nowd_params = [], []
- for name, module in self.named_modules():
- if isinstance(module, (nn.Linear, nn.Conv2d)):
- wd_params.append(module.weight)
-                if module.bias is not None:
- nowd_params.append(module.bias)
- elif isinstance(module, nn.BatchNorm2d):
- nowd_params += list(module.parameters())
- return wd_params, nowd_params
-
-
-if __name__ == "__main__":
- net = Resnet18()
- x = torch.randn(16, 3, 224, 224)
- out = net(x)
- print(out[0].size())
- print(out[1].size())
- print(out[2].size())
- net.get_params()
diff --git a/spaces/Supedsa/rvc-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Supedsa/rvc-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/Supedsa/rvc-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour, filling in unvoiced (zero) frames.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py
deleted file mode 100644
index b37c79bed4ef9fd8913715e62dbe3fc5cafdc3aa..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import pickle
-
-from .base import BaseFileHandler
-
-
-class PickleHandler(BaseFileHandler):
-
- str_like = False
-
- def load_from_fileobj(self, file, **kwargs):
- return pickle.load(file, **kwargs)
-
- def load_from_path(self, filepath, **kwargs):
- return super(PickleHandler, self).load_from_path(
- filepath, mode='rb', **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- kwargs.setdefault('protocol', 2)
- return pickle.dumps(obj, **kwargs)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- kwargs.setdefault('protocol', 2)
- pickle.dump(obj, file, **kwargs)
-
- def dump_to_path(self, obj, filepath, **kwargs):
- super(PickleHandler, self).dump_to_path(
- obj, filepath, mode='wb', **kwargs)
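-
-
-# Illustrative usage sketch (assumes BaseFileHandler.load_from_path/dump_to_path open
-# the file and delegate to the *_fileobj methods above, as in mmcv):
-#   handler = PickleHandler()
-#   handler.dump_to_path({"a": 1}, "/tmp/obj.pkl")
-#   obj = handler.load_from_path("/tmp/obj.pkl")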
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/lr_updater.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/lr_updater.py
deleted file mode 100644
index 6365908ddf6070086de2ffc0afada46ed2f32256..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/lr_updater.py
+++ /dev/null
@@ -1,670 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from math import cos, pi
-
-import annotator.uniformer.mmcv as mmcv
-from .hook import HOOKS, Hook
-
-
-class LrUpdaterHook(Hook):
- """LR Scheduler in MMCV.
-
- Args:
- by_epoch (bool): LR changes epoch by epoch
-        warmup (string): Type of warmup used. It can be None (use no warmup),
- 'constant', 'linear' or 'exp'
- warmup_iters (int): The number of iterations or epochs that warmup
- lasts
- warmup_ratio (float): LR used at the beginning of warmup equals to
- warmup_ratio * initial_lr
- warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters
- means the number of epochs that warmup lasts, otherwise means the
-            number of iterations that warmup lasts
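-
-        Example (illustrative): with ``warmup='linear'``, ``warmup_iters=500`` and
-            ``warmup_ratio=0.1``, the lr ramps from 10% of the regular lr at the
-            first iteration up to the full regular lr at iteration 500.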
- """
-
- def __init__(self,
- by_epoch=True,
- warmup=None,
- warmup_iters=0,
- warmup_ratio=0.1,
- warmup_by_epoch=False):
- # validate the "warmup" argument
- if warmup is not None:
-            if warmup not in ['constant', 'linear', 'exp']:
-                raise ValueError(
-                    f'"{warmup}" is not a supported type for warming up, valid'
-                    ' types are "constant", "linear" and "exp"')
- if warmup is not None:
- assert warmup_iters > 0, \
- '"warmup_iters" must be a positive integer'
- assert 0 < warmup_ratio <= 1.0, \
- '"warmup_ratio" must be in range (0,1]'
-
- self.by_epoch = by_epoch
- self.warmup = warmup
- self.warmup_iters = warmup_iters
- self.warmup_ratio = warmup_ratio
- self.warmup_by_epoch = warmup_by_epoch
-
- if self.warmup_by_epoch:
- self.warmup_epochs = self.warmup_iters
- self.warmup_iters = None
- else:
- self.warmup_epochs = None
-
- self.base_lr = [] # initial lr for all param groups
- self.regular_lr = [] # expected lr if no warming up is performed
-
- def _set_lr(self, runner, lr_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, lr in zip(optim.param_groups, lr_groups[k]):
- param_group['lr'] = lr
- else:
- for param_group, lr in zip(runner.optimizer.param_groups,
- lr_groups):
- param_group['lr'] = lr
-
- def get_lr(self, runner, base_lr):
- raise NotImplementedError
-
- def get_regular_lr(self, runner):
- if isinstance(runner.optimizer, dict):
- lr_groups = {}
- for k in runner.optimizer.keys():
- _lr_group = [
- self.get_lr(runner, _base_lr)
- for _base_lr in self.base_lr[k]
- ]
- lr_groups.update({k: _lr_group})
-
- return lr_groups
- else:
- return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr]
-
- def get_warmup_lr(self, cur_iters):
-
- def _get_warmup_lr(cur_iters, regular_lr):
- if self.warmup == 'constant':
- warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr]
- elif self.warmup == 'linear':
- k = (1 - cur_iters / self.warmup_iters) * (1 -
- self.warmup_ratio)
- warmup_lr = [_lr * (1 - k) for _lr in regular_lr]
- elif self.warmup == 'exp':
- k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters)
- warmup_lr = [_lr * k for _lr in regular_lr]
- return warmup_lr
-
- if isinstance(self.regular_lr, dict):
- lr_groups = {}
- for key, regular_lr in self.regular_lr.items():
- lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr)
- return lr_groups
- else:
- return _get_warmup_lr(cur_iters, self.regular_lr)
-
- def before_run(self, runner):
- # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved,
- # it will be set according to the optimizer params
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- for group in optim.param_groups:
- group.setdefault('initial_lr', group['lr'])
- _base_lr = [
- group['initial_lr'] for group in optim.param_groups
- ]
- self.base_lr.update({k: _base_lr})
- else:
- for group in runner.optimizer.param_groups:
- group.setdefault('initial_lr', group['lr'])
- self.base_lr = [
- group['initial_lr'] for group in runner.optimizer.param_groups
- ]
-
- def before_train_epoch(self, runner):
- if self.warmup_iters is None:
- epoch_len = len(runner.data_loader)
- self.warmup_iters = self.warmup_epochs * epoch_len
-
- if not self.by_epoch:
- return
-
- self.regular_lr = self.get_regular_lr(runner)
- self._set_lr(runner, self.regular_lr)
-
- def before_train_iter(self, runner):
- cur_iter = runner.iter
- if not self.by_epoch:
- self.regular_lr = self.get_regular_lr(runner)
- if self.warmup is None or cur_iter >= self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
- elif self.by_epoch:
- if self.warmup is None or cur_iter > self.warmup_iters:
- return
- elif cur_iter == self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
-
-
-@HOOKS.register_module()
-class FixedLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, **kwargs):
- super(FixedLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- return base_lr
-
-
-@HOOKS.register_module()
-class StepLrUpdaterHook(LrUpdaterHook):
- """Step LR scheduler with min_lr clipping.
-
- Args:
- step (int | list[int]): Step to decay the LR. If an int value is given,
- regard it as the decay interval. If a list is given, decay LR at
- these steps.
- gamma (float, optional): Decay LR ratio. Default: 0.1.
- min_lr (float, optional): Minimum LR value to keep. If LR after decay
- is lower than `min_lr`, it will be clipped to this value. If None
- is given, we don't perform lr clipping. Default: None.
- """
-
- def __init__(self, step, gamma=0.1, min_lr=None, **kwargs):
- if isinstance(step, list):
- assert mmcv.is_list_of(step, int)
- assert all([s > 0 for s in step])
- elif isinstance(step, int):
- assert step > 0
- else:
- raise TypeError('"step" must be a list or integer')
- self.step = step
- self.gamma = gamma
- self.min_lr = min_lr
- super(StepLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
-
- # calculate exponential term
- if isinstance(self.step, int):
- exp = progress // self.step
- else:
- exp = len(self.step)
- for i, s in enumerate(self.step):
- if progress < s:
- exp = i
- break
-
- lr = base_lr * (self.gamma**exp)
- if self.min_lr is not None:
- # clip to a minimum value
- lr = max(lr, self.min_lr)
- return lr
-
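# Illustrative sketch (not part of the original file): the step-decay rule that
# StepLrUpdaterHook.get_lr implements above, written as a standalone function so
# the milestone handling and min_lr clipping are easy to follow.
def step_decay_lr(base_lr, progress, step, gamma=0.1, min_lr=None):
    if isinstance(step, int):
        exp = progress // step
    else:  # list of milestones
        exp = len(step)
        for i, s in enumerate(step):
            if progress < s:
                exp = i
                break
    lr = base_lr * gamma ** exp
    return max(lr, min_lr) if min_lr is not None else lr

# e.g. base_lr=0.1 with milestones [8, 11]: epochs 0-7 -> 0.1, 8-10 -> 0.01, 11+ -> 0.001
assert abs(step_decay_lr(0.1, 9, [8, 11]) - 0.01) < 1e-12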
-
-@HOOKS.register_module()
-class ExpLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, **kwargs):
- self.gamma = gamma
- super(ExpLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * self.gamma**progress
-
-
-@HOOKS.register_module()
-class PolyLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, power=1., min_lr=0., **kwargs):
- self.power = power
- self.min_lr = min_lr
- super(PolyLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
- coeff = (1 - progress / max_progress)**self.power
- return (base_lr - self.min_lr) * coeff + self.min_lr
-
-
-@HOOKS.register_module()
-class InvLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, power=1., **kwargs):
- self.gamma = gamma
- self.power = power
- super(InvLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * (1 + self.gamma * progress)**(-self.power)
-
-
-@HOOKS.register_module()
-class CosineAnnealingLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook):
- """Flat + Cosine lr schedule.
-
- Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501
-
- Args:
-        start_percent (float): The fraction of the total training steps after
-            which to start annealing the learning rate.
-            The value should be in range [0, 1).
-            Default: 0.75
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
-
- def __init__(self,
- start_percent=0.75,
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- if start_percent < 0 or start_percent > 1 or not isinstance(
- start_percent, float):
- raise ValueError(
- 'expected float between 0 and 1 start_percent, but '
- f'got {start_percent}')
- self.start_percent = start_percent
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- start = round(runner.max_epochs * self.start_percent)
- progress = runner.epoch - start
- max_progress = runner.max_epochs - start
- else:
- start = round(runner.max_iters * self.start_percent)
- progress = runner.iter - start
- max_progress = runner.max_iters - start
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- if progress < 0:
- return base_lr
- else:
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class CosineRestartLrUpdaterHook(LrUpdaterHook):
- """Cosine annealing with restarts learning rate scheme.
-
- Args:
-        periods (list[int]): Periods for each cosine annealing cycle.
- restart_weights (list[float], optional): Restart weights at each
- restart iteration. Default: [1].
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
-
- def __init__(self,
- periods,
- restart_weights=[1],
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.periods = periods
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- self.restart_weights = restart_weights
- assert (len(self.periods) == len(self.restart_weights)
- ), 'periods and restart_weights should have the same length.'
- super(CosineRestartLrUpdaterHook, self).__init__(**kwargs)
-
- self.cumulative_periods = [
- sum(self.periods[0:i + 1]) for i in range(0, len(self.periods))
- ]
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- else:
- progress = runner.iter
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- idx = get_position_from_periods(progress, self.cumulative_periods)
- current_weight = self.restart_weights[idx]
- nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1]
- current_periods = self.periods[idx]
-
- alpha = min((progress - nearest_restart) / current_periods, 1)
- return annealing_cos(base_lr, target_lr, alpha, current_weight)
-
-
-def get_position_from_periods(iteration, cumulative_periods):
- """Get the position from a period list.
-
- It will return the index of the right-closest number in the period list.
- For example, the cumulative_periods = [100, 200, 300, 400],
- if iteration == 50, return 0;
- if iteration == 210, return 2;
- if iteration == 300, return 3.
-
- Args:
- iteration (int): Current iteration.
- cumulative_periods (list[int]): Cumulative period list.
-
- Returns:
- int: The position of the right-closest number in the period list.
- """
- for i, period in enumerate(cumulative_periods):
- if iteration < period:
- return i
- raise ValueError(f'Current iteration {iteration} exceeds '
- f'cumulative_periods {cumulative_periods}')
-
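# Illustrative check (not part of the original file) reproducing the docstring
# example of get_position_from_periods above, with cumulative periods
# [100, 200, 300, 400]:
assert get_position_from_periods(50, [100, 200, 300, 400]) == 0
assert get_position_from_periods(210, [100, 200, 300, 400]) == 2
assert get_position_from_periods(300, [100, 200, 300, 400]) == 3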
-
-@HOOKS.register_module()
-class CyclicLrUpdaterHook(LrUpdaterHook):
- """Cyclic LR Scheduler.
-
- Implement the cyclical learning rate policy (CLR) described in
- https://arxiv.org/pdf/1506.01186.pdf
-
- Different from the original paper, we use cosine annealing rather than
- triangular policy inside a cycle. This improves the performance in the
- 3D detection area.
-
- Args:
- by_epoch (bool): Whether to update LR by epoch.
- target_ratio (tuple[float]): Relative ratio of the highest LR and the
- lowest LR to the initial LR.
- cyclic_times (int): Number of cycles during training
- step_ratio_up (float): The ratio of the increasing process of LR in
- the total cycle.
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing. Default: 'cos'.
- """
-
- def __init__(self,
- by_epoch=False,
- target_ratio=(10, 1e-4),
- cyclic_times=1,
- step_ratio_up=0.4,
- anneal_strategy='cos',
- **kwargs):
- if isinstance(target_ratio, float):
- target_ratio = (target_ratio, target_ratio / 1e5)
- elif isinstance(target_ratio, tuple):
- target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \
- if len(target_ratio) == 1 else target_ratio
- else:
- raise ValueError('target_ratio should be either float '
- f'or tuple, got {type(target_ratio)}')
-
- assert len(target_ratio) == 2, \
- '"target_ratio" must be list or tuple of two floats'
- assert 0 <= step_ratio_up < 1.0, \
- '"step_ratio_up" must be in range [0,1)'
-
- self.target_ratio = target_ratio
- self.cyclic_times = cyclic_times
- self.step_ratio_up = step_ratio_up
- self.lr_phases = [] # init lr_phases
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
-
- assert not by_epoch, \
- 'currently only support "by_epoch" = False'
- super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs)
-
- def before_run(self, runner):
- super(CyclicLrUpdaterHook, self).before_run(runner)
- # initiate lr_phases
- # total lr_phases are separated as up and down
- max_iter_per_phase = runner.max_iters // self.cyclic_times
- iter_up_phase = int(self.step_ratio_up * max_iter_per_phase)
- self.lr_phases.append(
- [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]])
- self.lr_phases.append([
- iter_up_phase, max_iter_per_phase, max_iter_per_phase,
- self.target_ratio[0], self.target_ratio[1]
- ])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- for (start_iter, end_iter, max_iter_per_phase, start_ratio,
- end_ratio) in self.lr_phases:
- curr_iter %= max_iter_per_phase
- if start_iter <= curr_iter < end_iter:
- progress = curr_iter - start_iter
- return self.anneal_func(base_lr * start_ratio,
- base_lr * end_ratio,
- progress / (end_iter - start_iter))
-
-
-@HOOKS.register_module()
-class OneCycleLrUpdaterHook(LrUpdaterHook):
- """One Cycle LR Scheduler.
-
- The 1cycle learning rate policy changes the learning rate after every
- batch. The one cycle learning rate policy is described in
- https://arxiv.org/pdf/1708.07120.pdf
-
- Args:
- max_lr (float or list): Upper learning rate boundaries in the cycle
- for each parameter group.
- total_steps (int, optional): The total number of steps in the cycle.
-            Note that if a value is not provided here, it will be the
-            max_iters of the runner. Default: None.
- pct_start (float): The percentage of the cycle (in number of steps)
- spent increasing the learning rate.
- Default: 0.3
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing.
- Default: 'cos'
- div_factor (float): Determines the initial learning rate via
- initial_lr = max_lr/div_factor
- Default: 25
- final_div_factor (float): Determines the minimum learning rate via
- min_lr = initial_lr/final_div_factor
- Default: 1e4
- three_phase (bool): If three_phase is True, use a third phase of the
- schedule to annihilate the learning rate according to
- final_div_factor instead of modifying the second phase (the first
- two phases will be symmetrical about the step indicated by
- pct_start).
- Default: False
- """
-
- def __init__(self,
- max_lr,
- total_steps=None,
- pct_start=0.3,
- anneal_strategy='cos',
- div_factor=25,
- final_div_factor=1e4,
- three_phase=False,
- **kwargs):
- # validate by_epoch, currently only support by_epoch = False
- if 'by_epoch' not in kwargs:
- kwargs['by_epoch'] = False
- else:
- assert not kwargs['by_epoch'], \
- 'currently only support "by_epoch" = False'
- if not isinstance(max_lr, (numbers.Number, list, dict)):
-            raise ValueError('the type of max_lr must be a number, list or '
-                             f'dict, but got {type(max_lr)}')
- self._max_lr = max_lr
- if total_steps is not None:
- if not isinstance(total_steps, int):
-                raise ValueError('the type of total_steps must be int, but '
-                                 f'got {type(total_steps)}')
- self.total_steps = total_steps
- # validate pct_start
- if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float):
- raise ValueError('expected float between 0 and 1 pct_start, but '
- f'got {pct_start}')
- self.pct_start = pct_start
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
- self.div_factor = div_factor
- self.final_div_factor = final_div_factor
- self.three_phase = three_phase
- self.lr_phases = [] # init lr_phases
- super(OneCycleLrUpdaterHook, self).__init__(**kwargs)
-
- def before_run(self, runner):
- if hasattr(self, 'total_steps'):
- total_steps = self.total_steps
- else:
- total_steps = runner.max_iters
- if total_steps < runner.max_iters:
- raise ValueError(
- 'The total steps must be greater than or equal to max '
- f'iterations {runner.max_iters} of runner, but total steps '
- f'is {total_steps}.')
-
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- _max_lr = format_param(k, optim, self._max_lr)
- self.base_lr[k] = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(optim.param_groups, self.base_lr[k]):
- group.setdefault('initial_lr', lr)
- else:
- k = type(runner.optimizer).__name__
- _max_lr = format_param(k, runner.optimizer, self._max_lr)
- self.base_lr = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(runner.optimizer.param_groups, self.base_lr):
- group.setdefault('initial_lr', lr)
-
- if self.three_phase:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append([
- float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1
- ])
- self.lr_phases.append(
- [total_steps - 1, 1, 1 / self.final_div_factor])
- else:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append(
- [total_steps - 1, self.div_factor, 1 / self.final_div_factor])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- start_iter = 0
- for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases):
- if curr_iter <= end_iter:
- pct = (curr_iter - start_iter) / (end_iter - start_iter)
- lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr,
- pct)
- break
- start_iter = end_iter
- return lr
-
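# Illustrative arithmetic for the OneCycle boundaries documented above
# (example values, not taken from any config in this repository):
max_lr, div_factor, final_div_factor = 0.01, 25, 1e4
initial_lr = max_lr / div_factor           # 4e-4: LR at the start of the cycle
min_lr = initial_lr / final_div_factor     # 4e-8: LR at the end of the cycle
# so the LR ramps 4e-4 -> 1e-2 over the first pct_start fraction of total_steps
# and then anneals 1e-2 -> 4e-8 over the remainder.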
-
-def annealing_cos(start, end, factor, weight=1):
- """Calculate annealing cos learning rate.
-
- Cosine anneal from `weight * start + (1 - weight) * end` to `end` as
- percentage goes from 0.0 to 1.0.
-
- Args:
- start (float): The starting learning rate of the cosine annealing.
-        end (float): The ending learning rate of the cosine annealing.
- factor (float): The coefficient of `pi` when calculating the current
- percentage. Range from 0.0 to 1.0.
- weight (float, optional): The combination factor of `start` and `end`
- when calculating the actual starting learning rate. Default to 1.
- """
- cos_out = cos(pi * factor) + 1
- return end + 0.5 * weight * (start - end) * cos_out
-
-
-def annealing_linear(start, end, factor):
- """Calculate annealing linear learning rate.
-
- Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0.
-
- Args:
- start (float): The starting learning rate of the linear annealing.
-        end (float): The ending learning rate of the linear annealing.
-        factor (float): The current annealing progress as a percentage.
-            Range from 0.0 to 1.0.
- """
- return start + (end - start) * factor
-
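# Quick illustrative checks (not part of the original file) for the two
# annealing helpers above: both move from `start` to `end` as `factor` goes
# from 0.0 to 1.0; the cosine version is flat at both endpoints.
assert abs(annealing_linear(1.0, 0.0, 0.5) - 0.5) < 1e-12
assert abs(annealing_cos(1.0, 0.0, 0.0) - 1.0) < 1e-12  # factor=0 -> start
assert abs(annealing_cos(1.0, 0.0, 1.0) - 0.0) < 1e-12  # factor=1 -> end
assert abs(annealing_cos(1.0, 0.0, 0.5) - 0.5) < 1e-12  # halfway point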
-
-def format_param(name, optim, param):
- if isinstance(param, numbers.Number):
- return [param] * len(optim.param_groups)
- elif isinstance(param, (list, tuple)): # multi param groups
- if len(param) != len(optim.param_groups):
- raise ValueError(f'expected {len(optim.param_groups)} '
- f'values for {name}, got {len(param)}')
- return param
- else: # multi optimizers
- if name not in param:
- raise KeyError(f'{name} is not found in {param.keys()}')
- return param[name]
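The warmup logic in LrUpdaterHook above scales the regular LR during the first warmup_iters iterations. The sketch below re-derives the three warmup modes for a single LR value, outside the hook/runner machinery; it is illustrative only and assumes the same constant/linear/exp semantics as get_warmup_lr.

def warmup_lr(regular_lr, cur_iter, warmup='linear', warmup_iters=500, warmup_ratio=0.1):
    # mirrors LrUpdaterHook.get_warmup_lr for one value
    if warmup == 'constant':
        return regular_lr * warmup_ratio
    if warmup == 'linear':
        k = (1 - cur_iter / warmup_iters) * (1 - warmup_ratio)
        return regular_lr * (1 - k)
    if warmup == 'exp':
        return regular_lr * warmup_ratio ** (1 - cur_iter / warmup_iters)
    raise ValueError(f'unsupported warmup type: {warmup}')

# linear warmup ramps from warmup_ratio * lr at iteration 0 to the full lr at warmup_iters
assert abs(warmup_lr(0.01, 0) - 0.001) < 1e-12
assert abs(warmup_lr(0.01, 500) - 0.01) < 1e-12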
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py
deleted file mode 100644
index 1c752029b7fc64ec375a55182e5342c9eb48bb33..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from detectron2.modeling.meta_arch.fcos import FCOS, FCOSHead
-
-from .retinanet import model
-
-model._target_ = FCOS
-
-del model.anchor_generator
-del model.box2box_transform
-del model.anchor_matcher
-del model.input_format
-
-# Use P5 instead of C5 to compute P6/P7
-# (Sec 2.2 of https://arxiv.org/abs/2006.09214)
-model.backbone.top_block.in_feature = "p5"
-model.backbone.top_block.in_channels = 256
-
-# New score threshold determined based on sqrt(cls_score * centerness)
-model.test_score_thresh = 0.2
-model.test_nms_thresh = 0.6
-
-model.head._target_ = FCOSHead
-del model.head.num_anchors
-model.head.norm = "GN"
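The lazy config above rewrites the shared RetinaNet model config into an FCOS model. A hedged sketch of materializing it with detectron2's LazyConfig API follows; the config path is an assumption based on the file location in the diff header.

from detectron2.config import LazyConfig, instantiate

cfg = LazyConfig.load("configs/common/models/fcos.py")  # assumed relative path
fcos_model = instantiate(cfg.model)  # builds the FCOS meta-architecture configured above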
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated.h b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated.h
deleted file mode 100644
index 12aca388e47b12dafd20999f2991a9d42f4b904b..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated.h
+++ /dev/null
@@ -1,39 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-#include <ATen/ATen.h>
-
-namespace detectron2 {
-
-at::Tensor nms_rotated_cpu(
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold);
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-at::Tensor nms_rotated_cuda(
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold);
-#endif
-
-// Interface for Python
-// inline is needed to prevent multiple function definitions when this header is
-// included by different cpps
-inline at::Tensor nms_rotated(
- const at::Tensor& dets,
- const at::Tensor& scores,
- const double iou_threshold) {
- assert(dets.device().is_cuda() == scores.device().is_cuda());
- if (dets.device().is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- return nms_rotated_cuda(
- dets.contiguous(), scores.contiguous(), iou_threshold);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
-
- return nms_rotated_cpu(dets.contiguous(), scores.contiguous(), iou_threshold);
-}
-
-} // namespace detectron2
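The header above only declares the CPU/CUDA kernels; from Python, the operator is normally reached through detectron2's layers package. A hedged usage sketch follows; the (x_ctr, y_ctr, width, height, angle_degrees) box layout is an assumption based on detectron2's rotated-box convention.

import torch
from detectron2.layers import nms_rotated  # Python wrapper over the kernels declared above

# two heavily overlapping rotated boxes and one far-away box (illustrative values)
boxes = torch.tensor([[50., 50., 20., 10., 0.],
                      [50., 50., 20., 10., 5.],
                      [150., 150., 20., 10., 0.]])
scores = torch.tensor([0.9, 0.8, 0.7])
keep = nms_rotated(boxes, scores, 0.5)  # indices of the boxes that survive NMS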
diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/lrd.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/lrd.py
deleted file mode 100644
index b476e477f642adfb93e5a71b19b0877f6b3eda92..0000000000000000000000000000000000000000
--- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/lrd.py
+++ /dev/null
@@ -1,112 +0,0 @@
-#!/usr/local/bin/python3
-
-# avenir-python: Machine Learning
-# Author: Pranab Ghosh
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you
-# may not use this file except in compliance with the License. You may
-# obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied. See the License for the specific language governing
-# permissions and limitations under the License.
-
-# Package imports
-import os
-import sys
-import matplotlib.pyplot as plt
-import numpy as np
-import sklearn as sk
-import sklearn.linear_model
-import matplotlib
-import random
-import jprops
-from sklearn.linear_model import LogisticRegression
-from random import randint
-sys.path.append(os.path.abspath("../lib"))
-from util import *
-from mlutil import *
-from pasearch import *
-from bacl import *
-
-# logistic regression classification
-class LogisticRegressionDiscriminant(BaseClassifier):
-
- def __init__(self, configFile):
- defValues = {}
- defValues["common.mode"] = ("train", None)
- defValues["common.model.directory"] = ("model", None)
- defValues["common.model.file"] = (None, None)
- defValues["common.scale.file.path"] = (None, "missing scale file path")
- defValues["common.preprocessing"] = (None, None)
- defValues["common.verbose"] = (False, None)
- defValues["train.data.file"] = (None, "missing training data file")
- defValues["train.data.fields"] = (None, "missing training data field ordinals")
- defValues["train.data.feature.fields"] = (None, "missing training data feature field ordinals")
- defValues["train.data.class.field"] = (None, "missing class field ordinal")
- defValues["train.validation"] = ("kfold", None)
- defValues["train.num.folds"] = (5, None)
- defValues["train.penalty"] = ("l2", None)
- defValues["train.dual"] = (False, None)
- defValues["train.tolerance"] = (0.0001, None)
- defValues["train.regularization"] = (1.0, None)
- defValues["train.fit.intercept"] = (True, None)
- defValues["train.intercept.scaling"] = (1.0, None)
- defValues["train.class.weight"] = (None, None)
- defValues["train.random.state"] = (None, None)
- defValues["train.solver"] = ("liblinear", None)
- defValues["train.max.iter"] = (100, None)
- defValues["train.multi.class"] = ("ovr", None)
- defValues["train.verbose"] = (0, None)
- defValues["train.warm.start"] = (False, None)
- defValues["train.num.jobs"] = (None, None)
- defValues["train.l1.ratio"] = (None, None)
- defValues["train.success.criterion"] = ("error", None)
- defValues["train.model.save"] = (False, None)
- defValues["train.score.method"] = ("accuracy", None)
- defValues["train.search.param.strategy"] = (None, None)
- defValues["train.search.params"] = (None, None)
- defValues["predict.data.file"] = (None, None)
- defValues["predict.data.fields"] = (None, "missing data field ordinals")
- defValues["predict.data.feature.fields"] = (None, "missing data feature field ordinals")
- defValues["predict.use.saved.model"] = (False, None)
- defValues["validate.data.file"] = (None, "missing validation data file")
- defValues["validate.data.fields"] = (None, "missing validation data field ordinals")
- defValues["validate.data.feature.fields"] = (None, "missing validation data feature field ordinals")
- defValues["validate.data.class.field"] = (None, "missing class field ordinal")
- defValues["validate.use.saved.model"] = (False, None)
- defValues["validate.score.method"] = ("accuracy", None)
-
- super(LogisticRegressionDiscriminant, self).__init__(configFile, defValues, __name__)
-
- # builds model object
- def buildModel(self):
- print ("...building logistic regression model")
- penalty = self.config.getStringConfig("train.penalty")[0]
- dual = self.config.getBooleanConfig("train.dual")[0]
- tol = self.config.getFloatConfig("train.tolerance")[0]
- c = self.config.getFloatConfig("train.regularization")[0]
- fitIntercept = self.config.getBooleanConfig("train.fit.intercept")[0]
- interceptScaling = self.config.getFloatConfig("train.intercept.scaling")[0]
- classWeight = self.config.getStringConfig("train.class.weight")[0]
- randomState = self.config.getIntConfig("train.random.state")[0]
- solver = self.config.getStringConfig("train.solver")[0]
- maxIter = self.config.getIntConfig("train.max.iter")[0]
- multiClass = self.config.getStringConfig("train.multi.class")[0]
- verbos = self.config.getIntConfig("train.verbose")[0]
- warmStart = self.config.getBooleanConfig("train.warm.start")[0]
- nJobs = self.config.getIntConfig("train.num.jobs")[0]
- l1Ratio = self.config.getFloatConfig("train.l1.ratio")[0]
-
- self.classifier = LogisticRegression(penalty=penalty, dual=dual, tol=tol, C=c, fit_intercept=fitIntercept,\
- intercept_scaling=interceptScaling, class_weight=classWeight, random_state=randomState, solver=solver,\
- max_iter=maxIter, multi_class=multiClass, verbose=verbos, warm_start=warmStart, n_jobs=nJobs, l1_ratio=l1Ratio)
-
- return self.classifier
-
-
-
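The class above is a thin configuration wrapper around scikit-learn's LogisticRegression. A minimal standalone sketch with the same defaults, independent of the avenir config machinery and using synthetic data, might look like this:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression(penalty="l2", tol=1e-4, C=1.0, solver="liblinear", max_iter=100)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")  # mirrors train.num.folds = 5
print(scores.mean())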
diff --git a/spaces/UglyLemon/LEMONTR/app.py b/spaces/UglyLemon/LEMONTR/app.py
deleted file mode 100644
index 5840c4a717bf730cfd0948402c81feb0bfed8c2d..0000000000000000000000000000000000000000
--- a/spaces/UglyLemon/LEMONTR/app.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import streamlit as st
-from transformers import pipeline
-
-pipe = pipeline('sentiment-analysis')
-text = st.text_area('enter some text!')
-
-if text:
- out = pipe(text)
- st.json(out)
\ No newline at end of file
diff --git a/spaces/VIPLab/Track-Anything/app_test.py b/spaces/VIPLab/Track-Anything/app_test.py
deleted file mode 100644
index cd10fe77cec552dffba84c6516ec33a6622b6c38..0000000000000000000000000000000000000000
--- a/spaces/VIPLab/Track-Anything/app_test.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# import gradio as gr
-
-# def update_iframe(slider_value):
-# return f'''
-#
-#
-# '''
-
-# iface = gr.Interface(
-# fn=update_iframe,
-# inputs=gr.inputs.Slider(minimum=0, maximum=100, step=1, default=50),
-# outputs=gr.outputs.HTML(),
-# allow_flagging=False,
-# )
-
-# iface.launch(server_name='0.0.0.0', server_port=12212)
-
-import gradio as gr
-
-
-def change_mask(drop):
- return gr.update(choices=["hello", "kitty"])
-
-with gr.Blocks() as iface:
- drop = gr.Dropdown(
- choices=["cat", "dog", "bird"], label="Animal", info="Will add more animals later!"
- )
- radio = gr.Radio(["park", "zoo", "road"], label="Location", info="Where did they go?")
- multi_drop = gr.Dropdown(
- ["ran", "swam", "ate", "slept"], value=["swam", "slept"], multiselect=True, label="Activity", info="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed auctor, nisl eget ultricies aliquam, nunc nisl aliquet nunc, eget aliquam nisl nunc vel nisl."
- )
-
- multi_drop.change(
- fn=change_mask,
- inputs = multi_drop,
- outputs=multi_drop
- )
-
-iface.launch(server_name='0.0.0.0', server_port=1223)
\ No newline at end of file
diff --git a/spaces/WatchOutForMike/Character/app.py b/spaces/WatchOutForMike/Character/app.py
deleted file mode 100644
index c04b6d45f84686618444749797188ca31fcb9882..0000000000000000000000000000000000000000
--- a/spaces/WatchOutForMike/Character/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/prompthero/openjourney-v4").launch()
\ No newline at end of file
diff --git a/spaces/Xyan-shuo2/Shoshoo/README.md b/spaces/Xyan-shuo2/Shoshoo/README.md
deleted file mode 100644
index f72b01c4c37e1c4ac0585d7ea6e2235f5fde5839..0000000000000000000000000000000000000000
--- a/spaces/Xyan-shuo2/Shoshoo/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Shoshoo
-emoji: 🌍
-colorFrom: green
-colorTo: red
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/XzJosh/Taffy-Bert-VITS2/mel_processing.py b/spaces/XzJosh/Taffy-Bert-VITS2/mel_processing.py
deleted file mode 100644
index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Taffy-Bert-VITS2/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference/infer_tool_grad.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference/infer_tool_grad.py
deleted file mode 100644
index 39359a82e5cc288c7c3f41e58c7c0c954581b14f..0000000000000000000000000000000000000000
--- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference/infer_tool_grad.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import hashlib
-import json
-import logging
-import os
-import time
-from pathlib import Path
-import io
-import librosa
-import maad
-import numpy as np
-from inference import slicer
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-def resize2d_f0(x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)),
- source)
- res = np.nan_to_num(target)
- return res
-
-def get_f0(x, p_len,f0_up_key=0):
-
- time_step = 160 / 16000 * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
- f0 = parselmouth.Sound(x, 16000).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size=(p_len - len(f0) + 1) // 2
- if(pad_size>0 or p_len - len(f0) - pad_size>0):
- f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant')
-
- f0 *= pow(2, f0_up_key / 12)
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-    f0_coarse = np.rint(f0_mel).astype(int)
- return f0_coarse, f0
-
-def clean_pitch(input_pitch):
- num_nan = np.sum(input_pitch == 1)
- if num_nan / len(input_pitch) > 0.9:
- input_pitch[input_pitch != 1] = 1
- return input_pitch
-
-
-def plt_pitch(input_pitch):
- input_pitch = input_pitch.astype(float)
- input_pitch[input_pitch == 1] = np.nan
- return input_pitch
-
-
-def f0_to_pitch(ff):
- f0_pitch = 69 + 12 * np.log2(ff / 440)
- return f0_pitch
-
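# Worked example (not part of the original file) for f0_to_pitch above:
# 440 Hz is concert A4, i.e. MIDI note 69, and each doubling of frequency adds
# one octave (12 semitones), so f0_to_pitch(440.0) == 69.0 and
# f0_to_pitch(880.0) == 81.0.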
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-
-class VitsSvc(object):
- def __init__(self):
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.SVCVITS = None
- self.hps = None
- self.speakers = None
- self.hubert_soft = hubert_model.hubert_soft("hubert/model.pt")
-
- def set_device(self, device):
- self.device = torch.device(device)
- self.hubert_soft.to(self.device)
- if self.SVCVITS != None:
- self.SVCVITS.to(self.device)
-
- def loadCheckpoint(self, path):
- self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json")
- self.SVCVITS = SynthesizerTrn(
- self.hps.data.filter_length // 2 + 1,
- self.hps.train.segment_size // self.hps.data.hop_length,
- **self.hps.model)
- _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None)
- _ = self.SVCVITS.eval().to(self.device)
- self.speakers = self.hps.spk
-
- def get_units(self, source, sr):
- source = source.unsqueeze(0).to(self.device)
- with torch.inference_mode():
- units = self.hubert_soft.units(source)
- return units
-
-
- def get_unit_pitch(self, in_path, tran):
- source, sr = torchaudio.load(in_path)
- source = torchaudio.functional.resample(source, sr, 16000)
- if len(source.shape) == 2 and source.shape[1] >= 2:
- source = torch.mean(source, dim=0).unsqueeze(0)
- soft = self.get_units(source, sr).squeeze(0).cpu().numpy()
- f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran)
- return soft, f0
-
- def infer(self, speaker_id, tran, raw_path):
- speaker_id = self.speakers[speaker_id]
- sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0)
- soft, pitch = self.get_unit_pitch(raw_path, tran)
- f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device)
- stn_tst = torch.FloatTensor(soft)
- with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0).to(self.device)
- x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2)
- audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float()
- return audio, audio.shape[-1]
-
- def inference(self,srcaudio,chara,tran,slice_db):
- sampling_rate, audio = srcaudio
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- soundfile.write("tmpwav.wav", audio, 16000, format="wav")
- chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks)
- audio = []
- for (slice_tag, data) in audio_data:
- length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = self.infer(chara, tran, raw_path)
- _audio = out_audio.cpu().numpy()
- audio.extend(list(_audio))
- audio = (np.array(audio) * 32768.0).astype('int16')
- return (self.hps.data.sampling_rate,audio)
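The start of `inference` above normalizes the input to float32, down-mixes to mono and resamples to 16 kHz before slicing. A standalone sketch of that preprocessing, with a hypothetical input file name, could look like this:

import librosa
import numpy as np
import soundfile

audio, sr = soundfile.read("input.wav", dtype="int16")        # hypothetical input file
audio = (audio / np.iinfo(np.int16).max).astype(np.float32)   # int16 -> float32 in [-1, 1]
if audio.ndim > 1:
    audio = librosa.to_mono(audio.T)                          # (samples, ch) -> mono
if sr != 16000:
    audio = librosa.resample(audio, orig_sr=sr, target_sr=16000)
soundfile.write("tmpwav.wav", audio, 16000, format="wav")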
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/checkpoint/catalog.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/checkpoint/catalog.py
deleted file mode 100644
index 9a85736754a0de4550df96c22f38fc515bd02d71..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/checkpoint/catalog.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-
-from detectron2.utils.file_io import PathHandler, PathManager
-
-
-class ModelCatalog(object):
- """
- Store mappings from names to third-party models.
- """
-
- S3_C2_DETECTRON_PREFIX = "https://dl.fbaipublicfiles.com/detectron"
-
- # MSRA models have STRIDE_IN_1X1=True. False otherwise.
- # NOTE: all BN models here have fused BN into an affine layer.
- # As a result, you should only load them to a model with "FrozenBN".
- # Loading them to a model with regular BN or SyncBN is wrong.
- # Even when loaded to FrozenBN, it is still different from affine by an epsilon,
- # which should be negligible for training.
-    # NOTE: all models here use PIXEL_STD=[1,1,1]
- # NOTE: Most of the BN models here are no longer used. We use the
- # re-converted pre-trained models under detectron2 model zoo instead.
- C2_IMAGENET_MODELS = {
- "MSRA/R-50": "ImageNetPretrained/MSRA/R-50.pkl",
- "MSRA/R-101": "ImageNetPretrained/MSRA/R-101.pkl",
- "FAIR/R-50-GN": "ImageNetPretrained/47261647/R-50-GN.pkl",
- "FAIR/R-101-GN": "ImageNetPretrained/47592356/R-101-GN.pkl",
- "FAIR/X-101-32x8d": "ImageNetPretrained/20171220/X-101-32x8d.pkl",
- "FAIR/X-101-64x4d": "ImageNetPretrained/FBResNeXt/X-101-64x4d.pkl",
- "FAIR/X-152-32x8d-IN5k": "ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl",
- }
-
- C2_DETECTRON_PATH_FORMAT = (
- "{prefix}/{url}/output/train/{dataset}/{type}/model_final.pkl" # noqa B950
- )
-
- C2_DATASET_COCO = "coco_2014_train%3Acoco_2014_valminusminival"
- C2_DATASET_COCO_KEYPOINTS = "keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival"
-
- # format: {model_name} -> part of the url
- C2_DETECTRON_MODELS = {
- "35857197/e2e_faster_rcnn_R-50-C4_1x": "35857197/12_2017_baselines/e2e_faster_rcnn_R-50-C4_1x.yaml.01_33_49.iAX0mXvW", # noqa B950
- "35857345/e2e_faster_rcnn_R-50-FPN_1x": "35857345/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_1x.yaml.01_36_30.cUF7QR7I", # noqa B950
- "35857890/e2e_faster_rcnn_R-101-FPN_1x": "35857890/12_2017_baselines/e2e_faster_rcnn_R-101-FPN_1x.yaml.01_38_50.sNxI7sX7", # noqa B950
- "36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x": "36761737/12_2017_baselines/e2e_faster_rcnn_X-101-32x8d-FPN_1x.yaml.06_31_39.5MIHi1fZ", # noqa B950
- "35858791/e2e_mask_rcnn_R-50-C4_1x": "35858791/12_2017_baselines/e2e_mask_rcnn_R-50-C4_1x.yaml.01_45_57.ZgkA7hPB", # noqa B950
- "35858933/e2e_mask_rcnn_R-50-FPN_1x": "35858933/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml.01_48_14.DzEQe4wC", # noqa B950
- "35861795/e2e_mask_rcnn_R-101-FPN_1x": "35861795/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_1x.yaml.02_31_37.KqyEK4tT", # noqa B950
- "36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x": "36761843/12_2017_baselines/e2e_mask_rcnn_X-101-32x8d-FPN_1x.yaml.06_35_59.RZotkLKI", # noqa B950
- "48616381/e2e_mask_rcnn_R-50-FPN_2x_gn": "GN/48616381/04_2018_gn_baselines/e2e_mask_rcnn_R-50-FPN_2x_gn_0416.13_23_38.bTlTI97Q", # noqa B950
- "37697547/e2e_keypoint_rcnn_R-50-FPN_1x": "37697547/12_2017_baselines/e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao", # noqa B950
- "35998355/rpn_R-50-C4_1x": "35998355/12_2017_baselines/rpn_R-50-C4_1x.yaml.08_00_43.njH5oD9L", # noqa B950
- "35998814/rpn_R-50-FPN_1x": "35998814/12_2017_baselines/rpn_R-50-FPN_1x.yaml.08_06_03.Axg0r179", # noqa B950
- "36225147/fast_R-50-FPN_1x": "36225147/12_2017_baselines/fast_rcnn_R-50-FPN_1x.yaml.08_39_09.L3obSdQ2", # noqa B950
- }
-
- @staticmethod
- def get(name):
- if name.startswith("Caffe2Detectron/COCO"):
- return ModelCatalog._get_c2_detectron_baseline(name)
- if name.startswith("ImageNetPretrained/"):
- return ModelCatalog._get_c2_imagenet_pretrained(name)
- raise RuntimeError("model not present in the catalog: {}".format(name))
-
- @staticmethod
- def _get_c2_imagenet_pretrained(name):
- prefix = ModelCatalog.S3_C2_DETECTRON_PREFIX
- name = name[len("ImageNetPretrained/") :]
- name = ModelCatalog.C2_IMAGENET_MODELS[name]
- url = "/".join([prefix, name])
- return url
-
- @staticmethod
- def _get_c2_detectron_baseline(name):
- name = name[len("Caffe2Detectron/COCO/") :]
- url = ModelCatalog.C2_DETECTRON_MODELS[name]
- if "keypoint_rcnn" in name:
- dataset = ModelCatalog.C2_DATASET_COCO_KEYPOINTS
- else:
- dataset = ModelCatalog.C2_DATASET_COCO
-
- if "35998355/rpn_R-50-C4_1x" in name:
- # this one model is somehow different from others ..
- type = "rpn"
- else:
- type = "generalized_rcnn"
-
- # Detectron C2 models are stored in the structure defined in `C2_DETECTRON_PATH_FORMAT`.
- url = ModelCatalog.C2_DETECTRON_PATH_FORMAT.format(
- prefix=ModelCatalog.S3_C2_DETECTRON_PREFIX, url=url, type=type, dataset=dataset
- )
- return url
-
-
-class ModelCatalogHandler(PathHandler):
- """
- Resolve URL like catalog://.
- """
-
- PREFIX = "catalog://"
-
- def _get_supported_prefixes(self):
- return [self.PREFIX]
-
- def _get_local_path(self, path, **kwargs):
- logger = logging.getLogger(__name__)
- catalog_path = ModelCatalog.get(path[len(self.PREFIX) :])
- logger.info("Catalog entry {} points to {}".format(path, catalog_path))
- return PathManager.get_local_path(catalog_path, **kwargs)
-
- def _open(self, path, mode="r", **kwargs):
- return PathManager.open(self._get_local_path(path), mode, **kwargs)
-
-
-PathManager.register_handler(ModelCatalogHandler())
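With the handler registered above, catalog:// URIs resolve through PathManager like any other path. A hedged example using one of the keys from C2_IMAGENET_MODELS (whether the remote file is still hosted is not verified here):

from detectron2.utils.file_io import PathManager

# expands to the S3 URL for the MSRA R-50 ImageNet weights and returns a local copy
local_path = PathManager.get_local_path("catalog://ImageNetPretrained/MSRA/R-50")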
diff --git a/spaces/YuDou/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/YuDou/ChuanhuChatGPT/chatgpt - macOS.command
deleted file mode 100644
index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000
--- a/spaces/YuDou/ChuanhuChatGPT/chatgpt - macOS.command
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-echo Opening ChuanhuChatGPT...
-cd "$(dirname "${BASH_SOURCE[0]}")"
-nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 &
-sleep 5
-open http://127.0.0.1:7860
-echo "Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). To stop it, use \"pkill -f 'ChuanhuChatbot'\" in the terminal."
\ No newline at end of file
diff --git a/spaces/Yuliang/ECON/lib/pymafx/utils/data_loader.py b/spaces/Yuliang/ECON/lib/pymafx/utils/data_loader.py
deleted file mode 100644
index 3d109f82b3473242a9fb9442037c47471fd0f7d2..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/pymafx/utils/data_loader.py
+++ /dev/null
@@ -1,78 +0,0 @@
-from __future__ import division
-
-import torch
-from torch.utils.data import DataLoader
-from torch.utils.data.sampler import Sampler
-
-
-class RandomSampler(Sampler):
- def __init__(self, data_source, checkpoint):
- self.data_source = data_source
- if checkpoint is not None and checkpoint['dataset_perm'] is not None:
- self.dataset_perm = checkpoint['dataset_perm']
- self.perm = self.dataset_perm[checkpoint['batch_size'] * checkpoint['batch_idx']:]
- else:
- self.dataset_perm = torch.randperm(len(self.data_source)).tolist()
- self.perm = torch.randperm(len(self.data_source)).tolist()
-
- def __iter__(self):
- return iter(self.perm)
-
- def __len__(self):
- return len(self.perm)
-
-
-class SequentialSampler(Sampler):
- def __init__(self, data_source, checkpoint):
- self.data_source = data_source
- if checkpoint is not None and checkpoint['dataset_perm'] is not None:
- self.dataset_perm = checkpoint['dataset_perm']
- self.perm = self.dataset_perm[checkpoint['batch_size'] * checkpoint['batch_idx']:]
- else:
- self.dataset_perm = list(range(len(self.data_source)))
- self.perm = self.dataset_perm
-
- def __iter__(self):
- return iter(self.perm)
-
- def __len__(self):
- return len(self.perm)
-
-
-class CheckpointDataLoader(DataLoader):
- """
- Extends torch.utils.data.DataLoader to handle resuming training from an arbitrary point within an epoch.
- """
- def __init__(
- self,
- dataset,
- checkpoint=None,
- batch_size=1,
- shuffle=False,
- num_workers=0,
- pin_memory=False,
- drop_last=True,
- timeout=0,
- worker_init_fn=None
- ):
-
- if shuffle:
- sampler = RandomSampler(dataset, checkpoint)
- else:
- sampler = SequentialSampler(dataset, checkpoint)
- if checkpoint is not None:
- self.checkpoint_batch_idx = checkpoint['batch_idx']
- else:
- self.checkpoint_batch_idx = 0
-
- super(CheckpointDataLoader, self).__init__(
- dataset,
- sampler=sampler,
- shuffle=False,
- batch_size=batch_size,
- num_workers=num_workers,
- drop_last=drop_last,
- pin_memory=pin_memory,
- timeout=timeout,
-            worker_init_fn=worker_init_fn
- )
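A sketch of resuming mid-epoch with CheckpointDataLoader above; the checkpoint dict keys follow what the samplers read (dataset_perm, batch_size, batch_idx), and the dataset is a stand-in.

import torch
from torch.utils.data import TensorDataset

dataset = TensorDataset(torch.arange(100).float())
# hypothetical checkpoint written after 3 batches of size 8 within an epoch
checkpoint = {
    'dataset_perm': torch.randperm(len(dataset)).tolist(),
    'batch_size': 8,
    'batch_idx': 3,
}
loader = CheckpointDataLoader(dataset, checkpoint=checkpoint, batch_size=8, shuffle=True)
# iteration continues from the 25th sample of the stored permutation
for batch in loader:
    break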
diff --git a/spaces/ZJunTvT/ZJunChat/modules/overwrites.py b/spaces/ZJunTvT/ZJunChat/modules/overwrites.py
deleted file mode 100644
index 035a4a52722d66ee28af1c05231ad1cea3339ef5..0000000000000000000000000000000000000000
--- a/spaces/ZJunTvT/ZJunChat/modules/overwrites.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from __future__ import annotations
-import logging
-
-from llama_index import Prompt
-from typing import List, Tuple
-import mdtex2html
-from gradio_client import utils as client_utils
-
-from modules.presets import *
-from modules.llama_func import *
-
-
-def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]:
- logging.debug("Compacting text chunks...🚀🚀🚀")
- combined_str = [c.strip() for c in text_chunks if c.strip()]
- combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)]
- combined_str = "\n\n".join(combined_str)
- # resplit based on self.max_chunk_overlap
- text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1)
- return text_splitter.split_text(combined_str)
-
-
-def postprocess(
- self,
- y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple],
- ) -> List[List[str | Dict | None]]:
- """
- Parameters:
- y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed.
- Returns:
- List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed.
- """
- if y is None:
- return []
- processed_messages = []
- for message_pair in y:
- assert isinstance(
- message_pair, (tuple, list)
- ), f"Expected a list of lists or list of tuples. Received: {message_pair}"
- assert (
- len(message_pair) == 2
- ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}"
-
- processed_messages.append(
- [
- self._postprocess_chat_messages(message_pair[0], "user"),
- self._postprocess_chat_messages(message_pair[1], "bot"),
- ]
- )
- return processed_messages
-
-def postprocess_chat_messages(
- self, chat_message: str | Tuple | List | None, message_type: str
- ) -> str | Dict | None:
- if chat_message is None:
- return None
- elif isinstance(chat_message, (tuple, list)):
- filepath = chat_message[0]
- mime_type = client_utils.get_mimetype(filepath)
- filepath = self.make_temp_copy_if_needed(filepath)
- return {
- "name": filepath,
- "mime_type": mime_type,
- "alt_text": chat_message[1] if len(chat_message) > 1 else None,
- "data": None, # These last two fields are filled in by the frontend
- "is_file": True,
- }
- elif isinstance(chat_message, str):
- if message_type == "bot":
- if not detect_converted_mark(chat_message):
- chat_message = convert_mdtext(chat_message)
- elif message_type == "user":
- if not detect_converted_mark(chat_message):
- chat_message = convert_asis(chat_message)
- return chat_message
- else:
- raise ValueError(f"Invalid message for Chatbot component: {chat_message}")
-
-with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2:
- customJS = f.read()
- kelpyCodos = f2.read()
-
-def reload_javascript():
- print("Reloading javascript...")
-        js = f'<script>{customJS}</script><script>{kelpyCodos}</script>'
- def template_response(*args, **kwargs):
- res = GradioTemplateResponseOriginal(*args, **kwargs)
- res.body = res.body.replace(b'