diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Eric-Helms-The-Muscle-And-Strength-Pyramid-Nutrition-V101pdf-CRACKED.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Eric-Helms-The-Muscle-And-Strength-Pyramid-Nutrition-V101pdf-CRACKED.md
deleted file mode 100644
index be54a7e480792062f4af93dd9ade2a8b3f948ff7..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Eric-Helms-The-Muscle-And-Strength-Pyramid-Nutrition-V101pdf-CRACKED.md
+++ /dev/null
@@ -1,114 +0,0 @@
-## Eric Helms The Muscle And Strength Pyramid Nutrition V101pdf
-
-
-
-
-
- ![Eric Helms The Muscle And Strength Pyramid Nutrition V101pdf \[CRACKED\]](https://zarrinholeh.com/wp-content/uploads/2018/10/price-towel-01.jpg)
-
-
-
-
-
-**Download === [https://www.google.com/url?q=https%3A%2F%2Ftlniurl.com%2F2txKNe&sa=D&sntz=1&usg=AOvVaw00kewj9WV-3GzZaeyjBmp-](https://www.google.com/url?q=https%3A%2F%2Ftlniurl.com%2F2txKNe&sa=D&sntz=1&usg=AOvVaw00kewj9WV-3GzZaeyjBmp-)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Optimize Your Nutrition for Muscle and Strength
-
-
-
-If you are looking for a comprehensive guide on how to set up your nutrition for optimal muscle and strength gains, you might want to check out **The Muscle and Strength Pyramid: Nutrition** by Eric Helms, Andy Morgan and Andrea Valdez. This book is based on the concept of understanding priorities and context, so you can take all the pieces of the puzzle and fit them together into an actionable plan.
-
-
-
-In this book, you will learn:
-
-
-
-- Which factors matter most for nutrition success and how to rank them in order of importance.
-
-- How to calculate your calorie, protein, carbohydrate and fat needs based on your goals, body type and activity level (a rough illustrative calculation follows this list).
-
-- How to adjust your nutrition for different scenarios, such as bulking, cutting, maintenance, bodybuilding, powerlifting or weight class sports.
-
-- How to balance adherence, consistency and flexibility so you can live your life while progressing toward your goals.
-
-- How to apply evidence-based principles and avoid common myths and misconceptions about nutrition.
-
-
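For a concrete sense of what such a calculation looks like, here is a rough, generic sketch of splitting a calorie target into macros. The assumed numbers (a 75 kg lifter at 2,600 kcal, 1.8 g/kg protein, 25% of calories from fat) are illustrative defaults, not the book's actual recommendations:

```python
# Rough illustration only -- not the book's method or numbers.
# Assumed example values: 75 kg lifter, ~2,600 kcal, 1.8 g/kg protein, 25% of calories from fat.

def estimate_macros(bodyweight_kg: float, calories: float,
                    protein_g_per_kg: float = 1.8, fat_pct: float = 0.25):
    """Split a calorie target into protein, fat and carbohydrate grams."""
    protein_g = protein_g_per_kg * bodyweight_kg          # protein: 4 kcal per gram
    fat_g = (calories * fat_pct) / 9                      # fat: 9 kcal per gram
    carb_kcal = calories - protein_g * 4 - fat_g * 9      # remainder goes to carbs
    carb_g = carb_kcal / 4                                # carbs: 4 kcal per gram
    return round(protein_g), round(fat_g), round(carb_g)

protein, fat, carbs = estimate_macros(bodyweight_kg=75, calories=2600)
print(f"Protein: {protein} g, Fat: {fat} g, Carbs: {carbs} g")
```

The book itself walks through how to choose these inputs for your own goal, body type and activity level.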
-
-The book is written by experts who have both academic and practical experience in the field of nutrition and fitness. Eric Helms is a researcher, coach and natural bodybuilder who has helped hundreds of clients achieve their goals. Andy Morgan is a writer and consultant who specializes in body composition change and has a unique ability to communicate complex topics in a simple way. Andrea Valdez is a lifelong athlete with a Masters in Exercise Physiology and extensive coaching experience.
-
-
-
-The book is available in paperback and PDF formats. You can find more information about the book and how to order it on the following websites:
-
-
-
-1. [Google Books](https://books.google.com/books/about/The_Muscle_and_Strength_Pyramid_Nutritio.html?id=XMawwwEACAAJ)
-
-2. [Amazon](https://www.amazon.com/Muscle-Strength-Pyramid-Nutrition/dp/1090912188)
-
-3. [Archive](https://archive.org/details/0erichelmsthemuscleandstrengthtrainingpyramidv2.0nutrion02)
-
-
-
-If you are serious about improving your nutrition for muscle and strength, this book is a must-read. It will provide you with the knowledge, tools and strategies you need to succeed.
-
-
-
-But nutrition is only one part of the equation. If you want to optimize your muscle and strength gains, you also need to train properly. That's why Eric Helms and his co-authors have also written **The Muscle and Strength Pyramid: Training**, a companion book that covers everything you need to know about designing and executing effective training programs.
-
-
-
-In this book, you will learn:
-
-
-
-- What the main principles of training for muscle and strength are and how to apply them to your own goals.
-
-- How to manipulate volume, intensity, frequency, progression, specificity and variation to optimize your training stimulus.
-
-- How to choose the best exercises, rep ranges, rest periods, tempo and technique for your needs.
-
-- How to manage fatigue, recovery, stress and adaptation to avoid overtraining and injury.
-
-- How to periodize your training for long-term progress and peak performance.
-
-
-
-The book is also based on the latest scientific evidence and practical experience of the authors. Eric Helms is not only a researcher and coach, but also a competitive natural bodybuilder and powerlifter who has achieved elite status in both sports. Andy Morgan and Andrea Valdez are also experienced coaches and athletes who have helped hundreds of clients reach their potential.
-
-
-
-The book is available in paperback and PDF formats. You can find more information about the book and how to order it on the following websites:
-
-
-
-1. [Goodreads](https://www.goodreads.com/book/show/44773627-the-muscle-and-strength-pyramid)
-
-2. [The Muscle and Strength Pyramids](https://muscleandstrengthpyramids.com/)
-
-3. [Archive](https://archive.org/details/0erichelmsthemuscleandstrengthtrainingpyramidv2.0nutrion02)
-
-
-
-If you are serious about improving your training for muscle and strength, this book is a must-read. It will provide you with the knowledge, tools and strategies you need to succeed.
-
-
-
-
-
-
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Call Of Duty Black Ops English Language Pack Download and Install Guide.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Call Of Duty Black Ops English Language Pack Download and Install Guide.md
deleted file mode 100644
index 03438c12f0a6bfe7b5ce1f91520d801eda945353..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Call Of Duty Black Ops English Language Pack Download and Install Guide.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
Call of Duty Black Ops English Language Pack: How to Change the Game Language from Any Language to English
-
Call of Duty Black Ops is one of the most popular first-person shooter games ever made. It takes you on a thrilling adventure across different locations and time periods during the Cold War. However, if you are not happy with the default language of the game, you might be wondering how to change it to English. In this article, we will show you what you need to change the language and where to find it. We will also provide you with a step-by-step guide on how to download and install the English language pack and change the game settings. By following these simple steps, you will be able to enjoy Call of Duty Black Ops in English in no time.

What is Call of Duty Black Ops?

Call of Duty Black Ops is a first-person shooter game developed by Treyarch and published by Activision in 2010. It is the seventh installment in the Call of Duty series and a sequel to Call of Duty: World at War. The game follows the missions of a covert team of special forces operatives known as SOG (Studies and Observations Group) during various conflicts in Vietnam, Cuba, Laos and Russia. The game features a single-player campaign mode, a multiplayer mode with various modes and maps, and a zombie mode with four maps. The game received critical acclaim for its story, gameplay, graphics and sound design. It also became one of the best-selling games of all time, selling more than 30 million copies worldwide.
-
Why you might want to change the language
-
Depending on where you bought or downloaded Call of Duty Black Ops, you might have a different default language for your game. For example, if you bought or downloaded it from Russia or Poland, you might have Russian or Polish as your default language. However, you might not be satisfied with this language for various reasons. For instance:
-
-
You might prefer playing games in English because it is your native language or because you are more comfortable with it.
-
You might have trouble following the dialogue or reading the subtitles in another language, especially when they go by quickly or the text is small.
-
You might experience some compatibility issues with some mods or patches that are only available in English.
-
You might want to access some content or features that are only available in English.
-
-
Whatever your reason is, changing your game language from any language to English can improve your gaming experience significantly.
-
Overview
-
What you need to change the language
-
To change your game language from any language to English, you will need two things:
-
-
The English language pack files for Call of Duty Black Ops. These are files that contain all the text and audio data for the English version of the game.
-
A tool that can extract and copy files from compressed archives. We recommend using WinRAR because it is free and easy to use.
-
-
Where to find the English language pack
-
The good news is that you can find and download the English language pack for Call of Duty Black Ops for free online. There are several sources that offer this service, but we will focus on two of them:
-
-
YouTube videos that provide links to download sites or Google Drive folders that contain the English language pack files. For example, this video by Rishi's Tech & Tutorials shows how to change your game language from any language to English using a Google Drive link that contains all the necessary files.
-
Reddit posts that provide links to download sites or Google Drive folders that contain the English language pack files. For example, this post by u/ChaosZeroX shows how to change your game language from Russian (or any other) to English using a Google Drive link that contains all the necessary files.
-
-
You can choose any source that works for you, but make sure that it is reliable and safe before downloading anything.
-
Step-by-step guide
-
How to download the English language pack
-
In this guide, we will use the YouTube video by Rishi's Tech & Tutorials as an example, but you can follow the same steps for any other source that provides the same files. To download the English language pack, you need to do the following:
-
Open the YouTube video in your browser.
Go to the description section below the video and click on the link that says "Call Of Duty English Language Pack". This will take you to a blog post by Kurivaim1.
In the blog post, scroll down until you see a button that says "Download". Click on it. This will take you to another page with a countdown timer.
Wait for the countdown timer to finish and then click on "Skip Ad". This will take you to a Google Drive folder that contains the English language pack files.
In the Google Drive folder, select all the files by clicking on one file and then pressing Ctrl+A on your keyboard.
Right-click on any file and select "Download". This will start downloading a ZIP file named "Call Of Duty-English Language Pack.zip" into your computer.
-
How to install the English language pack
-
To install the English language pack, you need to do the following (a scripted alternative for the copy steps is sketched after the list):
-
-
Locate the ZIP file named "Call Of Duty-English Language Pack.zip" in your computer's Downloads folder (or wherever you saved it).
Right-click on it and select "Extract Here" if you have WinRAR installed. This will create a new folder named "Call Of Duty-English Language Pack" with all the extracted files inside.
Open the folder named "Call Of Duty-English Language Pack" and find the folder named "Sounds".
Open another window of File Explorer and navigate to your Call of Duty Black Ops game folder. The location of this folder may vary depending on where you installed the game, but you can find it by following these steps:
-
-
Open the Battle.net client and select Call of Duty Black Ops from the left panel.
-
Click on the gear icon next to the play button and select Show in Explorer. This will open your game folder in File Explorer.
-
-
Copy the folder named "Sounds" from the "Call Of Duty-English Language Pack" folder and paste it into your game folder. If prompted, choose to replace the existing files.
-
Go back to the "Call Of Duty-English Language Pack" folder and find the folder named "Zone". Inside this folder, you will see another folder named "English".
-
Copy the folder named "English" from the "Zone" folder and paste it into your game folder. If prompted, choose to replace the existing files.
-
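If you prefer to script the copy steps above rather than dragging folders in File Explorer, a minimal Python sketch along these lines should work; the two paths are placeholders you would need to point at your own download and install folders:

```python
# Hypothetical helper mirroring the copy steps above; adjust both paths to your system.
# Requires Python 3.8+ for the dirs_exist_ok option.
import shutil
from pathlib import Path

pack_dir = Path(r"C:\Users\You\Downloads\Call Of Duty-English Language Pack")
game_dir = Path(r"C:\Games\Call of Duty Black Ops")  # your actual install folder

# Copy the "Sounds" folder and the "Zone\English" folder into the game folder,
# overwriting any files that already exist there.
shutil.copytree(pack_dir / "Sounds", game_dir / "Sounds", dirs_exist_ok=True)
shutil.copytree(pack_dir / "Zone" / "English", game_dir / "English", dirs_exist_ok=True)
```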
-
How to change the game settings
-
To change the game settings, you need to do the following (a small script that automates the text edits is sketched after the list):
-
In your game folder, find and open the file named "localization.txt" with a text editor such as Notepad.
Change the line that says `SET LANG "xx"` to `SET LANG "en"`. For example, if your default language was Russian, you would change `SET LANG "ru"` to `SET LANG "en"`. Save and close the file.
Repeat the same process for the files named "localization_mp.txt" and "localization_zm.txt". These are for the multiplayer and zombie modes respectively.
Launch the Battle.net client and select Call of Duty Black Ops from the left panel.
Click on the gear icon next to the play button and select Game Settings.
In the Game Settings window, click on the Game Language tab.
Select English from the drop-down menu and click on Done.
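If you would rather not edit the three localization files by hand, the text change is small enough to script. A minimal sketch, assuming the files sit directly in the game folder, are plain UTF-8 text, and use the SET LANG "xx" format described above:

```python
# Hypothetical automation of steps 1-3 above; back up the files before running.
import re
from pathlib import Path

game_dir = Path(r"C:\Games\Call of Duty Black Ops")  # your actual install folder

for name in ("localization.txt", "localization_mp.txt", "localization_zm.txt"):
    path = game_dir / name
    text = path.read_text(encoding="utf-8")
    # Replace whatever language code is currently set (e.g. "ru") with "en".
    text = re.sub(r'SET LANG "\w+"', 'SET LANG "en"', text)
    path.write_text(text, encoding="utf-8")
```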
-
Conclusion
-
Summary of the main points
-
In this article, we have shown you how to change your game language from any language to English for Call of Duty Black Ops. We have explained what Call of Duty Black Ops is, what you need to change the language and where to find it. We have also provided you with a step-by-step guide on how to download and install the English language pack and change the game settings. By following these simple steps, you will be able to enjoy Call of Duty Black Ops in English in no time.
-
Benefits of changing the language
-
Changing your game language from any language to English for Call of Duty Black Ops can have several benefits for your gaming experience. For example:
-
You will be able to understand and read all the text and audio data in the game, such as dialogues, subtitles, menus, instructions and tips.
You will be able to immerse yourself more in the game's story, setting and atmosphere.
You will be able to avoid some compatibility issues with some mods or patches that are only available in English.
You will be able to access some content or features that are only available in English, such as online servers, forums or guides.
-
We hope that this article has been helpful for you and that you have learned something new today. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and happy gaming!
-
Frequently Asked Questions
-
Can I change my game language back to my original language?
Yes, you can change your game language back to your original language by following the same steps as above, but using the original language pack files instead of the English ones. You can also change your game language anytime from the Game Settings window in the Battle.net client.
-
Will changing my game language affect my save files or progress?
-
No, changing your game language will not affect your save files or progress. You can continue playing from where you left off without any problems.
-
Will changing my game language affect my online multiplayer or zombie mode?
-
No, changing your game language will not affect your online multiplayer or zombie mode. You can still play with other players who have different languages without any issues.
-
Where can I find more information about Call of Duty Black Ops?
-
You can find more information about Call of Duty Black Ops on its official website, its Wikipedia page, or its Steam page. You can also check out some reviews, videos, guides or forums online for more tips and tricks.
-
Where can I find more articles like this one?
-
You can find more articles like this one on our website, where we write about various topics related to gaming, technology, entertainment and more. You can also subscribe to our newsletter or follow us on social media for more updates.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Encase 6 Download The Ultimate Guide to the Best Digital Forensics Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Encase 6 Download The Ultimate Guide to the Best Digital Forensics Software.md
deleted file mode 100644
index aeedd6ff1ad12bd4fd5a52701d4388c979e14e5a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Encase 6 Download The Ultimate Guide to the Best Digital Forensics Software.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
How to Download and Install Encase 6 on Your Computer
-
If you are looking for powerful and reliable digital forensics software, you might want to consider downloading Encase 6. Encase 6 is a popular tool that allows you to acquire, analyze and report on digital evidence from various sources, such as hard drives, mobile devices, cloud services and more.
In this article, we will show you how to download and install Encase 6 on your computer in a few simple steps. We will also give you some tips on how to use Encase 6 effectively and efficiently.
-
-
Step 1: Download Encase 6 from the Official Website
-
The first step is to visit the official website of the software provider, Guidance Software, where you can find the download link.
-
Once you are on the website, you will need to create an account or log in with your existing credentials. You will also need to provide some information about yourself and your organization, such as your name, email address, phone number and country.
-
-
After you complete the registration process, you will be able to access the download page of Encase 6. You will see a list of available versions and languages for the software. Choose the one that suits your needs and click on the download button.
-
The file size of Encase 6 is about 1.5 GB, so it might take some time to download depending on your internet speed. You can check the progress of the download on your browser or in your downloads folder.
-
-
Step 2: Install Encase 6 on Your Computer
-
Once you have downloaded Encase 6, you can proceed to install it on your computer. To do so, follow these steps:
-
-
Locate the downloaded file and double-click on it to launch the installer.
-
Accept the license agreement and click on Next.
-
Choose the destination folder for the installation and click on Next.
-
Select the components you want to install and click on Next.
-
Enter the serial number that was sent to your email address and click on Next.
-
Click on Install to begin the installation process.
-
Wait for the installation to finish and click on Finish.
-
-
Congratulations! You have successfully installed Encase 6 on your computer. You can now launch the software from your desktop or start menu.
-
-
Step 3: Use Encase 6 for Digital Forensics
-
Now that you have downloaded and installed Encase 6 on your computer, you can start using it for digital forensics purposes. Here are some of the main features and functions of Encase 6 that you should know:
-
-
Encase 6 allows you to acquire digital evidence from various sources, such as hard drives, mobile devices, cloud services and more. You can use different methods of acquisition, such as physical, logical or remote.
-
Encase 6 enables you to analyze digital evidence using various tools and techniques, such as keyword search, hash analysis, file carving, timeline analysis and more. You can also use custom scripts and plugins to extend the functionality of the software (a small, generic illustration of hash matching follows this list).
-
Encase 6 helps you to report on digital evidence using various formats and templates, such as HTML, PDF, XML and more. You can also create bookmarks, annotations and comments to highlight important findings and observations.
-
-
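To give a feel for what hash analysis means in practice, here is a small, generic Python illustration of hashing files and checking them against a set of known hashes. This is not Encase's own scripting interface (Encase uses EnScript); it is just a stand-alone sketch of the underlying idea, with a hypothetical folder name and hash set:

```python
# Generic illustration of hash analysis, independent of Encase itself.
import hashlib
from pathlib import Path

# Hypothetical set of SHA-256 hashes of files already known to be of interest.
known_hashes = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # SHA-256 of an empty file
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large evidence files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

for file in Path("evidence_export").rglob("*"):   # hypothetical export folder
    if file.is_file() and sha256_of(file) in known_hashes:
        print(f"Known file found: {file}")
```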
Encase 6 is a powerful and reliable digital forensics software that can help you with various cases and scenarios. However, it also requires some skills and knowledge to use it effectively and efficiently. Therefore, we recommend that you take some training courses or consult some experts before using Encase 6 for real investigations.
-
-
Conclusion
-
In this article, we have shown you how to download and install Encase 6 on your computer in a few simple steps. We have also given you some tips on how to use Encase 6 for digital forensics purposes. We hope that this article has been helpful and informative for you.
-
If you have any questions or comments about downloading or installing Encase 6, feel free to leave them below. We will try our best to answer them as soon as possible. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Azlibnet A guide to the electronic services of Azerbaijani libraries.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Azlibnet A guide to the electronic services of Azerbaijani libraries.md
deleted file mode 100644
index d9bc14a55cc1102f73efc2b0052e8b8a28b835f6..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Azlibnet A guide to the electronic services of Azerbaijani libraries.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
What is AZLIBNET and why you should use it
-
If you are a fan of Azerbaijani literature, or if you want to learn more about the rich and diverse culture of Azerbaijan, you should definitely check out AZLIBNET. AZLIBNET is the virtual library of Azerbaijani literature, where you can find thousands of books, journals, magazines, newspapers, and other publications in Azerbaijani, Turkish, Russian, English, and other languages. You can also order books online and have them delivered to your doorstep, or publish your own books and reach a wide audience. In this article, we will tell you everything you need to know about AZLIBNET and why you should use it.
AZLIBNET: The virtual library of Azerbaijani literature
-
AZLIBNET was established in 2007 by the decree of the President of the Republic of Azerbaijan, with the aim of improving the activities of libraries and information services in the country. It is a project of the National Library named after M.F.Akhundov, which is the main library institution in Azerbaijan. AZLIBNET is a digital platform that provides access to various sources of information related to Azerbaijani literature, history, culture, art, science, and education.
-
How to access AZLIBNET and what you can find there
-
To access AZLIBNET, you need to visit its official website: www.lib.az. There, you can browse through different categories of publications, such as books, journals, newspapers, magazines, dissertations, reports, etc. You can also search by title, author, subject, keyword, or ISBN. You can view the full text of some publications online, or download them as PDF files. You can also request a copy of any publication that is not available online by filling out a form.
-
Some of the publications that you can find on AZLIBNET are:
-
-
Kitablar: This section contains books on various topics, such as literature, history, philosophy, religion, sociology, psychology, law, economics, etc. You can find both classic and contemporary works by Azerbaijani authors, as well as translations from other languages.
-
Jurnallar: This section contains journals on different fields of science and humanities, such as linguistics, literature, history, culture, art, education, etc. You can find both academic and popular journals that cover current issues and trends in Azerbaijan and the world.
-
Gəzətlər: This section contains newspapers that reflect the political, social, economic, and cultural life of Azerbaijan. You can find both national and regional newspapers that offer news, analysis, opinions, interviews, etc.
-
Məcmuələr: This section contains magazines that cater to different interests and tastes of readers. You can find magazines on topics such as fashion, beauty, health, lifestyle, entertainment, travel, sports, etc.
-
Dissertasiyalar: This section contains dissertations that represent the scientific achievements and contributions of Azerbaijani scholars. You can find dissertations on various disciplines and levels (bachelor's, master's, doctoral).
-
Hesabatlar: This section contains reports that provide information and statistics on various aspects of Azerbaijan's development and progress. You can find reports on topics such as economy, education , health, culture, etc.
-
-
As you can see, AZLIBNET offers a rich and diverse collection of publications that can satisfy your curiosity and needs. Whether you are a student, a researcher, a teacher, a writer, or a reader, you can find something useful and interesting on AZLIBNET.
-
The benefits of using AZLIBNET for your research and education
-
Using AZLIBNET for your research and education has many benefits. Here are some of them:
-
-
-
It saves you time and money: You don't have to visit physical libraries or bookstores to find the publications you need. You can access them online anytime and anywhere, with just a few clicks. You also don't have to pay for subscriptions or fees to use AZLIBNET. It is free and open to everyone.
-
It provides you with quality and reliable information: You can trust the information you find on AZLIBNET, because it is verified and updated by the National Library and other reputable institutions. You can also cite the sources you use from AZLIBNET in your academic papers and projects, as they are recognized and respected by the scientific community.
-
It enhances your knowledge and skills: You can learn new things and improve your skills by reading the publications on AZLIBNET. You can broaden your horizons and perspectives by exploring different topics and viewpoints. You can also improve your language skills by reading publications in different languages.
-
It supports your cultural identity and awareness: You can discover and appreciate the rich and diverse culture of Azerbaijan by reading the publications on AZLIBNET. You can learn more about the history, traditions, values, achievements, and challenges of your country and people. You can also share your culture with others by recommending or reviewing the publications you like.
-
-
These are just some of the benefits of using AZLIBNET for your research and education. There are many more that you can discover by yourself. So, what are you waiting for? Start using AZLIBNET today and see the difference!
-
AZLIBNET: The online book delivery system
-
AZLIBNET is not only a virtual library, but also an online book delivery system. This means that you can order books from AZLIBNET and have them delivered to your doorstep. This is a great option for those who prefer to read physical books rather than digital ones, or who want to own or gift books that they like.
-
How to order books from AZLIBNET and how they are delivered
-
To order books from AZLIBNET, you need to follow these simple steps:
-
-
Visit the official website of AZLIBNET: www.lib.az.
-
Browse through the categories of books or search for the ones you want.
-
Select the books you want to order and add them to your cart.
-
Fill out your personal and delivery information.
-
Choose your payment method (cash on delivery or online payment).
-
Confirm your order and wait for the confirmation email.
-
-
The delivery time depends on the availability of the books and the location of the delivery address. Usually, it takes between 1 to 5 working days for the books to arrive. The delivery fee is calculated based on the weight of the books and the distance of the delivery address. You can check the delivery fee before confirming your order.
-
The advantages of using AZLIBNET for your reading and enjoyment
-
Using AZLIBNET for your reading and enjoyment has many advantages. Here are some of them:
-
-
It gives you access to a wide range of books: You can find books on any topic, genre, style, or language on AZLIBNET. You can find both new and old books, as well as rare and exclusive ones. You can also find books that are not available in other libraries or bookstores.
-
It offers you convenience and comfort: You don't have to go out or travel to get the books you want. You can order them online from the comfort of your home or office, and have them delivered to your doorstep. You can also track your order status and contact the customer service if you have any questions or issues.
-
It allows you to save money and support local businesses: You don't have to pay extra fees or taxes to use AZLIBNET. The prices of the books are reasonable and affordable. You also support local businesses by ordering books from AZLIBNET, as they work with local publishers, distributors, and couriers.
-
It enhances your reading experience and satisfaction: You can enjoy reading the books you ordered from AZLIBNET at your own pace and preference. You can also share your thoughts and opinions about the books with other readers on the website, or join online book clubs and discussions. You can also rate and review the books you read, and get recommendations for other books you might like.
-
-
These are just some of the advantages of using AZLIBNET for your reading and enjoyment. There are many more that you can experience by yourself. So, why not give it a try? Order your books from AZLIBNET today and enjoy reading!
-
AZLIBNET: The digital platform for Azerbaijani authors and publishers
-
AZLIBNET is not only a virtual library and an online book delivery system, but also a digital platform for Azerbaijani authors and publishers. This means that you can publish your books on AZLIBNET and reach a wide audience. This is a great opportunity for those who want to share their stories and ideas with the world, or who want to make a living from their writing.
-
How to publish your books on AZLIBNET and how they are promoted
-
To publish your books on AZLIBNET, you need to follow these simple steps:
-
-
Visit the official website of AZLIBNET: www.lib.az.
-
Register as an author or a publisher by filling out a form.
-
Upload your book files (PDF, EPUB, MOBI, etc.) and provide the necessary information (title, author, genre, summary, cover image, etc.).
-
Choose your pricing and distribution options (free or paid, online or print, local or global, etc.).
-
Submit your book for approval and wait for the confirmation email.
-
-
Once your book is approved, it will be available on AZLIBNET for readers to access, download, or order. Your book will also be promoted by AZLIBNET through various channels, such as social media, newsletters, blogs, podcasts, etc. You can also promote your book yourself by sharing the link to your book page on AZLIBNET with your friends, family, fans, etc.
-
The opportunities of using AZLIBNET for your writing and career
-
Using AZLIBNET for your writing and career has many opportunities. Here are some of them:
-
-
It gives you exposure and recognition: You can showcase your talent and creativity to a large and diverse audience on AZLIBNET. You can also get feedback and support from other authors, publishers, and readers on AZLIBNET. You can also build your reputation and credibility as a writer by publishing quality books on AZLIBNET.
-
It offers you convenience and flexibility: You don't have to deal with the hassle and cost of traditional publishing methods to publish your books on AZLIBNET. You can publish your books online from anywhere and anytime, with just a few clicks. You can also update or edit your books anytime you want.
-
It allows you to earn money and support local economy: You can earn money from your books by setting your own prices and royalties on AZLIBNET. You can also choose how you want to receive your payments (bank transfer, PayPal, etc.). You also support local economy by publishing your books on AZLIBNET, as they work with local printing companies and couriers.
-
It enhances your writing skills and career prospects: You can improve your writing skills by publishing your books on AZLIBNET. You can also learn from other authors and publishers on AZLIBNET. You can also expand your network and opportunities by connecting with other writers, readers, and professionals on AZLIBNET.
-
-
These are just some of the opportunities of using AZLIBNET for your writing and career. There are many more that you can explore by yourself. So, don't hesitate to publish your books on AZLIBNET and see the results!
-
Conclusion: AZLIBNET is the best choice for anyone interested in Azerbaijani literature
-
In conclusion, AZLIBNET is the best choice for anyone interested in Azerbaijani literature. It is a virtual library that provides access to thousands of publications in different languages and formats. It is an online book delivery system that allows you to order books online and have them delivered to your doorstep. It is a digital platform that enables you to publish your books online and reach a wide audience. It is a service that offers many benefits, advantages, and opportunities for readers, researchers, educators, writers, and publishers. It is a project that supports the development and promotion of Azerbaijani literature, culture, and economy.
-
So, what are you waiting for? Visit www.lib.az today and start using AZLIBNET. You will be amazed by what you can find, read, order, or publish on AZLIBNET. You will also be proud of being part of the Azerbaijani literary community. Join AZLIBNET today and enjoy the world of Azerbaijani literature!
-
FAQs about AZLIBNET
-
Here are some frequently asked questions about AZLIBNET:
-
-
Q: How can I register on AZLIBNET?
-
A: You can register on AZLIBNET by visiting www.lib.az and clicking on the "Register" button. You will need to provide your name, email address, password, and phone number. You will also need to agree to the terms and conditions of AZLIBNET.
-
Q: How can I contact AZLIBNET?
-
A: You can contact AZLIBNET by visiting www.lib.az and clicking on the "Contact Us" button. You will find the address, phone number, email address, and social media accounts of AZLIBNET. You can also fill out a contact form and send your message or inquiry to AZLIBNET.
-
Q: How can I support AZLIBNET?
-
A: You can support AZLIBNET by using its services and spreading the word about it. You can also donate to AZLIBNET by visiting www.lib.az and clicking on the "Donate" button. You can choose the amount and method of your donation. Your donation will help AZLIBNET to improve its services and expand its collection.
-
Q: How can I report a problem or give feedback on AZLIBNET?
-
A: You can report a problem or give feedback on AZLIBNET by visiting www.lib.az and clicking on the "Feedback" button. You will be able to rate your experience with AZLIBNET and write your comments or suggestions. Your feedback will help AZLIBNET to improve its quality and performance.
-
Q: How can I unsubscribe from AZLIBNET?
-
A: You can unsubscribe from AZLIBNET by visiting www.lib.az and clicking on the "Unsubscribe" button. You will need to enter your email address and confirm your decision. You will no longer receive emails or notifications from AZLIBNET.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chicken Gun Mod Menu 2.8.06 How to Hack Chicken Gun with Ease and Fun.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chicken Gun Mod Menu 2.8.06 How to Hack Chicken Gun with Ease and Fun.md
deleted file mode 100644
index f1b5d6710ed5f7a59e261a6e1e25818968ba9cb8..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chicken Gun Mod Menu 2.8.06 How to Hack Chicken Gun with Ease and Fun.md
+++ /dev/null
@@ -1,224 +0,0 @@
-
-
Chicken Gun Mod Menu 2.8.06 Download: Everything You Need to Know
-
If you are looking for a fun and quirky first-person shooter game with chickens, guns, and explosives, then you might want to check out Chicken Gun by ChaloApps. This game lets you customize your rooster with different weapons, outfits, and accessories, and then join online matches with other players in two modes: team deathmatch or free for all.
-
But what if you want to spice up your chicken shooting experience even more? Well, there is a way to do that with a mod menu that gives you access to various cheats and hacks that can make you invincible, rich, powerful, and more. In this article, we will tell you everything you need to know about chicken gun mod menu 2.8.06 download, including what it is, how to install it, how to use it, its pros and cons, tips and tricks, reviews and ratings, and more.

What is Chicken Gun?

Chicken Gun is an action game developed by ChaloApps that was released in 2020 for Android devices. It has over 50 million downloads on Google Play Store, where it has a rating of 4.4 out of 5 stars based on more than 400 thousand reviews.
What is the Mod Menu?
-
A mod menu is a modification or a hack that allows you to change or manipulate certain aspects of a game, such as graphics, gameplay, features, etc. A mod menu usually comes in the form of an APK file that you can download and install on your device, and then access it through a button or a menu in the game.
-
For Chicken Gun, there is a mod menu that was created by an unknown developer and uploaded on various websites, such as HappyMod, ModApkDone, and AndroidTop. The mod menu claims to offer the following options:
-
-
God Mode: You become immune to any damage from enemies, bullets, explosions, etc.
-
Vehicle God Mode: Your vehicle becomes immune to any damage from enemies, bullets, explosions, etc.
-
Infinity Money: You get unlimited money to buy weapons, outfits, accessories, etc.
-
Max Level: You reach the maximum level in the game and unlock all the items and features.
-
No Ads: You disable all the ads that pop up in the game.
-
Anti Kick: You prevent other players from kicking you out of the match.
-
Auto Shot: Your weapon automatically shoots at the nearest enemy.
-
Infinity Ammo: You never run out of ammo for your weapon.
-
Infinity Grenades: You never run out of grenades to throw at your enemies.
-
Infinity Jump: You can jump as high and as many times as you want.
-
Texture Hack: You can change the appearance of the map, the buildings, the objects, etc.
-
-
As you can see, these options can give you a huge advantage over other players and make the game much easier and more fun. However, they can also come with some risks and drawbacks, which we will discuss later in this article.
-
How to Install the Mod Menu
-
If you want to try out the mod menu for Chicken Gun, you will need to follow these steps:
-
-
First, you will need to uninstall the original version of Chicken Gun from your device if you have it. This is because the mod menu will replace it and you cannot have both versions at the same time.
-
Second, you will need to download the mod menu APK file from one of the websites that we mentioned above or any other source that you trust. Make sure that you download the latest version of the mod menu, which is 2.8.06 as of June 2023.
-
Third, you will need to enable the installation of unknown sources on your device. This is because the mod menu is not from an official source and your device might block it by default. To do this, go to your device's settings, then security or privacy, then find and toggle on the option that says "allow installation of apps from unknown sources" or something similar.
-
Fourth, you will need to locate the mod menu APK file that you downloaded on your device's storage. You can use a file manager app or your device's built-in file explorer to do this. Once you find it, tap on it and follow the instructions to install it on your device.
-
Fifth, you will need to launch the mod menu app on your device. You should see a chicken icon with a gun on your home screen or app drawer. Tap on it and wait for it to load. You should see a screen that says "Chicken Gun Mod Menu" with a list of options and a button that says "Start Game".
-
-
Congratulations! You have successfully installed the mod menu for Chicken Gun on your device. Now you can enjoy playing the game with cheats and hacks.
-
How to Use the Mod Menu
-
To use the mod menu for Chicken Gun, you will need to follow these steps:
-
-
First, you will need to launch the mod menu app on your device if you haven't already. Tap on the chicken icon with a gun and wait for it to load.
-
Second, you will need to choose which options you want to activate or deactivate from the list. You can tap on each option to toggle it on or off. You will see a green check mark next to each option that is enabled and a red cross mark next to each option that is disabled. You can also use the slider at the bottom of the screen to adjust the volume of the game's sound effects and music.
-
Third, you will need to tap on the button that says "Start Game" at the bottom of the screen. This will launch Chicken Gun with the mod menu's options applied. You should see a message that says "Chicken Gun Mod Menu by Unknown" at the top of the screen. You can also see a button that says "Mod Menu" at the bottom right corner of the screen.
-
Fourth, you will need to join or create a match in the game. You can choose between two modes: team deathmatch or free for all. You can also choose between different maps, such as farm, city, desert, etc. You can also customize your rooster with different weapons, outfits, and accessories.
-
Fifth, you will need to tap on the button that says "Mod Menu" at any time during the match to access and activate the mod menu's options. You will see a pop-up window that shows the same list of options that you saw before. You can tap on each option to toggle it on or off. You can also close the window by tapping on the button that says "Close".
-
-
That's it! You have successfully used the mod menu for Chicken Gun. Now you can enjoy playing the game with cheats and hacks.
-
-
God Mode
-
God mode is one of the options that you can enable or disable from the mod menu. When you enable god mode, you become immune to any damage from enemies, bullets, explosions, etc. This means that you can survive any attack and never die in the game. This can make the game more fun and less frustrating, especially if you are new to the game or if you are facing tough opponents.
-
To enable god mode, you need to tap on the option that says "God Mode" from the mod menu's list. You will see a green check mark next to it when it is enabled. To disable god mode, you need to tap on the option again. You will see a red cross mark next to it when it is disabled.
-
Vehicle God Mode
-
Vehicle god mode is another option that you can enable or disable from the mod menu. When you enable vehicle god mode, your vehicle becomes immune to any damage from enemies, bullets, explosions, etc. This means that your vehicle can survive any attack and never break down in the game. This can make the game more fun and less frustrating, especially if you like to drive around and explore the map.
-
To enable vehicle god mode, you need to tap on the option that says "Vehicle God Mode" from the mod menu's list. You will see a green check mark next to it when it is enabled. To disable vehicle god mode, you need to tap on the option again. You will see a red cross mark next to it when it is disabled.
Infinity Money
-
Infinity money is another option that you can enable or disable from the mod menu. When you enable infinity money, you get unlimited money to buy weapons, outfits, accessories, etc. in the game. This means that you can afford any item and customize your rooster as much as you want. This can make the game more fun and more varied, especially if you like to experiment with different combinations and styles.
-
To enable infinity money, you need to tap on the option that says "Infinity Money" from the mod menu's list. You will see a green check mark next to it when it is enabled. To disable infinity money, you need to tap on the option again. You will see a red cross mark next to it when it is disabled.
-
Max Level
-
Max level is another option that you can enable or disable from the mod menu. When you enable max level, you reach the maximum level in the game and unlock all the items and features. This means that you can access any weapon, outfit, accessory, map, mode, etc. in the game without having to play for a long time or complete any challenges. This can make the game more fun and more rewarding, especially if you want to try everything and have no limitations.
-
To enable max level, you need to tap on the option that says "Max Level" from the mod menu's list. You will see a green check mark next to it when it is enabled. To disable max level, you need to tap on the option again. You will see a red cross mark next to it when it is disabled.
-
No Ads
-
No ads is another option that you can enable or disable from the mod menu. When you enable no ads, you disable all the ads that pop up in the game. This means that you can play the game without any interruptions or distractions from annoying ads. This can make the game more enjoyable and less annoying, especially if you hate ads and want to focus on the game.
-
To enable no ads, you need to tap on the option that says "No Ads" from the mod menu's list. You will see a green check mark next to it when it is enabled. To disable no ads, you need to tap on the option again. You will see a red cross mark next to it when it is disabled.
Tips and Tricks for Playing Chicken Gun with the Mod Menu
-
If you decide to download and use the mod menu for Chicken Gun, you might want to know some tips and tricks that can help you make the most of it. Here are some of them:
-
-
Use the mod menu wisely and moderately. Don't abuse or overuse the cheats and hacks, as they might ruin the fun and challenge of the game, or make other players angry and report you. Use them only when you need them or when you want to have some extra fun.
-
Use the mod menu discreetly and carefully. Don't show off or brag about your cheats and hacks, as they might attract unwanted attention and suspicion from other players or the game's developers. Use them only when you are sure that no one is watching or noticing.
-
Use the mod menu responsibly and ethically. Don't harm or harass other players with your cheats and hacks, as they might cause trouble and conflict in the game's community. Use them only when you are playing with friends or with people who don't mind.
-
Use the mod menu creatively and experimentally. Don't limit yourself to the default options of the mod menu, as they might get boring and repetitive after a while. Use them to create your own scenarios, challenges, stories, etc. in the game.
-
-
By following these tips and tricks, you can enjoy playing Chicken Gun with the mod menu without any problems or regrets.
-
Reviews and Ratings of Chicken Gun and the Mod Menu
-
Before you download and use the mod menu for Chicken Gun, you might want to know what other players think about it. Here are some of the reviews and ratings of Chicken Gun and the mod menu that we found online:
-
Reviews of Chicken Gun
-
Most of the reviews of Chicken Gun are positive and praise the game for its fun, humor, graphics, gameplay, customization, etc. Here are some examples:
-
-
"This game is awesome! It's so funny and addictive. I love how you can customize your chicken with different weapons, outfits, and accessories. The graphics are also amazing and colorful. The gameplay is smooth and easy to control. The online matches are also exciting and challenging. I highly recommend this game to anyone who likes shooting games with chickens."
-A Google Play user
-
-
-
"This game is hilarious! It's so fun to play with friends and laugh at the crazy things that happen. I love how you can drive vehicles, throw grenades, fly around, etc. The graphics are also great and realistic. The gameplay is fast-paced and action-packed. The online matches are also competitive and fair. I highly recommend this game to anyone who likes shooting games with chickens."
-A Google Play user
-
-
-
"This game is amazing! It's so fun and entertaining. I love how you can customize your chicken with different weapons, outfits, and accessories. The graphics are also beautiful and detailed. The gameplay is smooth and responsive. The online matches are also thrilling and enjoyable. I highly recommend this game to anyone who likes shooting games with chickens."
-A Google Play user
-
-
Reviews of the Mod Menu
-
The reviews of the mod menu for Chicken Gun are mixed and vary depending on the source, version, option, etc. Here are some examples:
-
-
"This mod menu is awesome! It works perfectly and gives you access to all the cheats and hacks that you want. You can become invincible, rich, powerful, etc. in the game. You can also change the appearance of the game as you like. It's very easy to install and use. I highly recommend this mod menu to anyone who wants to have more fun in Chicken Gun."
-A HappyMod user
-
-
-
"This mod menu is good but not great. It works well for some options but not for others. You can become immune to damage, get unlimited money, etc., but you can't access all the items or features in the game. You can also change the appearance of the game but not very much. It's fairly easy to install but not very easy to use. I recommend this mod menu to anyone who wants to try some cheats in Chicken Gun."
-A ModApkDone user
-
-
-
"This mod menu is bad and dangerous. It doesn't work properly and causes a lot of problems in the game. You can't become immune to damage, get unlimited money, etc., but you can get banned or kicked out of the game. You can also change the appearance of the game but not in a good way. It's very hard to install and use. I don't recommend this mod menu to anyone who wants to play Chicken Gun safely and fairly."
-An AndroidTop user
-
-
Ratings of Chicken Gun
-
The ratings of Chicken Gun are mostly high and positive, reflecting the game's popularity and quality. Here are some of the ratings of Chicken Gun that we found online:
| Source | Rating | Scale |
| --- | --- | --- |
| Google Play Store | 4.4 | 5 |
| App Store | 4.6 | 5 |
| ApkPure | 8.9 | 10 |
| AppGrooves | 4.5 | 5 |
| Sensor Tower | 4.7 | 5 |
| Average | 4.6 | 5 |
As you can see, Chicken Gun has an average rating of 4.6 out of 5 stars, which is very impressive and commendable.
-
Ratings of the Mod Menu
-
The ratings of the mod menu for Chicken Gun are mostly low and negative, reflecting the mod menu's unreliability and riskiness. Here are some of the ratings of the mod menu that we found online:
| Source | Rating | Scale |
| --- | --- | --- |
| HappyMod | 3.8 | 5 |
| ModApkDone | 3.2 | 5 |
| AndroidTop | 2.7 | 5 |
| Average | 3.2 | 5 |
As you can see, the mod menu for Chicken Gun has an average rating of 3.2 out of 5 stars, which is not very impressive or commendable.
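For readers who want to check these figures themselves, here is a minimal Python sketch (illustrative only, using the numbers from the two tables above) that rescales every rating to a 5-point scale before averaging. Note that the exact result for the game depends on how the 10-point ApkPure score is rescaled; with the conversion shown here it lands closer to 4.5 than 4.6.

```python
# Illustrative only: rescale each (rating, scale) pair to a 5-point scale, then average.
game_ratings = {
    "Google Play Store": (4.4, 5),
    "App Store": (4.6, 5),
    "ApkPure": (8.9, 10),
    "AppGrooves": (4.5, 5),
    "Sensor Tower": (4.7, 5),
}
mod_menu_ratings = {
    "HappyMod": (3.8, 5),
    "ModApkDone": (3.2, 5),
    "AndroidTop": (2.7, 5),
}

def average_on_5_point_scale(scores):
    # Convert every rating to a 0-5 value before averaging.
    normalized = [rating * 5 / scale for rating, scale in scores.values()]
    return round(sum(normalized) / len(normalized), 1)

print(average_on_5_point_scale(game_ratings))      # roughly 4.5 for the game
print(average_on_5_point_scale(mod_menu_ratings))  # roughly 3.2 for the mod menu
```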
-
Conclusion
-
In conclusion, chicken gun mod menu 2.8.06 download is a way to enhance your chicken shooting experience with various cheats and hacks that can make you invincible, rich, powerful, and more. However, it also comes with some risks and drawbacks that can ruin your fun and challenge, or make you banned or kicked out of the game. Therefore, you should be careful and responsible when using the mod menu, and weigh the pros and cons before deciding whether to download it or not.
-
If you are interested in trying out the mod menu for Chicken Gun, you can follow the steps that we provided in this article to download, install, and use it on your device. You can also follow the tips and tricks that we provided to make the most of it. You can also check the reviews and ratings that we provided to see what other players think about it.
-
We hope that this article was helpful and informative for you. If you have any questions or comments about chicken gun mod menu 2.8.06 download, feel free to leave them below. We would love to hear from you and help you out.
-
Thank you for reading and happy chicken shooting!
-
FAQs
-
Here are some of the frequently asked questions about chicken gun mod menu 2.8.06 download:
-
Q: Is chicken gun mod menu 2.8.06 download safe?
-
A: Chicken gun mod menu 2.8.06 download is not completely safe, as it comes from an unknown source that might contain viruses, malware, spyware, etc. It might also cause glitches, bugs, crashes, errors, etc. in the game. It might also get you banned or kicked out of the game if you are detected or reported by other players or the game's developers.
-
Q: Is chicken gun mod menu 2.8.06 download legal?
-
A: Chicken gun mod menu 2.8.06 download is not completely legal, as it violates the terms and conditions of the game and the Google Play Store. It also infringes on the intellectual property rights of the game's developers and publishers.
-
Q: Is chicken gun mod menu 2.8.06 download free?
-
A: Chicken gun mod menu 2.8.06 download is free to download and use on your device, as it does not require any payment or subscription. However, it might cost you some data or storage space on your device.
-
Q: Is chicken gun mod menu 2.8.06 download compatible with my device?
-
A: Chicken gun mod menu 2.8.06 download is compatible with most Android devices that run on Android 4.4 or higher versions. However, it might not work properly or at all on some devices due to different specifications or settings.
-
Q: Is chicken gun mod menu 2.8.06 download updated?
-
A: Chicken gun mod menu 2.8.06 download is updated regularly by its developer to match the latest version of the game and fix any issues or bugs that might occur.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Bible Word Puzzle APK and Play Offline Word Games with Friends.md b/spaces/1phancelerku/anime-remove-background/Download Bible Word Puzzle APK and Play Offline Word Games with Friends.md
deleted file mode 100644
index e24441b9b43925ef5bc4aa1489cbe99377c12222..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Bible Word Puzzle APK and Play Offline Word Games with Friends.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
Bible Word Puzzle Apkpure: A Fun and Educational Game for Christians
-
If you are looking for a fun and educational game that can help you learn more about the Bible, you might want to try Bible Word Puzzle Apkpure. This is a word connect game that teaches you Bible words and verses while you solve puzzles and quizzes. You can download this game from Apkpure.com, a website that offers free and safe Android apps. In this article, we will tell you more about this game, its features, benefits, and alternatives.
Bible Word Puzzle Apkpure is a word connect game that is designed for Christians who want to learn more about the Bible. The game has two main modes: word games and Bible stories. In the word games mode, you have to connect letters to build valid words and unlock Bible verses. In the Bible stories mode, you have to find illustration fragments in word puzzles to complete Bible stories. You can also interact with the touch-activated pictures to explore Bible verses.
-
Bible Word Puzzle Apkpure is a game that can be downloaded from Apkpure.com, a website that offers free and safe Android apps. Apkpure.com is a popular alternative to Google Play Store, especially for users who have limited access to Google services or who want to download apps that are not available in their region. You can download the latest version of Bible Word Puzzle Apkpure from this link.
-
What are the features of Bible Word Puzzle Apkpure?
-
Bible Word Puzzle Apkpure has many features that make it an enjoyable and educational game for Christians. Here are some of them:
-
bible word puzzle game offline
-bible word puzzle word games apk
-bible word puzzle apk download
-bible word puzzle app for android
-bible word puzzle free coins
-bible word puzzle mod apk
-bible word puzzle cheats and answers
-bible word puzzle daily challenge
-bible word puzzle levels and verses
-bible word puzzle online play
-bible word puzzle crossword cookies
-bible word puzzle connect words
-bible word puzzle fun and quiz
-bible word puzzle unlock levels
-bible word puzzle latest version
-bible word puzzle review and rating
-bible word puzzle tips and tricks
-bible word puzzle update and news
-bible word puzzle best words
-bible word puzzle how to play
-bible word puzzle for kids and adults
-bible word puzzle with friends
-bible word puzzle rewards and prizes
-bible word puzzle no ads
-bible word puzzle unlimited coins
-bible word puzzle easy and hard
-bible word puzzle new features
-bible word puzzle install and uninstall
-bible word puzzle similar games
-bible word puzzle feedback and support
-bible word puzzle guide and tutorial
-bible word puzzle hack and mod
-bible word puzzle themes and backgrounds
-bible word puzzle categories and topics
-bible word puzzle languages and translations
-bible word puzzle bugs and fixes
-bible word puzzle questions and answers
-bible word puzzle screenshots and videos
-bible word puzzle developer and publisher
-bible word puzzle size and requirements
-
A game that features Biblical words and verses
-
Bible Word Puzzle Apkpure is a game that features Biblical words and verses in its puzzles and quizzes. You can learn new words and meanings from the Bible, as well as memorize your favorite verses. The game also has colorful illustrations and interactive contents of Bible stories, such as Noah's Ark, the Birth of Jesus, the Resurrection of Jesus, and so on. You can collect these illustrations and share them with your friends.
-
A game that can be played offline and with friends
-
Bible Word Puzzle Apkpure is a game that can be played offline anywhere anytime. You don't need an internet connection to enjoy this game. You can also play this game with your friends by taking screenshots and sharing them on Facebook. You can challenge each other to solve more puzzles and quizzes, or help each other out with hints.
-
A game that has over 900 levels and challenging Bible quizzes
-
Bible Word Puzzle Apkpure is a game that has over 900 levels of word games and Bible quizzes. The game starts as an easy word game but gets difficult as you play more levels. You will encounter challenging puzzles and quizzes that test your knowledge of the Bible and your vocabulary skills. You can also earn rewards and coins every day by playing the game.
-
What are the benefits of playing Bible Word Puzzle Apkpure?
-
Bible Word Puzzle Apkpure is not only a fun game, but also a beneficial one for Christians. Here are some of the benefits of playing this game:
-
A game that improves vocabulary and memory skills
-
Bible Word Puzzle Apkpure is a game that can help you improve your vocabulary and memory skills. By playing this game, you can learn new words and meanings from the Bible, as well as recall the verses that you have learned. You can also enhance your spelling and word recognition skills by connecting letters and finding words. The game also has different levels of difficulty that challenge your brain and keep it sharp.
-
A game that helps to study the Bible and learn Bible words
-
Bible Word Puzzle Apkpure is a game that can help you study the Bible and learn Bible words in a fun and interactive way. By playing this game, you can explore different Bible stories and verses, as well as their contexts and meanings. You can also discover the connections between words and verses, and how they relate to each other. The game also has quizzes that test your knowledge of the Bible and help you remember what you have learned.
-
A game that inspires and encourages Christians in their faith
-
Bible Word Puzzle Apkpure is a game that can inspire and encourage Christians in their faith. By playing this game, you can experience the beauty and wisdom of the Bible, as well as its messages of hope and love. You can also feel closer to God and His word, and strengthen your relationship with Him. The game also has inspirational illustrations and contents that you can share with your friends and family, and spread the gospel to others.
-
What are some alternatives to Bible Word Puzzle Apkpure?
-
If you are looking for some other games that are similar to Bible Word Puzzle Apkpure, you might want to check out these alternatives:
-
Bible Verse Collect
-
Bible Verse Collect is another word connect game that features Bible verses and stories. You can collect Bible verses by swiping letters and filling blanks. You can also play mini games such as word search, crossword, jigsaw puzzle, and memory match. You can download this game from Google Play Store or Apple App Store.
-
Bible Word Search Puzzle Games
-
Bible Word Search Puzzle Games is a word search game that has over 1000 levels of Bible-themed puzzles. You can find hidden words related to the Bible in different categories such as books, characters, places, events, etc. You can also learn more about the Bible by reading the trivia facts after each level. You can download this game from Google Play Store.
-
Holyscapes - Bible Word Game
-
Holyscapes - Bible Word Game is a word puzzle game that has beautiful landscapes inspired by the Bible. You can connect letters to form words and fill in the crossword grid. You can also collect coins and gems to unlock new scenes and themes. You can download this game from Google Play Store or Apple App Store.
-
Conclusion
-
Bible Word Puzzle Apkpure is a fun and educational game for Christians who want to learn more about the Bible. It is a word connect game that teaches you Bible words and verses while you solve puzzles and quizzes. You can download this game from Apkpure.com, a website that offers free and safe Android apps. This game has many features, benefits, and alternatives that make it an enjoyable and worthwhile game for Christians.
-
FAQs
-
Here are some frequently asked questions about Bible Word Puzzle Apkpure:
-
-
Q: How do I download Bible Word Puzzle Apkpure?
A: You can download this game from Apkpure.com, a website that offers free and safe Android apps. You need to have an Android device with Android 4.4 or higher version.
-
Q: How do I play Bible Word Puzzle Apkpure?
A: You can play this game by connecting letters to build valid words and unlock Bible verses. You can also find illustration fragments in word puzzles to complete Bible stories.
-
Q: What are the rewards for playing Bible Word Puzzle Apkpure?
A: You can earn rewards and coins every day by playing the game. You can also collect colorful illustrations and interactive contents of Bible stories, and share them with your friends.
-
Q: What are the challenges for playing Bible Word Puzzle Apkpure?
A: You will encounter challenging puzzles and quizzes that test your knowledge of the Bible and your vocabulary skills. You will also face different levels of difficulty that challenge your brain and keep it sharp.
-
Q: What are the alternatives for playing Bible Word Puzzle Apkpure?
A: You can try other games that are similar to Bible Word Puzzle Apkpure, such as Bible Verse Collect, Bible Word Search Puzzle Games, and Holyscapes - Bible Word Game.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Music Playlist with One Click - No Ads No Fees.md b/spaces/1phancelerku/anime-remove-background/Download Music Playlist with One Click - No Ads No Fees.md
deleted file mode 100644
index 36a5f4bba92aa245145f5c6c0a3f6fa6284c3321..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Music Playlist with One Click - No Ads No Fees.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
How to Download Music Playlists and Enjoy Your Favorite Songs Offline
-
If you love listening to music, you probably have some favorite songs that you always want to have access to. Whether you want to create the perfect mood for a party, a workout, a road trip, or just relax at home, having a music playlist can help you enjoy your favorite tunes without interruptions.
But what if you don't have an internet connection or you want to save your data? Or what if you want to listen to your music in the background while doing other things on your phone? In that case, downloading your music playlists can be a great solution.
-
In this article, we will show you how to download music playlists from different platforms and how to manage and play them on your device. By following these simple steps, you will be able to enjoy your favorite songs offline anytime and anywhere.
-
Introduction
-
What is a music playlist and why you should download it
-
A music playlist is a collection of songs that are grouped together based on a theme, genre, mood, artist, or any other criteria. You can create your own playlists or find existing ones on various music streaming services.
-
How to download music playlist from YouTube
-Download music playlist for free online
-Best sites to download music playlist
-Download music playlist to iPhone
-Download music playlist to MP3
-Download music playlist from Spotify
-Download music playlist for offline listening
-Download music playlist for workout
-Download music playlist for road trip
-Download music playlist for party
-How to download music playlist from SoundCloud
-Download music playlist for meditation
-Download music playlist for sleep
-Download music playlist for study
-Download music playlist for relaxation
-How to download music playlist from Apple Music
-Download music playlist to Android
-Download music playlist to computer
-Best apps to download music playlist
-Download music playlist for running
-Download music playlist from Amazon Music
-Download music playlist for yoga
-Download music playlist for gaming
-Download music playlist for wedding
-How to download music playlist from Deezer
-Download music playlist to USB
-Download music playlist to CD
-Best software to download music playlist
-Download music playlist for karaoke
-Download music playlist for kids
-How to download music playlist from Pandora
-Download music playlist for Christmas
-Download music playlist for Halloween
-Download music playlist for birthday
-How to download music playlist from Tidal
-Download music playlist to SD card
-Download music playlist to iPod
-Best websites to download music playlist
-Download music playlist for cooking
-Download music playlist for shower
-How to download music playlist from Google Play Music
-Download music playlist for summer
-Download music playlist for winter
-Download music playlist for spring
-How to download music playlist from Audiomack
-Download music playlist to Dropbox
-Download music playlist to iTunes
-Best tools to download music playlist
-Download music playlist for rap
-
Downloading your music playlists can have many benefits, such as:
-
-
You can listen to your music offline without relying on an internet connection or using your data.
-
You can listen to your music in the background while using other apps on your phone.
-
You can save battery life by avoiding streaming and buffering.
-
You can avoid ads and interruptions that may ruin your listening experience.
-
You can have more control over your music library and customize it according to your preferences.
-
-
How to choose the best music streaming service for your needs
-
There are many music streaming services available today, each offering different features, prices, and catalogs. Some of the most popular ones are YouTube Music, Spotify, Apple Music, Amazon Music, Deezer, Tidal, and more.
-
To choose the best music streaming service for your needs, you should consider the following factors:
-
-
The size and variety of the music catalog. You want a service that has a large and diverse selection of songs, artists, genres, and playlists that suit your taste and mood.
-
The quality and format of the audio. You want a service that offers high-quality audio and supports different formats such as MP3, AAC, FLAC, etc.
-
The availability and compatibility of the service. You want a service that is available in your country and compatible with your device and operating system.
-
The price and features of the subscription. You want a service that offers a reasonable price and features that match your needs and expectations. For example, some services offer offline listening, ad-free playback, background play, family plans, student discounts, and more. One simple way to weigh these factors against each other is sketched below.
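This is a toy Python sketch of such a comparison; the service names, weights, and scores are made up for illustration and are not recommendations.

```python
# Toy decision helper (illustrative only; weights and scores are hypothetical).
def pick_service(services, weights):
    # Each service maps criterion -> score from 1 (poor) to 5 (excellent).
    def total(scores):
        return sum(weights[criterion] * score for criterion, score in scores.items())
    return max(services, key=lambda name: total(services[name]))

weights = {"catalog": 0.35, "audio_quality": 0.25, "availability": 0.20, "price": 0.20}
services = {
    "Service A": {"catalog": 5, "audio_quality": 4, "availability": 5, "price": 3},
    "Service B": {"catalog": 4, "audio_quality": 5, "availability": 4, "price": 4},
}
print(pick_service(services, weights))  # prints whichever service has the higher weighted score
```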
How to Download Music Playlists from Different Platforms
-
How to download music playlists from YouTube Music
-
YouTube Music is a music streaming service that allows you to access millions of songs, albums, and playlists from YouTube and other sources. You can also create your own playlists and upload your own music to the service.
-
To download music playlists from YouTube Music, you need to have a YouTube Music Premium or YouTube Premium subscription, which costs $9.99 or $11.99 per month respectively. With these subscriptions, you can download up to 100,000 songs and listen to them offline for up to 30 days.
-
Here are the steps to download music playlists from YouTube Music:
-
Step 1: Get a YouTube Music Premium or YouTube Premium subscription
-
To get a YouTube Music Premium or YouTube Premium subscription, you need to sign in to your Google account and go to the YouTube Music or YouTube website or app. Then, you need to click on the profile icon and select "Get YouTube Premium" or "Get YouTube Music Premium". You can then choose your payment method and confirm your purchase.
-
Step 2: Choose the songs, albums, or playlists that you want to download
-
To choose the songs, albums, or playlists that you want to download, you need to browse or search for them on the YouTube Music website or app. You can also access your own playlists and uploads by clicking on the library icon.
-
Step 3: Tap the download button and wait for the process to finish
-
To download the songs, albums, or playlists that you have chosen, you need to tap on the download button that appears next to them. You can also tap on the menu icon and select "Download" from the options. You will see a progress bar that shows how much of the download is completed. Once the download is finished, you will see a checkmark icon that indicates that the songs, albums, or playlists are available offline.
-
How to download music playlists from Spotify
-
Spotify is another popular music streaming service that offers over 70 million songs, podcasts, and playlists. You can also create your own playlists and follow other users and artists on the service.
-
To download music playlists from Spotify, you need to have a Spotify Premium subscription, which costs $9.99 per month. With this subscription, you can download up to 10,000 songs and listen to them offline for up to 30 days.
-
Here are the steps to download music playlists from Spotify:
-
Step 1: Get a Spotify Premium subscription
-
To get a Spotify Premium subscription, you need to sign up for a Spotify account and go to the Spotify website or app. Then, you need to click on the profile icon and select "Upgrade". You can then choose your payment method and confirm your purchase.
-
Step 2: Create or find the playlists that you want to download
-
To create or find the playlists that you want to download, you need to use the search function or browse through the categories on the Spotify website or app. You can also access your own playlists and followings by clicking on the library icon.
-
Step 3: Toggle the download switch and wait for the process to finish
-
To download the playlists that you have created or found, you need to toggle the download switch that appears at the top of each playlist. You will see a green arrow icon that shows that the playlist is being downloaded. Once the download is finished, you will see a green checkmark icon that indicates that the playlist is available offline.
Summary of the Main Points
-
In this article, we have learned how to download music playlists and enjoy your favorite songs offline. We have covered the following topics:
-
-
What is a music playlist and why you should download it.
-
How to choose the best music streaming service for your needs.
-
How to download music playlists from different platforms, such as YouTube Music, Spotify, Apple Music, Amazon Music, Deezer, Tidal, and more.
-
How to manage and play your downloaded music playlists on your device.
-
-
Tips and Recommendations for Downloading Music Playlists
-
Here are some tips and recommendations for downloading music playlists:
-
-
Make sure you have enough space on your device before downloading music playlists. You can check your storage settings or use a memory card to expand your capacity.
-
Make sure you have a stable and fast internet connection before downloading music playlists. You can use Wi-Fi or a mobile hotspot to avoid interruptions or errors.
-
Make sure you have a valid and active subscription to the music streaming service that you want to download music playlists from. You can check your subscription status or renew it if necessary.
-
Make sure you download music playlists that you really like and listen to frequently. You can create your own playlists or explore the curated ones on the music streaming service.
-
Make sure you update your downloaded music playlists regularly. You can add new songs, remove old ones, or sync them with the online version.
-
-
Final Thoughts and Feedback
-
We hope you have found this article helpful and informative. Now you know how to download music playlists and enjoy your favorite songs offline. You can use this skill to create the perfect soundtrack for any occasion or mood.
-
If you have any questions, comments, or suggestions, please feel free to share them with us. We would love to hear from you and learn from your experience. You can also share this article with your friends and family who might be interested in downloading music playlists.
-
Thank you for reading and happy listening!
-
Frequently Asked Questions
-
Q: How do I download music playlists from YouTube without YouTube Music Premium or YouTube Premium?
-
A: There are some third-party apps or websites that claim to allow you to download music playlists from YouTube without YouTube Music Premium or YouTube Premium. However, these methods are not authorized by YouTube and may violate its terms of service or infringe on the rights of the content owners. Therefore, we do not recommend using them and we advise you to respect the law and the creators.
-
Q: How do I download music playlists from Spotify without Spotify Premium?
-
A: There is no official way to download music playlists from Spotify without Spotify Premium. However, there are some alternatives that you can try, such as:
-
-
Using the free trial of Spotify Premium for 30 days.
-
Using a family plan or a student discount to get Spotify Premium for a lower price.
-
Using a VPN or a proxy to access Spotify Premium in a different country where it is cheaper.
-
-
However, these methods may not work for everyone and may have some risks or limitations. Therefore, we do not guarantee their effectiveness and we advise you to be careful and responsible.
-
Q: How do I transfer my downloaded music playlists from one device to another?
-
A: To transfer your downloaded music playlists from one device to another, you need to use the same music streaming service and account on both devices. Then, you need to sync your downloads or offline library on both devices. You may also need to connect both devices to the same Wi-Fi network or use a USB cable or Bluetooth connection.
-
Q: How do I edit my downloaded music playlists?
-
A: To edit your downloaded music playlists, you need to go to the music streaming app that you used to download them. Then, you need to find the playlist that you want to edit and tap on the menu icon or the edit button. You can then add or remove songs, change the order, rename the playlist, or change the cover image.
-
Q: How do I share my downloaded music playlists with others?
-
A: To share your downloaded music playlists with others, you need to go to the music streaming app that you used to download them. Then, you need to find the playlist that you want to share and tap on the menu icon or the share button. You can then choose the method or platform that you want to use to share your playlist, such as email, text message, social media, etc. You can also copy the link or the code of your playlist and paste it wherever you want. However, keep in mind that the people who receive your playlist may not be able to listen to it offline unless they have the same music streaming service and subscription as you.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_lms_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_lms_discrete.py
deleted file mode 100644
index f8830b9157259a638d873a085cc8e035054d1b21..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_lms_discrete.py
+++ /dev/null
@@ -1,257 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import warnings
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-from scipy import integrate
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput
-from .scheduling_utils import SchedulerMixin
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->LMSDiscrete
-class LMSDiscreteSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: paddle.Tensor
- pred_original_sample: Optional[paddle.Tensor] = None
-
-
-class LMSDiscreteScheduler(SchedulerMixin, ConfigMixin):
- """
- Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by
- Katherine Crowson:
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- """
-
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- prediction_type: str = "epsilon",
- ):
- if trained_betas is not None:
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
- elif beta_schedule == "linear":
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
- else:
- raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = self.sigmas.max()
-
- # setable values
- self.num_inference_steps = None
- timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy()
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
- self.derivatives = []
- self.is_scale_input_called = False
-
- def scale_model_input(self, sample: paddle.Tensor, timestep: Union[float, paddle.Tensor]) -> paddle.Tensor:
- """
- Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm.
-
- Args:
- sample (`paddle.Tensor`): input sample
- timestep (`float` or `paddle.Tensor`): the current timestep in the diffusion chain
-
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
- sample = sample / ((sigma**2 + 1) ** 0.5)
- self.is_scale_input_called = True
- return sample
-
- def get_lms_coefficient(self, order, t, current_order):
- """
- Compute a linear multistep coefficient.
-
- Args:
- order (int): the order of the linear multistep method.
- t (int): index of the current timestep in the sigma schedule.
- current_order (int): how many steps back the derivative multiplied by this coefficient was computed (0 = most recent).
- """
-
- def lms_derivative(tau):
- prod = 1.0
- for k in range(order):
- if current_order == k:
- continue
- prod *= (tau - self.sigmas[t - k]) / (self.sigmas[t - current_order] - self.sigmas[t - k])
- return prod
-
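- # lms_derivative is the Lagrange basis polynomial over the last `order` sigma values;
- # integrating it between sigmas[t] and sigmas[t + 1] gives the linear multistep coefficient.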
- integrated_coeff = integrate.quad(lms_derivative, self.sigmas[t], self.sigmas[t + 1], epsrel=1e-4)[0]
-
- return integrated_coeff
-
- def set_timesteps(self, num_inference_steps: int):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- self.num_inference_steps = num_inference_steps
-
- timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy()
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
-
- self.derivatives = []
-
- def step(
- self,
- model_output: paddle.Tensor,
- timestep: Union[float, paddle.Tensor],
- sample: paddle.Tensor,
- order: int = 4,
- return_dict: bool = True,
- ) -> Union[LMSDiscreteSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`paddle.Tensor`): direct output from learned diffusion model.
- timestep (`float`): current timestep in the diffusion chain.
- sample (`paddle.Tensor`):
- current instance of sample being created by diffusion process.
- order: coefficient for multi-step inference.
- return_dict (`bool`): option for returning tuple rather than LMSDiscreteSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is the sample tensor.
-
- """
- if not self.is_scale_input_called:
- warnings.warn(
- "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
- "See `StableDiffusionPipeline` for a usage example."
- )
-
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
-
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- if self.config.prediction_type == "epsilon":
- pred_original_sample = sample - sigma * model_output
- elif self.config.prediction_type == "v_prediction":
- # * c_out + input * c_skip
- pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
- )
-
- # 2. Convert to an ODE derivative
- derivative = (sample - pred_original_sample) / sigma
- self.derivatives.append(derivative)
- if len(self.derivatives) > order:
- self.derivatives.pop(0)
-
- # 3. Compute linear multistep coefficients
- order = min(step_index + 1, order)
- lms_coeffs = [self.get_lms_coefficient(order, step_index, curr_order) for curr_order in range(order)]
-
- # 4. Compute previous sample based on the derivatives path
- prev_sample = sample + sum(
- coeff * derivative for coeff, derivative in zip(lms_coeffs, reversed(self.derivatives))
- )
-
- if not return_dict:
- return (prev_sample,)
-
- return LMSDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
-
- def add_noise(
- self,
- original_samples: paddle.Tensor,
- noise: paddle.Tensor,
- timesteps: paddle.Tensor,
- ) -> paddle.Tensor:
- # Make sure sigmas and timesteps have the same dtype as original_samples
- sigmas = self.sigmas.cast(original_samples.dtype)
- schedule_timesteps = self.timesteps
-
- step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
-
- sigma = sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/4f20/text_generator/README.md b/spaces/4f20/text_generator/README.md
deleted file mode 100644
index 31821062ccb680b6797bf722a711bfa433da8f8e..0000000000000000000000000000000000000000
--- a/spaces/4f20/text_generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Generator
-emoji: 👀
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/utils.py b/spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/utils.py
deleted file mode 100644
index f95931fb1c422cbd8349b88e1effb9323f170b2b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/utils.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import argparse
-import yaml
-import sys
-
-def read_config_as_args(config_path,args=None,is_config_str=False):
- return_dict = {}
-
- if config_path is not None:
- if is_config_str:
- yml_config = yaml.load(config_path, Loader=yaml.FullLoader)
- else:
- with open(config_path, "r") as f:
- yml_config = yaml.load(f, Loader=yaml.FullLoader)
-
- if args is not None:
- for k, v in yml_config.items():
- if k in args.__dict__:
- args.__dict__[k] = v
- else:
- sys.stderr.write("Ignored unknown parameter {} in yaml.\n".format(k))
- else:
- for k, v in yml_config.items():
- return_dict[k] = v
-
- args = args if args is not None else return_dict
- return argparse.Namespace(**args)
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/__init__.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/__init__.py
deleted file mode 100644
index 96ccf3e709b62e0548572ea424bb03a1a67a4b2e..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from .factory import list_models, create_model, create_model_and_transforms, add_model_config
-from .loss import ClipLoss, gather_features, LPLoss, lp_gather_features, LPMetrics
-from .model import CLAP, CLAPTextCfg, CLAPVisionCfg, CLAPAudioCfp, convert_weights_to_fp16, trace_model
-from .openai import load_openai_model, list_openai_models
-from .pretrained import list_pretrained, list_pretrained_tag_models, list_pretrained_model_tags,\
- get_pretrained_url, download_pretrained
-from .tokenizer import SimpleTokenizer, tokenize
-from .transform import image_transform
diff --git a/spaces/AIZero2Hero4Health/9-Seq2SeqQAGenerator-GR/qasrl_model_pipeline.py b/spaces/AIZero2Hero4Health/9-Seq2SeqQAGenerator-GR/qasrl_model_pipeline.py
deleted file mode 100644
index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000
--- a/spaces/AIZero2Hero4Health/9-Seq2SeqQAGenerator-GR/qasrl_model_pipeline.py
+++ /dev/null
@@ -1,183 +0,0 @@
-from typing import Optional
-import json
-from argparse import Namespace
-from pathlib import Path
-from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer
-
-def get_markers_for_model(is_t5_model: bool) -> Namespace:
- special_tokens_constants = Namespace()
- if is_t5_model:
- # T5 model have 100 special tokens by default
- special_tokens_constants.separator_input_question_predicate = ""
- special_tokens_constants.separator_output_answers = ""
- special_tokens_constants.separator_output_questions = "" # if using only questions
- special_tokens_constants.separator_output_question_answer = ""
- special_tokens_constants.separator_output_pairs = ""
- special_tokens_constants.predicate_generic_marker = ""
- special_tokens_constants.predicate_verb_marker = ""
- special_tokens_constants.predicate_nominalization_marker = ""
-
- else:
- special_tokens_constants.separator_input_question_predicate = ""
- special_tokens_constants.separator_output_answers = ""
- special_tokens_constants.separator_output_questions = "" # if using only questions
- special_tokens_constants.separator_output_question_answer = ""
- special_tokens_constants.separator_output_pairs = ""
- special_tokens_constants.predicate_generic_marker = ""
- special_tokens_constants.predicate_verb_marker = ""
- special_tokens_constants.predicate_nominalization_marker = ""
- return special_tokens_constants
-
-def load_trained_model(name_or_path):
- import huggingface_hub as HFhub
- tokenizer = AutoTokenizer.from_pretrained(name_or_path)
- model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path)
- # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory
- kwargs_filename = None
- if name_or_path.startswith("kleinay/"): # and 'preprocessing_kwargs.json' in HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files
- kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json")
- elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists():
- kwargs_filename = Path(name_or_path) / "experiment_kwargs.json"
-
- if kwargs_filename:
- preprocessing_kwargs = json.load(open(kwargs_filename))
- # integrate into model.config (for decoding args, e.g. "num_beams"), and save also as standalone object for preprocessing
- model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs)
- model.config.update(preprocessing_kwargs)
- return model, tokenizer
-
-
-class QASRL_Pipeline(Text2TextGenerationPipeline):
- def __init__(self, model_repo: str, **kwargs):
- model, tokenizer = load_trained_model(model_repo)
- super().__init__(model, tokenizer, framework="pt")
- self.is_t5_model = "t5" in model.config.model_type
- self.special_tokens = get_markers_for_model(self.is_t5_model)
- self.data_args = model.config.preprocessing_kwargs
- # backward compatibility - default keyword values implemented in `run_summarization`, thus not saved in `preprocessing_kwargs`
- if "predicate_marker_type" not in vars(self.data_args):
- self.data_args.predicate_marker_type = "generic"
- if "use_bilateral_predicate_marker" not in vars(self.data_args):
- self.data_args.use_bilateral_predicate_marker = True
- if "append_verb_form" not in vars(self.data_args):
- self.data_args.append_verb_form = True
- self._update_config(**kwargs)
-
- def _update_config(self, **kwargs):
- " Update self.model.config with initialization parameters and necessary defaults. "
- # set default values that will always override model.config, but can be overridden by __init__ kwargs
- kwargs["max_length"] = kwargs.get("max_length", 80)
- # override model.config with kwargs
- for k,v in kwargs.items():
- self.model.config.__dict__[k] = v
-
- def _sanitize_parameters(self, **kwargs):
- preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {}
- if "predicate_marker" in kwargs:
- preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"]
- if "predicate_type" in kwargs:
- preprocess_kwargs["predicate_type"] = kwargs["predicate_type"]
- if "verb_form" in kwargs:
- preprocess_kwargs["verb_form"] = kwargs["verb_form"]
- return preprocess_kwargs, forward_kwargs, postprocess_kwargs
-
- def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None):
- # Here, inputs is a string or list of strings; apply string preprocessing
- if isinstance(inputs, str):
- processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form)
- elif hasattr(inputs, "__iter__"):
- processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs]
- else:
- raise ValueError("inputs must be str or Iterable[str]")
- # Now pass to super.preprocess for tokenization
- return super().preprocess(processed_inputs)
-
- def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str:
- sent_tokens = seq.split(" ")
- assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word"
- predicate_idx = sent_tokens.index(predicate_marker)
- sent_tokens.remove(predicate_marker)
- sentence_before_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx)])
- predicate = sent_tokens[predicate_idx]
- sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))])
-
- if self.data_args.predicate_marker_type == "generic":
- predicate_marker = self.special_tokens.predicate_generic_marker
- # In case we want a special marker for each predicate type:
- elif self.data_args.predicate_marker_type == "pred_type":
- assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) or when applying __call__(...) on it"
- assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'"
- predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker ,
- "nominal": self.special_tokens.predicate_nominalization_marker
- }[predicate_type]
-
- if self.data_args.use_bilateral_predicate_marker:
- seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}"
- else:
- seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}"
-
- # embed also verb_form
- if self.data_args.append_verb_form and verb_form is None:
- raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)")
- elif self.data_args.append_verb_form:
- seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} "
- else:
- seq = f"{seq} "
-
- # append source prefix (for t5 models)
- prefix = self._get_source_prefix(predicate_type)
-
- return prefix + seq
-
- def _get_source_prefix(self, predicate_type: Optional[str]):
- if not self.is_t5_model or self.data_args.source_prefix is None:
- return ''
- if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x
- return self.data_args.source_prefix
- if self.data_args.source_prefix == "":
- if predicate_type is None:
- raise ValueError("source_prefix is '' but input no `predicate_type`.")
- else:
- return f"Generate QAs for {predicate_type} QASRL: "
-
- def _forward(self, *args, **kwargs):
- outputs = super()._forward(*args, **kwargs)
- return outputs
-
-
- def postprocess(self, model_outputs):
- output_seq = self.tokenizer.decode(
- model_outputs["output_ids"].squeeze(),
- skip_special_tokens=False,
- clean_up_tokenization_spaces=False,
- )
- output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip()
- qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs)
- qas = [self._postrocess_qa(qa_subseq) for qa_subseq in qa_subseqs]
- return {"generated_text": output_seq,
- "QAs": qas}
-
- def _postrocess_qa(self, seq: str) -> Optional[dict]:
- # split question and answers
- if self.special_tokens.separator_output_question_answer in seq:
- question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2]
- else:
- print("invalid format: no separator between question and answer found...")
- return None
- # question, answer = seq, '' # Or: backoff to only question
- # skip "_" slots in questions
- question = ' '.join(t for t in question.split(' ') if t != '_')
- answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)]
- return {"question": question, "answers": answers}
-
-
-if __name__ == "__main__":
- pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline")
- res1 = pipe("The student was interested in Luke 's research about sea animals .", verb_form="research", predicate_type="nominal")
- res2 = pipe(["The doctor was interested in Luke 's treatment .",
- "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10)
- res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal")
- print(res1)
- print(res2)
- print(res3)
-
\ No newline at end of file
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/encodec.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/encodec.py
deleted file mode 100644
index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/encodec.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from abc import ABC, abstractmethod
-import typing as tp
-
-from einops import rearrange
-import torch
-from torch import nn
-
-from .. import quantization as qt
-
-
-class CompressionModel(ABC, nn.Module):
-
- @abstractmethod
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- ...
-
- @abstractmethod
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """See `EncodecModel.encode`"""
- ...
-
- @abstractmethod
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """See `EncodecModel.decode`"""
- ...
-
- @property
- @abstractmethod
- def channels(self) -> int:
- ...
-
- @property
- @abstractmethod
- def frame_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def sample_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def cardinality(self) -> int:
- ...
-
- @property
- @abstractmethod
- def num_codebooks(self) -> int:
- ...
-
- @property
- @abstractmethod
- def total_codebooks(self) -> int:
- ...
-
- @abstractmethod
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- ...
-
-
-class EncodecModel(CompressionModel):
- """Encodec model operating on the raw waveform.
-
- Args:
- encoder (nn.Module): Encoder network.
- decoder (nn.Module): Decoder network.
- quantizer (qt.BaseQuantizer): Quantizer network.
- frame_rate (int): Frame rate for the latent representation.
- sample_rate (int): Audio sample rate.
- channels (int): Number of audio channels.
- causal (bool): Whether to use a causal version of the model.
- renormalize (bool): Whether to renormalize the audio before running the model.
- """
- # we need assignment to override the property in the abstract class,
- # I couldn't find a better way...
- frame_rate: int = 0
- sample_rate: int = 0
- channels: int = 0
-
- def __init__(self,
- encoder: nn.Module,
- decoder: nn.Module,
- quantizer: qt.BaseQuantizer,
- frame_rate: int,
- sample_rate: int,
- channels: int,
- causal: bool = False,
- renormalize: bool = False):
- super().__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.quantizer = quantizer
- self.frame_rate = frame_rate
- self.sample_rate = sample_rate
- self.channels = channels
- self.renormalize = renormalize
- self.causal = causal
- if self.causal:
- # we force disabling here to avoid handling linear overlap of segments
- # as supported in original EnCodec codebase.
- assert not self.renormalize, 'Causal model does not support renormalize'
-
- @property
- def total_codebooks(self):
- """Total number of quantizer codebooks available.
- """
- return self.quantizer.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
- """
- return self.quantizer.num_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- self.quantizer.set_num_codebooks(n)
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- return self.quantizer.bins
-
- def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- scale: tp.Optional[torch.Tensor]
- if self.renormalize:
- mono = x.mean(dim=1, keepdim=True)
- volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt()
- scale = 1e-8 + volume
- x = x / scale
- scale = scale.view(-1, 1)
- else:
- scale = None
- return x, scale
-
- def postprocess(self,
- x: torch.Tensor,
- scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor:
- if scale is not None:
- assert self.renormalize
- x = x * scale.view(-1, 1, 1)
- return x
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- assert x.dim() == 3
- length = x.shape[-1]
- x, scale = self.preprocess(x)
-
- emb = self.encoder(x)
- q_res = self.quantizer(emb, self.frame_rate)
- out = self.decoder(q_res.x)
-
- # remove extra padding added by the encoder and decoder
- assert out.shape[-1] >= length, (out.shape[-1], length)
- out = out[..., :length]
-
- q_res.x = self.postprocess(out, scale)
-
- return q_res
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """Encode the given input tensor to quantized representation along with scale parameter.
-
- Args:
- x (torch.Tensor): Float tensor of shape [B, C, T]
-
- Returns:
- codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of:
- codes an int tensor of shape [B, K, T] with K the number of codebooks used and T the timestep.
- scale a float tensor containing the scale for audio renormalization.
- """
- assert x.dim() == 3
- x, scale = self.preprocess(x)
- emb = self.encoder(x)
- codes = self.quantizer.encode(emb)
- return codes, scale
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """Decode the given codes to a reconstructed representation, using the scale to perform
- audio denormalization if needed.
-
- Args:
- codes (torch.Tensor): Int tensor of shape [B, K, T]
- scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value.
-
- Returns:
- out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio.
- """
- emb = self.quantizer.decode(codes)
- out = self.decoder(emb)
- out = self.postprocess(out, scale)
- # out contains extra padding added by the encoder and decoder
- return out
-
-
-class FlattenedCompressionModel(CompressionModel):
- """Wraps a CompressionModel and flatten its codebooks, e.g.
- instead of returning [B, K, T], return [B, S, T * (K // S)] with
- S the number of codebooks per step, and `K // S` the number of 'virtual steps'
- for each real time step.
-
- Args:
- model (CompressionModel): compression model to wrap.
- codebooks_per_step (int): number of codebooks to keep per step,
- this must divide the number of codebooks provided by the wrapped model.
- extend_cardinality (bool): if True, each virtual step gets its own range of values.
- For instance, with codebooks_per_step = 1 and codebooks of cardinality N,
- the first codebook uses the range [0, N - 1], the second [N, 2N - 1], etc.
- On decoding, this can lead to potentially invalid sequences.
- Any invalid entry will be silently remapped to the proper range
- with a modulo.
- """
- def __init__(self, model: CompressionModel, codebooks_per_step: int = 1,
- extend_cardinality: bool = True):
- super().__init__()
- self.model = model
- self.codebooks_per_step = codebooks_per_step
- self.extend_cardinality = extend_cardinality
-
- @property
- def total_codebooks(self):
- return self.model.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
-
- ..Warning:: this reports the number of codebooks after the flattening
- of the codebooks!
- """
- assert self.model.num_codebooks % self.codebooks_per_step == 0
- return self.codebooks_per_step
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
-
- ..Warning:: this sets the number of codebooks **before** the flattening
- of the codebooks.
- """
- assert n % self.codebooks_per_step == 0
- self.model.set_num_codebooks(n)
-
- @property
- def num_virtual_steps(self) -> int:
- """Return the number of virtual steps, e.g. one real step
- will be split into that many steps.
- """
- return self.model.num_codebooks // self.codebooks_per_step
-
- @property
- def frame_rate(self) -> int:
- return self.model.frame_rate * self.num_virtual_steps
-
- @property
- def sample_rate(self) -> int:
- return self.model.sample_rate
-
- @property
- def channels(self) -> int:
- return self.model.channels
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- if self.extend_cardinality:
- return self.model.cardinality * self.num_virtual_steps
- else:
- return self.model.cardinality
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- raise NotImplementedError("Not supported, use encode and decode.")
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- indices, scales = self.model.encode(x)
- B, K, T = indices.shape
- indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step)
- if self.extend_cardinality:
- for virtual_step in range(1, self.num_virtual_steps):
- indices[..., virtual_step] += self.model.cardinality * virtual_step
- indices = rearrange(indices, 'b k t v -> b k (t v)')
- return (indices, scales)
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- B, K, T = codes.shape
- assert T % self.num_virtual_steps == 0
- codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps)
- # We silently ignore potential errors from the LM when
- # using extend_cardinality.
- codes = codes % self.model.cardinality
- return self.model.decode(codes, scale)
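
The flattening performed by `encode`/`decode` above is easier to see on a toy tensor. The sketch below (illustrative only; the shapes, cardinality and `codebooks_per_step` values are made up) reproduces the same `rearrange` pattern and the `extend_cardinality` offsets outside of the class.

```python
import torch
from einops import rearrange

B, K, T, N = 2, 4, 5, 1024            # batch, codebooks, timesteps, per-codebook cardinality
codebooks_per_step = 1                # S in the class docstring
num_virtual_steps = K // codebooks_per_step

codes = torch.randint(0, N, (B, K, T))
flat = rearrange(codes, 'b (k v) t -> b k t v', k=codebooks_per_step)
for v in range(1, num_virtual_steps):
    flat[..., v] += N * v             # extend_cardinality: each virtual step gets its own value range
flat = rearrange(flat, 'b k t v -> b k (t v)')
print(flat.shape)                     # torch.Size([2, 1, 20]) == [B, S, T * (K // S)]
```
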
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/typing.py b/spaces/AchyuthGamer/OpenGPT/g4f/typing.py
deleted file mode 100644
index cfddf4a82a0cb60f455fd2e775ea9c22132cb7b8..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/typing.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import sys
-from typing import Any, AsyncGenerator, Generator, NewType, Tuple, Union, List, Dict
-
-if sys.version_info >= (3, 8):
- from typing import TypedDict
-else:
- from typing_extensions import TypedDict
-
-SHA256 = NewType('sha_256_hash', str)
-CreateResult = Generator[str, None, None]
-AsyncResult = AsyncGenerator[str, None]
-Messages = List[Dict[str, str]]
-
-__all__ = [
- 'Any',
- 'AsyncGenerator',
- 'Generator',
- 'Tuple',
- 'TypedDict',
- 'SHA256',
- 'CreateResult',
-]
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Buttons.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Buttons.js
deleted file mode 100644
index e032cd4b66f11cc4f842264a575ebb6e1746a6d6..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Buttons.js
+++ /dev/null
@@ -1,88 +0,0 @@
-import Sizer from '../sizer/Sizer.js';
-import AddChildMethods from './AddChildMethods.js';
-import RemoveChildMethods from './RemoveChildMethods.js';
-import ButtonGroup from '../utils/buttongroup/ButtonGroup.js';
-import ButtonMethods from '../utils/buttongroup/ButtonMethods.js';
-import ButtonStateMethods from '../utils/buttongroup/ButtonStateMethods.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-class Buttons extends Sizer {
- constructor(scene, config) {
- if (config === undefined) {
- config = {};
- }
-
- var buttonSpace = config.space;
- if (typeof (buttonSpace) === 'number') {
- config.space = { item: buttonSpace };
- }
-
- // Create
- super(scene, config);
- this.type = 'rexButtons';
- this.buttonGroup = new ButtonGroup({
- parent: this,
- eventEmitter: GetValue(config, 'eventEmitter', this),
- groupName: GetValue(config, 'groupName', undefined),
- clickConfig: GetValue(config, 'click', undefined)
- })
- .setButtonsType(config)
-
- // Add elements
- var background = GetValue(config, 'background', undefined);
- var buttons = GetValue(config, 'buttons', undefined);
-
- // Buttons properties
- this.buttonsExpand = GetValue(config, 'expand', false);
- this.buttonsAlign = GetValue(config, 'align', undefined); // undefined/left/top: no space
-
- if (background) {
- this.addBackground(background);
- }
-
- if (buttons) {
- this.addButtons(buttons);
- }
-
- this.addChildrenMap('background', background);
- this.addChildrenMap('buttons', this.buttonGroup.buttons);
- }
-
- destroy(fromScene) {
- // This Game Object has already been destroyed
- if (!this.scene || this.ignoreDestroy) {
- return;
- }
-
- super.destroy(fromScene);
- this.buttonGroup.destroy();
- this.buttonGroup = undefined;
- }
-
- get buttons() {
- return this.buttonGroup.buttons;
- }
-
- get groupName() {
- return this.buttonGroup.groupName;
- }
-
- set groupName(value) {
- this.buttonGroup.groupName = value;
- }
-
- get eventEmitter() {
- return this.buttonGroup.eventEmitter;
- }
-}
-
-Object.assign(
- Buttons.prototype,
- AddChildMethods,
- RemoveChildMethods,
- ButtonMethods,
- ButtonStateMethods
-);
-
-export default Buttons;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/methods/ChildTransition.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/methods/ChildTransition.js
deleted file mode 100644
index 33cdd363322e7856bf9769c83459a54e5f356b9b..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/methods/ChildTransition.js
+++ /dev/null
@@ -1,24 +0,0 @@
-import OpenCloseTransition from '../../../../plugins/behaviors/openclosetransition/OpenCloseTransition.js';
-
-class Transition extends OpenCloseTransition {
- constructor(gameObject, config) {
- if (config === undefined) {
- config = {};
- }
- config.destroy = false;
- super(gameObject, config);
- }
-
- onOpen() {
- this.emit('open', this.parent, this);
- super.onOpen();
- }
-
- onClose() {
- this.emit('close', this.parent, this);
- super.onClose();
- }
-
-}
-
-export default Transition;
\ No newline at end of file
diff --git a/spaces/Aluxes/anime-remove-background/README.md b/spaces/Aluxes/anime-remove-background/README.md
deleted file mode 100644
index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000
--- a/spaces/Aluxes/anime-remove-background/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Anime Remove Background
-emoji: 🪄🖼️
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: skytnt/anime-remove-background
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/__init__.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/__init__.py
deleted file mode 100644
index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .fused_act import FusedLeakyReLU, fused_leaky_relu
-from .upfirdn2d import upfirdn2d
diff --git a/spaces/Amrrs/textsummarizer/app.py b/spaces/Amrrs/textsummarizer/app.py
deleted file mode 100644
index 888272584ac26953390722f159578a000743df65..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/textsummarizer/app.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import gradio as gr
-import transformers
-from transformers import BartTokenizer, BartForConditionalGeneration
-
-model_name = 'facebook/bart-large-cnn'
-tokenizer = BartTokenizer.from_pretrained(model_name)
-model = BartForConditionalGeneration.from_pretrained(model_name)
-
-def summarize(inp):
- inp = inp.replace('\n','')
- inp = tokenizer.encode(inp, return_tensors='pt', max_length=1024)
- summary_ids = model.generate(inp, num_beams=4, max_length=150, early_stopping=True)
- summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
- return summary
-
-gr.Interface(fn=summarize, inputs=gr.inputs.Textbox(lines=7, label="Input Text"), outputs="text").launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/vq_diffusion.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/vq_diffusion.md
deleted file mode 100644
index 5441d1d579ff2209b332243b3a086b057d1f4af4..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/vq_diffusion.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-# VQ Diffusion
-
-[Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://huggingface.co/papers/2111.14822) is by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo.
-
-The abstract from the paper is:
-
-*We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality.*
-
-The original codebase can be found at [microsoft/VQ-Diffusion](https://github.com/microsoft/VQ-Diffusion).
-
-
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-
-
-## VQDiffusionPipeline
-[[autodoc]] VQDiffusionPipeline
- - all
- - __call__
-
-## ImagePipelineOutput
-[[autodoc]] pipelines.ImagePipelineOutput
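
For orientation, a typical way to run this pipeline looks like the sketch below; the checkpoint id `microsoft/vq-diffusion-ithq` and the prompt are illustrative assumptions, not something stated on this page.

```python
from diffusers import VQDiffusionPipeline

pipeline = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipeline = pipeline.to("cuda")

image = pipeline("teddy bear playing in the pool").images[0]
image.save("teddy_bear.png")
```
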
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py
deleted file mode 100644
index 9b4ee67a7f5dbf8384eaedc0ede322284a413edd..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py
+++ /dev/null
@@ -1,622 +0,0 @@
-# Copyright 2023 TSAIL Team and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver
-
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import flax
-import jax
-import jax.numpy as jnp
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from .scheduling_utils_flax import (
- CommonSchedulerState,
- FlaxKarrasDiffusionSchedulers,
- FlaxSchedulerMixin,
- FlaxSchedulerOutput,
- add_noise_common,
-)
-
-
-@flax.struct.dataclass
-class DPMSolverMultistepSchedulerState:
- common: CommonSchedulerState
- alpha_t: jnp.ndarray
- sigma_t: jnp.ndarray
- lambda_t: jnp.ndarray
-
- # setable values
- init_noise_sigma: jnp.ndarray
- timesteps: jnp.ndarray
- num_inference_steps: Optional[int] = None
-
- # running values
- model_outputs: Optional[jnp.ndarray] = None
- lower_order_nums: Optional[jnp.int32] = None
- prev_timestep: Optional[jnp.int32] = None
- cur_sample: Optional[jnp.ndarray] = None
-
- @classmethod
- def create(
- cls,
- common: CommonSchedulerState,
- alpha_t: jnp.ndarray,
- sigma_t: jnp.ndarray,
- lambda_t: jnp.ndarray,
- init_noise_sigma: jnp.ndarray,
- timesteps: jnp.ndarray,
- ):
- return cls(
- common=common,
- alpha_t=alpha_t,
- sigma_t=sigma_t,
- lambda_t=lambda_t,
- init_noise_sigma=init_noise_sigma,
- timesteps=timesteps,
- )
-
-
-@dataclass
-class FlaxDPMSolverMultistepSchedulerOutput(FlaxSchedulerOutput):
- state: DPMSolverMultistepSchedulerState
-
-
-class FlaxDPMSolverMultistepScheduler(FlaxSchedulerMixin, ConfigMixin):
- """
- DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with
- the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality
- samples, and it can generate quite good samples even in only 10 steps.
-
- For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095
-
- Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We
- recommend using `solver_order=2` for guided sampling, and `solver_order=3` for unconditional sampling.
-
- We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space
- diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic
- thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as
- stable-diffusion).
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- solver_order (`int`, default `2`):
- the order of DPM-Solver; can be `1`, `2` or `3`. We recommend using `solver_order=2` for guided
- sampling, and `solver_order=3` for unconditional sampling.
- prediction_type (`str`, default `epsilon`):
- indicates whether the model predicts the noise (epsilon), or the data / `x0`. One of `epsilon`, `sample`,
- or `v-prediction`.
- thresholding (`bool`, default `False`):
- whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487).
- For pixel-space diffusion models, you can set both `algorithm_type=dpmsolver++` and `thresholding=True` to
- use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion
- models (such as stable-diffusion).
- dynamic_thresholding_ratio (`float`, default `0.995`):
- the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen
- (https://arxiv.org/abs/2205.11487).
- sample_max_value (`float`, default `1.0`):
- the threshold value for dynamic thresholding. Valid only when `thresholding=True` and
- `algorithm_type="dpmsolver++`.
- algorithm_type (`str`, default `dpmsolver++`):
- the algorithm type for the solver. Either `dpmsolver` or `dpmsolver++`. The `dpmsolver` type implements the
- algorithms in https://arxiv.org/abs/2206.00927, and the `dpmsolver++` type implements the algorithms in
- https://arxiv.org/abs/2211.01095. We recommend using `dpmsolver++` with `solver_order=2` for guided
- sampling (e.g. stable-diffusion).
- solver_type (`str`, default `midpoint`):
- the solver type for the second-order solver. Either `midpoint` or `heun`. The solver type slightly affects
- the sample quality, especially for a small number of steps. We empirically find that `midpoint` solvers are
- slightly better, so we recommend using the `midpoint` type.
- lower_order_final (`bool`, default `True`):
- whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically
- find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10.
- dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`):
- the `dtype` used for params and computation.
- """
-
- _compatibles = [e.name for e in FlaxKarrasDiffusionSchedulers]
-
- dtype: jnp.dtype
-
- @property
- def has_state(self):
- return True
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[jnp.ndarray] = None,
- solver_order: int = 2,
- prediction_type: str = "epsilon",
- thresholding: bool = False,
- dynamic_thresholding_ratio: float = 0.995,
- sample_max_value: float = 1.0,
- algorithm_type: str = "dpmsolver++",
- solver_type: str = "midpoint",
- lower_order_final: bool = True,
- dtype: jnp.dtype = jnp.float32,
- ):
- self.dtype = dtype
-
- def create_state(self, common: Optional[CommonSchedulerState] = None) -> DPMSolverMultistepSchedulerState:
- if common is None:
- common = CommonSchedulerState.create(self)
-
- # Currently we only support VP-type noise schedule
- alpha_t = jnp.sqrt(common.alphas_cumprod)
- sigma_t = jnp.sqrt(1 - common.alphas_cumprod)
- lambda_t = jnp.log(alpha_t) - jnp.log(sigma_t)
-
- # settings for DPM-Solver
- if self.config.algorithm_type not in ["dpmsolver", "dpmsolver++"]:
- raise NotImplementedError(f"{self.config.algorithm_type} does is not implemented for {self.__class__}")
- if self.config.solver_type not in ["midpoint", "heun"]:
- raise NotImplementedError(f"{self.config.solver_type} does is not implemented for {self.__class__}")
-
- # standard deviation of the initial noise distribution
- init_noise_sigma = jnp.array(1.0, dtype=self.dtype)
-
- timesteps = jnp.arange(0, self.config.num_train_timesteps).round()[::-1]
-
- return DPMSolverMultistepSchedulerState.create(
- common=common,
- alpha_t=alpha_t,
- sigma_t=sigma_t,
- lambda_t=lambda_t,
- init_noise_sigma=init_noise_sigma,
- timesteps=timesteps,
- )
-
- def set_timesteps(
- self, state: DPMSolverMultistepSchedulerState, num_inference_steps: int, shape: Tuple
- ) -> DPMSolverMultistepSchedulerState:
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- state (`DPMSolverMultistepSchedulerState`):
- the `FlaxDPMSolverMultistepScheduler` state data class instance.
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- shape (`Tuple`):
- the shape of the samples to be generated.
- """
-
- timesteps = (
- jnp.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps + 1)
- .round()[::-1][:-1]
- .astype(jnp.int32)
- )
-
- # initial running values
-
- model_outputs = jnp.zeros((self.config.solver_order,) + shape, dtype=self.dtype)
- lower_order_nums = jnp.int32(0)
- prev_timestep = jnp.int32(-1)
- cur_sample = jnp.zeros(shape, dtype=self.dtype)
-
- return state.replace(
- num_inference_steps=num_inference_steps,
- timesteps=timesteps,
- model_outputs=model_outputs,
- lower_order_nums=lower_order_nums,
- prev_timestep=prev_timestep,
- cur_sample=cur_sample,
- )
-
- def convert_model_output(
- self,
- state: DPMSolverMultistepSchedulerState,
- model_output: jnp.ndarray,
- timestep: int,
- sample: jnp.ndarray,
- ) -> jnp.ndarray:
- """
- Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs.
-
- DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to
- discretize an integral of the data prediction model. So we need to first convert the model output to the
- corresponding type to match the algorithm.
-
- Note that the algorithm type and the model type are decoupled. That is to say, we can use either DPM-Solver or
- DPM-Solver++ for both noise prediction models and data prediction models.
-
- Args:
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the converted model output.
- """
- # DPM-Solver++ needs to solve an integral of the data prediction model.
- if self.config.algorithm_type == "dpmsolver++":
- if self.config.prediction_type == "epsilon":
- alpha_t, sigma_t = state.alpha_t[timestep], state.sigma_t[timestep]
- x0_pred = (sample - sigma_t * model_output) / alpha_t
- elif self.config.prediction_type == "sample":
- x0_pred = model_output
- elif self.config.prediction_type == "v_prediction":
- alpha_t, sigma_t = state.alpha_t[timestep], state.sigma_t[timestep]
- x0_pred = alpha_t * sample - sigma_t * model_output
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, "
- " or `v_prediction` for the FlaxDPMSolverMultistepScheduler."
- )
-
- if self.config.thresholding:
- # Dynamic thresholding in https://arxiv.org/abs/2205.11487
- dynamic_max_val = jnp.percentile(
- jnp.abs(x0_pred), self.config.dynamic_thresholding_ratio, axis=tuple(range(1, x0_pred.ndim))
- )
- dynamic_max_val = jnp.maximum(
- dynamic_max_val, self.config.sample_max_value * jnp.ones_like(dynamic_max_val)
- )
- x0_pred = jnp.clip(x0_pred, -dynamic_max_val, dynamic_max_val) / dynamic_max_val
- return x0_pred
- # DPM-Solver needs to solve an integral of the noise prediction model.
- elif self.config.algorithm_type == "dpmsolver":
- if self.config.prediction_type == "epsilon":
- return model_output
- elif self.config.prediction_type == "sample":
- alpha_t, sigma_t = state.alpha_t[timestep], state.sigma_t[timestep]
- epsilon = (sample - alpha_t * model_output) / sigma_t
- return epsilon
- elif self.config.prediction_type == "v_prediction":
- alpha_t, sigma_t = state.alpha_t[timestep], state.sigma_t[timestep]
- epsilon = alpha_t * model_output + sigma_t * sample
- return epsilon
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, "
- " or `v_prediction` for the FlaxDPMSolverMultistepScheduler."
- )
-
- def dpm_solver_first_order_update(
- self,
- state: DPMSolverMultistepSchedulerState,
- model_output: jnp.ndarray,
- timestep: int,
- prev_timestep: int,
- sample: jnp.ndarray,
- ) -> jnp.ndarray:
- """
- One step for the first-order DPM-Solver (equivalent to DDIM).
-
- See https://arxiv.org/abs/2206.00927 for the detailed derivation.
-
- Args:
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the sample tensor at the previous timestep.
- """
- t, s0 = prev_timestep, timestep
- m0 = model_output
- lambda_t, lambda_s = state.lambda_t[t], state.lambda_t[s0]
- alpha_t, alpha_s = state.alpha_t[t], state.alpha_t[s0]
- sigma_t, sigma_s = state.sigma_t[t], state.sigma_t[s0]
- h = lambda_t - lambda_s
- if self.config.algorithm_type == "dpmsolver++":
- x_t = (sigma_t / sigma_s) * sample - (alpha_t * (jnp.exp(-h) - 1.0)) * m0
- elif self.config.algorithm_type == "dpmsolver":
- x_t = (alpha_t / alpha_s) * sample - (sigma_t * (jnp.exp(h) - 1.0)) * m0
- return x_t
-
- def multistep_dpm_solver_second_order_update(
- self,
- state: DPMSolverMultistepSchedulerState,
- model_output_list: jnp.ndarray,
- timestep_list: List[int],
- prev_timestep: int,
- sample: jnp.ndarray,
- ) -> jnp.ndarray:
- """
- One step for the second-order multistep DPM-Solver.
-
- Args:
- model_output_list (`List[jnp.ndarray]`):
- direct outputs from learned diffusion model at current and latter timesteps.
- timestep_list (`List[int]`): current and latter discrete timesteps in the diffusion chain.
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the sample tensor at the previous timestep.
- """
- t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2]
- m0, m1 = model_output_list[-1], model_output_list[-2]
- lambda_t, lambda_s0, lambda_s1 = state.lambda_t[t], state.lambda_t[s0], state.lambda_t[s1]
- alpha_t, alpha_s0 = state.alpha_t[t], state.alpha_t[s0]
- sigma_t, sigma_s0 = state.sigma_t[t], state.sigma_t[s0]
- h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1
- r0 = h_0 / h
- D0, D1 = m0, (1.0 / r0) * (m0 - m1)
- if self.config.algorithm_type == "dpmsolver++":
- # See https://arxiv.org/abs/2211.01095 for detailed derivations
- if self.config.solver_type == "midpoint":
- x_t = (
- (sigma_t / sigma_s0) * sample
- - (alpha_t * (jnp.exp(-h) - 1.0)) * D0
- - 0.5 * (alpha_t * (jnp.exp(-h) - 1.0)) * D1
- )
- elif self.config.solver_type == "heun":
- x_t = (
- (sigma_t / sigma_s0) * sample
- - (alpha_t * (jnp.exp(-h) - 1.0)) * D0
- + (alpha_t * ((jnp.exp(-h) - 1.0) / h + 1.0)) * D1
- )
- elif self.config.algorithm_type == "dpmsolver":
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
- if self.config.solver_type == "midpoint":
- x_t = (
- (alpha_t / alpha_s0) * sample
- - (sigma_t * (jnp.exp(h) - 1.0)) * D0
- - 0.5 * (sigma_t * (jnp.exp(h) - 1.0)) * D1
- )
- elif self.config.solver_type == "heun":
- x_t = (
- (alpha_t / alpha_s0) * sample
- - (sigma_t * (jnp.exp(h) - 1.0)) * D0
- - (sigma_t * ((jnp.exp(h) - 1.0) / h - 1.0)) * D1
- )
- return x_t
-
- def multistep_dpm_solver_third_order_update(
- self,
- state: DPMSolverMultistepSchedulerState,
- model_output_list: jnp.ndarray,
- timestep_list: List[int],
- prev_timestep: int,
- sample: jnp.ndarray,
- ) -> jnp.ndarray:
- """
- One step for the third-order multistep DPM-Solver.
-
- Args:
- model_output_list (`List[jnp.ndarray]`):
- direct outputs from learned diffusion model at current and latter timesteps.
- timestep (`int`): current and latter discrete timestep in the diffusion chain.
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the sample tensor at the previous timestep.
- """
- t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3]
- m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3]
- lambda_t, lambda_s0, lambda_s1, lambda_s2 = (
- state.lambda_t[t],
- state.lambda_t[s0],
- state.lambda_t[s1],
- state.lambda_t[s2],
- )
- alpha_t, alpha_s0 = state.alpha_t[t], state.alpha_t[s0]
- sigma_t, sigma_s0 = state.sigma_t[t], state.sigma_t[s0]
- h, h_0, h_1 = lambda_t - lambda_s0, lambda_s0 - lambda_s1, lambda_s1 - lambda_s2
- r0, r1 = h_0 / h, h_1 / h
- D0 = m0
- D1_0, D1_1 = (1.0 / r0) * (m0 - m1), (1.0 / r1) * (m1 - m2)
- D1 = D1_0 + (r0 / (r0 + r1)) * (D1_0 - D1_1)
- D2 = (1.0 / (r0 + r1)) * (D1_0 - D1_1)
- if self.config.algorithm_type == "dpmsolver++":
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
- x_t = (
- (sigma_t / sigma_s0) * sample
- - (alpha_t * (jnp.exp(-h) - 1.0)) * D0
- + (alpha_t * ((jnp.exp(-h) - 1.0) / h + 1.0)) * D1
- - (alpha_t * ((jnp.exp(-h) - 1.0 + h) / h**2 - 0.5)) * D2
- )
- elif self.config.algorithm_type == "dpmsolver":
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
- x_t = (
- (alpha_t / alpha_s0) * sample
- - (sigma_t * (jnp.exp(h) - 1.0)) * D0
- - (sigma_t * ((jnp.exp(h) - 1.0) / h - 1.0)) * D1
- - (sigma_t * ((jnp.exp(h) - 1.0 - h) / h**2 - 0.5)) * D2
- )
- return x_t
-
- def step(
- self,
- state: DPMSolverMultistepSchedulerState,
- model_output: jnp.ndarray,
- timestep: int,
- sample: jnp.ndarray,
- return_dict: bool = True,
- ) -> Union[FlaxDPMSolverMultistepSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by DPM-Solver. Core function to propagate the diffusion process
- from the learned model outputs (most often the predicted noise).
-
- Args:
- state (`DPMSolverMultistepSchedulerState`):
- the `FlaxDPMSolverMultistepScheduler` state data class instance.
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
- return_dict (`bool`): option for returning tuple rather than FlaxDPMSolverMultistepSchedulerOutput class
-
- Returns:
- [`FlaxDPMSolverMultistepSchedulerOutput`] or `tuple`: [`FlaxDPMSolverMultistepSchedulerOutput`] if
- `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- if state.num_inference_steps is None:
- raise ValueError(
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
- )
-
- (step_index,) = jnp.where(state.timesteps == timestep, size=1)
- step_index = step_index[0]
-
- prev_timestep = jax.lax.select(step_index == len(state.timesteps) - 1, 0, state.timesteps[step_index + 1])
-
- model_output = self.convert_model_output(state, model_output, timestep, sample)
-
- model_outputs_new = jnp.roll(state.model_outputs, -1, axis=0)
- model_outputs_new = model_outputs_new.at[-1].set(model_output)
- state = state.replace(
- model_outputs=model_outputs_new,
- prev_timestep=prev_timestep,
- cur_sample=sample,
- )
-
- def step_1(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- return self.dpm_solver_first_order_update(
- state,
- state.model_outputs[-1],
- state.timesteps[step_index],
- state.prev_timestep,
- state.cur_sample,
- )
-
- def step_23(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- def step_2(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- timestep_list = jnp.array([state.timesteps[step_index - 1], state.timesteps[step_index]])
- return self.multistep_dpm_solver_second_order_update(
- state,
- state.model_outputs,
- timestep_list,
- state.prev_timestep,
- state.cur_sample,
- )
-
- def step_3(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- timestep_list = jnp.array(
- [
- state.timesteps[step_index - 2],
- state.timesteps[step_index - 1],
- state.timesteps[step_index],
- ]
- )
- return self.multistep_dpm_solver_third_order_update(
- state,
- state.model_outputs,
- timestep_list,
- state.prev_timestep,
- state.cur_sample,
- )
-
- step_2_output = step_2(state)
- step_3_output = step_3(state)
-
- if self.config.solver_order == 2:
- return step_2_output
- elif self.config.lower_order_final and len(state.timesteps) < 15:
- return jax.lax.select(
- state.lower_order_nums < 2,
- step_2_output,
- jax.lax.select(
- step_index == len(state.timesteps) - 2,
- step_2_output,
- step_3_output,
- ),
- )
- else:
- return jax.lax.select(
- state.lower_order_nums < 2,
- step_2_output,
- step_3_output,
- )
-
- step_1_output = step_1(state)
- step_23_output = step_23(state)
-
- if self.config.solver_order == 1:
- prev_sample = step_1_output
-
- elif self.config.lower_order_final and len(state.timesteps) < 15:
- prev_sample = jax.lax.select(
- state.lower_order_nums < 1,
- step_1_output,
- jax.lax.select(
- step_index == len(state.timesteps) - 1,
- step_1_output,
- step_23_output,
- ),
- )
-
- else:
- prev_sample = jax.lax.select(
- state.lower_order_nums < 1,
- step_1_output,
- step_23_output,
- )
-
- state = state.replace(
- lower_order_nums=jnp.minimum(state.lower_order_nums + 1, self.config.solver_order),
- )
-
- if not return_dict:
- return (prev_sample, state)
-
- return FlaxDPMSolverMultistepSchedulerOutput(prev_sample=prev_sample, state=state)
-
- def scale_model_input(
- self, state: DPMSolverMultistepSchedulerState, sample: jnp.ndarray, timestep: Optional[int] = None
- ) -> jnp.ndarray:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- state (`DPMSolverMultistepSchedulerState`):
- the `FlaxDPMSolverMultistepScheduler` state data class instance.
- sample (`jnp.ndarray`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `jnp.ndarray`: scaled input sample
- """
- return sample
-
- def add_noise(
- self,
- state: DPMSolverMultistepSchedulerState,
- original_samples: jnp.ndarray,
- noise: jnp.ndarray,
- timesteps: jnp.ndarray,
- ) -> jnp.ndarray:
- return add_noise_common(state.common, original_samples, noise, timesteps)
-
- def __len__(self):
- return self.config.num_train_timesteps
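
A rough sketch of how the functional API above fits together (illustrative only: the latent shape is arbitrary, the zero `model_output` merely stands in for a real diffusion model's noise prediction, and the import assumes the upstream `diffusers` package exposes this class).

```python
import jax
import jax.numpy as jnp
from diffusers import FlaxDPMSolverMultistepScheduler

scheduler = FlaxDPMSolverMultistepScheduler(algorithm_type="dpmsolver++", solver_order=2)
state = scheduler.create_state()

sample_shape = (1, 4, 64, 64)                     # hypothetical latent shape
state = scheduler.set_timesteps(state, num_inference_steps=25, shape=sample_shape)

sample = jax.random.normal(jax.random.PRNGKey(0), sample_shape) * state.init_noise_sigma
for t in state.timesteps:
    model_output = jnp.zeros(sample_shape)        # placeholder for the model's epsilon prediction
    sample, state = scheduler.step(state, model_output, t, sample, return_dict=False)
```
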
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_sde_vp.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_sde_vp.py
deleted file mode 100644
index 6e2ead90edb57cd1eb1d270695e222d404064180..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_sde_vp.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright 2023 Google Brain and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch
-
-import math
-from typing import Union
-
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import randn_tensor
-from .scheduling_utils import SchedulerMixin
-
-
-class ScoreSdeVpScheduler(SchedulerMixin, ConfigMixin):
- """
- The variance preserving stochastic differential equation (SDE) scheduler.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more information, see the original paper: https://arxiv.org/abs/2011.13456
-
- UNDER CONSTRUCTION
-
- """
-
- order = 1
-
- @register_to_config
- def __init__(self, num_train_timesteps=2000, beta_min=0.1, beta_max=20, sampling_eps=1e-3):
- self.sigmas = None
- self.discrete_sigmas = None
- self.timesteps = None
-
- def set_timesteps(self, num_inference_steps, device: Union[str, torch.device] = None):
- self.timesteps = torch.linspace(1, self.config.sampling_eps, num_inference_steps, device=device)
-
- def step_pred(self, score, x, t, generator=None):
- if self.timesteps is None:
- raise ValueError(
- "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler"
- )
-
- # TODO(Patrick) better comments + non-PyTorch
- # postprocess model score
- log_mean_coeff = (
- -0.25 * t**2 * (self.config.beta_max - self.config.beta_min) - 0.5 * t * self.config.beta_min
- )
- std = torch.sqrt(1.0 - torch.exp(2.0 * log_mean_coeff))
- std = std.flatten()
- while len(std.shape) < len(score.shape):
- std = std.unsqueeze(-1)
- score = -score / std
-
- # compute
- dt = -1.0 / len(self.timesteps)
-
- beta_t = self.config.beta_min + t * (self.config.beta_max - self.config.beta_min)
- beta_t = beta_t.flatten()
- while len(beta_t.shape) < len(x.shape):
- beta_t = beta_t.unsqueeze(-1)
- drift = -0.5 * beta_t * x
-
- diffusion = torch.sqrt(beta_t)
- drift = drift - diffusion**2 * score
- x_mean = x + drift * dt
-
- # add noise
- noise = randn_tensor(x.shape, layout=x.layout, generator=generator, device=x.device, dtype=x.dtype)
- x = x_mean + diffusion * math.sqrt(-dt) * noise
-
- return x, x_mean
-
- def __len__(self):
- return self.config.num_train_timesteps
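
Since the docstring above is marked "UNDER CONSTRUCTION", here is a brief summary of the math that `step_pred` discretizes (my own summary of the VP-SDE from https://arxiv.org/abs/2011.13456, written to match the code, not text from the original file):

```latex
\text{Forward VP-SDE: } dx = -\tfrac{1}{2}\beta(t)\,x\,dt + \sqrt{\beta(t)}\,dw,
\qquad \beta(t) = \beta_{\min} + t\,(\beta_{\max} - \beta_{\min}).

\text{Reverse-time SDE: } dx = \Big[-\tfrac{1}{2}\beta(t)\,x - \beta(t)\,\nabla_x \log p_t(x)\Big]\,dt + \sqrt{\beta(t)}\,d\bar{w},
\qquad \nabla_x \log p_t(x) \approx -\,\epsilon_\theta(x, t)\,/\,\sigma(t),

\sigma(t) = \sqrt{1 - e^{2\mu(t)}}, \qquad
\mu(t) = -\tfrac{1}{4}t^2(\beta_{\max} - \beta_{\min}) - \tfrac{1}{2}t\,\beta_{\min},

\text{Euler--Maruyama step with } dt = -1/N: \quad
x_{\text{mean}} = x + \Big[-\tfrac{1}{2}\beta(t)\,x - \beta(t)\,s_\theta(x, t)\Big]\,dt, \qquad
x \leftarrow x_{\text{mean}} + \sqrt{\beta(t)}\,\sqrt{-dt}\;z, \quad z \sim \mathcal{N}(0, I).
```
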
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/karras_ve/test_karras_ve.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/karras_ve/test_karras_ve.py
deleted file mode 100644
index 142058bcd7103aa55352a1aa04f75bdf3d2082a2..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/karras_ve/test_karras_ve.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-
-import numpy as np
-import torch
-
-from diffusers import KarrasVePipeline, KarrasVeScheduler, UNet2DModel
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch, slow, torch_device
-
-
-enable_full_determinism()
-
-
-class KarrasVePipelineFastTests(unittest.TestCase):
- @property
- def dummy_uncond_unet(self):
- torch.manual_seed(0)
- model = UNet2DModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=3,
- out_channels=3,
- down_block_types=("DownBlock2D", "AttnDownBlock2D"),
- up_block_types=("AttnUpBlock2D", "UpBlock2D"),
- )
- return model
-
- def test_inference(self):
- unet = self.dummy_uncond_unet
- scheduler = KarrasVeScheduler()
-
- pipe = KarrasVePipeline(unet=unet, scheduler=scheduler)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- generator = torch.manual_seed(0)
- image = pipe(num_inference_steps=2, generator=generator, output_type="numpy").images
-
- generator = torch.manual_seed(0)
- image_from_tuple = pipe(num_inference_steps=2, generator=generator, output_type="numpy", return_dict=False)[0]
-
- image_slice = image[0, -3:, -3:, -1]
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
- assert image.shape == (1, 32, 32, 3)
- expected_slice = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
-
-@slow
-@require_torch
-class KarrasVePipelineIntegrationTests(unittest.TestCase):
- def test_inference(self):
- model_id = "google/ncsnpp-celebahq-256"
- model = UNet2DModel.from_pretrained(model_id)
- scheduler = KarrasVeScheduler()
-
- pipe = KarrasVePipeline(unet=model, scheduler=scheduler)
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- generator = torch.manual_seed(0)
- image = pipe(num_inference_steps=20, generator=generator, output_type="numpy").images
-
- image_slice = image[0, -3:, -3:, -1]
- assert image.shape == (1, 256, 256, 3)
- expected_slice = np.array([0.578, 0.5811, 0.5924, 0.5809, 0.587, 0.5886, 0.5861, 0.5802, 0.586])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/__init__.py
deleted file mode 100644
index 9b18b30a258c32283cbfc03ba01781a19fd993c1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset
-from .cityscapes import CityscapesDataset
-from .coco import CocoDataset
-from .custom import CustomDataset
-from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset,
- RepeatDataset)
-from .deepfashion import DeepFashionDataset
-from .lvis import LVISDataset, LVISV1Dataset, LVISV05Dataset
-from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler
-from .utils import (NumClassCheckHook, get_loading_pipeline,
- replace_ImageToTensor)
-from .voc import VOCDataset
-from .wider_face import WIDERFaceDataset
-from .xml_style import XMLDataset
-
-__all__ = [
- 'CustomDataset', 'XMLDataset', 'CocoDataset', 'DeepFashionDataset',
- 'VOCDataset', 'CityscapesDataset', 'LVISDataset', 'LVISV05Dataset',
- 'LVISV1Dataset', 'GroupSampler', 'DistributedGroupSampler',
- 'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset',
- 'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES',
- 'build_dataset', 'replace_ImageToTensor', 'get_loading_pipeline',
- 'NumClassCheckHook'
-]
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/wider_face.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/wider_face.py
deleted file mode 100644
index 3a13907db87a9986a7d701837259a0b712fc9dca..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/wider_face.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os.path as osp
-import xml.etree.ElementTree as ET
-
-import mmcv
-
-from .builder import DATASETS
-from .xml_style import XMLDataset
-
-
-@DATASETS.register_module()
-class WIDERFaceDataset(XMLDataset):
- """Reader for the WIDER Face dataset in PASCAL VOC format.
-
- Conversion scripts can be found in
- https://github.com/sovrasov/wider-face-pascal-voc-annotations
- """
- CLASSES = ('face', )
-
- def __init__(self, **kwargs):
- super(WIDERFaceDataset, self).__init__(**kwargs)
-
- def load_annotations(self, ann_file):
- """Load annotation from WIDERFace XML style annotation file.
-
- Args:
- ann_file (str): Path of XML file.
-
- Returns:
- list[dict]: Annotation info from XML file.
- """
-
- data_infos = []
- img_ids = mmcv.list_from_file(ann_file)
- for img_id in img_ids:
- filename = f'{img_id}.jpg'
- xml_path = osp.join(self.img_prefix, 'Annotations',
- f'{img_id}.xml')
- tree = ET.parse(xml_path)
- root = tree.getroot()
- size = root.find('size')
- width = int(size.find('width').text)
- height = int(size.find('height').text)
- folder = root.find('folder').text
- data_infos.append(
- dict(
- id=img_id,
- filename=osp.join(folder, filename),
- width=width,
- height=height))
-
- return data_infos
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index fd6897691d3f8f200783fae7bfe231735f25a11b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './dmnet_r50-d8_512x1024_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/logits.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/logits.py
deleted file mode 100644
index 6fc5bf6077997c0e60c63f328a033767799c1022..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/logits.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import torch
-
-from modules import sampler_hijack, shared
-from modules.logging_colors import logger
-from modules.text_generation import generate_reply
-
-global_scores = None
-
-
-def get_next_logits(prompt, state, use_samplers, previous):
- if shared.model is None:
- logger.error("No model is loaded! Select one in the Model tab.")
- return 'Error: No model is loaded! Select one in the Model tab.', previous
-
- is_non_hf_exllamav2 = shared.model.__class__.__name__ == 'Exllamav2Model'
- is_non_hf_exllamav1 = shared.model.__class__.__name__ == 'ExllamaModel'
- is_non_hf_llamacpp = shared.model.__class__.__name__ == 'LlamaCppModel'
-
- if use_samplers:
- if any([is_non_hf_exllamav2, is_non_hf_exllamav1, is_non_hf_llamacpp]):
- logger.error("Sampler hijacking is not supported non-Huggingface loaders.")
- # sampling is all done in c for exllama, so it is really hard to hijack
- # it should be possible to hijack llamacpp sampler by hijacking all their sampling methods,
- # but it is not implemented yet
- return 'Error: Sampler hijacking is not supported non-Huggingface loaders. Please disable the "Use samplers" option.', previous
-
- state['max_new_tokens'] = 1
- state['auto_max_new_tokens'] = False
- for _ in generate_reply(prompt, state):
- pass
-
- scores = sampler_hijack.global_scores[-1]
- else:
- if is_non_hf_exllamav2 or is_non_hf_exllamav1:
- tokens = shared.tokenizer.encode(prompt).cuda()
- scores = shared.model.get_logits(tokens)[-1][-1]
- elif is_non_hf_llamacpp:
- tokens = shared.tokenizer.encode(prompt)
- scores = shared.model.get_logits(tokens)[-1][-1]
- else:
- tokens = shared.tokenizer.encode(prompt, return_tensors='pt').cuda()
- output = shared.model(input_ids=tokens)
- scores = output['logits'][-1][-1]
-
- probs = torch.softmax(scores, dim=-1, dtype=torch.float)
- topk_values, topk_indices = torch.topk(probs, k=50, largest=True, sorted=True)
- topk_values = [f"{float(i):.5f}" for i in topk_values]
- if is_non_hf_exllamav1 or is_non_hf_llamacpp:
- topk_indices = [i.expand((1, 1)) for i in topk_indices]
-
- tokens = [shared.tokenizer.decode(i) for i in topk_indices]
- output = ''
- for row in list(zip(topk_values, tokens)):
- output += f"{row[0]} - {repr(row[1])}\n"
-
- return output, previous
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/simple_tokenizer.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/simple_tokenizer.py
deleted file mode 100644
index 0a66286b7d5019c6e221932a813768038f839c91..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/simple_tokenizer.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import gzip
-import html
-import os
-from functools import lru_cache
-
-import ftfy
-import regex as re
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns a list of utf-8 bytes and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8+n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
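
A quick illustration of what `bytes_to_unicode` above yields (assuming the function as defined; the printed values follow directly from its construction):

```python
b2u = bytes_to_unicode()
print(len(b2u))        # 256 -- every byte value maps to exactly one printable unicode character
print(b2u[ord('A')])   # 'A' -- printable ASCII bytes map to themselves
print(b2u[0])          # 'Ā' -- unprintable bytes are remapped to code points starting at 256
```
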
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r'\s+', ' ', text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe()):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
- merges = merges[1:49152-256-2+1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
- vocab = vocab + [v+'</w>' for v in vocab]
- for merge in merges:
- vocab.append(''.join(merge))
- vocab.extend(['<|startoftext|>', '<|endoftext|>'])
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
- self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token[:-1]) + ( token[-1] + '</w>',)
- pairs = get_pairs(word)
-
- if not pairs:
- return token+'</w>'
-
- while True:
- bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except:
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word)-1 and word[i+1] == second:
- new_word.append(first+second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = ' '.join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
- return bpe_tokens
-
- def decode(self, tokens):
- text = ''.join([self.decoder[token] for token in tokens])
- text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
- return text
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/fid_score.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/fid_score.py
deleted file mode 100644
index f4a912099b606d3d00ae28b31c9814bf1c96db37..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/evaluations/fid_score.py
+++ /dev/null
@@ -1,246 +0,0 @@
-"""Calculates the Frechet Inception Distance (FID) to evalulate GANs
-The FID metric calculates the distance between two distributions of examples.
-Typically, we have summary statistics (mean & covariance matrix) of one
-of these distributions, while the 2nd distribution is given by a GAN.
-When run as a stand-alone program, it compares the distribution of
-examples that are stored as PNG/JPEG at a specified location with a
-distribution given by summary statistics (in pickle format).
-The FID is calculated by assuming that X_1 and X_2 are the activations of
-the pool_3 layer of the inception net for generated samples and real world
-samples respectively.
-See --help to see further details.
-Code adapted from https://github.com/bioinf-jku/TTUR to use PyTorch instead
-of Tensorflow
-Copyright 2018 Institute of Bioinformatics, JKU Linz
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-"""
-import os
-import pathlib
-from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
-
-import numpy as np
-import torch
-from scipy import linalg
-from torch.nn.functional import adaptive_avg_pool2d
-
-from PIL import Image
-from evaluations.inception import InceptionV3
-from dataloader.image_folder import make_dataset
-
-try:
- from tqdm import tqdm
-except ImportError:
- # If tqdm is not available, provide a mock version of it
- def tqdm(x): return x
-
-parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
-parser.add_argument('--batch-size', type=int, default=50,
- help='Batch size to use')
-parser.add_argument('--dims', type=int, default=2048,
- choices=list(InceptionV3.BLOCK_INDEX_BY_DIM),
- help=('Dimensionality of Inception features to use. '
- 'By default, uses pool3 features'))
-parser.add_argument('-c', '--gpu', default='', type=str,
- help='GPU to use (leave blank for CPU only)')
-parser.add_argument('path', type=str, nargs=2,
- help=('Paths to the generated examples or '
- 'to .npz statistic files'))
-
-
-def imread(filename):
- """
- Loads an image file into a (height, width, 3) uint8 ndarray, resized to 229x229 with bilinear interpolation.
- """
- return np.asarray(Image.open(filename).convert('RGB').resize((229, 229), Image.BILINEAR), dtype=np.uint8)[..., :3]
-
-
-def get_activations(files, model, batch_size=50, dims=2048, cuda=False):
- """Calculates the activations of the pool_3 layer for all examples.
- Params:
- -- files : List of image files paths
- -- model : Instance of inception model
- -- batch_size : Batch size of examples for the model to process at once.
- Make sure that the number of samples is a multiple of
- the batch size, otherwise some samples are ignored. This
- behavior is retained to match the original FID score
- implementation.
- -- dims : Dimensionality of features returned by Inception
- -- cuda : If set to True, use GPU
- Returns:
- -- A numpy array of dimension (num examples, dims) that contains the
- activations of the given tensor when feeding inception with the
- query tensor.
- """
- model.eval()
-
- if batch_size > len(files):
- print(('Warning: batch size is bigger than the data size. '
- 'Setting batch size to data size'))
- batch_size = len(files)
-
- pred_arr = np.empty((len(files), dims))
-
- for i in tqdm(range(0, len(files), batch_size)):
- start = i
- end = i + batch_size
-
- images = np.array([imread(str(f)).astype(np.float32)
- for f in files[start:end]])
-
- # Reshape to (n_images, 3, height, width)
- images = images.transpose((0, 3, 1, 2))
- images /= 255
-
- batch = torch.from_numpy(images).type(torch.FloatTensor)
- if cuda:
- batch = batch.cuda()
-
- pred = model(batch)[0]
-
- # If model output is not scalar, apply global spatial average pooling.
- # This happens if you choose a dimensionality not equal to 2048.
- if pred.size(2) != 1 or pred.size(3) != 1:
- pred = adaptive_avg_pool2d(pred, output_size=(1, 1))
-
- pred_arr[start:end] = pred.cpu().data.numpy().reshape(pred.size(0), -1)
-
- return pred_arr
-
-
-def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
- """Numpy implementation of the Frechet Distance.
- The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)
- and X_2 ~ N(mu_2, C_2) is
- d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).
- Stable version by Dougal J. Sutherland.
- Params:
- -- mu1 : Numpy array containing the activations of a layer of the
- inception net (like returned by the function 'get_activations')
- for generated samples.
- -- mu2 : The sample mean over activations, precalculated on a
- representative data set.
- -- sigma1: The covariance matrix over activations for generated samples.
- -- sigma2: The covariance matrix over activations, precalculated on a
- representative data set.
- Returns:
- -- : The Frechet Distance.
- """
-
- mu1 = np.atleast_1d(mu1)
- mu2 = np.atleast_1d(mu2)
-
- sigma1 = np.atleast_2d(sigma1)
- sigma2 = np.atleast_2d(sigma2)
-
- assert mu1.shape == mu2.shape, \
- 'Training and test mean vectors have different lengths'
- assert sigma1.shape == sigma2.shape, \
- 'Training and test covariances have different dimensions'
-
- diff = mu1 - mu2
-
- # Product might be almost singular
- covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
- if not np.isfinite(covmean).all():
- msg = ('fid calculation produces singular product; '
- 'adding %s to diagonal of cov estimates') % eps
- print(msg)
- offset = np.eye(sigma1.shape[0]) * eps
- covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
-
- # Numerical error might give slight imaginary component
- if np.iscomplexobj(covmean):
- if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
- m = np.max(np.abs(covmean.imag))
- raise ValueError('Imaginary component {}'.format(m))
- covmean = covmean.real
-
- tr_covmean = np.trace(covmean)
-
- return (diff.dot(diff) + np.trace(sigma1) +
- np.trace(sigma2) - 2 * tr_covmean)
-
-
-def calculate_activation_statistics(files, model, batch_size=50, dims=2048,
- cuda=False):
- """Calculation of the statistics used by the FID.
- Params:
- -- files : List of image files paths
- -- model : Instance of inception model
- -- batch_size : The examples numpy array is split into batches with
- batch size batch_size. A reasonable batch size
- depends on the hardware.
- -- dims : Dimensionality of features returned by Inception
- -- cuda : If set to True, use GPU
- Returns:
- -- mu : The mean over samples of the activations of the pool_3 layer of
- the inception model.
- -- sigma : The covariance matrix of the activations of the pool_3 layer of
- the inception model.
- """
- act = get_activations(files, model, batch_size, dims, cuda)
- mu = np.mean(act, axis=0)
- sigma = np.cov(act, rowvar=False)
- return mu, sigma
-
-
-def _compute_statistics_of_path(path, model, batch_size, dims, cuda):
- if path.endswith('.npz'):
- f = np.load(path)
- m, s = f['mu'][:], f['sigma'][:]
- f.close()
- elif path.endswith('.txt'):
- files, file_size = make_dataset(path)
- m, s = calculate_activation_statistics(files, model, batch_size,
- dims, cuda)
- else:
- path = pathlib.Path(path)
- files = list(path.glob('*.jpg')) + list(path.glob('*.png'))
- m, s = calculate_activation_statistics(files, model, batch_size,
- dims, cuda)
-
- return m, s
-
-
-def calculate_fid_given_paths(paths, batch_size, cuda, dims):
- """Calculates the FID of two paths"""
- for p in paths:
- if not os.path.exists(p):
- raise RuntimeError('Invalid path: %s' % p)
-
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
-
- model = InceptionV3([block_idx])
- if cuda:
- model.cuda()
-
- m1, s1 = _compute_statistics_of_path(paths[0], model, batch_size,
- dims, cuda)
- m2, s2 = _compute_statistics_of_path(paths[1], model, batch_size,
- dims, cuda)
- fid_value = calculate_frechet_distance(m1, s1, m2, s2)
-
- return fid_value
-
-
-def main():
- args = parser.parse_args()
- os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
-
- fid_value = calculate_fid_given_paths(args.path,
- args.batch_size,
- args.gpu != '',
- args.dims)
- print('FID: ', fid_value)
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
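
As context for the deleted `fid_score.py` above: the quantity `calculate_frechet_distance` computes has a simple closed form that can be checked on toy statistics. The sketch below re-derives it with plain NumPy/SciPy on random activations rather than real Inception features, so the printed value is only illustrative.

```python
import numpy as np
from scipy import linalg

# Toy statistics standing in for Inception pool_3 activations of two image sets;
# the values are random and only illustrate the formula.
rng = np.random.default_rng(0)
act1 = rng.normal(0.0, 1.0, size=(500, 8))   # "real" activations
act2 = rng.normal(0.1, 1.2, size=(500, 8))   # "generated" activations

mu1, sigma1 = act1.mean(axis=0), np.cov(act1, rowvar=False)
mu2, sigma2 = act2.mean(axis=0), np.cov(act2, rowvar=False)

# d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2))
diff = mu1 - mu2
covmean = linalg.sqrtm(sigma1.dot(sigma2))
if np.iscomplexobj(covmean):
    covmean = covmean.real   # drop tiny imaginary parts from numerical error

fid = diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean)
print(f"toy FID: {fid:.4f}")
```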
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/legacy/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/legacy/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/winterm_test.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/winterm_test.py
deleted file mode 100644
index d0955f9e608377940f0d548576964f2fcf3caf48..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/colorama/tests/winterm_test.py
+++ /dev/null
@@ -1,131 +0,0 @@
-# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
-import sys
-from unittest import TestCase, main, skipUnless
-
-try:
- from unittest.mock import Mock, patch
-except ImportError:
- from mock import Mock, patch
-
-from ..winterm import WinColor, WinStyle, WinTerm
-
-
-class WinTermTest(TestCase):
-
- @patch('colorama.winterm.win32')
- def testInit(self, mockWin32):
- mockAttr = Mock()
- mockAttr.wAttributes = 7 + 6 * 16 + 8
- mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr
- term = WinTerm()
- self.assertEqual(term._fore, 7)
- self.assertEqual(term._back, 6)
- self.assertEqual(term._style, 8)
-
- @skipUnless(sys.platform.startswith("win"), "requires Windows")
- def testGetAttrs(self):
- term = WinTerm()
-
- term._fore = 0
- term._back = 0
- term._style = 0
- self.assertEqual(term.get_attrs(), 0)
-
- term._fore = WinColor.YELLOW
- self.assertEqual(term.get_attrs(), WinColor.YELLOW)
-
- term._back = WinColor.MAGENTA
- self.assertEqual(
- term.get_attrs(),
- WinColor.YELLOW + WinColor.MAGENTA * 16)
-
- term._style = WinStyle.BRIGHT
- self.assertEqual(
- term.get_attrs(),
- WinColor.YELLOW + WinColor.MAGENTA * 16 + WinStyle.BRIGHT)
-
- @patch('colorama.winterm.win32')
- def testResetAll(self, mockWin32):
- mockAttr = Mock()
- mockAttr.wAttributes = 1 + 2 * 16 + 8
- mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr
- term = WinTerm()
-
- term.set_console = Mock()
- term._fore = -1
- term._back = -1
- term._style = -1
-
- term.reset_all()
-
- self.assertEqual(term._fore, 1)
- self.assertEqual(term._back, 2)
- self.assertEqual(term._style, 8)
- self.assertEqual(term.set_console.called, True)
-
- @skipUnless(sys.platform.startswith("win"), "requires Windows")
- def testFore(self):
- term = WinTerm()
- term.set_console = Mock()
- term._fore = 0
-
- term.fore(5)
-
- self.assertEqual(term._fore, 5)
- self.assertEqual(term.set_console.called, True)
-
- @skipUnless(sys.platform.startswith("win"), "requires Windows")
- def testBack(self):
- term = WinTerm()
- term.set_console = Mock()
- term._back = 0
-
- term.back(5)
-
- self.assertEqual(term._back, 5)
- self.assertEqual(term.set_console.called, True)
-
- @skipUnless(sys.platform.startswith("win"), "requires Windows")
- def testStyle(self):
- term = WinTerm()
- term.set_console = Mock()
- term._style = 0
-
- term.style(22)
-
- self.assertEqual(term._style, 22)
- self.assertEqual(term.set_console.called, True)
-
- @patch('colorama.winterm.win32')
- def testSetConsole(self, mockWin32):
- mockAttr = Mock()
- mockAttr.wAttributes = 0
- mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr
- term = WinTerm()
- term.windll = Mock()
-
- term.set_console()
-
- self.assertEqual(
- mockWin32.SetConsoleTextAttribute.call_args,
- ((mockWin32.STDOUT, term.get_attrs()), {})
- )
-
- @patch('colorama.winterm.win32')
- def testSetConsoleOnStderr(self, mockWin32):
- mockAttr = Mock()
- mockAttr.wAttributes = 0
- mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr
- term = WinTerm()
- term.windll = Mock()
-
- term.set_console(on_stderr=True)
-
- self.assertEqual(
- mockWin32.SetConsoleTextAttribute.call_args,
- ((mockWin32.STDERR, term.get_attrs()), {})
- )
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_impl.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_impl.py
deleted file mode 100644
index 37b0e6531f1544e1ba9b5895c48939fc97441ce7..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_impl.py
+++ /dev/null
@@ -1,330 +0,0 @@
-import json
-import os
-import sys
-import tempfile
-from contextlib import contextmanager
-from os.path import abspath
-from os.path import join as pjoin
-from subprocess import STDOUT, check_call, check_output
-
-from ._in_process import _in_proc_script_path
-
-
-def write_json(obj, path, **kwargs):
- with open(path, 'w', encoding='utf-8') as f:
- json.dump(obj, f, **kwargs)
-
-
-def read_json(path):
- with open(path, encoding='utf-8') as f:
- return json.load(f)
-
-
-class BackendUnavailable(Exception):
- """Will be raised if the backend cannot be imported in the hook process."""
- def __init__(self, traceback):
- self.traceback = traceback
-
-
-class BackendInvalid(Exception):
- """Will be raised if the backend is invalid."""
- def __init__(self, backend_name, backend_path, message):
- super().__init__(message)
- self.backend_name = backend_name
- self.backend_path = backend_path
-
-
-class HookMissing(Exception):
- """Will be raised on missing hooks (if a fallback can't be used)."""
- def __init__(self, hook_name):
- super().__init__(hook_name)
- self.hook_name = hook_name
-
-
-class UnsupportedOperation(Exception):
- """May be raised by build_sdist if the backend indicates that it can't."""
- def __init__(self, traceback):
- self.traceback = traceback
-
-
-def default_subprocess_runner(cmd, cwd=None, extra_environ=None):
- """The default method of calling the wrapper subprocess.
-
- This uses :func:`subprocess.check_call` under the hood.
- """
- env = os.environ.copy()
- if extra_environ:
- env.update(extra_environ)
-
- check_call(cmd, cwd=cwd, env=env)
-
-
-def quiet_subprocess_runner(cmd, cwd=None, extra_environ=None):
- """Call the subprocess while suppressing output.
-
- This uses :func:`subprocess.check_output` under the hood.
- """
- env = os.environ.copy()
- if extra_environ:
- env.update(extra_environ)
-
- check_output(cmd, cwd=cwd, env=env, stderr=STDOUT)
-
-
-def norm_and_check(source_tree, requested):
- """Normalise and check a backend path.
-
- Ensure that the requested backend path is specified as a relative path,
- and resolves to a location under the given source tree.
-
- Return an absolute version of the requested path.
- """
- if os.path.isabs(requested):
- raise ValueError("paths must be relative")
-
- abs_source = os.path.abspath(source_tree)
- abs_requested = os.path.normpath(os.path.join(abs_source, requested))
- # We have to use commonprefix for Python 2.7 compatibility. So we
- # normalise case to avoid problems because commonprefix is a character
- # based comparison :-(
- norm_source = os.path.normcase(abs_source)
- norm_requested = os.path.normcase(abs_requested)
- if os.path.commonprefix([norm_source, norm_requested]) != norm_source:
- raise ValueError("paths must be inside source tree")
-
- return abs_requested
-
-
-class BuildBackendHookCaller:
- """A wrapper to call the build backend hooks for a source directory.
- """
-
- def __init__(
- self,
- source_dir,
- build_backend,
- backend_path=None,
- runner=None,
- python_executable=None,
- ):
- """
- :param source_dir: The source directory to invoke the build backend for
- :param build_backend: The build backend spec
- :param backend_path: Additional path entries for the build backend spec
- :param runner: The :ref:`subprocess runner ` to use
- :param python_executable:
- The Python executable used to invoke the build backend
- """
- if runner is None:
- runner = default_subprocess_runner
-
- self.source_dir = abspath(source_dir)
- self.build_backend = build_backend
- if backend_path:
- backend_path = [
- norm_and_check(self.source_dir, p) for p in backend_path
- ]
- self.backend_path = backend_path
- self._subprocess_runner = runner
- if not python_executable:
- python_executable = sys.executable
- self.python_executable = python_executable
-
- @contextmanager
- def subprocess_runner(self, runner):
- """A context manager for temporarily overriding the default
- :ref:`subprocess runner `.
-
- .. code-block:: python
-
- hook_caller = BuildBackendHookCaller(...)
- with hook_caller.subprocess_runner(quiet_subprocess_runner):
- ...
- """
- prev = self._subprocess_runner
- self._subprocess_runner = runner
- try:
- yield
- finally:
- self._subprocess_runner = prev
-
- def _supported_features(self):
- """Return the list of optional features supported by the backend."""
- return self._call_hook('_supported_features', {})
-
- def get_requires_for_build_wheel(self, config_settings=None):
- """Get additional dependencies required for building a wheel.
-
- :returns: A list of :pep:`dependency specifiers <508>`.
- :rtype: list[str]
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name, an
- empty list will be returned.
- """
- return self._call_hook('get_requires_for_build_wheel', {
- 'config_settings': config_settings
- })
-
- def prepare_metadata_for_build_wheel(
- self, metadata_directory, config_settings=None,
- _allow_fallback=True):
- """Prepare a ``*.dist-info`` folder with metadata for this project.
-
- :returns: Name of the newly created subfolder within
- ``metadata_directory``, containing the metadata.
- :rtype: str
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name and
- ``_allow_fallback`` is truthy, the backend will be asked to build a
- wheel via the ``build_wheel`` hook and the dist-info extracted from
- that will be returned.
- """
- return self._call_hook('prepare_metadata_for_build_wheel', {
- 'metadata_directory': abspath(metadata_directory),
- 'config_settings': config_settings,
- '_allow_fallback': _allow_fallback,
- })
-
- def build_wheel(
- self, wheel_directory, config_settings=None,
- metadata_directory=None):
- """Build a wheel from this project.
-
- :returns:
- The name of the newly created wheel within ``wheel_directory``.
-
- .. admonition:: Interaction with fallback
-
- If the ``build_wheel`` hook was called in the fallback for
- :meth:`prepare_metadata_for_build_wheel`, the build backend would
- not be invoked. Instead, the previously built wheel will be copied
- to ``wheel_directory`` and the name of that file will be returned.
- """
- if metadata_directory is not None:
- metadata_directory = abspath(metadata_directory)
- return self._call_hook('build_wheel', {
- 'wheel_directory': abspath(wheel_directory),
- 'config_settings': config_settings,
- 'metadata_directory': metadata_directory,
- })
-
- def get_requires_for_build_editable(self, config_settings=None):
- """Get additional dependencies required for building an editable wheel.
-
- :returns: A list of :pep:`dependency specifiers <508>`.
- :rtype: list[str]
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name, an
- empty list will be returned.
- """
- return self._call_hook('get_requires_for_build_editable', {
- 'config_settings': config_settings
- })
-
- def prepare_metadata_for_build_editable(
- self, metadata_directory, config_settings=None,
- _allow_fallback=True):
- """Prepare a ``*.dist-info`` folder with metadata for this project.
-
- :returns: Name of the newly created subfolder within
- ``metadata_directory``, containing the metadata.
- :rtype: str
-
- .. admonition:: Fallback
-
- If the build backend does not define a hook with this name and
- ``_allow_fallback`` is truthy, the backend will be asked to build a
- wheel via the ``build_editable`` hook and the dist-info
- extracted from that will be returned.
- """
- return self._call_hook('prepare_metadata_for_build_editable', {
- 'metadata_directory': abspath(metadata_directory),
- 'config_settings': config_settings,
- '_allow_fallback': _allow_fallback,
- })
-
- def build_editable(
- self, wheel_directory, config_settings=None,
- metadata_directory=None):
- """Build an editable wheel from this project.
-
- :returns:
- The name of the newly created wheel within ``wheel_directory``.
-
- .. admonition:: Interaction with fallback
-
- If the ``build_editable`` hook was called in the fallback for
- :meth:`prepare_metadata_for_build_editable`, the build backend
- would not be invoked. Instead, the previously built wheel will be
- copied to ``wheel_directory`` and the name of that file will be
- returned.
- """
- if metadata_directory is not None:
- metadata_directory = abspath(metadata_directory)
- return self._call_hook('build_editable', {
- 'wheel_directory': abspath(wheel_directory),
- 'config_settings': config_settings,
- 'metadata_directory': metadata_directory,
- })
-
- def get_requires_for_build_sdist(self, config_settings=None):
- """Get additional dependencies required for building an sdist.
-
- :returns: A list of :pep:`dependency specifiers <508>`.
- :rtype: list[str]
- """
- return self._call_hook('get_requires_for_build_sdist', {
- 'config_settings': config_settings
- })
-
- def build_sdist(self, sdist_directory, config_settings=None):
- """Build an sdist from this project.
-
- :returns:
- The name of the newly created sdist within ``wheel_directory``.
- """
- return self._call_hook('build_sdist', {
- 'sdist_directory': abspath(sdist_directory),
- 'config_settings': config_settings,
- })
-
- def _call_hook(self, hook_name, kwargs):
- extra_environ = {'PEP517_BUILD_BACKEND': self.build_backend}
-
- if self.backend_path:
- backend_path = os.pathsep.join(self.backend_path)
- extra_environ['PEP517_BACKEND_PATH'] = backend_path
-
- with tempfile.TemporaryDirectory() as td:
- hook_input = {'kwargs': kwargs}
- write_json(hook_input, pjoin(td, 'input.json'), indent=2)
-
- # Run the hook in a subprocess
- with _in_proc_script_path() as script:
- python = self.python_executable
- self._subprocess_runner(
- [python, abspath(str(script)), hook_name, td],
- cwd=self.source_dir,
- extra_environ=extra_environ
- )
-
- data = read_json(pjoin(td, 'output.json'))
- if data.get('unsupported'):
- raise UnsupportedOperation(data.get('traceback', ''))
- if data.get('no_backend'):
- raise BackendUnavailable(data.get('traceback', ''))
- if data.get('backend_invalid'):
- raise BackendInvalid(
- backend_name=self.build_backend,
- backend_path=self.backend_path,
- message=data.get('backend_error', '')
- )
- if data.get('hook_missing'):
- raise HookMissing(data.get('missing_hook_name') or hook_name)
- return data['return_val']
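
For orientation, here is a minimal, hedged sketch of how the deleted `BuildBackendHookCaller` is typically driven. The project path and backend name are placeholders, the import path assumes pip's vendored copy of `pyproject_hooks`, and the backend plus its requirements are assumed to already be importable.

```python
import tempfile

from pip._vendor.pyproject_hooks import (
    BuildBackendHookCaller,
    quiet_subprocess_runner,
)

hooks = BuildBackendHookCaller(
    source_dir="/path/to/project",          # hypothetical project containing pyproject.toml
    build_backend="setuptools.build_meta",  # whatever its [build-system] table declares
)

with tempfile.TemporaryDirectory() as outdir:
    # Silence backend output while querying build requirements and building a wheel.
    with hooks.subprocess_runner(quiet_subprocess_runner):
        extra_requires = hooks.get_requires_for_build_wheel()
        wheel_name = hooks.build_wheel(outdir)
    print(extra_requires, wheel_name)
```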
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/bdist.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/bdist.py
deleted file mode 100644
index de37dae0ffcd5ea3b05c2203981f23163707cdd6..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/bdist.py
+++ /dev/null
@@ -1,157 +0,0 @@
-"""distutils.command.bdist
-
-Implements the Distutils 'bdist' command (create a built [binary]
-distribution)."""
-
-import os
-import warnings
-
-from distutils.core import Command
-from distutils.errors import DistutilsPlatformError, DistutilsOptionError
-from distutils.util import get_platform
-
-
-def show_formats():
- """Print list of available formats (arguments to "--format" option)."""
- from distutils.fancy_getopt import FancyGetopt
-
- formats = []
- for format in bdist.format_commands:
- formats.append(("formats=" + format, None, bdist.format_commands[format][1]))
- pretty_printer = FancyGetopt(formats)
- pretty_printer.print_help("List of available distribution formats:")
-
-
-class ListCompat(dict):
- # adapter to allow for Setuptools compatibility in format_commands
- def append(self, item):
- warnings.warn(
- """format_commands is now a dict. append is deprecated.""",
- DeprecationWarning,
- stacklevel=2,
- )
-
-
-class bdist(Command):
-
- description = "create a built (binary) distribution"
-
- user_options = [
- ('bdist-base=', 'b', "temporary directory for creating built distributions"),
- (
- 'plat-name=',
- 'p',
- "platform name to embed in generated filenames "
- "(default: %s)" % get_platform(),
- ),
- ('formats=', None, "formats for distribution (comma-separated list)"),
- (
- 'dist-dir=',
- 'd',
- "directory to put final built distributions in " "[default: dist]",
- ),
- ('skip-build', None, "skip rebuilding everything (for testing/debugging)"),
- (
- 'owner=',
- 'u',
- "Owner name used when creating a tar file" " [default: current user]",
- ),
- (
- 'group=',
- 'g',
- "Group name used when creating a tar file" " [default: current group]",
- ),
- ]
-
- boolean_options = ['skip-build']
-
- help_options = [
- ('help-formats', None, "lists available distribution formats", show_formats),
- ]
-
- # The following commands do not take a format option from bdist
- no_format_option = ('bdist_rpm',)
-
- # This won't do in reality: will need to distinguish RPM-ish Linux,
- # Debian-ish Linux, Solaris, FreeBSD, ..., Windows, Mac OS.
- default_format = {'posix': 'gztar', 'nt': 'zip'}
-
- # Define commands in preferred order for the --help-formats option
- format_commands = ListCompat(
- {
- 'rpm': ('bdist_rpm', "RPM distribution"),
- 'gztar': ('bdist_dumb', "gzip'ed tar file"),
- 'bztar': ('bdist_dumb', "bzip2'ed tar file"),
- 'xztar': ('bdist_dumb', "xz'ed tar file"),
- 'ztar': ('bdist_dumb', "compressed tar file"),
- 'tar': ('bdist_dumb', "tar file"),
- 'zip': ('bdist_dumb', "ZIP file"),
- }
- )
-
- # for compatibility until consumers only reference format_commands
- format_command = format_commands
-
- def initialize_options(self):
- self.bdist_base = None
- self.plat_name = None
- self.formats = None
- self.dist_dir = None
- self.skip_build = 0
- self.group = None
- self.owner = None
-
- def finalize_options(self):
- # have to finalize 'plat_name' before 'bdist_base'
- if self.plat_name is None:
- if self.skip_build:
- self.plat_name = get_platform()
- else:
- self.plat_name = self.get_finalized_command('build').plat_name
-
- # 'bdist_base' -- parent of per-built-distribution-format
- # temporary directories (eg. we'll probably have
- # "build/bdist./dumb", "build/bdist./rpm", etc.)
- if self.bdist_base is None:
- build_base = self.get_finalized_command('build').build_base
- self.bdist_base = os.path.join(build_base, 'bdist.' + self.plat_name)
-
- self.ensure_string_list('formats')
- if self.formats is None:
- try:
- self.formats = [self.default_format[os.name]]
- except KeyError:
- raise DistutilsPlatformError(
- "don't know how to create built distributions "
- "on platform %s" % os.name
- )
-
- if self.dist_dir is None:
- self.dist_dir = "dist"
-
- def run(self):
- # Figure out which sub-commands we need to run.
- commands = []
- for format in self.formats:
- try:
- commands.append(self.format_commands[format][0])
- except KeyError:
- raise DistutilsOptionError("invalid format '%s'" % format)
-
- # Reinitialize and run each command.
- for i in range(len(self.formats)):
- cmd_name = commands[i]
- sub_cmd = self.reinitialize_command(cmd_name)
- if cmd_name not in self.no_format_option:
- sub_cmd.format = self.formats[i]
-
- # passing the owner and group names for tar archiving
- if cmd_name == 'bdist_dumb':
- sub_cmd.owner = self.owner
- sub_cmd.group = self.group
-
- # If we're going to need to run this command again, tell it to
- # keep its temporary files around so subsequent runs go faster.
- if cmd_name in commands[i + 1 :]:
- sub_cmd.keep_temp = 1
- self.run_command(cmd_name)
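
As a quick illustration of the dispatch in `run()` above, the sketch below looks up which sub-command each requested format maps to. It assumes a Python where `distutils` is importable (stdlib or the setuptools-provided shim); in older stdlib layouts the mapping lives in `bdist.format_command` rather than `bdist.format_commands`.

```python
# Resolve --formats values to their sub-commands, mirroring bdist.run().
from distutils.command.bdist import bdist

mapping = bdist.format_commands
if not isinstance(mapping, dict):    # older stdlib layout: list + separate dict
    mapping = bdist.format_command

for fmt in ("gztar", "zip"):
    cmd_name, description = mapping[fmt]
    print(f"--formats={fmt} -> {cmd_name} ({description})")
```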
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/__init__.py
deleted file mode 100644
index 5acd7687d642f06de84b38f5842c41ae14d5f24a..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from distutils.command.bdist import bdist
-import sys
-
-if 'egg' not in bdist.format_commands:
- try:
- bdist.format_commands['egg'] = ('bdist_egg', "Python .egg file")
- except TypeError:
- # For backward compatibility with older distutils (stdlib)
- bdist.format_command['egg'] = ('bdist_egg', "Python .egg file")
- bdist.format_commands.append('egg')
-
-del bdist, sys
diff --git a/spaces/BLACKHOST/Date/date.py b/spaces/BLACKHOST/Date/date.py
deleted file mode 100644
index b31096f7005522139b54bd52551b5393f4905b21..0000000000000000000000000000000000000000
--- a/spaces/BLACKHOST/Date/date.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from datetime import datetime
-from os import system
-from time import sleep
-
-while True:
- time = datetime.now()
- print(time.strftime(' TiME:'+"[%H: %M: %S:] "))
- sleep(1)
- system("clear")
\ No newline at end of file
diff --git a/spaces/Bart92/RVC_HF/julius/__init__.py b/spaces/Bart92/RVC_HF/julius/__init__.py
deleted file mode 100644
index 69811b0415a291ca1beb845531785ba03c57099a..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/julius/__init__.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-
-# flake8: noqa
-"""
-.. image:: ../logo.png
-
-Julius contains different Digital Signal Processing algorithms implemented
-with PyTorch, so that they are differentiable and available on CUDA.
-Note that all the modules implemented here can be used with TorchScript.
-
-For now, I have implemented:
-
-- `julius.resample`: fast sinc resampling.
-- `julius.fftconv`: FFT based convolutions.
-- `julius.lowpass`: FIR low pass filter banks.
-- `julius.filters`: FIR high pass and band pass filters.
-- `julius.bands`: Decomposition of a waveform signal over mel-scale frequency bands.
-
-Along with that, you might find useful utilities in:
-
-- `julius.core`: DSP related functions.
-- `julius.utils`: Generic utilities.
-
-
-Please check out [the GitHub repository](https://github.com/adefossez/julius) for more information.
-To verify the speed and correctness of Julius, see the benchmark module `bench`.
-
-
-This package is named in honor of
-[Julius O. Smith](https://ccrma.stanford.edu/~jos/),
-whose books and website were a gold mine of information for me when learning about DSP. Go check out his website if you want
-to learn more about DSP.
-"""
-
-from .bands import SplitBands, split_bands
-from .fftconv import fft_conv1d, FFTConv1d
-from .filters import bandpass_filter, BandPassFilter
-from .filters import highpass_filter, highpass_filters, HighPassFilter, HighPassFilters
-from .lowpass import lowpass_filter, lowpass_filters, LowPassFilters, LowPassFilter
-from .resample import resample_frac, ResampleFrac
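
Since the docstring above only lists the submodules, here is a hedged usage sketch of the `resample_frac` function re-exported by this `__init__.py`; it assumes `torch` and `julius` are installed, and the exact output length is approximate.

```python
# Resample a 1-second 440 Hz tone from 44.1 kHz to 16 kHz along the last dimension.
import math

import torch
import julius

sr_in, sr_out = 44100, 16000
t = torch.arange(sr_in, dtype=torch.float32) / sr_in
x = torch.sin(2 * math.pi * 440.0 * t)        # shape: (44100,)

y = julius.resample_frac(x, sr_in, sr_out)    # sinc-based fractional resampling
print(x.shape, y.shape)                        # expect roughly 16000 output samples
```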
diff --git a/spaces/Benson/text-generation/Examples/Cinco Noches En Freddy 39s 6 Descarga.md b/spaces/Benson/text-generation/Examples/Cinco Noches En Freddy 39s 6 Descarga.md
deleted file mode 100644
index f52224f143ee7f7377ae3a7ddddc16968734451e..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cinco Noches En Freddy 39s 6 Descarga.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
-# Five Nights at Freddy's 6 Download: How to Play the Latest Installment of the Horror Game Series
-
-If you are a fan of horror games, you have probably heard of Five Nights at Freddy's, a popular series featuring animatronic characters that try to kill you in a pizzeria. The series has spawned several sequels, spin-offs, novels, and even a movie in development. But what about the latest installment, Five Nights at Freddy's 6? How can you download and play it? In this article, we will tell you everything you need to know about Five Nights at Freddy's 6, also known as Five Nights at Freddy's: Security Breach.
-
-Five Nights at Freddy's 6 is the sixth main game in the Five Nights at Freddy's series, created by Scott Cawthon and developed by Steel Wool Studios. It was released on December 16, 2021 for Windows, PlayStation 4, and PlayStation 5. Releases for Xbox One, Xbox Series X/S, Nintendo Switch, iOS, and Android are also planned for 2022.
-
-## The plot and setting of the game
-
-The game takes place in Freddy Fazbear's Mega Pizzaplex, a three-story family entertainment center featuring animatronic characters such as Freddy Fazbear, Chica, Monty Gator, and Roxanne Wolf. You play as Gregory, a young boy trapped inside the Pizzaplex overnight. With the help of Freddy himself, Gregory must uncover the secrets of the Pizzaplex, learn the truth about his past, and survive until dawn. He is not alone, however: the Pizzaplex is also home to Vanessa, a security guard with a dark agenda, and other hostile animatronics that will stop at nothing to catch him.
-
-## The gameplay and features of the game
-
-The game offers a variety of attractions and activities to enjoy in the Pizzaplex. You can play arcade games such as Monty Golf, Roxy Raceway, Bonnie Bowl, or Fazbear Blast. You can also explore different areas, such as the sewers or the laser tag arena, and collect coins and tokens to buy items or unlock secrets.
-
-## How to download Five Nights at Freddy's 6?
-
-### The official platforms and prices of the game
-
-The game is available for purchase on Steam for Windows users at $39.99. You can also buy it on the PlayStation Store for PlayStation 4 or PlayStation 5 at $39.99; the game supports cross-buy between the PS4 and PS5 versions.
-
-The game is not yet available on other platforms such as Xbox One, Xbox Series X/S, Nintendo Switch, iOS, or Android. Those versions are expected sometime in 2022.
-
-### The system requirements and compatibility of the game
-
-Before downloading the game, make sure your device meets the minimum system requirements. Here are the system requirements for Windows and PlayStation users:
-
-| Platform | Minimum requirements | Recommended requirements |
-| --- | --- | --- |
-| Windows | OS: Windows 10 64-bit; Processor: Intel Core i5-2500K or AMD FX-8350; Memory: 8 GB RAM; Graphics: NVIDIA GeForce GTX 960 or AMD Radeon R9 280X; DirectX: Version 11; Storage: 20 GB available space | OS: Windows 10 64-bit; Processor: Intel Core i7-6700K or AMD Ryzen 5 2600X |
-| PlayStation | OS: PlayStation 4 or PlayStation 5 system software; Processor: N/A; Graphics: N/A; DirectX: N/A; Storage: 20 GB available space | |
-
-Note: the game supports PS4 Pro and PS5 enhanced features such as higher resolution, faster loading times, and ray tracing.
-
-If your device meets the system requirements, you can download the game from the official platforms by following these steps:
-
-1. Create an account or sign in to Steam or the PlayStation Store.
-2. Search for Five Nights at Freddy's 6 or Five Nights at Freddy's: Security Breach in the store.
-3. Select the game and click Buy or Add to Cart.
-4. Complete the payment process and confirm your purchase.
-5. The game will start downloading to your device automatically.
-6. Once the download is complete, you can launch the game and enjoy it.
-
-## How to play Five Nights at Freddy's 6?
-
-Now that you have downloaded the game, you may be wondering how to play it. Here are some tips and tricks for surviving the night and uncovering the secrets of the Pizzaplex.
-
-### Tips and tricks for surviving the night
-
-The main objective is to survive until 6 a.m. without being caught by Vanessa or the other animatronics. These tips and tricks will help:
-
-- Use the security cameras to monitor your surroundings and plan your route. You can switch between cameras with the mouse or controller and zoom in or out with the scroll wheel or triggers. The cameras show where Vanessa and the other animatronics are, as well as where to find items, tools, hiding spots, or exits.
-- Hide in different places or run from danger. You can hide in spots such as lockers, cabinets, vents, or trash cans by pressing F or A on your keyboard or controller, and run by pressing Shift or L3. Keep an eye on your stamina, battery power, noise level, and time limit: stamina drops if you run too much, battery drops if you use too many items or tools, noise rises if you make too much noise, and the time limit shrinks if you take too long to complete your objectives. If any of these runs out, you become far more vulnerable to being caught.
-
-### Secrets and Easter eggs to discover in the game
-
-The game also hides many secrets and Easter eggs. Here are some of them:
-
-- Collect coins and tokens to buy items or unlock secrets. Coins and tokens are scattered throughout the Pizzaplex and can be spent at vending machines or prize counters, or used to unlock hidden arcade games, secret rooms, or secret endings.
-- Play arcade games to earn rewards or access mini-games. Arcade games such as Monty Golf, Roxy Raceway, Bonnie Bowl, or Fazbear Blast reward you with coins, tokens, or items, and certain games unlock mini-games such as Princess Quest, Freddy in Space 2, or Corn Maze.
-- Explore different areas to find clues or Easter eggs. Areas such as the sewers or the laser tag arena hide posters, notes, tapes, and references to previous games and other media.
-
-## Conclusion
-
-## FAQ
-
-Here are some frequently asked questions about Five Nights at Freddy's 6:
-
-**Q: Is Five Nights at Freddy's 6 scary?**
-
-A: Yes. It is a horror game featuring jump scares, gore, violence, and dark themes, and it is not suitable for children or people who are easily frightened.
-
-**Q: Is Five Nights at Freddy's 6 canon?**
-
-A: Yes. It is canon and part of the main timeline of the Five Nights at Freddy's series, taking place after the events of Five Nights at Freddy's: Help Wanted and Five Nights at Freddy's: Special Delivery.
-
-**Q: Is Five Nights at Freddy's 6 free?**
-
-A: No. It costs $39.99 on Steam and the PlayStation Store, although it may be offered for free or at a discount on certain occasions or platforms.
-
-**Q: Is Five Nights at Freddy's 6 multiplayer?**
-
-A: No. It is a single-player game and does not support online or local co-op or versus modes.
-
-**Q: Is Five Nights at Freddy's 6 the final game in the series?**
-
-A: No. Scott Cawthon, the creator of the series, has confirmed that more games are in development, such as Five Nights at Freddy's: Into Madness and Five Nights at Freddy's: The Movie.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/CVPR/Bamboo_ViT-B16_demo/README.md b/spaces/CVPR/Bamboo_ViT-B16_demo/README.md
deleted file mode 100644
index 74a1caa7498f2c89cde79a3f031ec6a77758e45f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Bamboo_ViT-B16_demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Bamboo ViT-B16 Demo
-emoji: 🎋
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.0.17
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/CVPR/GFPGAN-example/setup.py b/spaces/CVPR/GFPGAN-example/setup.py
deleted file mode 100644
index 474e9188aa2dc5c19614921760ce4ad99bd19c13..0000000000000000000000000000000000000000
--- a/spaces/CVPR/GFPGAN-example/setup.py
+++ /dev/null
@@ -1,107 +0,0 @@
-#!/usr/bin/env python
-
-from setuptools import find_packages, setup
-
-import os
-import subprocess
-import time
-
-version_file = 'gfpgan/version.py'
-
-
-def readme():
- with open('README.md', encoding='utf-8') as f:
- content = f.read()
- return content
-
-
-def get_git_hash():
-
- def _minimal_ext_cmd(cmd):
- # construct minimal environment
- env = {}
- for k in ['SYSTEMROOT', 'PATH', 'HOME']:
- v = os.environ.get(k)
- if v is not None:
- env[k] = v
- # LANGUAGE is used on win32
- env['LANGUAGE'] = 'C'
- env['LANG'] = 'C'
- env['LC_ALL'] = 'C'
- out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
- return out
-
- try:
- out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
- sha = out.strip().decode('ascii')
- except OSError:
- sha = 'unknown'
-
- return sha
-
-
-def get_hash():
- if os.path.exists('.git'):
- sha = get_git_hash()[:7]
- else:
- sha = 'unknown'
-
- return sha
-
-
-def write_version_py():
- content = """# GENERATED VERSION FILE
-# TIME: {}
-__version__ = '{}'
-__gitsha__ = '{}'
-version_info = ({})
-"""
- sha = get_hash()
- with open('VERSION', 'r') as f:
- SHORT_VERSION = f.read().strip()
- VERSION_INFO = ', '.join([x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split('.')])
-
- version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO)
- with open(version_file, 'w') as f:
- f.write(version_file_str)
-
-
-def get_version():
- with open(version_file, 'r') as f:
- exec(compile(f.read(), version_file, 'exec'))
- return locals()['__version__']
-
-
-def get_requirements(filename='requirements.txt'):
- here = os.path.dirname(os.path.realpath(__file__))
- with open(os.path.join(here, filename), 'r') as f:
- requires = [line.replace('\n', '') for line in f.readlines()]
- return requires
-
-
-if __name__ == '__main__':
- write_version_py()
- setup(
- name='gfpgan',
- version=get_version(),
- description='GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration',
- long_description=readme(),
- long_description_content_type='text/markdown',
- author='Xintao Wang',
- author_email='xintao.wang@outlook.com',
- keywords='computer vision, pytorch, image restoration, super-resolution, face restoration, gan, gfpgan',
- url='https://github.com/TencentARC/GFPGAN',
- include_package_data=True,
- packages=find_packages(exclude=('options', 'datasets', 'experiments', 'results', 'tb_logger', 'wandb')),
- classifiers=[
- 'Development Status :: 4 - Beta',
- 'License :: OSI Approved :: Apache Software License',
- 'Operating System :: OS Independent',
- 'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.7',
- 'Programming Language :: Python :: 3.8',
- ],
- license='Apache License Version 2.0',
- setup_requires=['cython', 'numpy'],
- install_requires=get_requirements(),
- zip_safe=False)
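
To make the templating in `write_version_py()` concrete, the block below shows the kind of `gfpgan/version.py` it generates. The version string and timestamp are placeholders derived from the template above (the repository's actual VERSION file is not shown), and the git hash falls back to 'unknown' outside a git checkout.

```python
# GENERATED VERSION FILE
# TIME: Mon Jan  1 00:00:00 2024
__version__ = '1.3.8'
__gitsha__ = 'unknown'
version_info = (1, 3, 8)
```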
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator_aware_execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/detail/allocator_aware_execution_policy.h
deleted file mode 100644
index 28fd54f9b73d45ba01e10797e392151e70de690c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator_aware_execution_policy.h
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execute_with_allocator_fwd.h>
-#include <thrust/detail/alignment.h>
-
-#if THRUST_CPP_DIALECT >= 2011
- #include <type_traits>
-#endif
-
-namespace thrust
-{
-
-namespace mr
-{
-
-template<typename T, class MR>
-class allocator;
-
-}
-
-namespace detail
-{
-
-template<typename Derived, template<typename> class ExecutionPolicyCRTPBase>
-struct allocator_aware_execution_policy
-{
- template<typename MemoryResource>
- struct execute_with_memory_resource_type
- {
- typedef thrust::detail::execute_with_allocator<
- thrust::mr::allocator<
- thrust::detail::max_align_t,
- MemoryResource
- >,
- ExecutionPolicyCRTPBase
- > type;
- };
-
- template<typename Allocator>
- struct execute_with_allocator_type
- {
- typedef thrust::detail::execute_with_allocator<
- Allocator,
- ExecutionPolicyCRTPBase
- > type;
- };
-
- template<typename MemoryResource>
- typename execute_with_memory_resource_type<MemoryResource>::type
- operator()(MemoryResource * mem_res) const
- {
- return typename execute_with_memory_resource_type<MemoryResource>::type(mem_res);
- }
-
- template<typename Allocator>
- typename execute_with_allocator_type<Allocator>::type
- operator()(Allocator &alloc) const
- {
- return typename execute_with_allocator_type<Allocator>::type(alloc);
- }
-
- template<typename Allocator>
- typename execute_with_allocator_type<Allocator>::type
- operator()(const Allocator &alloc) const
- {
- return typename execute_with_allocator_type<Allocator>::type(alloc);
- }
-
-#if THRUST_CPP_DIALECT >= 2011
- // just the rvalue overload
- // perfect forwarding doesn't help, because a const reference has to be turned
- // into a value by copying for the purpose of storing it in execute_with_allocator
- template<typename Allocator, typename std::enable_if<!std::is_lvalue_reference<Allocator>::value>::type * = nullptr>
- typename execute_with_allocator_type<Allocator>::type
- operator()(Allocator &&alloc) const
- {
- return typename execute_with_allocator_type<Allocator>::type(std::move(alloc));
- }
-#endif
-};
-
-}
-}
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/trivial_sequence.h b/spaces/CVPR/LIVE/thrust/thrust/detail/trivial_sequence.h
deleted file mode 100644
index b6c3ed9ebb3cd9022368644edb77ca101ec133e3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/trivial_sequence.h
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file trivial_sequence.h
- * \brief Container-like class for wrapping sequences. The wrapped
- * sequence always has trivial iterators, even when the input
- * sequence does not.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-#include <thrust/iterator/iterator_traits.h>
-#include <thrust/detail/type_traits.h>
-#include <thrust/detail/temporary_array.h>
-
-namespace thrust
-{
-
-namespace detail
-{
-
-// never instantiated
-template<typename Iterator, typename DerivedPolicy, typename is_trivial> struct _trivial_sequence { };
-
-// trivial case
-template<typename Iterator, typename DerivedPolicy>
-struct _trivial_sequence<Iterator, DerivedPolicy, thrust::detail::true_type>
-{
- typedef Iterator iterator_type;
- Iterator first, last;
-
- __host__ __device__
- _trivial_sequence(thrust::execution_policy<DerivedPolicy> &, Iterator _first, Iterator _last) : first(_first), last(_last)
- {
- }
-
- __host__ __device__
- iterator_type begin() { return first; }
-
- __host__ __device__
- iterator_type end() { return last; }
-};
-
-// non-trivial case
-template<typename Iterator, typename DerivedPolicy>
-struct _trivial_sequence<Iterator, DerivedPolicy, thrust::detail::false_type>
-{
- typedef typename thrust::iterator_value<Iterator>::type iterator_value;
- typedef typename thrust::detail::temporary_array<iterator_value, DerivedPolicy>::iterator iterator_type;
-
- thrust::detail::temporary_array<iterator_value, DerivedPolicy> buffer;
-
- __host__ __device__
- _trivial_sequence(thrust::execution_policy<DerivedPolicy> &exec, Iterator first, Iterator last)
- : buffer(exec, first, last)
- {
- }
-
- __host__ __device__
- iterator_type begin() { return buffer.begin(); }
-
- __host__ __device__
- iterator_type end() { return buffer.end(); }
-};
-
-template<typename Iterator, typename DerivedPolicy>
-struct trivial_sequence
- : detail::_trivial_sequence<Iterator, DerivedPolicy, typename thrust::detail::is_trivial_iterator<Iterator>::type>
-{
- typedef _trivial_sequence<Iterator, DerivedPolicy, typename thrust::detail::is_trivial_iterator<Iterator>::type> super_t;
-
- __host__ __device__
- trivial_sequence(thrust::execution_policy<DerivedPolicy> &exec, Iterator first, Iterator last) : super_t(exec, first, last) { }
-};
-
-} // end namespace detail
-
-} // end namespace thrust
-
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/lib/plugins/plugin.js b/spaces/CikeyQI/Yunzai/Yunzai/lib/plugins/plugin.js
deleted file mode 100644
index b4874ba2c3579a6961b6c58a61b590319f23c2a6..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/lib/plugins/plugin.js
+++ /dev/null
@@ -1,119 +0,0 @@
-let stateArr = {}
-
-export default class plugin {
- /**
- * @param name plugin name
- * @param dsc plugin description
- * @param handler handler configuration
- * @param handler.key event key the handler supports
- * @param handler.fn the handler's processing function
- * @param namespace namespace, recommended when a handler is set
- * @param event event to listen for, defaults to message
- * @param priority priority, lower numbers run first
- * @param rule
- * @param rule.reg command regex
- * @param rule.fnc method to run for the command
- * @param rule.event event to listen for, defaults to message
- * @param rule.log set to false to hide execution logs
- * @param rule.permission permission: master, owner, admin, all
- * @param task
- * @param task.name scheduled task name
- * @param task.cron cron expression for the scheduled task
- * @param task.fnc method name of the scheduled task
- * @param task.log set to false to hide execution logs
- */
- constructor ({
- name = 'your-plugin',
- dsc = '无',
- handler,
- namespace,
- event = 'message',
- priority = 5000,
- task = { fnc: '', cron: '' },
- rule = []
- }) {
- /** Plugin name */
- this.name = name
- /** Plugin description */
- this.dsc = dsc
- /** Event to listen for, defaults to message https://oicqjs.github.io/oicq/#events */
- this.event = event
- /** Priority */
- this.priority = priority
- /** Scheduled task, may be an array */
- this.task = {
- /** Task name */
- name: '',
- /** Task method name */
- fnc: task.fnc || '',
- /** Task cron expression */
- cron: task.cron || ''
- }
- /** Command rules */
- this.rule = rule
-
- if (handler) {
- this.handler = handler
- this.namespace = namespace || ''
- }
- }
-
- /**
- * @param msg message to send
- * @param quote whether to quote-reply
- * @param data.recallMsg in group chats, recall the message after 0-120 seconds; 0 means do not recall
- * @param data.at whether to @ the user
- */
- reply (msg = '', quote = false, data = {}) {
- if (!this.e.reply || !msg) return false
- return this.e.reply(msg, quote, data)
- }
-
- conKey (isGroup = false) {
- if (isGroup) {
- return `${this.name}.${this.e.group_id}`
- } else {
- return `${this.name}.${this.userId || this.e.user_id}`
- }
- }
-
- /**
- * @param type method to run
- * @param isGroup whether this is a group chat
- * @param time timeout in seconds, defaults to 120
- */
- setContext (type, isGroup = false, time = 120) {
- let key = this.conKey(isGroup)
- if (!stateArr[key]) stateArr[key] = {}
- stateArr[key][type] = this.e
- if (time) {
- /** Operation timeout */
- setTimeout(() => {
- if (stateArr[key][type]) {
- delete stateArr[key][type]
- this.e.reply('操作超时已取消', true)
- }
- }, time * 1000)
- }
- }
-
- getContext () {
- let key = this.conKey()
- return stateArr[key]
- }
-
- getContextGroup () {
- let key = this.conKey(true)
- return stateArr[key]
- }
-
- /**
- * @param type method to run
- * @param isGroup whether this is a group chat
- */
- finish (type, isGroup = false) {
- if (stateArr[this.conKey(isGroup)] && stateArr[this.conKey(isGroup)][type]) {
- delete stateArr[this.conKey(isGroup)][type]
- }
- }
-}
diff --git a/spaces/CleanML/demo/README.md b/spaces/CleanML/demo/README.md
deleted file mode 100644
index 0019f41f2157de4c6792b1a4253f52e920849407..0000000000000000000000000000000000000000
--- a/spaces/CleanML/demo/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: CleanML Demo - Data centric NER MLOps
-emoji: 📚🔍
-colorFrom: gray
-colorTo: gray
-sdk: docker
-pinned: true
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ConvLab/README/README.md b/spaces/ConvLab/README/README.md
deleted file mode 100644
index d9768447f872757b5a4b48537e17bb0690557e77..0000000000000000000000000000000000000000
--- a/spaces/ConvLab/README/README.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: README
-emoji: 👀
-colorFrom: gray
-colorTo: gray
-sdk: static
-pinned: false
----
-
-### Dataset
-
-To use our unified datasets, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load a dataset via:
-```
-from convlab.util import load_dataset, load_ontology, load_database
-
-dataset_name = 'multiwoz21' # use the dataset name in our repo
-dataset = load_dataset(dataset_name)
-ontology = load_ontology(dataset_name)
-database = load_database(dataset_name)
-```
-Each dataset has a `dummy_data.json` showing a few samples. For the unified data format and more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).
-
-Contributions such as adding new datasets and models are highly welcome!
diff --git a/spaces/Cropinky/hana_hanak_houses/README.md b/spaces/Cropinky/hana_hanak_houses/README.md
deleted file mode 100644
index b477bae50d8db279047fa78c75e8c70345a7efca..0000000000000000000000000000000000000000
--- a/spaces/Cropinky/hana_hanak_houses/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Anti House Generator
-emoji: 🎨
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DAMO-NLP-SG/CLEX-Chat/style.css b/spaces/DAMO-NLP-SG/CLEX-Chat/style.css
deleted file mode 100644
index f59630c8f1c2cb87da9a99d33a3fb7b1f228fa21..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/CLEX-Chat/style.css
+++ /dev/null
@@ -1,16 +0,0 @@
-h1 {
- text-align: center;
- }
-
- #duplicate-button {
- margin: auto;
- color: white;
- background: #1565c0;
- border-radius: 100vh;
- }
-
- .contain {
- max-width: 900px;
- margin: auto;
- padding-top: 1.5rem;
- }
\ No newline at end of file
diff --git a/spaces/DHEIVER/ThyroidTumorClassificationModel/README.md b/spaces/DHEIVER/ThyroidTumorClassificationModel/README.md
deleted file mode 100644
index e2abaed0fab5e5f30f4a5297c6ff2d7e6392c02e..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/ThyroidTumorClassificationModel/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SerdarHelli ThyroidTumorClassificationModel
-emoji: 🐨
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_protocol.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_protocol.py
deleted file mode 100644
index 10a960801880ea378b2d41fb7482626e8aabe688..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_protocol.py
+++ /dev/null
@@ -1,679 +0,0 @@
-import asyncio
-import asyncio.streams
-import traceback
-import warnings
-from collections import deque
-from contextlib import suppress
-from html import escape as html_escape
-from http import HTTPStatus
-from logging import Logger
-from typing import (
- TYPE_CHECKING,
- Any,
- Awaitable,
- Callable,
- Deque,
- Optional,
- Sequence,
- Tuple,
- Type,
- Union,
- cast,
-)
-
-import attr
-import yarl
-
-from .abc import AbstractAccessLogger, AbstractStreamWriter
-from .base_protocol import BaseProtocol
-from .helpers import ceil_timeout
-from .http import (
- HttpProcessingError,
- HttpRequestParser,
- HttpVersion10,
- RawRequestMessage,
- StreamWriter,
-)
-from .log import access_logger, server_logger
-from .streams import EMPTY_PAYLOAD, StreamReader
-from .tcp_helpers import tcp_keepalive
-from .web_exceptions import HTTPException
-from .web_log import AccessLogger
-from .web_request import BaseRequest
-from .web_response import Response, StreamResponse
-
-__all__ = ("RequestHandler", "RequestPayloadError", "PayloadAccessError")
-
-if TYPE_CHECKING: # pragma: no cover
- from .web_server import Server
-
-
-_RequestFactory = Callable[
- [
- RawRequestMessage,
- StreamReader,
- "RequestHandler",
- AbstractStreamWriter,
- "asyncio.Task[None]",
- ],
- BaseRequest,
-]
-
-_RequestHandler = Callable[[BaseRequest], Awaitable[StreamResponse]]
-
-ERROR = RawRequestMessage(
- "UNKNOWN",
- "/",
- HttpVersion10,
- {}, # type: ignore[arg-type]
- {}, # type: ignore[arg-type]
- True,
- None,
- False,
- False,
- yarl.URL("/"),
-)
-
-
-class RequestPayloadError(Exception):
- """Payload parsing error."""
-
-
-class PayloadAccessError(Exception):
- """Payload was accessed after response was sent."""
-
-
-@attr.s(auto_attribs=True, frozen=True, slots=True)
-class _ErrInfo:
- status: int
- exc: BaseException
- message: str
-
-
-_MsgType = Tuple[Union[RawRequestMessage, _ErrInfo], StreamReader]
-
-
-class RequestHandler(BaseProtocol):
- """HTTP protocol implementation.
-
- RequestHandler handles an incoming HTTP request. It reads the request line,
- request headers and request payload, then calls the handle_request() method.
- By default it always returns a 404 response.
-
- RequestHandler handles errors in the incoming request, such as a bad
- status line, bad headers or an incomplete payload. If any error occurs,
- the connection gets closed.
-
- keepalive_timeout -- number of seconds before closing
- keep-alive connection
-
- tcp_keepalive -- TCP keep-alive is on, default is on
-
- debug -- enable debug mode
-
- logger -- custom logger object
-
- access_log_class -- custom class for access_logger
-
- access_log -- custom logging object
-
- access_log_format -- access log format string
-
- loop -- Optional event loop
-
- max_line_size -- Optional maximum header line size
-
- max_field_size -- Optional maximum header field size
-
- max_headers -- Optional maximum header size
-
- """
-
- KEEPALIVE_RESCHEDULE_DELAY = 1
-
- __slots__ = (
- "_request_count",
- "_keepalive",
- "_manager",
- "_request_handler",
- "_request_factory",
- "_tcp_keepalive",
- "_keepalive_time",
- "_keepalive_handle",
- "_keepalive_timeout",
- "_lingering_time",
- "_messages",
- "_message_tail",
- "_waiter",
- "_task_handler",
- "_upgrade",
- "_payload_parser",
- "_request_parser",
- "_reading_paused",
- "logger",
- "debug",
- "access_log",
- "access_logger",
- "_close",
- "_force_close",
- "_current_request",
- )
-
- def __init__(
- self,
- manager: "Server",
- *,
- loop: asyncio.AbstractEventLoop,
- keepalive_timeout: float = 75.0, # NGINX default is 75 secs
- tcp_keepalive: bool = True,
- logger: Logger = server_logger,
- access_log_class: Type[AbstractAccessLogger] = AccessLogger,
- access_log: Logger = access_logger,
- access_log_format: str = AccessLogger.LOG_FORMAT,
- debug: bool = False,
- max_line_size: int = 8190,
- max_headers: int = 32768,
- max_field_size: int = 8190,
- lingering_time: float = 10.0,
- read_bufsize: int = 2**16,
- auto_decompress: bool = True,
- ):
- super().__init__(loop)
-
- self._request_count = 0
- self._keepalive = False
- self._current_request: Optional[BaseRequest] = None
- self._manager: Optional[Server] = manager
- self._request_handler: Optional[_RequestHandler] = manager.request_handler
- self._request_factory: Optional[_RequestFactory] = manager.request_factory
-
- self._tcp_keepalive = tcp_keepalive
- # placeholder to be replaced on keepalive timeout setup
- self._keepalive_time = 0.0
- self._keepalive_handle: Optional[asyncio.Handle] = None
- self._keepalive_timeout = keepalive_timeout
- self._lingering_time = float(lingering_time)
-
- self._messages: Deque[_MsgType] = deque()
- self._message_tail = b""
-
- self._waiter: Optional[asyncio.Future[None]] = None
- self._task_handler: Optional[asyncio.Task[None]] = None
-
- self._upgrade = False
- self._payload_parser: Any = None
- self._request_parser: Optional[HttpRequestParser] = HttpRequestParser(
- self,
- loop,
- read_bufsize,
- max_line_size=max_line_size,
- max_field_size=max_field_size,
- max_headers=max_headers,
- payload_exception=RequestPayloadError,
- auto_decompress=auto_decompress,
- )
-
- self.logger = logger
- self.debug = debug
- self.access_log = access_log
- if access_log:
- self.access_logger: Optional[AbstractAccessLogger] = access_log_class(
- access_log, access_log_format
- )
- else:
- self.access_logger = None
-
- self._close = False
- self._force_close = False
-
- def __repr__(self) -> str:
- return "<{} {}>".format(
- self.__class__.__name__,
- "connected" if self.transport is not None else "disconnected",
- )
-
- @property
- def keepalive_timeout(self) -> float:
- return self._keepalive_timeout
-
- async def shutdown(self, timeout: Optional[float] = 15.0) -> None:
- """Do worker process exit preparations.
-
- We need to clean up everything and stop accepting requests.
- It is especially important for keep-alive connections.
- """
- self._force_close = True
-
- if self._keepalive_handle is not None:
- self._keepalive_handle.cancel()
-
- if self._waiter:
- self._waiter.cancel()
-
- # wait for handlers
- with suppress(asyncio.CancelledError, asyncio.TimeoutError):
- async with ceil_timeout(timeout):
- if self._current_request is not None:
- self._current_request._cancel(asyncio.CancelledError())
-
- if self._task_handler is not None and not self._task_handler.done():
- await self._task_handler
-
- # force-close non-idle handler
- if self._task_handler is not None:
- self._task_handler.cancel()
-
- if self.transport is not None:
- self.transport.close()
- self.transport = None
-
- def connection_made(self, transport: asyncio.BaseTransport) -> None:
- super().connection_made(transport)
-
- real_transport = cast(asyncio.Transport, transport)
- if self._tcp_keepalive:
- tcp_keepalive(real_transport)
-
- self._task_handler = self._loop.create_task(self.start())
- assert self._manager is not None
- self._manager.connection_made(self, real_transport)
-
- def connection_lost(self, exc: Optional[BaseException]) -> None:
- if self._manager is None:
- return
- self._manager.connection_lost(self, exc)
-
- super().connection_lost(exc)
-
- self._manager = None
- self._force_close = True
- self._request_factory = None
- self._request_handler = None
- self._request_parser = None
-
- if self._keepalive_handle is not None:
- self._keepalive_handle.cancel()
-
- if self._current_request is not None:
- if exc is None:
- exc = ConnectionResetError("Connection lost")
- self._current_request._cancel(exc)
-
- if self._waiter is not None:
- self._waiter.cancel()
-
- self._task_handler = None
-
- if self._payload_parser is not None:
- self._payload_parser.feed_eof()
- self._payload_parser = None
-
- def set_parser(self, parser: Any) -> None:
- # Actual type is WebReader
- assert self._payload_parser is None
-
- self._payload_parser = parser
-
- if self._message_tail:
- self._payload_parser.feed_data(self._message_tail)
- self._message_tail = b""
-
- def eof_received(self) -> None:
- pass
-
- def data_received(self, data: bytes) -> None:
- if self._force_close or self._close:
- return
- # parse http messages
- messages: Sequence[_MsgType]
- if self._payload_parser is None and not self._upgrade:
- assert self._request_parser is not None
- try:
- messages, upgraded, tail = self._request_parser.feed_data(data)
- except HttpProcessingError as exc:
- messages = [
- (_ErrInfo(status=400, exc=exc, message=exc.message), EMPTY_PAYLOAD)
- ]
- upgraded = False
- tail = b""
-
- for msg, payload in messages or ():
- self._request_count += 1
- self._messages.append((msg, payload))
-
- waiter = self._waiter
- if messages and waiter is not None and not waiter.done():
- # don't set result twice
- waiter.set_result(None)
-
- self._upgrade = upgraded
- if upgraded and tail:
- self._message_tail = tail
-
- # no parser, just store
- elif self._payload_parser is None and self._upgrade and data:
- self._message_tail += data
-
- # feed payload
- elif data:
- eof, tail = self._payload_parser.feed_data(data)
- if eof:
- self.close()
-
- def keep_alive(self, val: bool) -> None:
- """Set keep-alive connection mode.
-
- :param bool val: new state.
- """
- self._keepalive = val
- if self._keepalive_handle:
- self._keepalive_handle.cancel()
- self._keepalive_handle = None
-
- def close(self) -> None:
- """Close connection.
-
- Stop accepting new pipelining messages and close
- connection when handlers done processing messages.
- """
- self._close = True
- if self._waiter:
- self._waiter.cancel()
-
- def force_close(self) -> None:
- """Forcefully close connection."""
- self._force_close = True
- if self._waiter:
- self._waiter.cancel()
- if self.transport is not None:
- self.transport.close()
- self.transport = None
-
- def log_access(
- self, request: BaseRequest, response: StreamResponse, time: float
- ) -> None:
- if self.access_logger is not None:
- self.access_logger.log(request, response, self._loop.time() - time)
-
- def log_debug(self, *args: Any, **kw: Any) -> None:
- if self.debug:
- self.logger.debug(*args, **kw)
-
- def log_exception(self, *args: Any, **kw: Any) -> None:
- self.logger.exception(*args, **kw)
-
- def _process_keepalive(self) -> None:
- if self._force_close or not self._keepalive:
- return
-
- next = self._keepalive_time + self._keepalive_timeout
-
- # handler in idle state
- if self._waiter:
- if self._loop.time() > next:
- self.force_close()
- return
-
- # not all request handlers are done,
- # reschedule itself to next second
- self._keepalive_handle = self._loop.call_later(
- self.KEEPALIVE_RESCHEDULE_DELAY, self._process_keepalive
- )
-
- async def _handle_request(
- self,
- request: BaseRequest,
- start_time: float,
- request_handler: Callable[[BaseRequest], Awaitable[StreamResponse]],
- ) -> Tuple[StreamResponse, bool]:
- assert self._request_handler is not None
- try:
- try:
- self._current_request = request
- resp = await request_handler(request)
- finally:
- self._current_request = None
- except HTTPException as exc:
- resp = exc
- reset = await self.finish_response(request, resp, start_time)
- except asyncio.CancelledError:
- raise
- except asyncio.TimeoutError as exc:
- self.log_debug("Request handler timed out.", exc_info=exc)
- resp = self.handle_error(request, 504)
- reset = await self.finish_response(request, resp, start_time)
- except Exception as exc:
- resp = self.handle_error(request, 500, exc)
- reset = await self.finish_response(request, resp, start_time)
- else:
- # Deprecation warning (See #2415)
- if getattr(resp, "__http_exception__", False):
- warnings.warn(
- "returning HTTPException object is deprecated "
- "(#2415) and will be removed, "
- "please raise the exception instead",
- DeprecationWarning,
- )
-
- reset = await self.finish_response(request, resp, start_time)
-
- return resp, reset
-
- async def start(self) -> None:
- """Process incoming request.
-
- It reads the request line, request headers and request payload, then
- calls the handle_request() method. Subclasses have to override
- handle_request(). start() handles various exceptions in request
- or response handling. The connection is always closed unless
- keep_alive(True) is specified.
- """
- loop = self._loop
- handler = self._task_handler
- assert handler is not None
- manager = self._manager
- assert manager is not None
- keepalive_timeout = self._keepalive_timeout
- resp = None
- assert self._request_factory is not None
- assert self._request_handler is not None
-
- while not self._force_close:
- if not self._messages:
- try:
- # wait for next request
- self._waiter = loop.create_future()
- await self._waiter
- except asyncio.CancelledError:
- break
- finally:
- self._waiter = None
-
- message, payload = self._messages.popleft()
-
- start = loop.time()
-
- manager.requests_count += 1
- writer = StreamWriter(self, loop)
- if isinstance(message, _ErrInfo):
- # make request_factory work
- request_handler = self._make_error_handler(message)
- message = ERROR
- else:
- request_handler = self._request_handler
-
- request = self._request_factory(message, payload, self, writer, handler)
- try:
- # a new task is used for copy context vars (#3406)
- task = self._loop.create_task(
- self._handle_request(request, start, request_handler)
- )
- try:
- resp, reset = await task
- except (asyncio.CancelledError, ConnectionError):
- self.log_debug("Ignored premature client disconnection")
- break
-
- # Drop the processed task from asyncio.Task.all_tasks() early
- del task
- if reset:
- self.log_debug("Ignored premature client disconnection 2")
- break
-
- # notify server about keep-alive
- self._keepalive = bool(resp.keep_alive)
-
- # check payload
- if not payload.is_eof():
- lingering_time = self._lingering_time
- if not self._force_close and lingering_time:
- self.log_debug(
- "Start lingering close timer for %s sec.", lingering_time
- )
-
- now = loop.time()
- end_t = now + lingering_time
-
- with suppress(asyncio.TimeoutError, asyncio.CancelledError):
- while not payload.is_eof() and now < end_t:
- async with ceil_timeout(end_t - now):
- # read and ignore
- await payload.readany()
- now = loop.time()
-
- # if payload still uncompleted
- if not payload.is_eof() and not self._force_close:
- self.log_debug("Uncompleted request.")
- self.close()
-
- payload.set_exception(PayloadAccessError())
-
- except asyncio.CancelledError:
- self.log_debug("Ignored premature client disconnection ")
- break
- except RuntimeError as exc:
- if self.debug:
- self.log_exception("Unhandled runtime exception", exc_info=exc)
- self.force_close()
- except Exception as exc:
- self.log_exception("Unhandled exception", exc_info=exc)
- self.force_close()
- finally:
- if self.transport is None and resp is not None:
- self.log_debug("Ignored premature client disconnection.")
- elif not self._force_close:
- if self._keepalive and not self._close:
- # start keep-alive timer
- if keepalive_timeout is not None:
- now = self._loop.time()
- self._keepalive_time = now
- if self._keepalive_handle is None:
- self._keepalive_handle = loop.call_at(
- now + keepalive_timeout, self._process_keepalive
- )
- else:
- break
-
- # remove handler, close transport if no handlers left
- if not self._force_close:
- self._task_handler = None
- if self.transport is not None:
- self.transport.close()
-
- async def finish_response(
- self, request: BaseRequest, resp: StreamResponse, start_time: float
- ) -> bool:
- """Prepare the response and write_eof, then log access.
-
- This has to
- be called within the context of any exception so the access logger
- can get exception information. Returns True if the client disconnects
- prematurely.
- """
- if self._request_parser is not None:
- self._request_parser.set_upgraded(False)
- self._upgrade = False
- if self._message_tail:
- self._request_parser.feed_data(self._message_tail)
- self._message_tail = b""
- try:
- prepare_meth = resp.prepare
- except AttributeError:
- if resp is None:
- raise RuntimeError("Missing return " "statement on request handler")
- else:
- raise RuntimeError(
- "Web-handler should return "
- "a response instance, "
- "got {!r}".format(resp)
- )
- try:
- await prepare_meth(request)
- await resp.write_eof()
- except ConnectionError:
- self.log_access(request, resp, start_time)
- return True
- else:
- self.log_access(request, resp, start_time)
- return False
-
- def handle_error(
- self,
- request: BaseRequest,
- status: int = 500,
- exc: Optional[BaseException] = None,
- message: Optional[str] = None,
- ) -> StreamResponse:
- """Handle errors.
-
- Returns HTTP response with specific status code. Logs additional
- information. It always closes current connection.
- """
- self.log_exception("Error handling request", exc_info=exc)
-
- # some data already got sent, connection is broken
- if request.writer.output_size > 0:
- raise ConnectionError(
- "Response is sent already, cannot send another response "
- "with the error message"
- )
-
- ct = "text/plain"
- if status == HTTPStatus.INTERNAL_SERVER_ERROR:
- title = "{0.value} {0.phrase}".format(HTTPStatus.INTERNAL_SERVER_ERROR)
- msg = HTTPStatus.INTERNAL_SERVER_ERROR.description
- tb = None
- if self.debug:
- with suppress(Exception):
- tb = traceback.format_exc()
-
- if "text/html" in request.headers.get("Accept", ""):
- if tb:
- tb = html_escape(tb)
- msg = f"
Traceback:
\n
{tb}
"
- message = (
- ""
- "{title}"
- "\n
{title}
"
- "\n{msg}\n\n"
- ).format(title=title, msg=msg)
- ct = "text/html"
- else:
- if tb:
- msg = tb
- message = title + "\n\n" + msg
-
- resp = Response(status=status, text=message, content_type=ct)
- resp.force_close()
-
- return resp
-
- def _make_error_handler(
- self, err_info: _ErrInfo
- ) -> Callable[[BaseRequest], Awaitable[StreamResponse]]:
- async def handler(request: BaseRequest) -> StreamResponse:
- return self.handle_error(
- request, err_info.status, err_info.exc, err_info.message
- )
-
- return handler
diff --git a/spaces/Dachus/Realfee/Dockerfile b/spaces/Dachus/Realfee/Dockerfile
deleted file mode 100644
index 8673c32e47d0e6700f24404e933dac777d2abe83..0000000000000000000000000000000000000000
--- a/spaces/Dachus/Realfee/Dockerfile
+++ /dev/null
@@ -1,15 +0,0 @@
-FROM ghcr.io/livebook-dev/livebook:latest-cuda11.8
-
-ENV LIVEBOOK_APP_SERVICE_NAME "🐳 Hugging Face - $SPACE_TITLE"
-ENV LIVEBOOK_APP_SERVICE_URL "https://huggingface.co/spaces/$SPACE_AUTHOR_NAME/$SPACE_REPO_NAME"
-ENV LIVEBOOK_UPDATE_INSTRUCTIONS_URL "https://livebook.dev"
-ENV LIVEBOOK_WITHIN_IFRAME "true"
-ENV LIVEBOOK_APPS_PATH "/public-apps"
-ENV LIVEBOOK_DATA_PATH "/data"
-ENV LIVEBOOK_PORT 7860
-
-EXPOSE 7860
-USER root
-COPY public-apps/ /public-apps
-RUN mkdir -p /data
-RUN chmod 777 /data
diff --git a/spaces/Dauzy/whisper-webui/src/utils.py b/spaces/Dauzy/whisper-webui/src/utils.py
deleted file mode 100644
index 576244c9cf8b8e8aa888b0a51312ddf56db928ce..0000000000000000000000000000000000000000
--- a/spaces/Dauzy/whisper-webui/src/utils.py
+++ /dev/null
@@ -1,245 +0,0 @@
-import textwrap
-import unicodedata
-import re
-
-import zlib
-from typing import Iterator, TextIO, Union
-from tqdm import tqdm
-
-import urllib.request
-
-
-def exact_div(x, y):
- assert x % y == 0
- return x // y
-
-
-def str2bool(string):
- str2val = {"True": True, "False": False}
- if string in str2val:
- return str2val[string]
- else:
- raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}")
-
-
-def optional_int(string):
- return None if string == "None" else int(string)
-
-
-def optional_float(string):
- return None if string == "None" else float(string)
-
-
-def compression_ratio(text) -> float:
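- # Ratio of raw text length to zlib-compressed length; higher values mean the
- # text is more repetitive (Whisper-style pipelines commonly use this ratio to
- # flag looping or hallucinated transcriptions).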
- return len(text) / len(zlib.compress(text.encode("utf-8")))
-
-
-def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'):
- assert seconds >= 0, "non-negative timestamp expected"
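- # Illustrative examples (derived from the logic below):
- #   format_timestamp(3661.5) -> "01:01:01.500"
- #   format_timestamp(7.25, always_include_hours=True) -> "00:00:07.250"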
- milliseconds = round(seconds * 1000.0)
-
- hours = milliseconds // 3_600_000
- milliseconds -= hours * 3_600_000
-
- minutes = milliseconds // 60_000
- milliseconds -= minutes * 60_000
-
- seconds = milliseconds // 1_000
- milliseconds -= seconds * 1_000
-
- hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else ""
- return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}"
-
-
-def write_txt(transcript: Iterator[dict], file: TextIO):
- for segment in transcript:
- print(segment['text'].strip(), file=file, flush=True)
-
-
-def write_vtt(transcript: Iterator[dict], file: TextIO,
- maxLineWidth=None, highlight_words: bool = False):
- iterator = __subtitle_preprocessor_iterator(transcript, maxLineWidth, highlight_words)
-
- print("WEBVTT\n", file=file)
-
- for segment in iterator:
- text = segment['text'].replace('-->', '->')
-
- print(
- f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n"
- f"{text}\n",
- file=file,
- flush=True,
- )
-
-def write_srt(transcript: Iterator[dict], file: TextIO,
- maxLineWidth=None, highlight_words: bool = False):
- """
- Write a transcript to a file in SRT format.
- Example usage:
- from pathlib import Path
- from whisper.utils import write_srt
- result = transcribe(model, audio_path, temperature=temperature, **args)
- # save SRT
- audio_basename = Path(audio_path).stem
- with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt:
- write_srt(result["segments"], file=srt)
- """
- iterator = __subtitle_preprocessor_iterator(transcript, maxLineWidth, highlight_words)
-
- for i, segment in enumerate(iterator, start=1):
- text = segment['text'].replace('-->', '->')
-
- # write srt lines
- print(
- f"{i}\n"
- f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> "
- f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n"
- f"{text}\n",
- file=file,
- flush=True,
- )
-
-def __subtitle_preprocessor_iterator(transcript: Iterator[dict], maxLineWidth: int = None, highlight_words: bool = False):
- for segment in transcript:
- words = segment.get('words', [])
-
- if len(words) == 0:
- # Yield the segment as-is or processed
- if maxLineWidth is None or maxLineWidth < 0:
- yield segment
- else:
- yield {
- 'start': segment['start'],
- 'end': segment['end'],
- 'text': process_text(segment['text'].strip(), maxLineWidth)
- }
- # We are done
- continue
-
- subtitle_start = segment['start']
- subtitle_end = segment['end']
-
- text_words = [ this_word["word"] for this_word in words ]
- subtitle_text = __join_words(text_words, maxLineWidth)
-
- # Iterate over the words in the segment
- if highlight_words:
- last = subtitle_start
-
- for i, this_word in enumerate(words):
- start = this_word['start']
- end = this_word['end']
-
- if last != start:
- # Display the text up to this point
- yield {
- 'start': last,
- 'end': start,
- 'text': subtitle_text
- }
-
- # Display the text with the current word highlighted
- yield {
- 'start': start,
- 'end': end,
- 'text': __join_words(
- [
- {
- "word": re.sub(r"^(\s*)(.*)$", r"\1\2", word)
- if j == i
- else word,
- # The HTML tags and are not displayed,
- # # so they should not be counted in the word length
- "length": len(word)
- } for j, word in enumerate(text_words)
- ], maxLineWidth)
- }
- last = end
-
- if last != subtitle_end:
- # Display the last part of the text
- yield {
- 'start': last,
- 'end': subtitle_end,
- 'text': subtitle_text
- }
-
- # Just return the subtitle text
- else:
- yield {
- 'start': subtitle_start,
- 'end': subtitle_end,
- 'text': subtitle_text
- }
-
-def __join_words(words: Iterator[Union[str, dict]], maxLineWidth: int = None):
- if maxLineWidth is None or maxLineWidth < 0:
- return " ".join(words)
-
- lines = []
- current_line = ""
- current_length = 0
-
- for entry in words:
- # Either accept a string or a dict with a 'word' and 'length' field
- if isinstance(entry, dict):
- word = entry['word']
- word_length = entry['length']
- else:
- word = entry
- word_length = len(word)
-
- if current_length > 0 and current_length + word_length > maxLineWidth:
- lines.append(current_line)
- current_line = ""
- current_length = 0
-
- current_length += word_length
- # The word will be prefixed with a space by Whisper, so we don't need to add one here
- current_line += word
-
- if len(current_line) > 0:
- lines.append(current_line)
-
- return "\n".join(lines)
-
-def process_text(text: str, maxLineWidth=None):
- if (maxLineWidth is None or maxLineWidth < 0):
- return text
-
- lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4)
- return '\n'.join(lines)
-
-def slugify(value, allow_unicode=False):
- """
- Taken from https://github.com/django/django/blob/master/django/utils/text.py
- Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated
- dashes to single dashes. Remove characters that aren't alphanumerics,
- underscores, or hyphens. Convert to lowercase. Also strip leading and
- trailing whitespace, dashes, and underscores.
- """
- value = str(value)
- if allow_unicode:
- value = unicodedata.normalize('NFKC', value)
- else:
- value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
- value = re.sub(r'[^\w\s-]', '', value.lower())
- return re.sub(r'[-\s]+', '-', value).strip('-_')
-
-def download_file(url: str, destination: str):
- with urllib.request.urlopen(url) as source, open(destination, "wb") as output:
- with tqdm(
- total=int(source.info().get("Content-Length")),
- ncols=80,
- unit="iB",
- unit_scale=True,
- unit_divisor=1024,
- ) as loop:
- while True:
- buffer = source.read(8192)
- if not buffer:
- break
-
- output.write(buffer)
- loop.update(len(buffer))
\ No newline at end of file
diff --git a/spaces/DemoLou/moe-tts/text/mandarin.py b/spaces/DemoLou/moe-tts/text/mandarin.py
deleted file mode 100644
index ff71de9788e4f20c897b971a775d1ecfbfe1c7b7..0000000000000000000000000000000000000000
--- a/spaces/DemoLou/moe-tts/text/mandarin.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import os
-import sys
-import re
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba
-import cn2an
-import logging
-
-logging.getLogger('jieba').setLevel(logging.WARNING)
-jieba.initialize()
-
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (romaji, ipa) pairs:
-_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ʃy', 'ʃ'),
- ('ʧʰy', 'ʧʰ'),
- ('ʧ⁼y', 'ʧ⁼'),
- ('NN', 'n'),
- ('Ng', 'ŋ'),
- ('y', 'j'),
- ('h', 'x')
-]]
-
-# List of (bopomofo, ipa) pairs:
-_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'x'),
- ('ㄐ', 'tʃ⁼'),
- ('ㄑ', 'tʃʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ts`⁼'),
- ('ㄔ', 'ts`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ts⁼'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'ɥæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'ɥn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'əŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (bopomofo, ipa2) pairs:
-_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'pwo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'tɕ'),
- ('ㄑ', 'tɕʰ'),
- ('ㄒ', 'ɕ'),
- ('ㄓ', 'tʂ'),
- ('ㄔ', 'tʂʰ'),
- ('ㄕ', 'ʂ'),
- ('ㄖ', 'ɻ'),
- ('ㄗ', 'ts'),
- ('ㄘ', 'tsʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ɤ'),
- ('ㄝ', 'ɛ'),
- ('ㄞ', 'aɪ'),
- ('ㄟ', 'eɪ'),
- ('ㄠ', 'ɑʊ'),
- ('ㄡ', 'oʊ'),
- ('ㄧㄢ', 'jɛn'),
- ('ㄩㄢ', 'yæn'),
- ('ㄢ', 'an'),
- ('ㄧㄣ', 'in'),
- ('ㄩㄣ', 'yn'),
- ('ㄣ', 'ən'),
- ('ㄤ', 'ɑŋ'),
- ('ㄧㄥ', 'iŋ'),
- ('ㄨㄥ', 'ʊŋ'),
- ('ㄩㄥ', 'jʊŋ'),
- ('ㄥ', 'ɤŋ'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'y'),
- ('ˉ', '˥'),
- ('ˊ', '˧˥'),
- ('ˇ', '˨˩˦'),
- ('ˋ', '˥˩'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def number_to_chinese(text):
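- # e.g. number_to_chinese('第3课') -> '第三课' (cn2an.an2cn converts Arabic numerals to Chinese numerals)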
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- return text
-
-
-def chinese_to_bopomofo(text):
- text = text.replace('、', ',').replace(';', ',').replace(':', ',')
- words = jieba.lcut(text, cut_all=False)
- text = ''
- for word in words:
- bopomofos = lazy_pinyin(word, BOPOMOFO)
- if not re.search('[\u4e00-\u9fff]', word):
- text += word
- continue
- for i in range(len(bopomofos)):
- bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i])
- if text != '':
- text += ' '
- text += ''.join(bopomofos)
- return text
-
-
-def latin_to_bopomofo(text):
- for regex, replacement in _latin_to_bopomofo:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_romaji(text):
- for regex, replacement in _bopomofo_to_romaji:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa(text):
- for regex, replacement in _bopomofo_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_ipa2(text):
- for regex, replacement in _bopomofo_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_romaji(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_romaji(text)
- text = re.sub('i([aoe])', r'y\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_lazy_ipa(text):
- text = chinese_to_romaji(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_ipa(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa(text)
- text = re.sub('i([aoe])', r'j\1', text)
- text = re.sub('u([aoəe])', r'w\1', text)
- text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
- r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
- text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
- return text
-
-
-def chinese_to_ipa2(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_ipa2(text)
- text = re.sub(r'i([aoe])', r'j\1', text)
- text = re.sub(r'u([aoəe])', r'w\1', text)
- text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text)
- text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text)
- return text
diff --git a/spaces/Dharshinijayakumar/Dharshujayakumaraiapp/app.py b/spaces/Dharshinijayakumar/Dharshujayakumaraiapp/app.py
deleted file mode 100644
index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000
--- a/spaces/Dharshinijayakumar/Dharshujayakumaraiapp/app.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import os
-import re
-import requests
-import json
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY')
-PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID')
-
-PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID')
-play_ht_api_get_audio_url = "https://play.ht/api/v2/tts"
-
-
-template = """You are a helpful assistant to answer user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
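-# ConversationBufferMemory keeps the running conversation and fills the
-# {chat_history} slot of the prompt template on every llm_chain.predict() call.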
-
-headers = {
- "accept": "text/event-stream",
- "content-type": "application/json",
- "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY,
- "X-USER-ID": PLAY_HT_USER_ID
-}
-
-
-def get_payload(text):
- return {
- "text": text,
- "voice": PLAY_HT_VOICE_ID,
- "quality": "medium",
- "output_format": "mp3",
- "speed": 1,
- "sample_rate": 24000,
- "seed": None,
- "temperature": None
- }
-
-def get_generated_audio(text):
- payload = get_payload(text)
- generated_response = {}
- try:
- response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers)
- response.raise_for_status()
- generated_response["type"]= 'SUCCESS'
- generated_response["response"] = response.text
- except requests.exceptions.RequestException as e:
- generated_response["type"]= 'ERROR'
- try:
- response_text = json.loads(response.text)
- if response_text['error_message']:
- generated_response["response"] = response_text['error_message']
- else:
- generated_response["response"] = response.text
- except Exception as e:
- generated_response["response"] = response.text
- except Exception as e:
- generated_response["type"] = 'ERROR'
- generated_response["response"] = str(e)
- return generated_response
-
-def extract_urls(text):
- # Define the regex pattern for URLs
- url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*'
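- # Illustrative example: extract_urls("audio at https://example.com/a.mp3 ready")
- # returns ["https://example.com/a.mp3"]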
-
- # Find all occurrences of URLs in the text
- urls = re.findall(url_pattern, text)
-
- return urls
-
-def get_audio_reply_for_question(text):
- generated_audio_event = get_generated_audio(text)
- # get_generated_audio returns a stream of events as text; extract the audio file URL from it
- final_response = {
- "audio_url": '',
- "message": ''
- }
- if generated_audio_event["type"] == 'SUCCESS':
- audio_urls = extract_urls(generated_audio_event["response"])
- if len(audio_urls) == 0:
- final_response['message'] = "No audio file link found in generated event"
- else:
- final_response['audio_url'] = audio_urls[-1]
- else:
- final_response['message'] = generated_audio_event['response']
- return final_response
-
-def download_url(url):
- try:
- # Send a GET request to the URL to fetch the content
- final_response = {
- 'content':'',
- 'error':''
- }
- response = requests.get(url)
- # Check if the request was successful (status code 200)
- if response.status_code == 200:
- final_response['content'] = response.content
- else:
- final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}"
- except Exception as e:
- final_response['error'] = f"Failed to download the URL. Error: {e}"
- return final_response
-
-def get_filename_from_url(url):
- # Use os.path.basename() to extract the file name from the URL
- file_name = os.path.basename(url)
- return file_name
-
-def get_text_response(user_message):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-def get_text_response_and_audio_response(user_message):
- response = get_text_response(user_message) # Getting the reply from Open AI
- audio_reply_for_question_response = get_audio_reply_for_question(response)
- final_response = {
- 'output_file_path': '',
- 'message':''
- }
- audio_url = audio_reply_for_question_response['audio_url']
- if audio_url:
- output_file_path=get_filename_from_url(audio_url)
- download_url_response = download_url(audio_url)
- audio_content = download_url_response['content']
- if audio_content:
- with open(output_file_path, "wb") as audio_file:
- audio_file.write(audio_content)
- final_response['output_file_path'] = output_file_path
- else:
- final_response['message'] = download_url_response['error']
- else:
- final_response['message'] = audio_reply_for_question_response['message']
- return final_response
-
-def chat_bot_response(message, history):
- text_and_audio_response = get_text_response_and_audio_response(message)
- output_file_path = text_and_audio_response['output_file_path']
- if output_file_path:
- return (text_and_audio_response['output_file_path'],)
- else:
- return text_and_audio_response['message']
-
-demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"])
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/DragGan/DragGan-Inversion/gui_utils/glfw_window.py b/spaces/DragGan/DragGan-Inversion/gui_utils/glfw_window.py
deleted file mode 100644
index 69c96ff72ccff6a42bcf6ab1dbdbb8cfb8005921..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/gui_utils/glfw_window.py
+++ /dev/null
@@ -1,239 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import time
-import glfw
-import OpenGL.GL as gl
-from . import gl_utils
-
-# ----------------------------------------------------------------------------
-
-
-class GlfwWindow: # pylint: disable=too-many-public-methods
- def __init__(self, *, title='GlfwWindow', window_width=1920, window_height=1080, deferred_show=True, close_on_esc=True):
- self._glfw_window = None
- self._drawing_frame = False
- self._frame_start_time = None
- self._frame_delta = 0
- self._fps_limit = None
- self._vsync = None
- self._skip_frames = 0
- self._deferred_show = deferred_show
- self._close_on_esc = close_on_esc
- self._esc_pressed = False
- self._drag_and_drop_paths = None
- self._capture_next_frame = False
- self._captured_frame = None
-
- # Create window.
- glfw.init()
- glfw.window_hint(glfw.VISIBLE, False)
- self._glfw_window = glfw.create_window(
- width=window_width, height=window_height, title=title, monitor=None, share=None)
- self._attach_glfw_callbacks()
- self.make_context_current()
-
- # Adjust window.
- self.set_vsync(False)
- self.set_window_size(window_width, window_height)
- if not self._deferred_show:
- glfw.show_window(self._glfw_window)
-
- def close(self):
- if self._drawing_frame:
- self.end_frame()
- if self._glfw_window is not None:
- glfw.destroy_window(self._glfw_window)
- self._glfw_window = None
- # glfw.terminate() # Commented out to play it nice with other glfw clients.
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- @property
- def window_width(self):
- return self.content_width
-
- @property
- def window_height(self):
- return self.content_height + self.title_bar_height
-
- @property
- def content_width(self):
- width, _height = glfw.get_window_size(self._glfw_window)
- return width
-
- @property
- def content_height(self):
- _width, height = glfw.get_window_size(self._glfw_window)
- return height
-
- @property
- def title_bar_height(self):
- _left, top, _right, _bottom = glfw.get_window_frame_size(
- self._glfw_window)
- return top
-
- @property
- def monitor_width(self):
- _, _, width, _height = glfw.get_monitor_workarea(
- glfw.get_primary_monitor())
- return width
-
- @property
- def monitor_height(self):
- _, _, _width, height = glfw.get_monitor_workarea(
- glfw.get_primary_monitor())
- return height
-
- @property
- def frame_delta(self):
- return self._frame_delta
-
- def set_title(self, title):
- glfw.set_window_title(self._glfw_window, title)
-
- def set_window_size(self, width, height):
- width = min(width, self.monitor_width)
- height = min(height, self.monitor_height)
- glfw.set_window_size(self._glfw_window, width, max(
- height - self.title_bar_height, 0))
- if width == self.monitor_width and height == self.monitor_height:
- self.maximize()
-
- def set_content_size(self, width, height):
- self.set_window_size(width, height + self.title_bar_height)
-
- def maximize(self):
- glfw.maximize_window(self._glfw_window)
-
- def set_position(self, x, y):
- glfw.set_window_pos(self._glfw_window, x, y + self.title_bar_height)
-
- def center(self):
- self.set_position((self.monitor_width - self.window_width) //
- 2, (self.monitor_height - self.window_height) // 2)
-
- def set_vsync(self, vsync):
- vsync = bool(vsync)
- if vsync != self._vsync:
- glfw.swap_interval(1 if vsync else 0)
- self._vsync = vsync
-
- def set_fps_limit(self, fps_limit):
- self._fps_limit = int(fps_limit)
-
- def should_close(self):
- return glfw.window_should_close(self._glfw_window) or (self._close_on_esc and self._esc_pressed)
-
- def skip_frame(self):
- self.skip_frames(1)
-
- def skip_frames(self, num): # Do not update window for the next N frames.
- self._skip_frames = max(self._skip_frames, int(num))
-
- def is_skipping_frames(self):
- return self._skip_frames > 0
-
- def capture_next_frame(self):
- self._capture_next_frame = True
-
- def pop_captured_frame(self):
- frame = self._captured_frame
- self._captured_frame = None
- return frame
-
- def pop_drag_and_drop_paths(self):
- paths = self._drag_and_drop_paths
- self._drag_and_drop_paths = None
- return paths
-
- def draw_frame(self): # To be overridden by subclass.
- self.begin_frame()
- # Rendering code goes here.
- self.end_frame()
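- # A subclass override typically looks like this sketch (the rendering call is a placeholder):
- #
- #   class Viewer(GlfwWindow):
- #       def draw_frame(self):
- #           self.begin_frame()
- #           ...  # issue OpenGL draw calls for this frame
- #           self.end_frame()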
-
- def make_context_current(self):
- if self._glfw_window is not None:
- glfw.make_context_current(self._glfw_window)
-
- def begin_frame(self):
- # End previous frame.
- if self._drawing_frame:
- self.end_frame()
-
- # Apply FPS limit.
- if self._frame_start_time is not None and self._fps_limit is not None:
- delay = self._frame_start_time - time.perf_counter() + 1 / self._fps_limit
- if delay > 0:
- time.sleep(delay)
- cur_time = time.perf_counter()
- if self._frame_start_time is not None:
- self._frame_delta = cur_time - self._frame_start_time
- self._frame_start_time = cur_time
-
- # Process events.
- glfw.poll_events()
-
- # Begin frame.
- self._drawing_frame = True
- self.make_context_current()
-
- # Initialize GL state.
- gl.glViewport(0, 0, self.content_width, self.content_height)
- gl.glMatrixMode(gl.GL_PROJECTION)
- gl.glLoadIdentity()
- gl.glTranslate(-1, 1, 0)
- gl.glScale(2 / max(self.content_width, 1), -
- 2 / max(self.content_height, 1), 1)
- gl.glMatrixMode(gl.GL_MODELVIEW)
- gl.glLoadIdentity()
- gl.glEnable(gl.GL_BLEND)
- # Pre-multiplied alpha.
- gl.glBlendFunc(gl.GL_ONE, gl.GL_ONE_MINUS_SRC_ALPHA)
-
- # Clear.
- gl.glClearColor(0, 0, 0, 1)
- gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
-
- def end_frame(self):
- assert self._drawing_frame
- self._drawing_frame = False
-
- # Skip frames if requested.
- if self._skip_frames > 0:
- self._skip_frames -= 1
- return
-
- # Capture frame if requested.
- if self._capture_next_frame:
- self._captured_frame = gl_utils.read_pixels(
- self.content_width, self.content_height)
- self._capture_next_frame = False
-
- # Update window.
- if self._deferred_show:
- glfw.show_window(self._glfw_window)
- self._deferred_show = False
- glfw.swap_buffers(self._glfw_window)
-
- def _attach_glfw_callbacks(self):
- glfw.set_key_callback(self._glfw_window, self._glfw_key_callback)
- glfw.set_drop_callback(self._glfw_window, self._glfw_drop_callback)
-
- def _glfw_key_callback(self, _window, key, _scancode, action, _mods):
- if action == glfw.PRESS and key == glfw.KEY_ESCAPE:
- self._esc_pressed = True
-
- def _glfw_drop_callback(self, _window, paths):
- self._drag_and_drop_paths = paths
-
-# ----------------------------------------------------------------------------
diff --git a/spaces/ECCV2022/bytetrack/tutorials/trades/README.md b/spaces/ECCV2022/bytetrack/tutorials/trades/README.md
deleted file mode 100644
index 95afad0195f6230b7ca593dfd088ea7953ff2ed6..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tutorials/trades/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# TraDeS
-
-Step1. git clone https://github.com/JialianW/TraDeS.git
-
-
-Step2.
-
-replace https://github.com/JialianW/TraDeS/blob/master/src/lib/utils/tracker.py with the tracker.py provided in this folder
-
-replace https://github.com/JialianW/TraDeS/blob/master/src/lib/opts.py with the opts.py provided in this folder
-
-
-Step3. run
-```
-python3 test.py tracking --exp_id mot17_half --dataset mot --dataset_version 17halfval --pre_hm --ltrb_amodal --inference --load_model ../models/mot_half.pth --gpus 0 --clip_len 3 --trades --track_thresh 0.4 --new_thresh 0.4 --out_thresh 0.2 --pre_thresh 0.5
-```
-
-
-# TraDeS_BYTE
-
-Step1. git clone https://github.com/JialianW/TraDeS.git
-
-
-Step2.
-
-replace https://github.com/JialianW/TraDeS/blob/master/src/lib/utils/tracker.py with byte_tracker.py
-
-replace https://github.com/JialianW/TraDeS/blob/master/src/lib/opts.py with the opts.py provided in this folder
-
-add the mot_online folder to https://github.com/JialianW/TraDeS/blob/master/src/lib/utils
-
-Step3. run
-```
-python3 test.py tracking --exp_id mot17_half --dataset mot --dataset_version 17halfval --pre_hm --ltrb_amodal --inference --load_model ../models/mot_half.pth --gpus 0 --clip_len 3 --trades --track_thresh 0.4 --new_thresh 0.5 --out_thresh 0.1 --pre_thresh 0.5
-```
-
-
-## Notes
-tracker.py: motion + re-ID association
-
-byte_tracker.py: motion-only association with a Kalman filter
diff --git a/spaces/Eddycrack864/Applio-Inference/demucs/repitch.py b/spaces/Eddycrack864/Applio-Inference/demucs/repitch.py
deleted file mode 100644
index 8846ab2d951a024c95067f66a113968500442828..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/demucs/repitch.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import io
-import random
-import subprocess as sp
-import tempfile
-
-import numpy as np
-import torch
-from scipy.io import wavfile
-
-
-def i16_pcm(wav):
- if wav.dtype == np.int16:
- return wav
- return (wav * 2**15).clamp_(-2**15, 2**15 - 1).short()
-
-
-def f32_pcm(wav):
- if wav.dtype == np.float:
- return wav
- return wav.float() / 2**15
-
-
-class RepitchedWrapper:
- """
- Wrap a dataset to apply online change of pitch / tempo.
- """
- def __init__(self, dataset, proba=0.2, max_pitch=2, max_tempo=12, tempo_std=5, vocals=[3]):
- self.dataset = dataset
- self.proba = proba
- self.max_pitch = max_pitch
- self.max_tempo = max_tempo
- self.tempo_std = tempo_std
- self.vocals = vocals
-
- def __len__(self):
- return len(self.dataset)
-
- def __getitem__(self, index):
- streams = self.dataset[index]
- in_length = streams.shape[-1]
- out_length = int((1 - 0.01 * self.max_tempo) * in_length)
-
- if random.random() < self.proba:
- delta_pitch = random.randint(-self.max_pitch, self.max_pitch)
- delta_tempo = random.gauss(0, self.tempo_std)
- delta_tempo = min(max(-self.max_tempo, delta_tempo), self.max_tempo)
- outs = []
- for idx, stream in enumerate(streams):
- stream = repitch(
- stream,
- delta_pitch,
- delta_tempo,
- voice=idx in self.vocals)
- outs.append(stream[:, :out_length])
- streams = torch.stack(outs)
- else:
- streams = streams[..., :out_length]
- return streams
-
-
-def repitch(wav, pitch, tempo, voice=False, quick=False, samplerate=44100):
- """
- tempo is a relative delta in percentage, so tempo=10 means tempo at 110%!
- pitch is in semi tones.
- Requires `soundstretch` to be installed, see
- https://www.surina.net/soundtouch/soundstretch.html
- """
- outfile = tempfile.NamedTemporaryFile(suffix=".wav")
- in_ = io.BytesIO()
- wavfile.write(in_, samplerate, i16_pcm(wav).t().numpy())
- command = [
- "soundstretch",
- "stdin",
- outfile.name,
- f"-pitch={pitch}",
- f"-tempo={tempo:.6f}",
- ]
- if quick:
- command += ["-quick"]
- if voice:
- command += ["-speech"]
- try:
- sp.run(command, capture_output=True, input=in_.getvalue(), check=True)
- except sp.CalledProcessError as error:
- raise RuntimeError(f"Could not change bpm because {error.stderr.decode('utf-8')}")
- sr, wav = wavfile.read(outfile.name)
- wav = wav.copy()
- wav = f32_pcm(torch.from_numpy(wav).t())
- assert sr == samplerate
- return wav
diff --git a/spaces/Epoching/GLIDE_Inpaint/glide_text2im/clip/encoders.py b/spaces/Epoching/GLIDE_Inpaint/glide_text2im/clip/encoders.py
deleted file mode 100644
index ee72773c2c891d2dda6d02933e88599b5330b052..0000000000000000000000000000000000000000
--- a/spaces/Epoching/GLIDE_Inpaint/glide_text2im/clip/encoders.py
+++ /dev/null
@@ -1,497 +0,0 @@
-import math
-from collections import OrderedDict
-from typing import List, Optional, Tuple, cast
-
-import attr
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .attention import (
- AttentionInfo,
- DenseAttentionMask,
- DenseCausalAttentionMask,
- make_full_layout,
- to_attention_info,
-)
-from .utils import Affine, LayerNorm, zero_key_bias_grad
-
-# Constants used in the original CLIP implementation.
-image_channel_means = [122.77093945, 116.74601272, 104.09373519]
-image_channel_stds = [68.50053285, 66.63215831, 70.32316309]
-
-
-@attr.s(eq=False, repr=False)
-class TextEmbedding(nn.Module):
- n_vocab: int = attr.ib()
- n_context: int = attr.ib()
- n_state: int = attr.ib()
- device: torch.device = attr.ib(default=torch.device("cuda"))
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- w_voc = torch.empty((self.n_vocab, self.n_state), dtype=torch.float32, device=self.device)
- w_pos = torch.empty((self.n_context, self.n_state), dtype=torch.float32, device=self.device)
-
- with torch.no_grad():
- w_voc.normal_(std=0.02)
- w_pos.normal_(std=0.01)
-
- self.w_voc = nn.Parameter(w_voc)
- self.w_pos = nn.Parameter(w_pos)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- if len(x.shape) != 2:
- raise ValueError()
-
- return F.embedding(x, self.w_voc) + self.w_pos[None, :, :]
-
-
-@attr.s(eq=False, repr=False)
-class ImageEmbedding(nn.Module):
- image_size: int = attr.ib()
- patch_size: int = attr.ib()
- n_state: int = attr.ib()
- n_timestep: int = attr.ib(default=0)
- device: torch.device = attr.ib(default=torch.device("cuda"))
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- if self.image_size % self.patch_size != 0:
- raise ValueError()
-
- n_patch = self.image_size // self.patch_size
- patch_proj = torch.empty(
- (self.n_state, 3) + 2 * (self.patch_size,), dtype=torch.float32, device=self.device
- )
- w_pos = torch.empty(
- (1 + n_patch ** 2, self.n_state), dtype=torch.float32, device=self.device
- )
-
- with torch.no_grad():
- if self.n_timestep == 0:
- pred_state = torch.empty((self.n_state,), dtype=torch.float32, device=self.device)
- pred_state.normal_(std=1 / np.sqrt(self.n_state))
- self.pred_state = nn.Parameter(pred_state)
- else:
- w_t = torch.empty(
- (self.n_timestep, self.n_state), dtype=torch.float32, device=self.device
- )
- w_t.normal_(std=1 / np.sqrt(self.n_state))
- self.w_t = nn.Parameter(w_t)
-
- patch_proj.normal_(std=np.sqrt(2 / (self.n_state * self.patch_size ** 2)))
- w_pos.normal_(std=1 / np.sqrt(self.n_state))
-
- self.patch_proj = nn.Parameter(patch_proj)
- self.w_pos = nn.Parameter(w_pos)
-
- self.channel_means = torch.tensor(
- image_channel_means, dtype=torch.float32, device=self.device
- )[None, :, None, None]
- self.channel_stds = torch.tensor(
- image_channel_stds, dtype=torch.float32, device=self.device
- )[None, :, None, None]
- self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device)
-
- def forward(self, x: torch.Tensor, t: Optional[torch.Tensor] = None) -> torch.Tensor:
- if len(x.shape) != 4:
- raise ValueError("input should be 4d")
- if x.shape[1] != 3:
- raise ValueError("input should have 3 channels")
- if not (x.shape[2] == self.image_size and x.shape[3] == self.image_size):
- raise ValueError(f"input is not {self.image_size} x {self.image_size}")
-
- if (self.n_timestep == 0 and t is not None) or (self.n_timestep != 0 and t is None):
- raise ValueError()
- if self.n_timestep != 0:
- assert t is not None
- if len(t.shape) != 1:
- raise ValueError()
- if t.shape[0] != x.shape[0]:
- raise ValueError()
-
- x = (x - self.channel_means) / self.channel_stds
- x = F.conv2d(x, self.patch_proj, stride=self.patch_size)
- x = x.reshape(x.shape[0], self.n_state, (self.image_size // self.patch_size) ** 2).permute(
- 0, 2, 1
- )
-
- sot = (
- self.pred_state[None, None].expand(x.shape[0], -1, -1)
- if self.n_timestep == 0
- else F.embedding(cast(torch.Tensor, t), self.w_t)[:, None]
- )
- x = torch.cat((sot, x), dim=1) + self.w_pos[None]
- return self.ln(x)
-
-
-@attr.s(eq=False, repr=False)
-class AttentionResblock(nn.Module):
- n_state: int = attr.ib()
- n_resblocks: int = attr.ib()
- attn_fn: AttentionInfo = attr.ib()
- device: torch.device = attr.ib(default=torch.device("cuda"))
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- self.n_head_state = self.n_state // self.attn_fn.n_heads
- self.qk_scale = 1 / np.sqrt(self.n_head_state)
-
- self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device)
- self.f_q = Affine(
- self.n_state,
- self.n_state,
- std=1 / math.sqrt(self.n_state),
- use_bias=True,
- bias_filter_fn=zero_key_bias_grad,
- device=self.device,
- )
- self.f_k = Affine(
- self.n_state,
- self.n_state,
- std=1 / math.sqrt(self.n_state),
- use_bias=False,
- bias_filter_fn=zero_key_bias_grad,
- device=self.device,
- )
- self.f_v = Affine(
- self.n_state,
- self.n_state,
- std=1 / math.sqrt(self.n_state),
- use_bias=True,
- bias_filter_fn=zero_key_bias_grad,
- device=self.device,
- )
- self.f_c = Affine(
- self.n_state,
- self.n_state,
- use_bias=True,
- std=1 / np.sqrt(self.n_state * self.n_resblocks ** 2),
- device=self.device,
- ) # XXX
-
- def forward(self, m: torch.Tensor) -> torch.Tensor:
- n_context = m.shape[1]
- n_query_pad = self.attn_fn.ctx_blks_q * self.attn_fn.block_size - n_context
- n_key_pad = self.attn_fn.ctx_blks_k * self.attn_fn.block_size - n_context
- assert n_query_pad >= 0
- assert n_key_pad >= 0
-
- r = m
- r = self.ln(r)
- q, k, v = self.f_q(r), self.f_k(r), self.f_v(r)
-
- if n_query_pad != 0:
- q = F.pad(q, (0, 0, 0, n_query_pad))
-
- if n_key_pad != 0:
- k = F.pad(k, (0, 0, 0, n_key_pad))
- v = F.pad(v, (0, 0, 0, n_key_pad))
-
- q = q.view([q.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3))
- k = k.view([k.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3))
- v = v.view([v.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3))
- w = torch.einsum(
- "bhcd,bhkd->bhck", q * math.sqrt(self.qk_scale), k * math.sqrt(self.qk_scale)
- )
-
- if hasattr(self.attn_fn, "pytorch_attn_bias"):
- bias = self.attn_fn.pytorch_attn_bias
- assert len(bias.shape) in {2, 3}
-
- if len(bias.shape) == 2:
- w = torch.softmax(w + self.attn_fn.pytorch_attn_bias[None, None], dim=-1)
- elif len(bias.shape) == 3:
- w = torch.softmax(w + self.attn_fn.pytorch_attn_bias[None], dim=-1)
- else:
- w = torch.softmax(w, dim=-1)
-
- r = torch.einsum("bhck,bhkd->bhcd", w, v)
- r = r.permute((0, 2, 1, 3)).reshape((r.shape[0], -1, self.n_state))
-
- if n_query_pad != 0:
- r = r[:, :-n_query_pad]
-
- assert r.shape[1] == n_context
-
- r = self.f_c(r)
- return m + r
-
-
-@attr.s(eq=False, repr=False)
-class FullyConnectedResblock(nn.Module):
- """
- Not imported from other files because we retain Alec's original inits.
- """
-
- n_state: int = attr.ib()
- n_resblocks: int = attr.ib()
- device: torch.device = attr.ib(default=torch.device("cuda"))
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device)
- self.f_1 = Affine(
- self.n_state,
- 4 * self.n_state,
- use_bias=True,
- std=np.sqrt(2 / (4 * self.n_state)),
- device=self.device,
- )
- self.f_2 = Affine(
- 4 * self.n_state,
- self.n_state,
- use_bias=True,
- std=1 / np.sqrt(self.n_state * self.n_resblocks ** 2),
- device=self.device,
- ) # XXX
-
- def forward(self, m: torch.Tensor) -> torch.Tensor:
- r = m
- r = self.ln(r)
-
- r = self.f_2(F.gelu(self.f_1(r)))
- return m + r
-
-
-@attr.s(eq=False, repr=False)
-class TransformerBlock(nn.Module):
- n_state: int = attr.ib()
- n_resblocks: int = attr.ib()
- attn_fn: AttentionInfo = attr.ib()
- device: torch.device = attr.ib(default=torch.device("cuda"))
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- self.f_attn = AttentionResblock(
- self.n_state,
- self.n_resblocks,
- self.attn_fn,
- self.device,
- )
- self.f_mlp = FullyConnectedResblock(self.n_state, self.n_resblocks, self.device)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- return self.f_mlp(self.f_attn(x))
-
-
-@attr.s(eq=False, repr=False)
-class TextFeatureExtractor(nn.Module):
- n_state: int = attr.ib()
- n_embd: int = attr.ib()
- device: torch.device = attr.ib(default=torch.device("cuda"))
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device)
- self.f = Affine(self.n_state, self.n_embd, use_bias=False, device=self.device)
-
- def forward(
- self, text: torch.Tensor, text_len: torch.Tensor, return_probe_features: bool = False
- ) -> torch.Tensor:
- if len(text.shape) != 3:
- raise ValueError("expected text to be 3d")
- if len(text_len.shape) != 1:
- raise ValueError("expected text length to be 1d")
- if text.shape[0] != text_len.shape[0]:
- raise ValueError("text and text_len have inconsistent batch dimensions")
-
- index = (text_len - 1)[:, None, None].expand(-1, 1, text.shape[2])
- x = torch.gather(text, dim=1, index=index)
- assert list(x.shape) == [text.shape[0], 1, text.shape[2]]
-
- if return_probe_features:
- return x[:, 0]
-
- x = self.ln(x)
- return self.f(x[:, 0])
-
-
-@attr.s(eq=False, repr=False)
-class ImageFeatureExtractor(nn.Module):
- n_state: int = attr.ib()
- n_embd: int = attr.ib()
- device: torch.device = attr.ib(default=torch.device("cuda"))
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device)
- self.f = Affine(self.n_state, self.n_embd, use_bias=False, device=self.device)
-
- def forward(self, x: torch.Tensor, return_probe_features: bool = False) -> torch.Tensor:
- if return_probe_features:
- return x[:, 0]
-
- x = self.ln(x[:, :1])
- return self.f(x[:, 0])
-
-
-@attr.s(eq=False, repr=False)
-class TextEncoder(nn.Module):
- n_bpe_vocab: int = attr.ib()
- max_text_len: int = attr.ib()
- n_embd: int = attr.ib()
- n_head: int = attr.ib()
- n_xf_blocks: int = attr.ib()
- n_head_state: int = attr.ib(default=64)
- device: torch.device = attr.ib(default=torch.device("cuda"))
- block_size: int = attr.ib(init=False, default=32)
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- self.n_state = self.n_head * self.n_head_state
- n_rounded_context = self.block_size * int(math.ceil(self.max_text_len / self.block_size))
- n_pad = n_rounded_context - self.max_text_len
-
- args = (
- n_rounded_context,
- n_rounded_context,
- self.block_size,
- self.n_head,
- False,
- n_pad,
- n_pad,
- )
- mask = DenseCausalAttentionMask(*args)
- attn_fn = to_attention_info(mask)
-
- m = 1 - make_full_layout(mask).astype(np.float32)
- m[m == 1] = -1e10
- attn_fn.pytorch_attn_bias = torch.from_numpy(m).to(self.device)
-
- blocks: List[Tuple[str, nn.Module]] = [
- (
- "input",
- TextEmbedding(
- self.n_bpe_vocab, self.max_text_len, self.n_state, device=self.device
- ),
- )
- ]
-
- for i in range(self.n_xf_blocks):
- blocks.append(
- (
- f"block_{i}",
- TransformerBlock(self.n_state, 2 * self.n_xf_blocks, attn_fn, self.device),
- )
- )
-
- blocks.append(
- ("output", TextFeatureExtractor(self.n_state, self.n_embd, device=self.device))
- )
-
- self.blocks = nn.ModuleDict(OrderedDict(blocks))
-
- def forward(
- self,
- text: torch.Tensor,
- text_len: torch.Tensor,
- return_probe_features: bool = False,
- ) -> torch.Tensor:
-
- n_batch = text.shape[0]
- h = self.blocks["input"](text)
-
- for i in range(self.n_xf_blocks):
- h = self.blocks[f"block_{i}"](h)
-
- h = self.blocks["output"](h, text_len, return_probe_features=return_probe_features)
-
- assert list(h.shape) == [
- n_batch,
- self.n_embd if not return_probe_features else self.n_state,
- ]
- return h
-
-
-@attr.s(eq=False, repr=False)
-class ImageEncoder(nn.Module):
- image_size: int = attr.ib()
- patch_size: int = attr.ib()
- n_embd: int = attr.ib()
- n_head: int = attr.ib()
- n_xf_blocks: int = attr.ib()
- n_head_state: int = attr.ib(default=64)
- n_timestep: int = attr.ib(default=0)
- device: torch.device = attr.ib(default=torch.device("cuda"))
- block_size: int = attr.ib(init=False, default=32)
-
- def __attrs_post_init__(self) -> None:
- super().__init__()
-
- self.n_state = self.n_head * self.n_head_state
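- # Context length: one class token plus one token per image patch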
- self.n_context = 1 + (self.image_size // self.patch_size) ** 2
- n_rounded_context = self.block_size * int(math.ceil(self.n_context / self.block_size))
- n_pad = n_rounded_context - self.n_context
-
- args = (
- n_rounded_context,
- n_rounded_context,
- self.block_size,
- self.n_head,
- False,
- n_pad,
- n_pad,
- )
- mask = DenseAttentionMask(*args)
- attn_fn = to_attention_info(mask)
-
- m = 1 - make_full_layout(mask).astype(np.float32)
- m[m == 1] = -1e10
- attn_fn.pytorch_attn_bias = torch.from_numpy(m).to(self.device)
-
- blocks: List[Tuple[str, nn.Module]] = [
- (
- "input",
- ImageEmbedding(
- self.image_size,
- self.patch_size,
- self.n_state,
- n_timestep=self.n_timestep,
- device=self.device,
- ),
- )
- ]
-
- for i in range(self.n_xf_blocks):
- blocks.append(
- (
- f"block_{i}",
- TransformerBlock(self.n_state, 2 * self.n_xf_blocks, attn_fn, self.device),
- )
- )
-
- blocks.append(("output", ImageFeatureExtractor(self.n_state, self.n_embd, self.device)))
-
- self.blocks = nn.ModuleDict(OrderedDict(blocks))
-
- def forward(
- self,
- image: torch.Tensor,
- timesteps: Optional[torch.Tensor] = None,
- return_probe_features: bool = False,
- ) -> torch.Tensor:
- n_batch = image.shape[0]
- h = self.blocks["input"](image, t=timesteps)
-
- for i in range(self.n_xf_blocks):
- h = self.blocks[f"block_{i}"](h)
-
- h = self.blocks["output"](h, return_probe_features=return_probe_features)
-
- assert list(h.shape) == [
- n_batch,
- self.n_embd if not return_probe_features else self.n_state,
- ]
-
- return h
diff --git a/spaces/EronSamez/RVC_HFmeu/go-applio.bat b/spaces/EronSamez/RVC_HFmeu/go-applio.bat
deleted file mode 100644
index 60c0c41d34a8aee5e14e744accb33d028d807245..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/go-applio.bat
+++ /dev/null
@@ -1,92 +0,0 @@
-@echo off
-setlocal
-title Start Applio
-
-:::
-::: _ _
-::: /\ | (_)
-::: / \ _ __ _ __ | |_ ___
-::: / /\ \ | '_ \| '_ \| | |/ _ \
-::: / ____ \| |_) | |_) | | | (_) |
-::: /_/ \_\ .__/| .__/|_|_|\___/
-::: | | | |
-::: |_| |_|
-:::
-:::
-
-:menu
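-:: Print the ASCII-art banner above by echoing every line of this script that starts with :::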
-for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
-
-echo [1] Start Applio
-echo [2] Start Applio (DML)
-echo [3] Start Realtime GUI (DML)
-echo [4] Start Realtime GUI (V0)
-echo [5] Start Realtime GUI (V1)
-echo.
-
-set /p choice=Select an option:
-set choice=%choice: =%
-
-cls
-echo WARNING: It's recommended to disable your antivirus or firewall, as SSL errors might occur on startup.
-pause
-
-if "%choice%"=="1" (
- cls
- echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
- pause>nul
- echo Starting Applio...
- echo.
- runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="2" (
- cls
- echo Starting Applio ^(DML^)...
- echo.
- runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897 --dml
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="3" (
- cls
- echo Starting Realtime GUI ^(DML^)...
- echo.
- runtime\python.exe gui_v1.py --pycmd runtime\python.exe --dml
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="4" (
- cls
- echo Starting Realtime GUI ^(V0^)...
- echo.
- runtime\python.exe gui_v0.py
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="5" (
- cls
- echo Starting Realtime GUI ^(V1^)...
- echo.
- runtime\python.exe gui_v1.py
- pause
- cls
- goto menu
-)
-
-cls
-echo Invalid option. Please enter a number from 1 to 5.
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/diffusion_onnx.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/diffusion_onnx.py
deleted file mode 100644
index 1c1e80321de162b5233801efa3423739f7f92bdc..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/diffusion_onnx.py
+++ /dev/null
@@ -1,612 +0,0 @@
-from collections import deque
-from functools import partial
-from inspect import isfunction
-import torch.nn.functional as F
-import librosa.sequence
-import numpy as np
-from torch.nn import Conv1d
-from torch.nn import Mish
-import torch
-from torch import nn
-from tqdm import tqdm
-import math
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def extract(a, t, x_shape=None):
-    # x_shape is accepted for compatibility with call sites copied from diffusion.py, but unused here
-    return a[t].reshape((1, 1, 1, 1))
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
-
-
-def linear_beta_schedule(timesteps, max_beta=0.02):
- """
- linear schedule
- """
- betas = np.linspace(1e-4, max_beta, timesteps)
- return betas
-
-
-def cosine_beta_schedule(timesteps, s=0.008):
- """
- cosine schedule
- as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
- """
- steps = timesteps + 1
- x = np.linspace(0, steps, steps)
- alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
- alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
- betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
- return np.clip(betas, a_min=0, a_max=0.999)
-
-
-beta_schedule = {
- "cosine": cosine_beta_schedule,
- "linear": linear_beta_schedule,
-}
-
-
-def extract_1(a, t):
- return a[t].reshape((1, 1, 1, 1))
-
-
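-# The predict_stage helpers implement the warm-up and the 2nd- to 4th-order multistep combinations
-# of recent noise predictions used by PLMS sampling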
-def predict_stage0(noise_pred, noise_pred_prev):
- return (noise_pred + noise_pred_prev) / 2
-
-
-def predict_stage1(noise_pred, noise_list):
- return (noise_pred * 3
- - noise_list[-1]) / 2
-
-
-def predict_stage2(noise_pred, noise_list):
- return (noise_pred * 23
- - noise_list[-1] * 16
- + noise_list[-2] * 5) / 12
-
-
-def predict_stage3(noise_pred, noise_list):
- return (noise_pred * 55
- - noise_list[-1] * 59
- + noise_list[-2] * 37
- - noise_list[-3] * 9) / 24
-
-
-class SinusoidalPosEmb(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.dim = dim
- self.half_dim = dim // 2
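- # 9.21034037 is ln(10000), the base used by the standard sinusoidal position embedding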
- self.emb = 9.21034037 / (self.half_dim - 1)
- self.emb = torch.exp(torch.arange(self.half_dim) * torch.tensor(-self.emb)).unsqueeze(0)
- self.emb = self.emb.cpu()
-
- def forward(self, x):
- emb = self.emb * x
- emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
- return emb
-
-
-class ResidualBlock(nn.Module):
- def __init__(self, encoder_hidden, residual_channels, dilation):
- super().__init__()
- self.residual_channels = residual_channels
- self.dilated_conv = Conv1d(residual_channels, 2 * residual_channels, 3, padding=dilation, dilation=dilation)
- self.diffusion_projection = nn.Linear(residual_channels, residual_channels)
- self.conditioner_projection = Conv1d(encoder_hidden, 2 * residual_channels, 1)
- self.output_projection = Conv1d(residual_channels, 2 * residual_channels, 1)
-
- def forward(self, x, conditioner, diffusion_step):
- diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1)
- conditioner = self.conditioner_projection(conditioner)
- y = x + diffusion_step
- y = self.dilated_conv(y) + conditioner
-
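- # WaveNet-style gated activation: one half of the channels gates the other half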
- gate, filter_1 = torch.split(y, [self.residual_channels, self.residual_channels], dim=1)
-
- y = torch.sigmoid(gate) * torch.tanh(filter_1)
- y = self.output_projection(y)
-
- residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1)
-
- return (x + residual) / 1.41421356, skip
-
-
-class DiffNet(nn.Module):
- def __init__(self, in_dims, n_layers, n_chans, n_hidden):
- super().__init__()
- self.encoder_hidden = n_hidden
- self.residual_layers = n_layers
- self.residual_channels = n_chans
- self.input_projection = Conv1d(in_dims, self.residual_channels, 1)
- self.diffusion_embedding = SinusoidalPosEmb(self.residual_channels)
- dim = self.residual_channels
- self.mlp = nn.Sequential(
- nn.Linear(dim, dim * 4),
- Mish(),
- nn.Linear(dim * 4, dim)
- )
- self.residual_layers = nn.ModuleList([
- ResidualBlock(self.encoder_hidden, self.residual_channels, 1)
- for i in range(self.residual_layers)
- ])
- self.skip_projection = Conv1d(self.residual_channels, self.residual_channels, 1)
- self.output_projection = Conv1d(self.residual_channels, in_dims, 1)
- nn.init.zeros_(self.output_projection.weight)
-
- def forward(self, spec, diffusion_step, cond):
- x = spec.squeeze(0)
- x = self.input_projection(x) # x [B, residual_channel, T]
- x = F.relu(x)
- # skip = torch.randn_like(x)
- diffusion_step = diffusion_step.float()
- diffusion_step = self.diffusion_embedding(diffusion_step)
- diffusion_step = self.mlp(diffusion_step)
-
- x, skip = self.residual_layers[0](x, cond, diffusion_step)
- # noinspection PyTypeChecker
- for layer in self.residual_layers[1:]:
- x, skip_connection = layer.forward(x, cond, diffusion_step)
- skip = skip + skip_connection
- x = skip / math.sqrt(len(self.residual_layers))
- x = self.skip_projection(x)
- x = F.relu(x)
- x = self.output_projection(x) # [B, 80, T]
- return x.unsqueeze(1)
-
-
-class AfterDiffusion(nn.Module):
- def __init__(self, spec_max, spec_min, v_type='a'):
- super().__init__()
- self.spec_max = spec_max
- self.spec_min = spec_min
- self.type = v_type
-
- def forward(self, x):
- x = x.squeeze(1).permute(0, 2, 1)
- mel_out = (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
- if self.type == 'nsf-hifigan-log10':
- mel_out = mel_out * 0.434294
- return mel_out.transpose(2, 1)
-
-
-class Pred(nn.Module):
- def __init__(self, alphas_cumprod):
- super().__init__()
- self.alphas_cumprod = alphas_cumprod
-
- def forward(self, x_1, noise_t, t_1, t_prev):
- a_t = extract(self.alphas_cumprod, t_1).cpu()
- a_prev = extract(self.alphas_cumprod, t_prev).cpu()
- a_t_sq, a_prev_sq = a_t.sqrt().cpu(), a_prev.sqrt().cpu()
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x_1 - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x_1 + x_delta.cpu()
-
- return x_pred
-
-
-class GaussianDiffusion(nn.Module):
- def __init__(self,
- out_dims=128,
- n_layers=20,
- n_chans=384,
- n_hidden=256,
- timesteps=1000,
- k_step=1000,
- max_beta=0.02,
- spec_min=-12,
- spec_max=2):
- super().__init__()
- self.denoise_fn = DiffNet(out_dims, n_layers, n_chans, n_hidden)
- self.out_dims = out_dims
- self.mel_bins = out_dims
- self.n_hidden = n_hidden
- betas = beta_schedule['linear'](timesteps, max_beta=max_beta)
-
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.k_step = k_step
-
- self.noise_list = deque(maxlen=4)
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- self.register_buffer('spec_min', torch.FloatTensor([spec_min])[None, None, :out_dims])
- self.register_buffer('spec_max', torch.FloatTensor([spec_max])[None, None, :out_dims])
- self.ad = AfterDiffusion(self.spec_max, self.spec_min)
- self.xp = Pred(self.alphas_cumprod)
-
- def q_mean_variance(self, x_start, t):
- mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
- variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, cond):
- noise_pred = self.denoise_fn(x, t, cond=cond)
- x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)
-
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_plms(self, x, t, interval, cond, clip_denoised=True, repeat_noise=False):
- """
- Use the PLMS method from
- [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778).
- """
-
- def get_x_pred(x, noise_t, t):
- a_t = extract(self.alphas_cumprod, t)
- a_prev = extract(self.alphas_cumprod, torch.max(t - interval, torch.zeros_like(t)))
- a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
-
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x + x_delta
-
- return x_pred
-
- noise_list = self.noise_list
- noise_pred = self.denoise_fn(x, t, cond=cond)
-
- if len(noise_list) == 0:
- x_pred = get_x_pred(x, noise_pred, t)
- noise_pred_prev = self.denoise_fn(x_pred, max(t - interval, 0), cond=cond)
- noise_pred_prime = (noise_pred + noise_pred_prev) / 2
- elif len(noise_list) == 1:
- noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2
- elif len(noise_list) == 2:
- noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12
- else:
- noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24
-
- x_prev = get_x_pred(x, noise_pred_prime, t)
- noise_list.append(noise_pred)
-
- return x_prev
-
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (
- extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
- )
-
- def p_losses(self, x_start, t, cond, noise=None, loss_type='l2'):
- noise = default(noise, lambda: torch.randn_like(x_start))
-
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- x_recon = self.denoise_fn(x_noisy, t, cond)
-
- if loss_type == 'l1':
- loss = (noise - x_recon).abs().mean()
- elif loss_type == 'l2':
- loss = F.mse_loss(noise, x_recon)
- else:
- raise NotImplementedError()
-
- return loss
-
- def org_forward(self,
- condition,
- init_noise=None,
- gt_spec=None,
- infer=True,
- infer_speedup=100,
- method='pndm',
- k_step=1000,
- use_tqdm=True):
- """
- conditioning diffusion, use fastspeech2 encoder output as the condition
- """
- cond = condition
- b, device = condition.shape[0], condition.device
- if not infer:
- spec = self.norm_spec(gt_spec)
- t = torch.randint(0, self.k_step, (b,), device=device).long()
- norm_spec = spec.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
- return self.p_losses(norm_spec, t, cond=cond)
- else:
- shape = (cond.shape[0], 1, self.out_dims, cond.shape[2])
-
- if gt_spec is None:
- t = self.k_step
- if init_noise is None:
- x = torch.randn(shape, device=device)
- else:
- x = init_noise
- else:
- t = k_step
- norm_spec = self.norm_spec(gt_spec)
- norm_spec = norm_spec.transpose(1, 2)[:, None, :, :]
- x = self.q_sample(x_start=norm_spec, t=torch.tensor([t - 1], device=device).long())
-
- if method is not None and infer_speedup > 1:
- if method == 'dpm-solver':
- from .dpm_solver_pytorch import NoiseScheduleVP, model_wrapper, DPM_Solver
- # 1. Define the noise schedule.
- noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas[:t])
-
- # 2. Convert your discrete-time `model` to the continuous-time
- # noise prediction model. Here is an example for a diffusion model
- # `model` with the noise prediction type ("noise").
- def my_wrapper(fn):
- def wrapped(x, t, **kwargs):
- ret = fn(x, t, **kwargs)
- if use_tqdm:
- self.bar.update(1)
- return ret
-
- return wrapped
-
- model_fn = model_wrapper(
- my_wrapper(self.denoise_fn),
- noise_schedule,
- model_type="noise", # or "x_start" or "v" or "score"
- model_kwargs={"cond": cond}
- )
-
- # 3. Define dpm-solver and sample by singlestep DPM-Solver.
- # (We recommend singlestep DPM-Solver for unconditional sampling)
- # You can adjust the `steps` to balance the computation
- # costs and the sample quality.
- dpm_solver = DPM_Solver(model_fn, noise_schedule)
-
- steps = t // infer_speedup
- if use_tqdm:
- self.bar = tqdm(desc="sample time step", total=steps)
- x = dpm_solver.sample(
- x,
- steps=steps,
- order=3,
- skip_type="time_uniform",
- method="singlestep",
- )
- if use_tqdm:
- self.bar.close()
- elif method == 'pndm':
- self.noise_list = deque(maxlen=4)
- if use_tqdm:
- for i in tqdm(
- reversed(range(0, t, infer_speedup)), desc='sample time step',
- total=t // infer_speedup,
- ):
- x = self.p_sample_plms(
- x, torch.full((b,), i, device=device, dtype=torch.long),
- infer_speedup, cond=cond
- )
- else:
- for i in reversed(range(0, t, infer_speedup)):
- x = self.p_sample_plms(
- x, torch.full((b,), i, device=device, dtype=torch.long),
- infer_speedup, cond=cond
- )
- else:
- raise NotImplementedError(method)
- else:
- if use_tqdm:
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- else:
- for i in reversed(range(0, t)):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x.squeeze(1).transpose(1, 2) # [B, T, M]
- return self.denorm_spec(x).transpose(2, 1)
-
- def norm_spec(self, x):
- return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
-
- def denorm_spec(self, x):
- return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
-
- def get_x_pred(self, x_1, noise_t, t_1, t_prev):
- a_t = extract(self.alphas_cumprod, t_1)
- a_prev = extract(self.alphas_cumprod, t_prev)
- a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x_1 - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x_1 + x_delta
- return x_pred
-
- def OnnxExport(self, project_name=None, init_noise=None, hidden_channels=256, export_denoise=True, export_pred=True, export_after=True):
- cond = torch.randn([1, self.n_hidden, 10]).cpu()
- if init_noise is None:
- x = torch.randn((1, 1, self.mel_bins, cond.shape[2]), dtype=torch.float32).cpu()
- else:
- x = init_noise
- pndms = 100
-
- org_y_x = self.org_forward(cond, init_noise=x)
-
- device = cond.device
- n_frames = cond.shape[2]
- step_range = torch.arange(0, self.k_step, pndms, dtype=torch.long, device=device).flip(0)
- plms_noise_stage = torch.tensor(0, dtype=torch.long, device=device)
- noise_list = torch.zeros((0, 1, 1, self.mel_bins, n_frames), device=device)
-
- ot = step_range[0]
- ot_1 = torch.full((1,), ot, device=device, dtype=torch.long)
- if export_denoise:
- torch.onnx.export(
- self.denoise_fn,
- (x.cpu(), ot_1.cpu(), cond.cpu()),
- f"{project_name}_denoise.onnx",
- input_names=["noise", "time", "condition"],
- output_names=["noise_pred"],
- dynamic_axes={
- "noise": [3],
- "condition": [2]
- },
- opset_version=16
- )
-
- for t in step_range:
- t_1 = torch.full((1,), t, device=device, dtype=torch.long)
- noise_pred = self.denoise_fn(x, t_1, cond)
- t_prev = t_1 - pndms
- t_prev = t_prev * (t_prev > 0)
- if plms_noise_stage == 0:
- if export_pred:
- torch.onnx.export(
- self.xp,
- (x.cpu(), noise_pred.cpu(), t_1.cpu(), t_prev.cpu()),
- f"{project_name}_pred.onnx",
- input_names=["noise", "noise_pred", "time", "time_prev"],
- output_names=["noise_pred_o"],
- dynamic_axes={
- "noise": [3],
- "noise_pred": [3]
- },
- opset_version=16
- )
-
- x_pred = self.get_x_pred(x, noise_pred, t_1, t_prev)
- noise_pred_prev = self.denoise_fn(x_pred, t_prev, cond=cond)
- noise_pred_prime = predict_stage0(noise_pred, noise_pred_prev)
-
- elif plms_noise_stage == 1:
- noise_pred_prime = predict_stage1(noise_pred, noise_list)
-
- elif plms_noise_stage == 2:
- noise_pred_prime = predict_stage2(noise_pred, noise_list)
-
- else:
- noise_pred_prime = predict_stage3(noise_pred, noise_list)
-
- noise_pred = noise_pred.unsqueeze(0)
-
- if plms_noise_stage < 3:
- noise_list = torch.cat((noise_list, noise_pred), dim=0)
- plms_noise_stage = plms_noise_stage + 1
-
- else:
- noise_list = torch.cat((noise_list[-2:], noise_pred), dim=0)
-
- x = self.get_x_pred(x, noise_pred_prime, t_1, t_prev)
- if export_after:
- torch.onnx.export(
- self.ad,
- x.cpu(),
- f"{project_name}_after.onnx",
- input_names=["x"],
- output_names=["mel_out"],
- dynamic_axes={
- "x": [3]
- },
- opset_version=16
- )
- x = self.ad(x)
-
- print((x == org_y_x).all())
- return x
-
- def forward(self, condition=None, init_noise=None, pndms=None, k_step=None):
- cond = condition
- x = init_noise
-
- device = cond.device
- n_frames = cond.shape[2]
- step_range = torch.arange(0, k_step.item(), pndms.item(), dtype=torch.long, device=device).flip(0)
- plms_noise_stage = torch.tensor(0, dtype=torch.long, device=device)
- noise_list = torch.zeros((0, 1, 1, self.mel_bins, n_frames), device=device)
-
- ot = step_range[0]
- ot_1 = torch.full((1,), ot, device=device, dtype=torch.long)
-
- for t in step_range:
- t_1 = torch.full((1,), t, device=device, dtype=torch.long)
- noise_pred = self.denoise_fn(x, t_1, cond)
- t_prev = t_1 - pndms
- t_prev = t_prev * (t_prev > 0)
- if plms_noise_stage == 0:
- x_pred = self.get_x_pred(x, noise_pred, t_1, t_prev)
- noise_pred_prev = self.denoise_fn(x_pred, t_prev, cond=cond)
- noise_pred_prime = predict_stage0(noise_pred, noise_pred_prev)
-
- elif plms_noise_stage == 1:
- noise_pred_prime = predict_stage1(noise_pred, noise_list)
-
- elif plms_noise_stage == 2:
- noise_pred_prime = predict_stage2(noise_pred, noise_list)
-
- else:
- noise_pred_prime = predict_stage3(noise_pred, noise_list)
-
- noise_pred = noise_pred.unsqueeze(0)
-
- if plms_noise_stage < 3:
- noise_list = torch.cat((noise_list, noise_pred), dim=0)
- plms_noise_stage = plms_noise_stage + 1
-
- else:
- noise_list = torch.cat((noise_list[-2:], noise_pred), dim=0)
-
- x = self.get_x_pred(x, noise_pred_prime, t_1, t_prev)
- x = self.ad(x)
- return x
diff --git a/spaces/GT4SD/PatentToolkit/Model_bert/train_script.py b/spaces/GT4SD/PatentToolkit/Model_bert/train_script.py
deleted file mode 100644
index 82b47b6277499b8f17d139d0c651a6f961c06124..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/PatentToolkit/Model_bert/train_script.py
+++ /dev/null
@@ -1,344 +0,0 @@
-"""
-Train script for a single file
-
-Need to set the TPU address first:
-export XRT_TPU_CONFIG="localservice;0;localhost:51011"
-"""
-
-import torch.multiprocessing as mp
-import threading
-import time
-import random
-import sys
-import argparse
-import gzip
-import json
-import logging
-import tqdm
-import torch
-from torch import nn
-from torch.utils.data import DataLoader
-import torch_xla
-import torch_xla.core
-import torch_xla.core.functions
-import torch_xla.core.xla_model as xm
-import torch_xla.distributed.xla_multiprocessing as xmp
-import torch_xla.distributed.parallel_loader as pl
-import os
-from shutil import copyfile
-
-
-from transformers import (
- AdamW,
- AutoModel,
- AutoTokenizer,
- get_linear_schedule_with_warmup,
- set_seed,
-)
-
-class AutoModelForSentenceEmbedding(nn.Module):
- def __init__(self, model_name, tokenizer, normalize=True):
- super(AutoModelForSentenceEmbedding, self).__init__()
-
- self.model = AutoModel.from_pretrained(model_name)
- self.normalize = normalize
- self.tokenizer = tokenizer
-
- def forward(self, **kwargs):
- model_output = self.model(**kwargs)
- embeddings = self.mean_pooling(model_output, kwargs['attention_mask'])
- if self.normalize:
- embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
-
- return embeddings
-
- def mean_pooling(self, model_output, attention_mask):
- token_embeddings = model_output[0] # First element of model_output contains all token embeddings
- input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
- def save_pretrained(self, output_path):
- if xm.is_master_ordinal():
- self.tokenizer.save_pretrained(output_path)
- self.model.config.save_pretrained(output_path)
-
- xm.save(self.model.state_dict(), os.path.join(output_path, "pytorch_model.bin"))
-
-
-
-
-def train_function(index, args, queue):
- tokenizer = AutoTokenizer.from_pretrained(args.model)
- model = AutoModelForSentenceEmbedding(args.model, tokenizer)
-
-
- ### Train Loop
- device = xm.xla_device()
- model = model.to(device)
-
- # Instantiate optimizer
- optimizer = AdamW(params=model.parameters(), lr=2e-5, correct_bias=True)
-
- lr_scheduler = get_linear_schedule_with_warmup(
- optimizer=optimizer,
- num_warmup_steps=500,
- num_training_steps=args.steps,
- )
-
- # Now we train the model
- cross_entropy_loss = nn.CrossEntropyLoss()
- max_grad_norm = 1
-
- model.train()
-
- for global_step in tqdm.trange(args.steps, disable=not xm.is_master_ordinal()):
- #### Get the batch data
- batch = queue.get()
- #print(index, "batch {}x{}".format(len(batch), ",".join([str(len(b)) for b in batch])))
-
-
- if len(batch[0]) == 2: #(anchor, positive)
- text1 = tokenizer([b[0] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
- text2 = tokenizer([b[1] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
-
- ### Compute embeddings
- embeddings_a = model(**text1.to(device))
- embeddings_b = model(**text2.to(device))
-
- ### Gather all embeddings
- embeddings_a = torch_xla.core.functions.all_gather(embeddings_a)
- embeddings_b = torch_xla.core.functions.all_gather(embeddings_b)
-
- ### Compute similarity scores 512 x 512
- scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale
-
- ### Compute cross-entropy loss
- labels = torch.tensor(range(len(scores)), dtype=torch.long, device=embeddings_a.device) # Example a[i] should match with b[i]
-
- ## Symmetric loss as in CLIP
- loss = (cross_entropy_loss(scores, labels) + cross_entropy_loss(scores.transpose(0, 1), labels)) / 2
-
- else: #(anchor, positive, negative)
- text1 = tokenizer([b[0] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
- text2 = tokenizer([b[1] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
- text3 = tokenizer([b[2] for b in batch], return_tensors="pt", max_length=args.max_length, truncation=True, padding="max_length")
-
- embeddings_a = model(**text1.to(device))
- embeddings_b1 = model(**text2.to(device))
- embeddings_b2 = model(**text3.to(device))
-
- embeddings_a = torch_xla.core.functions.all_gather(embeddings_a)
- embeddings_b1 = torch_xla.core.functions.all_gather(embeddings_b1)
- embeddings_b2 = torch_xla.core.functions.all_gather(embeddings_b2)
-
- embeddings_b = torch.cat([embeddings_b1, embeddings_b2])
-
- ### Compute similarity scores 512 x 1024
- scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale
-
- ### Compute cross-entropy loss
- labels = torch.tensor(range(len(scores)), dtype=torch.long, device=embeddings_a.device) # Example a[i] should match with b[i]
-
- ## One-way loss
- loss = cross_entropy_loss(scores, labels)
-
-
- # Backward pass
- optimizer.zero_grad()
- loss.backward()
- torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
-
- xm.optimizer_step(optimizer, barrier=True)
- lr_scheduler.step()
-
-
- #Save model
- if (global_step+1) % args.save_steps == 0:
- output_path = os.path.join(args.output, str(global_step+1))
- xm.master_print("save model: "+output_path)
- model.save_pretrained(output_path)
-
-
- output_path = os.path.join(args.output, "final")
- xm.master_print("save model final: "+ output_path)
- model.save_pretrained(output_path)
-
-
-def produce_data(args, queue, filepaths, dataset_indices):
- global_batch_size = args.batch_size*args.nprocs #Global batch size
- size_per_dataset = int(global_batch_size / args.datasets_per_batch) #Number of samples drawn from each dataset per global batch
- num_same_dataset = int(size_per_dataset / args.batch_size)
- print("producer", "global_batch_size", global_batch_size)
- print("producer", "size_per_dataset", size_per_dataset)
- print("producer", "num_same_dataset", num_same_dataset)
-
- datasets = []
- for filepath in filepaths:
- if "reddit_" in filepath: #Special dataset class for Reddit files
- data_obj = RedditDataset(filepath)
- else:
- data_obj = Dataset(filepath)
- datasets.append(iter(data_obj))
-
- # Store if dataset is in a 2 col or 3 col format
- num_cols = {idx: len(next(dataset)) for idx, dataset in enumerate(datasets)}
-
- while True:
- texts_in_batch = set()
- batch_format = None #2 vs 3 col format for this batch
-
- #Add data from several sub datasets
- for _ in range(args.datasets_per_batch):
- valid_dataset = False #Check that datasets have the same 2/3 col format
- while not valid_dataset:
- data_idx = random.choice(dataset_indices)
- if batch_format is None:
- batch_format = num_cols[data_idx]
- valid_dataset = True
- else: #Check that this dataset has the same format
- valid_dataset = (batch_format == num_cols[data_idx])
-
- #Get data from this dataset
- dataset = datasets[data_idx]
- for _ in range(num_same_dataset):
- for _ in range(args.nprocs):
- batch_device = [] #A batch for one device
- while len(batch_device) < args.batch_size:
- sample = next(dataset)
- in_batch = False
- for text in sample:
- if text in texts_in_batch:
- in_batch = True
- break
-
- if not in_batch:
- for text in sample:
- texts_in_batch.add(text)
- batch_device.append(sample)
-
- queue.put(batch_device)
-
-
-class RedditDataset:
- """
- A class that handles the reddit data files
- """
- def __init__(self, filepath):
- self.filepath = filepath
-
- def __iter__(self):
- while True:
- with gzip.open(self.filepath, "rt") as fIn:
- for line in fIn:
- data = json.loads(line)
-
- if "response" in data and "context" in data:
- yield [data["response"], data["context"]]
-
-class Dataset:
- """
- A class that handles one dataset
- """
- def __init__(self, filepath):
- self.filepath = filepath
-
- def __iter__(self):
- max_dataset_size = 10*1000*1000 #Cache small datasets in memory
- dataset = []
- data_format = None
-
- while dataset is None or len(dataset) == 0:
- with gzip.open(self.filepath, "rt") as fIn:
- for line in fIn:
- data = json.loads(line)
- if isinstance(data, dict):
- data = data['texts']
-
- if data_format is None:
- data_format = len(data)
-
- #Ensure that all entries are of the same 2/3 col format
- assert len(data) == data_format
-
- if dataset is not None:
- dataset.append(data)
- if len(dataset) >= max_dataset_size:
- dataset = None
-
- yield data
-
- # Data loaded. Now stream to the queue
- # Shuffle for each epoch
- while True:
- random.shuffle(dataset)
- for data in dataset:
- yield data
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument('--model', default='nreimers/MiniLM-L6-H384-uncased')
- parser.add_argument('--steps', type=int, default=2000)
- parser.add_argument('--save_steps', type=int, default=10000)
- parser.add_argument('--batch_size', type=int, default=64)
- parser.add_argument('--max_length', type=int, default=128)
- parser.add_argument('--nprocs', type=int, default=8)
- parser.add_argument('--datasets_per_batch', type=int, default=2, help="Number of datasets per batch")
- parser.add_argument('--scale', type=float, default=20, help="Use 20 for cossim, and 1 when you work with unnormalized embeddings with dot product")
- parser.add_argument('--data_folder', default="/data", help="Folder with your dataset files")
- parser.add_argument('data_config', help="A data_config.json file")
- parser.add_argument('output')
- args = parser.parse_args()
-
- # Ensure the global batch size is divisible by datasets_per_batch
- assert (args.batch_size*args.nprocs) % args.datasets_per_batch == 0
-
- logging.info("Output: "+args.output)
- if os.path.exists(args.output):
- print("Output folder already exists.")
- input("Continue?")
-
- # Write train script to output path
- os.makedirs(args.output, exist_ok=True)
-
- data_config_path = os.path.join(args.output, 'data_config.json')
- copyfile(args.data_config, data_config_path)
-
- train_script_path = os.path.join(args.output, 'train_script.py')
- copyfile(__file__, train_script_path)
- with open(train_script_path, 'a') as fOut:
- fOut.write("\n\n# Script was called via:\n#python " + " ".join(sys.argv))
-
-
-
- #Load data config
- with open(args.data_config) as fIn:
- data_config = json.load(fIn)
-
- queue = mp.Queue(maxsize=100*args.nprocs)
-
- filepaths = []
- dataset_indices = []
- for idx, data in enumerate(data_config):
- filepaths.append(os.path.join(os.path.expanduser(args.data_folder), data['name']))
- dataset_indices.extend([idx]*data['weight'])
-
- # Start producer
- p = mp.Process(target=produce_data, args=(args, queue, filepaths, dataset_indices))
- p.start()
-
- # Run training
- print("Start processes:", args.nprocs)
- xmp.spawn(train_function, args=(args, queue), nprocs=args.nprocs, start_method='fork')
- print("Training done")
- print("It might be that not all processes exit automatically. In that case you must manually kill this process.")
- print("With 'pkill python' you can kill all remaining python processes")
- p.kill()
- exit()
-
-
-
-# Script was called via:
-#python train_many_data_files_v2.py --steps 1000000 --batch_size 128 --model nreimers/MiniLM-L6-H384-uncased train_data_configs/all_datasets_v4.json output/all_datasets_v4_MiniLM-L6-H384-uncased-batch128
\ No newline at end of file
diff --git a/spaces/GastonMazzei/escher-inpaint-project/README.md b/spaces/GastonMazzei/escher-inpaint-project/README.md
deleted file mode 100644
index 451a2b1776856f62c12d95537905856ec62e6431..0000000000000000000000000000000000000000
--- a/spaces/GastonMazzei/escher-inpaint-project/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: GLIDE_Inpaint
-emoji: 💻
-colorFrom: green
-colorTo: purple
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version`: _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
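-
-As a quick sanity check, the front matter at the top of this file can be read back with PyYAML and compared against the keys documented above. The snippet below is only an illustrative sketch; the key list mirrors this page and the file path is assumed to be the Space's own README.md:
-
-``` python
-import yaml  # PyYAML
-
-DOCUMENTED_KEYS = {"title", "emoji", "colorFrom", "colorTo", "sdk", "sdk_version", "app_file", "pinned"}
-
-with open("README.md", encoding="utf-8") as f:
-    text = f.read()
-
-# The front matter sits between the first two '---' separators.
-config = yaml.safe_load(text.split("---")[1])
-
-print("config:", config)
-print("keys not documented here:", set(config) - DOCUMENTED_KEYS or "none")
-```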
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_sequenced_pyramid_packing.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_sequenced_pyramid_packing.py
deleted file mode 100644
index c2a2dea351a5b7a978c8a7763e5e0e687a0bdb1e..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_sequenced_pyramid_packing.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class ColorSequencedPyramidPacking(Task):
- """Sort cubes by color into four pallets and stack them in each pallet as a pyramid"""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 12
- self.lang_template = "sort the {color} cubes into the pallet and stack them as a pyramid"
- self.task_completed_desc = "done sorting and stacking cubes."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add pallets.
- # x, y, z dimensions for the asset size
- pallet_size = (0.15, 0.15, 0.02)
- pallet_urdf = 'pallet/pallet.urdf'
- pallet_poses = []
- for _ in range(4):
- pallet_pose = self.get_random_pose(env, pallet_size)
- env.add_object(pallet_urdf, pallet_pose, category='fixed')
- pallet_poses.append(pallet_pose)
-
- # Cube colors.
- colors = [
- utils.COLORS['red'], utils.COLORS['green'], utils.COLORS['blue'], utils.COLORS['yellow']
- ]
-
- # Add cubes.
- # x, y, z dimensions for the asset size
- cube_size = (0.04, 0.04, 0.04)
- cube_urdf = 'block/block.urdf'
-
- objs = []
- for i in range(12):
- cube_pose = self.get_random_pose(env, cube_size)
- cube_id = env.add_object(cube_urdf, cube_pose, color=colors[i%4])
- objs.append(cube_id)
-
- # Associate placement locations for goals.
- place_pos = [(0, -0.05, 0.03), (0, 0, 0.03),
- (0, 0.05, 0.03), (0, -0.025, 0.08),
- (0, 0.025, 0.08), (0, 0, 0.13)]
- targs = [(utils.apply(pallet_pose, i), pallet_pose[1]) for i in place_pos for pallet_pose in pallet_poses]
-
- # Goal: cubes are sorted by color and stacked in a pyramid in each pallet.
- for i in range(4):
- self.add_goal(objs=objs[i*3:(i+1)*3], matches=np.ones((3, 3)), targ_poses=targs[i*3:(i+1)*3], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 4, symmetries=[np.pi/2]*3,
- language_goal=self.lang_template.format(color=list(utils.COLORS.keys())[i]))
\ No newline at end of file
diff --git a/spaces/Gmq-x/gpt-academic/request_llm/README.md b/spaces/Gmq-x/gpt-academic/request_llm/README.md
deleted file mode 100644
index 973adea1ed6ca1f027e5d84dc2e7b3e92ee8a5ba..0000000000000000000000000000000000000000
--- a/spaces/Gmq-x/gpt-academic/request_llm/README.md
+++ /dev/null
@@ -1,54 +0,0 @@
-# How to use other large language models (in testing on the v3.0 branch)
-
-## ChatGLM
-
-- Install the dependencies: `pip install -r request_llm/requirements_chatglm.txt`
-- Update the configuration: in config.py, set the value of LLM_MODEL to "chatglm"
-
-``` sh
-LLM_MODEL = "chatglm"
-```
-- Run!
-``` sh
-python main.py
-```
-
-
----
-## Text-Generation-UI (TGUI)
-
-### 1. Deploy TGUI
-``` sh
-# 1 Clone the repository
-git clone https://github.com/oobabooga/text-generation-webui.git
-# 2 The latest code in this repository has issues; roll back to a commit from a few weeks earlier
-git reset --hard fcda3f87767e642d1c0411776e549e1d3894843d
-# 3 Change into the project directory
-cd text-generation-webui
-# 4 Install the extra dependencies for text-generation
-pip install accelerate bitsandbytes flexgen gradio llamacpp markdown numpy peft requests rwkv safetensors sentencepiece tqdm datasets git+https://github.com/huggingface/transformers
-# 5 Download a model
-python download-model.py facebook/galactica-1.3b
-# Other options include facebook/opt-1.3b
-# facebook/galactica-1.3b
-# facebook/galactica-6.7b
-# facebook/galactica-120b
-# facebook/pygmalion-1.3b, etc.
-# See https://github.com/oobabooga/text-generation-webui for details
-
-# 6 Start the text-generation server
-python server.py --cpu --listen --listen-port 7865 --model facebook_galactica-1.3b
-```
-
-### 2. Edit config.py
-
-``` sh
-# LLM_MODEL format: tgui:[model]@[ws address]:[ws port] ; the port must match the one specified above
-LLM_MODEL = "tgui:galactica-1.3b@localhost:7865"
-```
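-
-For reference, here is a minimal sketch (not part of this project) of how a `tgui:[model]@[address]:[port]` string like the one above could be split into its parts; the helper name `parse_tgui_model` is illustrative only:
-
-``` python
-def parse_tgui_model(llm_model: str):
-    """Split 'tgui:<model>@<host>:<port>' into its components (illustrative helper)."""
-    prefix, _, remainder = llm_model.partition(":")
-    assert prefix == "tgui", "expected a tgui:... connection string"
-    model, _, endpoint = remainder.partition("@")
-    host, _, port = endpoint.rpartition(":")
-    return model, host, int(port)
-
-print(parse_tgui_model("tgui:galactica-1.3b@localhost:7865"))
-# ('galactica-1.3b', 'localhost', 7865)
-```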
-
-### 3. Run!
-``` sh
-cd chatgpt-academic
-python main.py
-```
diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/op/fused_act_cpu.py b/spaces/Gradio-Blocks/StyleGAN-NADA/op/fused_act_cpu.py
deleted file mode 100644
index f997dafdd53aa9f4bbe07af6746c67a2c6dcb4c7..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/StyleGAN-NADA/op/fused_act_cpu.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.nn import functional as F
-
-
-module_path = os.path.dirname(__file__)
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-def fused_leaky_relu(input, bias=None, negative_slope=0.2, scale=2 ** 0.5):
- if input.device.type == "cpu":
- if bias is not None:
- rest_dim = [1] * (input.ndim - bias.ndim - 1)
- return (
- F.leaky_relu(
- input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=0.2
- )
- * scale
- )
-
- else:
- return F.leaky_relu(input, negative_slope=0.2) * scale
-
- else:
- return FusedLeakyReLUFunction.apply(
- input.contiguous(), bias, negative_slope, scale
- )
-
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/faster_rcnn_x50_32x4d_fpn_gn_ws-all_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/faster_rcnn_x50_32x4d_fpn_gn_ws-all_1x_coco.py
deleted file mode 100644
index 1268980615b69009a33b785eeb59322372633d10..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/faster_rcnn_x50_32x4d_fpn_gn_ws-all_1x_coco.py
+++ /dev/null
@@ -1,16 +0,0 @@
-_base_ = './faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py'
-conv_cfg = dict(type='ConvWS')
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='open-mmlab://jhu/resnext50_32x4d_gn_ws',
- backbone=dict(
- type='ResNeXt',
- depth=50,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- style='pytorch',
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/mask_rcnn.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/mask_rcnn.py
deleted file mode 100644
index c15a7733170e059d2825138b3812319915b7cad6..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/mask_rcnn.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class MaskRCNN(TwoStageDetector):
- """Implementation of `Mask R-CNN <https://arxiv.org/abs/1703.06870>`_"""
-
- def __init__(self,
- backbone,
- rpn_head,
- roi_head,
- train_cfg,
- test_cfg,
- neck=None,
- pretrained=None):
- super(MaskRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/utils/ui.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/utils/ui.py
deleted file mode 100644
index 68fcbe0af257bdbaad767708843b545064d9b219..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/utils/ui.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from pathlib import Path
-
-import gradio as gr
-import torch
-
-refresh_symbol = '\U0001f504' # 🔄
-
-class ToolButton(gr.Button, gr.components.IOComponent):
- """Small button with single emoji as text, fits inside gradio forms"""
-
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def get_block_name(self):
- return "button"
-
-
-def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class):
- def refresh():
- refresh_method()
- args = refreshed_args() if callable(refreshed_args) else refreshed_args
-
- for k, v in args.items():
- setattr(refresh_component, k, v)
-
- return gr.update(**(args or {}))
-
- refresh_button = ToolButton(value=refresh_symbol, elem_classes=elem_class, scale=1, size="sm", container=False)
- refresh_button.click(
- fn=refresh,
- inputs=[],
- outputs=[refresh_component]
- )
- return refresh_button
\ No newline at end of file
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/models.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/models.py
deleted file mode 100644
index de0a9add41016631957c52c4a441e4eccf96f903..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/models.py
+++ /dev/null
@@ -1,441 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-from . import resnet, resnext
-try:
- from lib.nn import SynchronizedBatchNorm2d
-except ImportError:
- from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d
-
-
-class SegmentationModuleBase(nn.Module):
- def __init__(self):
- super(SegmentationModuleBase, self).__init__()
-
- @staticmethod
- def pixel_acc(pred, label, ignore_index=-1):
- _, preds = torch.max(pred, dim=1)
- valid = (label != ignore_index).long()
- acc_sum = torch.sum(valid * (preds == label).long())
- pixel_sum = torch.sum(valid)
- acc = acc_sum.float() / (pixel_sum.float() + 1e-10)
- return acc
-
- @staticmethod
- def part_pixel_acc(pred_part, gt_seg_part, gt_seg_object, object_label, valid):
- mask_object = (gt_seg_object == object_label)
- _, pred = torch.max(pred_part, dim=1)
- acc_sum = mask_object * (pred == gt_seg_part)
- acc_sum = torch.sum(acc_sum.view(acc_sum.size(0), -1), dim=1)
- acc_sum = torch.sum(acc_sum * valid)
- pixel_sum = torch.sum(mask_object.view(mask_object.size(0), -1), dim=1)
- pixel_sum = torch.sum(pixel_sum * valid)
- return acc_sum, pixel_sum
-
- @staticmethod
- def part_loss(pred_part, gt_seg_part, gt_seg_object, object_label, valid):
- mask_object = (gt_seg_object == object_label)
- loss = F.nll_loss(pred_part, gt_seg_part * mask_object.long(), reduction='none')
- loss = loss * mask_object.float()
- loss = torch.sum(loss.view(loss.size(0), -1), dim=1)
- nr_pixel = torch.sum(mask_object.view(mask_object.shape[0], -1), dim=1)
- sum_pixel = (nr_pixel * valid).sum()
- loss = (loss * valid.float()).sum() / torch.clamp(sum_pixel, 1).float()
- return loss
-
-
-class SegmentationModule(SegmentationModuleBase):
- def __init__(self, net_enc, net_dec, labeldata, loss_scale=None):
- super(SegmentationModule, self).__init__()
- self.encoder = net_enc
- self.decoder = net_dec
- self.crit_dict = nn.ModuleDict()
- if loss_scale is None:
- self.loss_scale = {"object": 1, "part": 0.5, "scene": 0.25, "material": 1}
- else:
- self.loss_scale = loss_scale
-
- # criterion
- self.crit_dict["object"] = nn.NLLLoss(ignore_index=0) # ignore background 0
- self.crit_dict["material"] = nn.NLLLoss(ignore_index=0) # ignore background 0
- self.crit_dict["scene"] = nn.NLLLoss(ignore_index=-1) # ignore unlabelled -1
-
- # Label data - read from json
- self.labeldata = labeldata
- object_to_num = {k: v for v, k in enumerate(labeldata['object'])}
- part_to_num = {k: v for v, k in enumerate(labeldata['part'])}
- self.object_part = {object_to_num[k]:
- [part_to_num[p] for p in v]
- for k, v in labeldata['object_part'].items()}
- self.object_with_part = sorted(self.object_part.keys())
- self.decoder.object_part = self.object_part
- self.decoder.object_with_part = self.object_with_part
-
- def forward(self, feed_dict, *, seg_size=None):
- if seg_size is None: # training
-
- if feed_dict['source_idx'] == 0:
- output_switch = {"object": True, "part": True, "scene": True, "material": False}
- elif feed_dict['source_idx'] == 1:
- output_switch = {"object": False, "part": False, "scene": False, "material": True}
- else:
- raise ValueError
-
- pred = self.decoder(
- self.encoder(feed_dict['img'], return_feature_maps=True),
- output_switch=output_switch
- )
-
- # loss
- loss_dict = {}
- if pred['object'] is not None: # object
- loss_dict['object'] = self.crit_dict['object'](pred['object'], feed_dict['seg_object'])
- if pred['part'] is not None: # part
- part_loss = 0
- for idx_part, object_label in enumerate(self.object_with_part):
- part_loss += self.part_loss(
- pred['part'][idx_part], feed_dict['seg_part'],
- feed_dict['seg_object'], object_label, feed_dict['valid_part'][:, idx_part])
- loss_dict['part'] = part_loss
- if pred['scene'] is not None: # scene
- loss_dict['scene'] = self.crit_dict['scene'](pred['scene'], feed_dict['scene_label'])
- if pred['material'] is not None: # material
- loss_dict['material'] = self.crit_dict['material'](pred['material'], feed_dict['seg_material'])
- loss_dict['total'] = sum([loss_dict[k] * self.loss_scale[k] for k in loss_dict.keys()])
-
- # metric
- metric_dict= {}
- if pred['object'] is not None:
- metric_dict['object'] = self.pixel_acc(
- pred['object'], feed_dict['seg_object'], ignore_index=0)
- if pred['material'] is not None:
- metric_dict['material'] = self.pixel_acc(
- pred['material'], feed_dict['seg_material'], ignore_index=0)
- if pred['part'] is not None:
- acc_sum, pixel_sum = 0, 0
- for idx_part, object_label in enumerate(self.object_with_part):
- acc, pixel = self.part_pixel_acc(
- pred['part'][idx_part], feed_dict['seg_part'], feed_dict['seg_object'],
- object_label, feed_dict['valid_part'][:, idx_part])
- acc_sum += acc
- pixel_sum += pixel
- metric_dict['part'] = acc_sum.float() / (pixel_sum.float() + 1e-10)
- if pred['scene'] is not None:
- metric_dict['scene'] = self.pixel_acc(
- pred['scene'], feed_dict['scene_label'], ignore_index=-1)
-
- return {'metric': metric_dict, 'loss': loss_dict}
- else: # inference
- output_switch = {"object": True, "part": True, "scene": True, "material": True}
- pred = self.decoder(self.encoder(feed_dict['img'], return_feature_maps=True),
- output_switch=output_switch, seg_size=seg_size)
- return pred
-
-
-def conv3x3(in_planes, out_planes, stride=1, has_bias=False):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=has_bias)
-
-
-def conv3x3_bn_relu(in_planes, out_planes, stride=1):
- return nn.Sequential(
- conv3x3(in_planes, out_planes, stride),
- SynchronizedBatchNorm2d(out_planes),
- nn.ReLU(inplace=True),
- )
-
-
-class ModelBuilder:
- def __init__(self):
- pass
-
- # custom weights initialization
- @staticmethod
- def weights_init(m):
- classname = m.__class__.__name__
- if classname.find('Conv') != -1:
- nn.init.kaiming_normal_(m.weight.data, nonlinearity='relu')
- elif classname.find('BatchNorm') != -1:
- m.weight.data.fill_(1.)
- m.bias.data.fill_(1e-4)
- #elif classname.find('Linear') != -1:
- # m.weight.data.normal_(0.0, 0.0001)
-
- def build_encoder(self, arch='resnet50_dilated8', fc_dim=512, weights=''):
- pretrained = True if len(weights) == 0 else False
- if arch == 'resnet34':
- raise NotImplementedError
- orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnet)
- elif arch == 'resnet34_dilated8':
- raise NotImplementedError
- orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=8)
- elif arch == 'resnet34_dilated16':
- raise NotImplementedError
- orig_resnet = resnet.__dict__['resnet34'](pretrained=pretrained)
- net_encoder = ResnetDilated(orig_resnet,
- dilate_scale=16)
- elif arch == 'resnet50':
- orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnet)
- elif arch == 'resnet101':
- orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnet)
- elif arch == 'resnext101':
- orig_resnext = resnext.__dict__['resnext101'](pretrained=pretrained)
- net_encoder = Resnet(orig_resnext) # we can still use class Resnet
- else:
- raise Exception('Architecture undefined!')
-
- # net_encoder.apply(self.weights_init)
- if len(weights) > 0:
- # print('Loading weights for net_encoder')
- net_encoder.load_state_dict(
- torch.load(weights, map_location=lambda storage, loc: storage), strict=False)
- return net_encoder
-
- def build_decoder(self, nr_classes,
- arch='ppm_bilinear_deepsup', fc_dim=512,
- weights='', use_softmax=False):
- if arch == 'upernet_lite':
- net_decoder = UPerNet(
- nr_classes=nr_classes,
- fc_dim=fc_dim,
- use_softmax=use_softmax,
- fpn_dim=256)
- elif arch == 'upernet':
- net_decoder = UPerNet(
- nr_classes=nr_classes,
- fc_dim=fc_dim,
- use_softmax=use_softmax,
- fpn_dim=512)
- else:
- raise Exception('Architecture undefined!')
-
- net_decoder.apply(self.weights_init)
- if len(weights) > 0:
- # print('Loading weights for net_decoder')
- net_decoder.load_state_dict(
- torch.load(weights, map_location=lambda storage, loc: storage), strict=False)
- return net_decoder
-
-
-class Resnet(nn.Module):
- def __init__(self, orig_resnet):
- super(Resnet, self).__init__()
-
- # take pretrained resnet, except AvgPool and FC
- self.conv1 = orig_resnet.conv1
- self.bn1 = orig_resnet.bn1
- self.relu1 = orig_resnet.relu1
- self.conv2 = orig_resnet.conv2
- self.bn2 = orig_resnet.bn2
- self.relu2 = orig_resnet.relu2
- self.conv3 = orig_resnet.conv3
- self.bn3 = orig_resnet.bn3
- self.relu3 = orig_resnet.relu3
- self.maxpool = orig_resnet.maxpool
- self.layer1 = orig_resnet.layer1
- self.layer2 = orig_resnet.layer2
- self.layer3 = orig_resnet.layer3
- self.layer4 = orig_resnet.layer4
-
- def forward(self, x, return_feature_maps=False):
- conv_out = []
-
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x); conv_out.append(x);
- x = self.layer2(x); conv_out.append(x);
- x = self.layer3(x); conv_out.append(x);
- x = self.layer4(x); conv_out.append(x);
-
- if return_feature_maps:
- return conv_out
- return [x]
-
-
-# upernet
-class UPerNet(nn.Module):
- def __init__(self, nr_classes, fc_dim=4096,
- use_softmax=False, pool_scales=(1, 2, 3, 6),
- fpn_inplanes=(256,512,1024,2048), fpn_dim=256):
- # Lazy import so that compilation isn't needed if not being used.
- from .prroi_pool import PrRoIPool2D
- super(UPerNet, self).__init__()
- self.use_softmax = use_softmax
-
- # PPM Module
- self.ppm_pooling = []
- self.ppm_conv = []
-
- for scale in pool_scales:
- # we use the feature map size instead of input image size, so down_scale = 1.0
- self.ppm_pooling.append(PrRoIPool2D(scale, scale, 1.))
- self.ppm_conv.append(nn.Sequential(
- nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False),
- SynchronizedBatchNorm2d(512),
- nn.ReLU(inplace=True)
- ))
- self.ppm_pooling = nn.ModuleList(self.ppm_pooling)
- self.ppm_conv = nn.ModuleList(self.ppm_conv)
- self.ppm_last_conv = conv3x3_bn_relu(fc_dim + len(pool_scales)*512, fpn_dim, 1)
-
- # FPN Module
- self.fpn_in = []
- for fpn_inplane in fpn_inplanes[:-1]: # skip the top layer
- self.fpn_in.append(nn.Sequential(
- nn.Conv2d(fpn_inplane, fpn_dim, kernel_size=1, bias=False),
- SynchronizedBatchNorm2d(fpn_dim),
- nn.ReLU(inplace=True)
- ))
- self.fpn_in = nn.ModuleList(self.fpn_in)
-
- self.fpn_out = []
- for i in range(len(fpn_inplanes) - 1): # skip the top layer
- self.fpn_out.append(nn.Sequential(
- conv3x3_bn_relu(fpn_dim, fpn_dim, 1),
- ))
- self.fpn_out = nn.ModuleList(self.fpn_out)
-
- self.conv_fusion = conv3x3_bn_relu(len(fpn_inplanes) * fpn_dim, fpn_dim, 1)
-
- # Background is included; if it is ignored in the loss, output channel 0 will not be trained.
- self.nr_scene_class, self.nr_object_class, self.nr_part_class, self.nr_material_class = \
- nr_classes['scene'], nr_classes['object'], nr_classes['part'], nr_classes['material']
-
- # input: PPM out, input_dim: fpn_dim
- self.scene_head = nn.Sequential(
- conv3x3_bn_relu(fpn_dim, fpn_dim, 1),
- nn.AdaptiveAvgPool2d(1),
- nn.Conv2d(fpn_dim, self.nr_scene_class, kernel_size=1, bias=True)
- )
-
- # input: Fusion out, input_dim: fpn_dim
- self.object_head = nn.Sequential(
- conv3x3_bn_relu(fpn_dim, fpn_dim, 1),
- nn.Conv2d(fpn_dim, self.nr_object_class, kernel_size=1, bias=True)
- )
-
- # input: Fusion out, input_dim: fpn_dim
- self.part_head = nn.Sequential(
- conv3x3_bn_relu(fpn_dim, fpn_dim, 1),
- nn.Conv2d(fpn_dim, self.nr_part_class, kernel_size=1, bias=True)
- )
-
- # input: FPN_2 (P2), input_dim: fpn_dim
- self.material_head = nn.Sequential(
- conv3x3_bn_relu(fpn_dim, fpn_dim, 1),
- nn.Conv2d(fpn_dim, self.nr_material_class, kernel_size=1, bias=True)
- )
-
- def forward(self, conv_out, output_switch=None, seg_size=None):
-
- output_dict = {k: None for k in output_switch.keys()}
-
- conv5 = conv_out[-1]
- input_size = conv5.size()
- ppm_out = [conv5]
- roi = [] # fake rois, just used for pooling
- for i in range(input_size[0]): # batch size
- roi.append(torch.Tensor([i, 0, 0, input_size[3], input_size[2]]).view(1, -1)) # b, x0, y0, x1, y1
- roi = torch.cat(roi, dim=0).type_as(conv5)
- ppm_out = [conv5]
- for pool_scale, pool_conv in zip(self.ppm_pooling, self.ppm_conv):
- ppm_out.append(pool_conv(F.interpolate(
- pool_scale(conv5, roi.detach()),
- (input_size[2], input_size[3]),
- mode='bilinear', align_corners=False)))
- ppm_out = torch.cat(ppm_out, 1)
- f = self.ppm_last_conv(ppm_out)
-
- if output_switch['scene']: # scene
- output_dict['scene'] = self.scene_head(f)
-
- if output_switch['object'] or output_switch['part'] or output_switch['material']:
- fpn_feature_list = [f]
- for i in reversed(range(len(conv_out) - 1)):
- conv_x = conv_out[i]
- conv_x = self.fpn_in[i](conv_x) # lateral branch
-
- f = F.interpolate(
- f, size=conv_x.size()[2:], mode='bilinear', align_corners=False) # top-down branch
- f = conv_x + f
-
- fpn_feature_list.append(self.fpn_out[i](f))
- fpn_feature_list.reverse() # [P2 - P5]
-
- # material
- if output_switch['material']:
- output_dict['material'] = self.material_head(fpn_feature_list[0])
-
- if output_switch['object'] or output_switch['part']:
- output_size = fpn_feature_list[0].size()[2:]
- fusion_list = [fpn_feature_list[0]]
- for i in range(1, len(fpn_feature_list)):
- fusion_list.append(F.interpolate(
- fpn_feature_list[i],
- output_size,
- mode='bilinear', align_corners=False))
- fusion_out = torch.cat(fusion_list, 1)
- x = self.conv_fusion(fusion_out)
-
- if output_switch['object']: # object
- output_dict['object'] = self.object_head(x)
- if output_switch['part']:
- output_dict['part'] = self.part_head(x)
-
- if self.use_softmax: # is True during inference
- # inference scene
- x = output_dict['scene']
- x = x.squeeze(3).squeeze(2)
- x = F.softmax(x, dim=1)
- output_dict['scene'] = x
-
- # inference object, material
- for k in ['object', 'material']:
- x = output_dict[k]
- x = F.interpolate(x, size=seg_size, mode='bilinear', align_corners=False)
- x = F.softmax(x, dim=1)
- output_dict[k] = x
-
- # inference part
- x = output_dict['part']
- x = F.interpolate(x, size=seg_size, mode='bilinear', align_corners=False)
- part_pred_list, head = [], 0
- for idx_part, object_label in enumerate(self.object_with_part):
- n_part = len(self.object_part[object_label])
- _x = F.interpolate(x[:, head: head + n_part], size=seg_size, mode='bilinear', align_corners=False)
- _x = F.softmax(_x, dim=1)
- part_pred_list.append(_x)
- head += n_part
- output_dict['part'] = part_pred_list
-
- else: # Training
- # object, scene, material
- for k in ['object', 'scene', 'material']:
- if output_dict[k] is None:
- continue
- x = output_dict[k]
- x = F.log_softmax(x, dim=1)
- if k == "scene": # for scene
- x = x.squeeze(3).squeeze(2)
- output_dict[k] = x
- if output_dict['part'] is not None:
- part_pred_list, head = [], 0
- for idx_part, object_label in enumerate(self.object_with_part):
- n_part = len(self.object_part[object_label])
- x = output_dict['part'][:, head: head + n_part]
- x = F.log_softmax(x, dim=1)
- part_pred_list.append(x)
- head += n_part
- output_dict['part'] = part_pred_list
-
- return output_dict
diff --git a/spaces/Hakim571/Food-Recommendation/app.py b/spaces/Hakim571/Food-Recommendation/app.py
deleted file mode 100644
index b9e4dcf22cdc30e2adbd9bf735688420f42b9930..0000000000000000000000000000000000000000
--- a/spaces/Hakim571/Food-Recommendation/app.py
+++ /dev/null
@@ -1,477 +0,0 @@
-import numpy as np
-import numpy.ma as ma
-import pandas as pd
-import tensorflow as tf
-from tensorflow import keras
-import tensorflow_recommenders as tfrs
-from typing import Dict, Text
-from itertools import combinations
-
-user_data_raw = pd.read_pickle("./user_data.pkl")
-food_data_raw = pd.read_pickle("./food_raw.pkl")
-food_popularity_raw = pd.read_pickle("./food_popularity.pkl")
-
-food_data = food_data_raw.set_index('Food_ID').reset_index().drop(food_data_raw.columns[[0,31,32,33,34,35,36]],axis = 1).copy()
-food_data['Food_ID'] = food_data['Food_ID'].astype('str')
-
-populars = tf.data.Dataset.from_tensor_slices(dict(food_popularity_raw[['User_ID', 'Food_ID', 'value',
-'Age', 'Body_Weight', 'Body_Height','Cal_Need','sex','blood_group','Fast_Food','Sumber','Tipe',
-'Jenis_Olahan','Mentah / Olahan','Kelompok Makanan','Air (g)', 'Energi (Kal)','Protein (g)',
-'Lemak (g)', 'Karbohidrat (g)', 'Serat (g)',
-'Abu (g)','Kalsium (Ca) (mg)', 'Fosfor (P) (mg)', 'Besi (Fe) (mg)',
-'Natrium (Na) (mg)', 'Kalium (Ka) (mg)', 'Tembaga (Cu) (mg)',
-'Seng (Zn) (mg)', 'Retinol (vit. A) (mcg)', 'β-karoten (mcg)',
-'Karoten total (mcg)', 'Thiamin (vit. B1) (mg)',
-'Riboflavin (vit. B2) (mg)', 'Niasin (mg)', 'Vitamin C (mg)', 'BDD (%)']]))
-
-foods = tf.data.Dataset.from_tensor_slices(dict(food_data[['Food_ID','Fast_Food','Sumber','Tipe',
-'Jenis_Olahan','Mentah / Olahan','Kelompok Makanan','Air (g)', 'Energi (Kal)','Protein (g)',
-'Lemak (g)', 'Karbohidrat (g)', 'Serat (g)',
-'Abu (g)','Kalsium (Ca) (mg)', 'Fosfor (P) (mg)', 'Besi (Fe) (mg)',
-'Natrium (Na) (mg)', 'Kalium (Ka) (mg)', 'Tembaga (Cu) (mg)',
-'Seng (Zn) (mg)', 'Retinol (vit. A) (mcg)', 'β-karoten (mcg)',
-'Karoten total (mcg)', 'Thiamin (vit. B1) (mg)',
-'Riboflavin (vit. B2) (mg)', 'Niasin (mg)', 'Vitamin C (mg)', 'BDD (%)']]))
-
-food_names = foods.batch(100).map(tf.autograph.experimental.do_not_convert(lambda x: x["Food_ID"]))
-user_ids = populars.batch(100).map(tf.autograph.experimental.do_not_convert(lambda x: x["User_ID"]))
-unique_food_names = np.unique(np.concatenate(list(food_names)))
-unique_user_ids = np.unique(np.concatenate(list(user_ids)))
-
-USER_FEATURE_NUM = ['Age', 'Body_Weight', 'Body_Height','Cal_Need']
-
-USER_FEATURE_CAT= ['sex','blood_group']
-
-FOOD_FEATURE_NUM = ['Air (g)', 'Energi (Kal)','Protein (g)', 'Lemak (g)', 'Karbohidrat (g)', 'Serat (g)',
-'Abu (g)','Kalsium (Ca) (mg)', 'Fosfor (P) (mg)', 'Besi (Fe) (mg)',
-'Natrium (Na) (mg)', 'Kalium (Ka) (mg)', 'Tembaga (Cu) (mg)',
-'Seng (Zn) (mg)', 'Retinol (vit. A) (mcg)', 'β-karoten (mcg)',
-'Karoten total (mcg)', 'Thiamin (vit. B1) (mg)',
-'Riboflavin (vit. B2) (mg)', 'Niasin (mg)', 'Vitamin C (mg)', 'BDD (%)']
-
-FOOD_FEATURE_CAT = ['Fast_Food', 'Tipe','Sumber','Jenis_Olahan',
-'Mentah / Olahan','Kelompok Makanan']
-
-class UserModel(tf.keras.Model):
-
- def __init__(self):
- super().__init__()
-
- self.user_embedding = tf.keras.Sequential([
- tf.keras.layers.StringLookup(
- vocabulary=unique_user_ids, mask_token=None),
- tf.keras.layers.Embedding(len(unique_user_ids) + 1, 64),
- ])
-
- self.additional_feature = {}
- self.normalized = {}
- self.categorized = {}
-
- for feature in USER_FEATURE_NUM:
- self.normalized[feature] = tf.keras.layers.Normalization(axis=None)
- self.normalized[feature].adapt(populars.map(lambda x: x[feature]))
- self.additional_feature[feature] = tf.keras.Sequential([self.normalized[feature],tf.keras.layers.Reshape([1])])
-
- self.categorized['sex'] = tf.keras.layers.StringLookup(vocabulary=np.unique(np.concatenate(list(populars.batch(100).map(lambda x: x["sex"])))), mask_token=None)
- self.additional_feature['sex'] = tf.keras.Sequential([self.categorized['sex'],tf.keras.layers.Embedding(3, 8)])
-
- def call(self, inputs):
- # Take the input dictionary, pass it through each input layer,
- # and concatenate the result.
-
- return tf.concat(
- [self.user_embedding(inputs["User_ID"])]+
- [self.additional_feature[k](inputs[k]) for k in self.additional_feature],
- axis=1)
-
-
-class QueryModel(tf.keras.Model):
- """Model for encoding user queries."""
-
- def __init__(self, layer_sizes, popular_weight=1, retrieval_weight=1):
- """Model for encoding user queries.
-
- Args:
- layer_sizes:
- A list of integers where the i-th entry represents the number of units
- the i-th layer contains.
- """
- super().__init__()
-
- # We first use the user model for generating embeddings.
- self.user_embedding_model = UserModel()
-
- # Then construct the layers.
- self.dense_layers = tf.keras.Sequential()
-
- # Use the linear activation
- self.dense_layers.add(tf.keras.layers.Dense(128))
-
- def call(self, inputs):
- feature_embedding = self.user_embedding_model(inputs)
- return self.dense_layers(feature_embedding)
-
-class FoodModel(tf.keras.Model):
-
- def __init__(self):
- super().__init__()
-
- self.food_embedding = tf.keras.Sequential([
- tf.keras.layers.StringLookup(
- vocabulary=unique_food_names,mask_token=None),
- tf.keras.layers.Embedding(len(unique_food_names) + 1, 64)
- ])
-
- self.additional_feature = {}
- self.normalized={}
- self.categorized={}
-
- for feature in FOOD_FEATURE_NUM:
- self.normalized[feature] = tf.keras.layers.Normalization(axis=None)
- self.normalized[feature].adapt(populars.map(lambda x: x[feature]))
- self.additional_feature[feature] = tf.keras.Sequential([self.normalized[feature],tf.keras.layers.Reshape([1])])
-
- for feature in FOOD_FEATURE_CAT:
- self.categorized[feature] = tf.keras.layers.StringLookup(vocabulary=np.unique(np.concatenate(list(foods.batch(100).map(lambda x: x[feature])))),mask_token=None)
- self.additional_feature[feature] = tf.keras.Sequential([self.categorized[feature],tf.keras.layers.Embedding(len(np.unique(np.concatenate(list(foods.batch(151).map(lambda x: x[feature])))))+1, 8)])
-
- def call(self, inputs):
- return tf.concat(
- [self.food_embedding(inputs["Food_ID"])]+
- [self.additional_feature[k](inputs[k]) for k in self.additional_feature],
- axis=1)
-
-class CandidateModel(tf.keras.Model):
- """Model for encoding foods."""
-
- def __init__(self, layer_sizes, popular_weight=1, retrieval_weight=1):
- """Model for encoding foods.
-
- Args:
- layer_sizes:
- A list of integers where the i-th entry represents the number of units
- the i-th layer contains.
- """
- super().__init__()
-
- self.food_embedding_model = FoodModel()
-
- # Then construct the layers.
- self.dense_layers = tf.keras.Sequential()
-
- # Use the linear activation.
- self.dense_layers.add(tf.keras.layers.Dense(128))
-
- def call(self, inputs):
- feature_embedding = self.food_embedding_model(inputs)
- return self.dense_layers(feature_embedding)
-
-
-class FoodlensModel(tfrs.models.Model):
-
- def __init__(self, layer_sizes, popular_weight=1, retrieval_weight=1):
- super().__init__()
- self.query_model = QueryModel(layer_sizes)
- self.candidate_model = CandidateModel(layer_sizes)
-
- self.popular_model = tf.keras.Sequential([
- tf.keras.layers.Dense(256, activation="relu"),
- tf.keras.layers.Dense(128, activation="relu"),
- tf.keras.layers.Dense(1),
- ])
-
- # The tasks.
- self.popular_task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
- loss=tf.keras.losses.MeanSquaredError(),
- metrics=[tf.keras.metrics.RootMeanSquaredError()],
- )
- self.retrieval_task: tf.keras.layers.Layer = tfrs.tasks.Retrieval(
- metrics=tfrs.metrics.FactorizedTopK(
- candidates=foods.apply(tf.data.experimental.dense_to_ragged_batch(151)).map(self.candidate_model)
- )
- )
-
- # The loss weights.
- self.popular_weight = popular_weight
- self.retrieval_weight = retrieval_weight
-
- def call(self, features: Dict[Text, tf.Tensor], training=True) -> tf.Tensor:
-
- query_embeddings = self.query_model({"User_ID": features["User_ID"],
- **{k: features[k] for k in USER_FEATURE_NUM+['sex']}
- })
- food_embeddings = self.candidate_model({"Food_ID": features["Food_ID"],
- **{k: features[k] for k in FOOD_FEATURE_NUM+FOOD_FEATURE_CAT}
- })
-
- output_dot = tf.concat([query_embeddings, food_embeddings],axis=1)
-
- return self.popular_model(output_dot)
-
- def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
- # We only pass the user id and timestamp features into the query model. This
- # is to ensure that the training inputs would have the same keys as the
- # query inputs. Otherwise the discrepancy in input structure would cause an
- # error when loading the query model after saving it.
- query_embeddings = self.query_model({
- "User_ID": features["User_ID"],
- **{k: features[k] for k in USER_FEATURE_NUM+['sex']}
- })
- food_embeddings = self.candidate_model({
- "Food_ID": features["Food_ID"],
- **{k: features[k] for k in FOOD_FEATURE_NUM + FOOD_FEATURE_CAT}
- })
-
- populars_value = features.pop("value")
-
- popular_predictions = self(features)
-
- # We compute the loss for each task.
- popular_loss = self.popular_task(
- labels=populars_value,
- predictions=popular_predictions)
-
- retrieval_loss = self.retrieval_task(query_embeddings, food_embeddings, compute_metrics=not training)
-
- return (self.popular_weight * popular_loss + self.retrieval_weight * retrieval_loss)
-
-weights_model_filepath = './saved_model/model_weight'
-model_2 = FoodlensModel(layer_sizes=None,popular_weight=1, retrieval_weight=1)
-model_2.load_weights(weights_model_filepath).expect_partial()
-
-# This function requires:
-# food_data_raw -> the food data from the database
-# input_dict    -> a dict with the user data (as tf.constant values)
-# output_type   -> "print", "dataframe" or "dict"
-# model_recom   -> the recommendation-system model
-# top_n         -> how many food recommendations to return
-# (a usage sketch follows the function definition below)
-
-def predict_food(food_data_raw,input_dict,output_type, model_recom ,top_n=3):
- USER_FEATURE_NUM = ['Age', 'Body_Weight', 'Body_Height','Cal_Need']
-
- USER_FEATURE_CAT= ['sex','blood_group']
-
- food_data = food_data_raw.set_index('Food_ID').reset_index().drop(food_data_raw.columns[[0,31,32,33,34,35,36]],axis = 1).copy()
- food_data['Food_ID'] = food_data['Food_ID'].astype('str')
-
- foods = tf.data.Dataset.from_tensor_slices(dict(food_data[['Food_ID','Fast_Food','Sumber','Tipe',
- 'Jenis_Olahan','Mentah / Olahan','Kelompok Makanan','Air (g)', 'Energi (Kal)','Protein (g)',
- 'Lemak (g)', 'Karbohidrat (g)', 'Serat (g)',
- 'Abu (g)','Kalsium (Ca) (mg)', 'Fosfor (P) (mg)', 'Besi (Fe) (mg)',
- 'Natrium (Na) (mg)', 'Kalium (Ka) (mg)', 'Tembaga (Cu) (mg)',
- 'Seng (Zn) (mg)', 'Retinol (vit. A) (mcg)', 'β-karoten (mcg)',
- 'Karoten total (mcg)', 'Thiamin (vit. B1) (mg)',
- 'Riboflavin (vit. B2) (mg)', 'Niasin (mg)', 'Vitamin C (mg)', 'BDD (%)']]))
-
- # Create a model that takes in raw query features, and
- brute_force = tfrs.layers.factorized_top_k.BruteForce(model_recom.query_model, k = top_n)
-
- # recommends foods out of the entire foods dataset.
- brute_force.index_from_dataset(foods.apply(tf.data.experimental.dense_to_ragged_batch(151)).map(model_recom.candidate_model))
-
- recommended_food = brute_force({
- "User_ID": tf.constant([input_dict['User_ID'].numpy()[0].decode("utf-8")]),
- **{k: tf.constant([input_dict[k].numpy()[0]]) for k in USER_FEATURE_NUM+['sex']}
- })
-
- if output_type=="print":
- print('Top {} recommendations for user {}:\n'.format(top_n, input_dict['User_ID']))
- for i, food_id in enumerate(recommended_food[1].numpy()[0,:top_n]):
- if list(food_data_raw[food_data_raw["No."]==food_id+1]["Food_ID"])==[]:
- continue
- print('{}. {} : {}'.format(i+1, list(food_data_raw[food_data_raw["No."]==food_id+1]["Food_ID"])[0], list(food_data_raw[food_data_raw["No."]==food_id+1]["Nama Bahan Makanan"])[0]))
-
- if output_type=="dataframe":
- df_output = pd.DataFrame()
-
- df_output['index_number'] = list(range(1,top_n+1))
- df_output['list_food_id'] = [list(food_data_raw[food_data_raw["No."]==index+1]["Food_ID"])[0] for index in recommended_food[1].numpy()[0,:top_n]]
- df_output['list_food_name'] = [list(food_data_raw[food_data_raw["No."]==index+1]["Nama Bahan Makanan"])[0] for index in recommended_food[1].numpy()[0,:top_n]]
- return df_output
-
- if output_type=="dict":
- df_output = pd.DataFrame()
-
- df_output['index_number'] = list(range(1,top_n+1))
- df_output['list_food_id'] = [list(food_data_raw[food_data_raw["No."]==index+1]["Food_ID"])[0] for index in recommended_food[1].numpy()[0,:top_n]]
- df_output['list_food_name'] = [list(food_data_raw[food_data_raw["No."]==index+1]["Nama Bahan Makanan"])[0] for index in recommended_food[1].numpy()[0,:top_n]]
- return df_output.to_dict('dict')
-
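-# Usage sketch for predict_food (illustrative only; "UNT001" and the feature values
-# are made-up inputs, model_2 is the model loaded above):
-#
-#   recommendations = predict_food(
-#       food_data_raw=food_data_raw,
-#       input_dict={"User_ID": tf.constant(["UNT001"]),
-#                   "Age": tf.constant([25.0]),
-#                   "Body_Weight": tf.constant([60.0]),
-#                   "Body_Height": tf.constant([170.0]),
-#                   "Cal_Need": tf.constant([2200.0]),
-#                   "sex": tf.constant(["M"])},
-#       output_type="dataframe",
-#       model_recom=model_2,
-#       top_n=5)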
-
-# This function requires:
-# food_data_raw  -> the food data from the database
-# dict_new_user  -> a dict with the user data (plain values; wrapped in tf.constant below)
-# dict_food_data -> a dict with the food data (plain values; wrapped in tf.constant below)
-# model_recom    -> the recommendation-system model
-# (a usage sketch follows the function definition below)
-
-def predict_popular(food_data_raw, dict_new_user,dict_food_data,model_recom):
-
- food_data = food_data_raw.set_index('Food_ID').reset_index().drop(food_data_raw.columns[[0,31,32,33,34,35,36]],axis = 1).copy()
- food_data['Food_ID'] = food_data['Food_ID'].astype('str')
-
- foods = tf.data.Dataset.from_tensor_slices(dict(food_data[['Food_ID','Fast_Food','Sumber','Tipe',
- 'Jenis_Olahan','Mentah / Olahan','Kelompok Makanan','Air (g)', 'Energi (Kal)','Protein (g)',
- 'Lemak (g)', 'Karbohidrat (g)', 'Serat (g)',
- 'Abu (g)','Kalsium (Ca) (mg)', 'Fosfor (P) (mg)', 'Besi (Fe) (mg)',
- 'Natrium (Na) (mg)', 'Kalium (Ka) (mg)', 'Tembaga (Cu) (mg)',
- 'Seng (Zn) (mg)', 'Retinol (vit. A) (mcg)', 'β-karoten (mcg)',
- 'Karoten total (mcg)', 'Thiamin (vit. B1) (mg)',
- 'Riboflavin (vit. B2) (mg)', 'Niasin (mg)', 'Vitamin C (mg)', 'BDD (%)']]))
-
- input_dict_total = dict(dict_new_user)
- input_dict_total.update(dict_food_data)
- input_dict_total= {k: tf.constant([input_dict_total[k]]) for k in input_dict_total}
- predicted_popular = model_recom.predict(input_dict_total)
- print("Predicted popular for {} or {}: {}".format(input_dict_total['Food_ID'][0].numpy().decode("utf-8"), list(food_data[food_data['Food_ID']==input_dict_total['Food_ID'][0].numpy().decode("utf-8")]['Nama Bahan Makanan'])[0],predicted_popular[0,0]))
-
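-# Usage sketch for predict_popular (illustrative only; the user values are made up and
-# the first row of food_data is used as the candidate food; the function wraps every
-# value in tf.constant itself):
-#
-#   dict_new_user = {"User_ID": "UNT001", "Age": 25.0, "Body_Weight": 60.0,
-#                    "Body_Height": 170.0, "Cal_Need": 2200.0, "sex": "M"}
-#   dict_food_data = food_data.iloc[0].to_dict()
-#   predict_popular(food_data_raw, dict_new_user, dict_food_data, model_2)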
-# This function requires:
-# food_data_raw   -> the food data from the database
-# list_recom_food -> the output of the predict_food function (a list of food names)
-# gender          -> "M" or "F"
-# pred_cal        -> the user's predicted calorie need
-# amount_of_eat   -> number of meals per day (2, 3 or 4)
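-# Scoring note (added for clarity): for every combination of `amount_of_eat` foods,
-# each food's calorie/protein/fat/carb value is scaled by a factor (4 when 2 meals are
-# chosen, 2.7 for 3 meals, 2 for 4 meals), the squared difference from the corresponding
-# daily target is summed over the foods and the four nutrients, and the combinations are
-# returned sorted by that total error, smallest first. For example, with pred_cal = 2000,
-# a 700 Kal food in a 3-meal combination contributes (2.7*700 - 2000)**2 = 12100 to the
-# calorie term.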
-
-def top_nutrition(food_data_raw,user_id, list_recom_food, gender, pred_cal=None, amount_of_eat=3):
- data_nut_food = food_data_raw[food_data_raw["Nama Bahan Makanan"].isin(list_recom_food)][["Nama Bahan Makanan","Energi (Kal)","Protein (g)","Lemak (g)","Karbohidrat (g)"]]
- data_nut_cal = data_nut_food[["Nama Bahan Makanan","Energi (Kal)"]]
- data_nut_pro = data_nut_food[["Nama Bahan Makanan","Protein (g)"]]
- data_nut_fat = data_nut_food[["Nama Bahan Makanan","Lemak (g)"]]
- data_nut_carb = data_nut_food[["Nama Bahan Makanan","Karbohidrat (g)"]]
-
- if pred_cal is None:
- if gender=="M":
- pred_cal=2500
- else:
- pred_cal=2000
-
- if gender=="M":
- protein_need = 55
- carb_need = 275
- fat_need = 67
- else:
- protein_need = 45
- carb_need = 275
- fat_need = 67
-
- if amount_of_eat == 2:
- comb_2 = combinations(list_recom_food, 2)
- list_cal = [np.sum(np.power(np.subtract(np.multiply(nut,4),pred_cal),2)) for nut in [[list(data_nut_cal[data_nut_cal["Nama Bahan Makanan"]==str(food)]["Energi (Kal)"])[0] for food in comb] for comb in list(comb_2)]]
-
- comb_2 = combinations(list_recom_food, 2)
- list_pro = [np.sum(np.power(np.subtract(np.multiply(nut,4),protein_need),2)) for nut in [[list(data_nut_pro[data_nut_pro["Nama Bahan Makanan"]==str(food)]["Protein (g)"])[0] for food in comb] for comb in list(comb_2)]]
-
- comb_2 = combinations(list_recom_food, 2)
- list_fat = [np.sum(np.power(np.subtract(np.multiply(nut,4),fat_need),2)) for nut in [[list(data_nut_fat[data_nut_fat["Nama Bahan Makanan"]==str(food)]["Lemak (g)"])[0] for food in comb] for comb in list(comb_2)]]
-
- comb_2 = combinations(list_recom_food, 2)
- list_carb = [np.sum(np.power(np.subtract(np.multiply(nut,4),carb_need),2)) for nut in [[list(data_nut_carb[data_nut_carb["Nama Bahan Makanan"]==str(food)]["Karbohidrat (g)"])[0] for food in comb] for comb in list(comb_2)]]
-
- total_list = [sum(x) for x in zip(list_cal,list_pro,list_fat,list_carb)]
-
- comb_2 = combinations(list_recom_food, 2)
- list_mse = {comb:total_list[i] for i, comb in enumerate(comb_2)}
- list_mse_sorted = sorted(list_mse.items(), key=lambda x:x[1])
-
- return pd.DataFrame([list(x) for x in list_mse_sorted]).to_dict('dict')[0]
-
- elif amount_of_eat == 4:
- comb_4 = combinations(list_recom_food, 4)
- list_cal = [np.sum(np.power(np.subtract(np.multiply(nut,2),pred_cal),2)) for nut in [[list(data_nut_cal[data_nut_cal["Nama Bahan Makanan"]==str(food)]["Energi (Kal)"])[0] for food in comb] for comb in list(comb_4)]]
-
- comb_4 = combinations(list_recom_food, 4)
- list_pro = [np.sum(np.power(np.subtract(np.multiply(nut,2),protein_need),2)) for nut in [[list(data_nut_pro[data_nut_pro["Nama Bahan Makanan"]==str(food)]["Protein (g)"])[0] for food in comb] for comb in list(comb_4)]]
-
- comb_4 = combinations(list_recom_food, 4)
- list_fat = [np.sum(np.power(np.subtract(np.multiply(nut,2),fat_need),2)) for nut in [[list(data_nut_fat[data_nut_fat["Nama Bahan Makanan"]==str(food)]["Lemak (g)"])[0] for food in comb] for comb in list(comb_4)]]
-
- comb_4 = combinations(list_recom_food, 4)
- list_carb = [np.sum(np.power(np.subtract(np.multiply(nut,2),carb_need),2)) for nut in [[list(data_nut_carb[data_nut_carb["Nama Bahan Makanan"]==str(food)]["Karbohidrat (g)"])[0] for food in comb] for comb in list(comb_4)]]
-
- total_list = [sum(x) for x in zip(list_cal,list_pro,list_fat,list_carb)]
-
- comb_4 = combinations(list_recom_food, 4)
- list_mse = {comb:total_list[i] for i, comb in enumerate(comb_4)}
- list_mse_sorted = sorted(list_mse.items(), key=lambda x:x[1])
-
- return pd.DataFrame([list(x) for x in list_mse_sorted]).to_dict('dict')[0]
-
- else:
- comb_3 = combinations(list_recom_food, 3)
- list_cal = [np.sum(np.power(np.subtract(np.multiply(nut,2.7),pred_cal),2)) for nut in [[list(data_nut_cal[data_nut_cal["Nama Bahan Makanan"]==str(food)]["Energi (Kal)"])[0] for food in comb] for comb in list(comb_3)]]
-
- comb_3 = combinations(list_recom_food, 3)
- list_pro = [np.sum(np.power(np.subtract(np.multiply(nut,2.7),protein_need),2)) for nut in [[list(data_nut_pro[data_nut_pro["Nama Bahan Makanan"]==str(food)]["Protein (g)"])[0] for food in comb] for comb in list(comb_3)]]
-
- comb_3 = combinations(list_recom_food, 3)
- list_fat = [np.sum(np.power(np.subtract(np.multiply(nut,2.7),fat_need),2)) for nut in [[list(data_nut_fat[data_nut_fat["Nama Bahan Makanan"]==str(food)]["Lemak (g)"])[0] for food in comb] for comb in list(comb_3)]]
-
- comb_3 = combinations(list_recom_food, 3)
- list_carb = [np.sum(np.power(np.subtract(np.multiply(nut,2.7),carb_need),2)) for nut in [[list(data_nut_carb[data_nut_carb["Nama Bahan Makanan"]==str(food)]["Karbohidrat (g)"])[0] for food in comb] for comb in list(comb_3)]]
-
- total_list = [sum(x) for x in zip(list_cal,list_pro,list_fat,list_carb)]
-
- comb_3 = combinations(list_recom_food, 3)
- list_mse = {comb:total_list[i] for i, comb in enumerate(comb_3)}
- list_mse_sorted = sorted(list_mse.items(), key=lambda x:x[1])
-
- return pd.DataFrame([list(x) for x in list_mse_sorted]).to_dict('dict')[0]
-
-import gradio as gr
-
-with gr.Blocks() as demo:
- User_ID = gr.Text(label="User_ID",placeholder="UNT001")
- Age = gr.Number(label="Age")
- Body_Weight = gr.Number(label="Body_Weight")
- Body_Height = gr.Number(label="Body_Height")
- Cal_Need = gr.Number(label="Cal_Need")
- Gender = gr.Text(label="Gender",placeholder="M or F")
- Amount_Of_Eat = gr.Number(label="Amount_Of_Eat (2 or 3 or 4)")
-
- with gr.Row():
- recom_btn = gr.Button("Generate Recommender Food and Top Nutrition")
-
- recom_out = gr.Dataframe(row_count = (3, "dynamic"), col_count=(3, "fixed"), label="Food Recommendations", headers=["Index","Food ID","Food Names"])
- topnut_out = gr.Dataframe(row_count = (3, "dynamic"), col_count=(4, "dynamic"), label="Top Pair Nutritions", headers=["Breakfast","Lunch","Dinner","Snacks"])
-
- def recom_food_gradio(User_ID,Age,Body_Weight,Body_Height,Cal_Need,Gender,Amount_Of_Eat):
- list_food_name = predict_food(food_data_raw=food_data_raw,
- input_dict={
- "User_ID": tf.constant([User_ID]),
- "Age":tf.constant([Age]),
- "Body_Weight":tf.constant([Body_Weight]),
- "Body_Height":tf.constant([Body_Height]),
- "Cal_Need":tf.constant([Cal_Need]),
- "sex":tf.constant([Gender])
- },
- output_type = "dict",
- model_recom = model_2,
- top_n=15)
-
- list_food_df = pd.DataFrame(list_food_name)
- list_food_df.columns = ["Index","Food ID","Food Names"]
-
- list_food = list(pd.DataFrame(list_food_name)['list_food_name'])
-
- top_nutri_grad = top_nutrition(food_data_raw = food_data_raw,
- user_id = User_ID,
- list_recom_food = list_food,
- gender = Gender,
- pred_cal = Cal_Need,
- amount_of_eat=Amount_Of_Eat)
-
- top_nutri_df = pd.DataFrame(top_nutri_grad).T
-
- if Amount_Of_Eat==2:
- top_nutri_df.columns = ["Lunch", "Dinner"]
-
- elif Amount_Of_Eat==4:
- top_nutri_df.columns = ["Breakfast","Lunch", "Dinner","Snacks"]
-
- else:
- top_nutri_df.columns = ["Breakfast","Lunch", "Dinner"]
-
- return list_food_df, top_nutri_df
-
- recom_btn.click(recom_food_gradio, inputs=[User_ID, Age, Body_Weight, Body_Height, Cal_Need, Gender, Amount_Of_Eat], outputs=[recom_out,topnut_out])
-
-demo.launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_57M.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_57M.sh
deleted file mode 100644
index 8e86e8b077019a57c5a6ac28ab29749f1a2787aa..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_57M.sh
+++ /dev/null
@@ -1,128 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=pretrain_randeng_t5_char_57M
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=8
-#SBATCH --gres=gpu:8 # number of gpus
-#SBATCH --cpus-per-task=32 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH -o /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/%x-%j.log
-#SBATCH -e /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/%x-%j.err
-
-set -x -e
-
-echo "START TIME: $(date)"
-MICRO_BATCH_SIZE=64
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/
-if [ ! -d ${ROOT_DIR} ];then
- mkdir ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-ZERO_STAGE=1
-
-config_json="$ROOT_DIR/ds_config.randeng_t5_char_57M.$SLURM_JOBID.json"
-export MASTER_PORT=$[RANDOM%10000+30000]
-# export CUDA_VISIBLE_DEVICES='4,5'
-
-cat <<EOT > $config_json
-{
- "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE},
- "steps_per_print": 100,
- "gradient_clipping": 1.0,
- "zero_optimization": {
- "stage": $ZERO_STAGE,
- "contiguous_gradients": false,
- "overlap_comm": true,
- "reduce_scatter": true,
- "reduce_bucket_size": 50000000,
- "allgather_bucket_size": 500000000
- },
- "optimizer": {
- "type": "Adam",
- "params": {
- "lr": 1e-4,
- "weight_decay": 1e-2
- }
- },
- "scheduler": {
- "params": {
- "warmup_max_lr": 1e-04,
- "warmup_min_lr": 1e-05,
- "total_num_steps": 240000,
- "warmup_num_steps" : 10000
- },
- "type": "WarmupDecayLR"
- },
- "zero_allow_untested_optimizer": false,
- "fp16": {
- "enabled": true,
- "loss_scale": 0,
- "loss_scale_window": 1000,
- "hysteresis": 2,
- "min_loss_scale": 1
- },
- "activation_checkpointing": {
- "partition_activations": false,
- "contiguous_memory_optimization": false
- },
- "wall_clock_breakdown": false
-}
-EOT
-
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-# strategy=ddp
-strategy=deepspeed_stage_1
-
-TRAINER_ARGS="
- --max_epochs 1 \
- --gpus 8 \
- --num_nodes 1 \
- --strategy ${strategy} \
- --default_root_dir $ROOT_DIR \
- --dirpath $ROOT_DIR/ckpt \
- --save_top_k 3 \
- --every_n_train_steps 100000 \
- --monitor train_loss \
- --mode min \
- --save_last \
- --val_check_interval 0.1 \
- --dataset_num_workers 4 \
- --dataloader_num_workers 4 \
- --replace_sampler_ddp False \
-"
-# --accumulate_grad_batches 8 \
-DATA_DIR=wudao_180g_bert_tokenized_512
-
-DATA_ARGS="
- --train_batchsize $MICRO_BATCH_SIZE \
- --valid_batchsize $MICRO_BATCH_SIZE \
- --train_data_path ${DATA_DIR} \
- --train_split_size 0.999 \
- --max_seq_length 512 \
-"
-
-MODEL_ARGS="
- --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/randeng_t5_char_57M \
- --tokenizer_type bert_tokenizer \
-"
-
-SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/pretrain_t5/pretrain_t5.py
-
-export CMD=" \
- $SCRIPTS_PATH \
- $TRAINER_ARGS \
- $MODEL_ARGS \
- $DATA_ARGS \
- "
-
-echo $CMD
-/home/ganruyi/anaconda3/bin/python $CMD
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD'
-
-# source activate base
-# python $CMD
-# srun --nodes=1 --gres=gpu:8 --ntasks-per-node=8 --cpus-per-task=30 --jobid=171866 -e %x-%j.err -o %x-%j.log python $CMD
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamic_convolution.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamic_convolution.py
deleted file mode 100644
index 0121d453b9e026f5128dd41fce691aa1b4486448..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/dynamic_convolution.py
+++ /dev/null
@@ -1,310 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.incremental_decoding_utils import with_incremental_state
-from fairseq.modules.fairseq_dropout import FairseqDropout
-
-from .unfold import unfold1d
-
-
-def DynamicConv(
- input_size,
- kernel_size=1,
- padding_l=None,
- num_heads=1,
- weight_dropout=0.0,
- weight_softmax=False,
- renorm_padding=False,
- bias=False,
- conv_bias=False,
- query_size=None,
- in_proj=False,
-):
- if torch.cuda.is_available():
- try:
- from fairseq.modules.dynamicconv_layer import DynamicconvLayer
-
- return DynamicconvLayer(
- input_size,
- kernel_size=kernel_size,
- padding_l=padding_l,
- num_heads=num_heads,
- weight_dropout=weight_dropout,
- weight_softmax=weight_softmax,
- renorm_padding=renorm_padding,
- bias=bias,
- conv_bias=conv_bias,
- query_size=query_size,
- )
- except ImportError as e:
- print(e)
- return DynamicConv1dTBC(
- input_size,
- kernel_size=kernel_size,
- padding_l=padding_l,
- num_heads=num_heads,
- weight_dropout=weight_dropout,
- weight_softmax=weight_softmax,
- renorm_padding=renorm_padding,
- bias=bias,
- conv_bias=conv_bias,
- query_size=query_size,
- )
-
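-# A minimal usage sketch (illustrative only, not part of the original module; the
-# parameter values are made up): the layer takes inputs of shape T x B x C and returns
-# the same shape; padding_l is typically kernel_size - 1 for causal (decoder-style) use.
-#
-#   conv = DynamicConv(input_size=512, kernel_size=3, padding_l=2,
-#                      num_heads=8, weight_softmax=True)
-#   x = torch.randn(20, 4, 512)  # (timesteps, batch, channels)
-#   y = conv(x)                  # -> torch.Size([20, 4, 512])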
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- if bias:
- nn.init.constant_(m.bias, 0.0)
- return m
-
-
-@with_incremental_state
-class DynamicConv1dTBC(nn.Module):
- """Dynamic lightweight convolution taking T x B x C inputs
- Args:
- input_size: # of channels of the input
- kernel_size: size of the convolution kernel
- padding_l: padding to the left when using "same" padding
- num_heads: number of heads used. The weight is of shape (num_heads, 1, kernel_size)
- weight_dropout: the drop rate of the DropConnect to drop the weight
- weight_softmax: normalize the weight with softmax before the convolution
- renorm_padding: re-normalize the filters to ignore the padded part (only the non-padding parts sum up to 1)
- bias: use bias
- conv_bias: bias of the convolution
- query_size: specified when feeding a different input as the query
- in_proj: project the input and generate the filter together
-
- Shape:
- Input: TxBxC, i.e. (timesteps, batch_size, input_size)
- Output: TxBxC, i.e. (timesteps, batch_size, input_size)
-
- Attributes:
- weight: the learnable weights of the module of shape
- `(num_heads, 1, kernel_size)`
- bias: the learnable bias of the module of shape `(input_size)`
- """
-
- def __init__(
- self,
- input_size,
- kernel_size=1,
- padding_l=None,
- num_heads=1,
- weight_dropout=0.0,
- weight_softmax=False,
- renorm_padding=False,
- bias=False,
- conv_bias=False,
- query_size=None,
- in_proj=False,
- ):
- super().__init__()
- self.input_size = input_size
- self.query_size = input_size if query_size is None else query_size
- self.kernel_size = kernel_size
- self.padding_l = padding_l
- self.num_heads = num_heads
- self.weight_dropout_module = FairseqDropout(
- weight_dropout, module_name=self.__class__.__name__
- )
- self.weight_softmax = weight_softmax
- self.renorm_padding = renorm_padding
-
- if in_proj:
- self.weight_linear = Linear(
- self.input_size, self.input_size + num_heads * kernel_size * 1
- )
- else:
- self.weight_linear = Linear(
- self.query_size, num_heads * kernel_size * 1, bias=bias
- )
- if conv_bias:
- self.conv_bias = nn.Parameter(torch.Tensor(input_size))
- else:
- self.conv_bias = None
- self.reset_parameters()
-
- @property
- def in_proj(self):
- return (
- self.weight_linear.out_features
- == self.input_size + self.num_heads * self.kernel_size
- )
-
- def reset_parameters(self):
- self.weight_linear.reset_parameters()
- if self.conv_bias is not None:
- nn.init.constant_(self.conv_bias, 0.0)
-
- def forward(self, x, incremental_state=None, query=None, unfold=None):
- """Assuming the input, x, of the shape T x B x C and producing an output in the shape T x B x C
- args:
- x: Input of shape T x B x C, i.e. (timesteps, batch_size, input_size)
- incremental_state: A dict to keep the state
- unfold: unfold the input or not. If not, we use the matrix trick instead
- query: use the specified query to predict the conv filters
- """
- unfold = (
- x.size(0) > 512 if unfold is None else unfold
- ) # use unfold mode as default for long sequence to save memory
- unfold = unfold or (incremental_state is not None)
- assert query is None or not self.in_proj
-
- if query is None:
- query = x
- if unfold:
- output = self._forward_unfolded(x, incremental_state, query)
- else:
- output = self._forward_expanded(x, incremental_state, query)
-
- if self.conv_bias is not None:
- output = output + self.conv_bias.view(1, 1, -1)
- return output
-
- def _forward_unfolded(self, x, incremental_state, query):
- """The conventional implementation of convolutions.
- Unfolding the input by having a window shifting to the right."""
- T, B, C = x.size()
- K, H = self.kernel_size, self.num_heads
- R = C // H
- assert R * H == C == self.input_size
-
- if self.in_proj:
- proj = self.weight_linear(x)
- x = proj.narrow(2, 0, self.input_size).contiguous()
- weight = (
- proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1)
- )
- else:
- weight = self.weight_linear(query).view(T * B * H, -1)
-
- # renorm_padding is only implemented in _forward_expanded
- assert not self.renorm_padding or incremental_state is not None
-
- if incremental_state is not None:
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is None:
- input_buffer = x.new()
- x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3)
- if self.kernel_size > 1:
- self._set_input_buffer(
- incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :]
- )
- x_unfold = x_unfold.view(T * B * H, R, -1)
- else:
- padding_l = self.padding_l
- if K > T and padding_l == K - 1:
- weight = weight.narrow(1, K - T, T)
- K, padding_l = T, T - 1
- # unfold the input: T x B x C --> T' x B x C x K
- x_unfold = unfold1d(x, K, padding_l, 0)
- x_unfold = x_unfold.view(T * B * H, R, K)
-
- if self.weight_softmax and not self.renorm_padding:
- weight = F.softmax(weight, dim=1)
- weight = weight.narrow(1, 0, K)
-
- if incremental_state is not None:
- weight = weight[:, -x_unfold.size(2) :]
- K = weight.size(1)
-
- if self.weight_softmax and self.renorm_padding:
- weight = F.softmax(weight, dim=1)
-
- weight = self.weight_dropout_module(weight, inplace=False)
-
- output = torch.bmm(x_unfold, weight.unsqueeze(2)) # T*B*H x R x 1
- output = output.view(T, B, C)
- return output
-
- def _forward_expanded(self, x, incremental_stat, query):
- """Turn the convolution filters into band matrices and do matrix multiplication.
- This is faster when the sequence is short, but less memory efficient.
- This is not used in the decoder during inference.
- """
- T, B, C = x.size()
- K, H = self.kernel_size, self.num_heads
- R = C // H
- assert R * H == C == self.input_size
- if self.in_proj:
- proj = self.weight_linear(x)
- x = proj.narrow(2, 0, self.input_size).contiguous()
- weight = (
- proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1)
- )
- else:
- weight = self.weight_linear(query).view(T * B * H, -1)
-
- if not self.renorm_padding:
- if self.weight_softmax:
- weight = F.softmax(weight, dim=1)
- weight = self.weight_dropout_module(weight, inplace=False)
- weight = weight.narrow(1, 0, K).contiguous()
- weight = weight.view(T, B * H, K).transpose(0, 1)
-
- x = x.view(T, B * H, R).transpose(0, 1)
- if self.weight_softmax and self.renorm_padding:
- # turn the convolution filters into band matrices
- weight_expanded = weight.new(B * H, T, T + K - 1).fill_(float("-inf"))
- weight_expanded.as_strided(
- (B * H, T, K), (T * (T + K - 1), T + K, 1)
- ).copy_(weight)
- weight_expanded = weight_expanded.narrow(2, self.padding_l, T)
- # normalize the weight over valid positions like self-attention
- weight_expanded = F.softmax(weight_expanded, dim=2)
- weight_expanded = self.weight_dropout_module(weight_expanded, inplace=False)
- else:
- P = self.padding_l
- # For efficiency, we cut the kernel size and reduce the padding when the kernel is larger than the length
- if K > T and P == K - 1:
- weight = weight.narrow(2, K - T, T)
- K, P = T, T - 1
- # turn the convolution filters into band matrices
- weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False)
- weight_expanded.as_strided(
- (B * H, T, K), (T * (T + K - 1), T + K, 1)
- ).copy_(weight)
- weight_expanded = weight_expanded.narrow(2, P, T) # B*H x T x T
- output = torch.bmm(weight_expanded, x)
- output = output.transpose(0, 1).contiguous().view(T, B, C)
- return output
-
- def reorder_incremental_state(self, incremental_state, new_order):
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is not None:
- input_buffer = input_buffer.index_select(1, new_order)
- self._set_input_buffer(incremental_state, input_buffer)
-
- def _get_input_buffer(self, incremental_state):
- return utils.get_incremental_state(self, incremental_state, "input_buffer")
-
- def _set_input_buffer(self, incremental_state, new_buffer):
- return utils.set_incremental_state(
- self, incremental_state, "input_buffer", new_buffer
- )
-
- def extra_repr(self):
- s = "{}, kernel_size={}, padding_l={}, num_heads={}, weight_softmax={}, conv_bias={}, renorm_padding={}, in_proj={}".format(
- self.input_size,
- self.kernel_size,
- self.padding_l,
- self.num_heads,
- self.weight_softmax,
- self.conv_bias is not None,
- self.renorm_padding,
- self.in_proj,
- )
-
- if self.query_size != self.input_size:
- s += ", query_size={}".format(self.query_size)
- if self.weight_dropout_module.p > 0.0:
- s += ", weight_dropout={}".format(self.weight_dropout_module.p)
- return s
diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_resources/README.md b/spaces/Harveenchadha/oiTrans/indic_nlp_resources/README.md
deleted file mode 100644
index b6cf442624970e986e38d9a5587ccd4444e9f4fa..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/oiTrans/indic_nlp_resources/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# Indic NLP Resources
-
-The toolkit contains resources required by some components of the [Indic NLP Library](https://github.com/anoopkunchukuttan/indic_nlp_library) and other NLP resources for Indian languages.
-
-If you are looking for any other resources for Indian languages, please check the [Indic NLP Catalog](https://github.com/indicnlpweb/indicnlp_catalog)
-
-### Indic NLP Library related resources
-
-- Morphanalyzer models for Indian languages
-
-### Other NLP Resources
-- Transliteration Models for transliteration involving Indian languages and English.
-
-### Version: 0.2
-
-## License
-
-The models and resources are released under the MIT License
diff --git a/spaces/ICML2022/OFA/fairseq/examples/pointer_generator/preprocess.py b/spaces/ICML2022/OFA/fairseq/examples/pointer_generator/preprocess.py
deleted file mode 100644
index f72ca7d3d97e12ab7b405dcff314bdb6c0a78755..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/pointer_generator/preprocess.py
+++ /dev/null
@@ -1,102 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-from itertools import zip_longest
-
-
-def replace_oovs(source_in, target_in, vocabulary, source_out, target_out):
- """Replaces out-of-vocabulary words in source and target text with ,
- where N in is the position of the word in the source sequence.
- """
-
- def format_unk(pos):
- return "".format(pos)
-
- if target_in is None:
- target_in = []
-
- for seq_num, (source_seq, target_seq) in enumerate(
- zip_longest(source_in, target_in)
- ):
- source_seq_out = []
- target_seq_out = []
-
- word_to_pos = dict()
- for position, token in enumerate(source_seq.strip().split()):
- if token in vocabulary:
- token_out = token
- else:
- if token in word_to_pos:
- oov_pos = word_to_pos[token]
- else:
- word_to_pos[token] = position
- oov_pos = position
- token_out = format_unk(oov_pos)
- source_seq_out.append(token_out)
- source_out.write(" ".join(source_seq_out) + "\n")
-
- if target_seq is not None:
- for token in target_seq.strip().split():
- if token in word_to_pos:
- token_out = format_unk(word_to_pos[token])
- else:
- token_out = token
- target_seq_out.append(token_out)
- if target_out is not None:
- target_out.write(" ".join(target_seq_out) + "\n")
-
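-# A small worked example of the replacement above (added for illustration):
-# with vocabulary = {"the", "sat", "on", "mat"},
-#   source "the cat sat on the mat" becomes "the <unk-1> sat on the mat"
-#   ("cat" is out of vocabulary and sits at position 1 of the source), and
-#   target "cat sat" becomes "<unk-1> sat", since target tokens are only replaced
-#   when they were seen as OOVs in the corresponding source line.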
-
-def main():
- parser = argparse.ArgumentParser(
- description="Replaces out-of-vocabulary words in both source and target "
- "sequences with tokens that indicate the position of the word "
- "in the source sequence."
- )
- parser.add_argument(
- "--source", type=str, help="text file with source sequences", required=True
- )
- parser.add_argument(
- "--target", type=str, help="text file with target sequences", default=None
- )
- parser.add_argument("--vocab", type=str, help="vocabulary file", required=True)
- parser.add_argument(
- "--source-out",
- type=str,
- help="where to write source sequences with entries",
- required=True,
- )
- parser.add_argument(
- "--target-out",
- type=str,
- help="where to write target sequences with entries",
- default=None,
- )
- args = parser.parse_args()
-
- with open(args.vocab, encoding="utf-8") as vocab:
- vocabulary = vocab.read().splitlines()
-
- target_in = (
- open(args.target, "r", encoding="utf-8") if args.target is not None else None
- )
- target_out = (
- open(args.target_out, "w", encoding="utf-8")
- if args.target_out is not None
- else None
- )
- with open(args.source, "r", encoding="utf-8") as source_in, open(
- args.source_out, "w", encoding="utf-8"
- ) as source_out:
- replace_oovs(source_in, target_in, vocabulary, source_out, target_out)
- if target_in is not None:
- target_in.close()
- if target_out is not None:
- target_out.close()
-
-
-if __name__ == "__main__":
- main()
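-
-# Example invocation (file names are hypothetical):
-#   python preprocess.py --source train.src --target train.tgt --vocab dict.txt \
-#       --source-out train.src.oov --target-out train.tgt.oov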
diff --git a/spaces/ICML2022/OFA/fairseq/examples/wmt20/README.md b/spaces/ICML2022/OFA/fairseq/examples/wmt20/README.md
deleted file mode 100644
index b4f2874652f8be19998a65faa1d9276d8017ec59..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/wmt20/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# WMT 20
-
-This page provides pointers to the models of Facebook-FAIR's WMT'20 news translation task submission [(Chen et al., 2020)](https://arxiv.org/abs/2011.08298).
-
-## Single best MT models (after finetuning on part of WMT20 news dev set)
-
-Model | Description | Download
----|---|---
-`transformer.wmt20.ta-en` | Ta->En | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta-en.single.tar.gz)
-`transformer.wmt20.en-ta` | En->Ta | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-ta.single.tar.gz)
-`transformer.wmt20.iu-en.news` | Iu->En (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.news.single.tar.gz)
-`transformer.wmt20.en-iu.news` | En->Iu (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.news.single.tar.gz)
-`transformer.wmt20.iu-en.nh` | Iu->En (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.nh.single.tar.gz)
-`transformer.wmt20.en-iu.nh` | En->Iu (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.nh.single.tar.gz)
-
-## Language models
-Model | Description | Download
----|---|---
-`transformer_lm.wmt20.en` | En Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en.tar.gz)
-`transformer_lm.wmt20.ta` | Ta Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta.tar.gz)
-`transformer_lm.wmt20.iu.news` | Iu Language Model (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.news.tar.gz)
-`transformer_lm.wmt20.iu.nh` | Iu Language Model (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.nh.tar.gz)
-
-## Example usage (torch.hub)
-
-#### Translation
-
-```python
-import torch
-
-# English to Tamil translation
-en2ta = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-ta')
-en2ta.translate("Machine learning is great!") # 'இயந்திரக் கற்றல் அருமை!'
-
-# Tamil to English translation
-ta2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.ta-en')
-ta2en.translate("இயந்திரக் கற்றல் அருமை!") # 'Machine learning is great!'
-
-# English to Inuktitut translation
-en2iu = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-iu.news')
-en2iu.translate("machine learning is great!") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!'
-
-# Inuktitut to English translation
-iu2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.iu-en.news')
-iu2en.translate("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!") # 'Machine learning excellence!'
-```
-
-#### Language Modeling
-
-```python
-# Sample from the English LM
-en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.en')
-en_lm.sample("Machine learning is") # 'Machine learning is a type of artificial intelligence that uses machine learning to learn from data and make predictions.'
-
-# Sample from the Tamil LM
-ta_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.ta')
-ta_lm.sample("இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின்") # 'இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின் ஒரு பகுதியாகும்.'
-
-# Sample from the Inuktitut LM
-iu_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.iu.news')
-iu_lm.sample("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ, ᐊᒻᒪᓗ ᓯᓚᐅᑉ ᐊᓯᙳᖅᐸᓪᓕᐊᓂᖓᓄᑦ ᖃᓄᐃᓕᐅᕈᑎᒃᓴᑦ, ᐃᓚᖃᖅᖢᑎᒃ ᐅᑯᓂᖓ:'
-```
-
-## Citation
-```bibtex
-@inproceedings{chen2020facebook,
- title={Facebook AI's WMT20 News Translation Task Submission},
- author={Peng-Jen Chen and Ann Lee and Changhan Wang and Naman Goyal and Angela Fan and Mary Williamson and Jiatao Gu},
- booktitle={Proc. of WMT},
- year={2020},
-}
-```
diff --git a/spaces/ICML2022/resefa/models/test.py b/spaces/ICML2022/resefa/models/test.py
deleted file mode 100644
index 3f1e0239e223537d299a2c52c65928b6c59406da..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/resefa/models/test.py
+++ /dev/null
@@ -1,146 +0,0 @@
-# python3.7
-"""Unit test for loading pre-trained models.
-
-Basically, this file tests whether the perceptual model (VGG16) and the
-inception model (InceptionV3), which are commonly used for loss computation and
-evaluation, have the expected behavior after loading pre-trained weights. In
-particular, we compare with the models from repo
-
-https://github.com/NVlabs/stylegan2-ada-pytorch
-"""
-
-import torch
-
-from models import build_model
-from utils.misc import download_url
-
-__all__ = ['test_model']
-
-_BATCH_SIZE = 4
-# pylint: disable=line-too-long
-_PERCEPTUAL_URL = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt'
-_INCEPTION_URL = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt'
-# pylint: enable=line-too-long
-
-
-def test_model():
- """Collects all model tests."""
- torch.backends.cudnn.enabled = True
- torch.backends.cudnn.allow_tf32 = False
- torch.backends.cuda.matmul.allow_tf32 = False
- torch.backends.cudnn.benchmark = False
- torch.backends.cudnn.deterministic = True
- print('========== Start Model Test ==========')
- test_perceptual()
- test_inception()
- print('========== Finish Model Test ==========')
-
-
-def test_perceptual():
- """Test the perceptual model."""
- print('===== Testing Perceptual Model =====')
-
- print('Build test model.')
- model = build_model('PerceptualModel',
- use_torchvision=False,
- no_top=False,
- enable_lpips=True)
-
- print('Build reference model.')
- ref_model_path, _, = download_url(_PERCEPTUAL_URL)
- with open(ref_model_path, 'rb') as f:
- ref_model = torch.jit.load(f).eval().cuda()
-
- print('Test performance: ')
- for size in [224, 128, 256, 512, 1024]:
- raw_img = torch.randint(0, 256, size=(_BATCH_SIZE, 3, size, size))
- raw_img_comp = torch.randint(0, 256, size=(_BATCH_SIZE, 3, size, size))
-
- # The test model requires input images to have range [-1, 1].
- img = raw_img.to(torch.float32).cuda() / 127.5 - 1
- img_comp = raw_img_comp.to(torch.float32).cuda() / 127.5 - 1
- feat = model(img, resize_input=True, return_tensor='feature')
- pred = model(img, resize_input=True, return_tensor='prediction')
- lpips = model(img, img_comp, resize_input=False, return_tensor='lpips')
- assert feat.shape == (_BATCH_SIZE, 4096)
- assert pred.shape == (_BATCH_SIZE, 1000)
- assert lpips.shape == (_BATCH_SIZE,)
-
- # The reference model requires input images to have range [0, 255].
- img = raw_img.to(torch.float32).cuda()
- img_comp = raw_img_comp.to(torch.float32).cuda()
- ref_feat = ref_model(img, resize_images=True, return_features=True)
- ref_pred = ref_model(img, resize_images=True, return_features=False)
- temp = ref_model(torch.cat([img, img_comp], dim=0),
- resize_images=False, return_lpips=True).chunk(2)
- ref_lpips = (temp[0] - temp[1]).square().sum(dim=1, keepdim=False)
- assert ref_feat.shape == (_BATCH_SIZE, 4096)
- assert ref_pred.shape == (_BATCH_SIZE, 1000)
- assert ref_lpips.shape == (_BATCH_SIZE,)
-
- print(f' Size {size}x{size}, feature (with resize):\n '
- f'mean: {(feat - ref_feat).abs().mean().item():.3e}, '
- f'max: {(feat - ref_feat).abs().max().item():.3e}, '
- f'ref_mean: {ref_feat.abs().mean().item():.3e}, '
- f'ref_max: {ref_feat.abs().max().item():.3e}.')
- print(f' Size {size}x{size}, prediction (with resize):\n '
- f'mean: {(pred - ref_pred).abs().mean().item():.3e}, '
- f'max: {(pred - ref_pred).abs().max().item():.3e}, '
- f'ref_mean: {ref_pred.abs().mean().item():.3e}, '
- f'ref_max: {ref_pred.abs().max().item():.3e}.')
- print(f' Size {size}x{size}, LPIPS (without resize):\n '
- f'mean: {(lpips - ref_lpips).abs().mean().item():.3e}, '
- f'max: {(lpips - ref_lpips).abs().max().item():.3e}, '
- f'ref_mean: {ref_lpips.abs().mean().item():.3e}, '
- f'ref_max: {ref_lpips.abs().max().item():.3e}.')
-
-
-def test_inception():
- """Test the inception model."""
- print('===== Testing Inception Model =====')
-
- print('Build test model.')
- model = build_model('InceptionModel', align_tf=True)
-
- print('Build reference model.')
- ref_model_path, _, = download_url(_INCEPTION_URL)
- with open(ref_model_path, 'rb') as f:
- ref_model = torch.jit.load(f).eval().cuda()
-
- print('Test performance: ')
- for size in [299, 128, 256, 512, 1024]:
- raw_img = torch.randint(0, 256, size=(_BATCH_SIZE, 3, size, size))
-
- # The test model requires input images to have range [-1, 1].
- img = raw_img.to(torch.float32).cuda() / 127.5 - 1
- feat = model(img)
- pred = model(img, output_predictions=True)
- pred_nb = model(img, output_predictions=True, remove_logits_bias=True)
- assert feat.shape == (_BATCH_SIZE, 2048)
- assert pred.shape == (_BATCH_SIZE, 1008)
- assert pred_nb.shape == (_BATCH_SIZE, 1008)
-
- # The reference model requires input images to have range [0, 255].
- img = raw_img.to(torch.float32).cuda()
- ref_feat = ref_model(img, return_features=True)
- ref_pred = ref_model(img)
- ref_pred_nb = ref_model(img, no_output_bias=True)
- assert ref_feat.shape == (_BATCH_SIZE, 2048)
- assert ref_pred.shape == (_BATCH_SIZE, 1008)
- assert ref_pred_nb.shape == (_BATCH_SIZE, 1008)
-
- print(f' Size {size}x{size}, feature:\n '
- f'mean: {(feat - ref_feat).abs().mean().item():.3e}, '
- f'max: {(feat - ref_feat).abs().max().item():.3e}, '
- f'ref_mean: {ref_feat.abs().mean().item():.3e}, '
- f'ref_max: {ref_feat.abs().max().item():.3e}.')
- print(f' Size {size}x{size}, prediction:\n '
- f'mean: {(pred - ref_pred).abs().mean().item():.3e}, '
- f'max: {(pred - ref_pred).abs().max().item():.3e}, '
- f'ref_mean: {ref_pred.abs().mean().item():.3e}, '
- f'ref_max: {ref_pred.abs().max().item():.3e}.')
- print(f' Size {size}x{size}, prediction (without bias):\n '
- f'mean: {(pred_nb - ref_pred_nb).abs().mean().item():.3e}, '
- f'max: {(pred_nb - ref_pred_nb).abs().max().item():.3e}, '
- f'ref_mean: {ref_pred_nb.abs().mean().item():.3e}, '
- f'ref_max: {ref_pred_nb.abs().max().item():.3e}.')
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/person_tracking.py b/spaces/Ibtehaj10/cheating-detection-FYP/person_tracking.py
deleted file mode 100644
index 516088a3f8b0b4567dbee7303047d6ce65ac066c..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/person_tracking.py
+++ /dev/null
@@ -1,542 +0,0 @@
-import cv2
-import datetime
-import imutils
-import numpy as np
-from centroidtracker import CentroidTracker
-import pandas as pd
-import torch
-import streamlit as st
-import mediapipe as mp
-import cv2 as cv
-import numpy as np
-import tempfile
-import time
-from PIL import Image
-import pandas as pd
-import torch
-import base64
-import streamlit.components.v1 as components
-import csv
-import pickle
-from pathlib import Path
-import streamlit_authenticator as stauth
-import os
-import csv
-# x-x-x-x-x-x-x-x-x-x-x-x-x-x LOGIN FORM x-x-x-x-x-x-x-x-x
-
-
-import streamlit as st
-import pandas as pd
-import hashlib
-import sqlite3
-#
-
-import pickle
-from pathlib import Path
-import streamlit_authenticator as stauth
-# print("Done !!!")
-
-data = ["student Count",'Date','Id','Mobile','Watch']
-with open('final.csv', 'w') as file:
- writer = csv.writer(file)
- writer.writerow(data)
-
-
-l1 = []
-l2 = []
-if st.button('signup'):
-
-
- usernames = st.text_input('Username')
- pwd = st.text_input('Password')
- l1.append(usernames)
- l2.append(pwd)
-
- names = ["dmin", "ser"]
- if st.button("signupsss"):
- username =l1
-
- password =l2
-
- hashed_passwords =stauth.Hasher(password).generate()
-
- file_path = Path(__file__).parent / "hashed_pw.pkl"
-
- with file_path.open("wb") as file:
- pickle.dump(hashed_passwords, file)
-
-
-elif st.button('Logins'):
- names = ['dmin', 'ser']
-
- username =l1
-
- file_path = Path(__file__).parent / 'hashed_pw.pkl'
-
- with file_path.open('rb') as file:
- hashed_passwords = pickle.load(file)
-
- authenticator = stauth.Authenticate(names,username,hashed_passwords,'Cheating Detection','abcdefg',cookie_expiry_days=180)
-
- name,authentication_status,username= authenticator.login('Login','main')
-
-
- if authentication_status == False:
- st.error('Username/Password is incorrect')
-
- if authentication_status == None:
- st.error('Please enter a username and password')
-
- if authentication_status:
- date_time = time.strftime("%b %d %Y %-I:%M %p")
- date = date_time.split()
- dates = date[0:3]
- times = date[3:5]
- # x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-xAPPLICACTION -x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x
-
- def non_max_suppression_fast(boxes, overlapThresh):
- try:
- if len(boxes) == 0:
- return []
-
- if boxes.dtype.kind == "i":
- boxes = boxes.astype("float")
-
- pick = []
-
- x1 = boxes[:, 0]
- y1 = boxes[:, 1]
- x2 = boxes[:, 2]
- y2 = boxes[:, 3]
-
- area = (x2 - x1 + 1) * (y2 - y1 + 1)
- idxs = np.argsort(y2)
-
- while len(idxs) > 0:
- last = len(idxs) - 1
- i = idxs[last]
- pick.append(i)
-
- xx1 = np.maximum(x1[i], x1[idxs[:last]])
- yy1 = np.maximum(y1[i], y1[idxs[:last]])
- xx2 = np.minimum(x2[i], x2[idxs[:last]])
- yy2 = np.minimum(y2[i], y2[idxs[:last]])
-
- w = np.maximum(0, xx2 - xx1 + 1)
- h = np.maximum(0, yy2 - yy1 + 1)
-
- overlap = (w * h) / area[idxs[:last]]
-
- idxs = np.delete(idxs, np.concatenate(([last],
- np.where(overlap > overlapThresh)[0])))
-
- return boxes[pick].astype("int")
- except Exception as e:
- print("Exception occurred in non_max_suppression : {}".format(e))
-
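The `non_max_suppression_fast` helper above greedily keeps one box at a time and discards any remaining box whose overlap ratio with it exceeds `overlapThresh`, where the ratio is the intersection area divided by the area of the candidate box. A self-contained sketch of that overlap measure on made-up boxes (the helper name and the toy coordinates are illustrative only):

```python
import numpy as np

def overlap_ratio(kept_box, candidate):
    """Intersection area divided by the candidate box's area, as used in the NMS loop above."""
    xx1 = max(kept_box[0], candidate[0]); yy1 = max(kept_box[1], candidate[1])
    xx2 = min(kept_box[2], candidate[2]); yy2 = min(kept_box[3], candidate[3])
    w = max(0, xx2 - xx1 + 1); h = max(0, yy2 - yy1 + 1)
    candidate_area = (candidate[2] - candidate[0] + 1) * (candidate[3] - candidate[1] + 1)
    return (w * h) / candidate_area

a = np.array([10, 10, 110, 210])     # kept detection
b = np.array([12, 12, 112, 212])     # near-duplicate of a
c = np.array([300, 50, 380, 200])    # unrelated detection

print(overlap_ratio(a, b))  # ~0.97, above a typical 0.3 threshold, so b would be suppressed
print(overlap_ratio(a, c))  # 0.0, so c would be kept
```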
-
- protopath = "MobileNetSSD_deploy.prototxt"
- modelpath = "MobileNetSSD_deploy.caffemodel"
- detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath)
- # Only enable it if you are using OpenVino environment
- # detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
- # detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
-
-
- CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
- "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
- "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
- "sofa", "train", "tvmonitor"]
-
- tracker = CentroidTracker(maxDisappeared=80, maxDistance=90)
-
- st.markdown(
- """
-
- """,
- unsafe_allow_html=True,
- )
- hide_streamlit_style = """
-
- """
- st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
-
- # Resize Images to fit Container
- @st.cache()
- # Get Image Dimensions
- def image_resize(image, width=None, height=None, inter=cv.INTER_AREA):
- dim = None
- (h,w) = image.shape[:2]
-
- if width is None and height is None:
- return image
-
- if width is None:
- r = height/float(h)
- dim = (int(w*r),height)
-
- else:
- r = width/float(w)
- dim = width, int(h*r)
-
- # Resize image
- resized = cv.resize(image,dim,interpolation=inter)
-
- return resized
-
- # About Page
- authenticator.logout('Logout')
- app_mode = st.sidebar.selectbox(
- 'App Mode',
- ['About','Application']
- )
- if app_mode == 'About':
- st.title('About Product And Team')
- st.markdown('''
- Imran Bhai Project
- ''')
- st.markdown(
- """
-
- """,
- unsafe_allow_html=True,
- )
-
-
-
-
- elif app_mode == 'Application':
-
- st.set_option('deprecation.showfileUploaderEncoding', False)
-
- use_webcam = st.button('Use Webcam')
- # record = st.sidebar.checkbox("Record Video")
-
- # if record:
- # st.checkbox('Recording', True)
-
- # drawing_spec = mp.solutions.drawing_utils.DrawingSpec(thickness=2, circle_radius=1)
-
- # st.sidebar.markdown('---')
-
- # ## Add Sidebar and Window style
- # st.markdown(
- # """
- #
- # """,
- # unsafe_allow_html=True,
- # )
-
- # max_faces = st.sidebar.number_input('Maximum Number of Faces', value=5, min_value=1)
- # st.sidebar.markdown('---')
- # detection_confidence = st.sidebar.slider('Min Detection Confidence', min_value=0.0,max_value=1.0,value=0.5)
- # tracking_confidence = st.sidebar.slider('Min Tracking Confidence', min_value=0.0,max_value=1.0,value=0.5)
- # st.sidebar.markdown('---')
-
- ## Get Video
- stframe = st.empty()
- video_file_buffer = st.file_uploader("Upload a Video", type=['mp4', 'mov', 'avi', 'asf', 'm4v'])
- temp_file = tempfile.NamedTemporaryFile(delete=False)
-
-
- if not video_file_buffer:
- if use_webcam:
- video = cv.VideoCapture(0)
- else:
- try:
- video = cv.VideoCapture(1)
- temp_file.name = video
- except:
- pass
- else:
- temp_file.write(video_file_buffer.read())
- video = cv.VideoCapture(temp_file.name)
-
- width = int(video.get(cv.CAP_PROP_FRAME_WIDTH))
- height = int(video.get(cv.CAP_PROP_FRAME_HEIGHT))
- fps_input = int(video.get(cv.CAP_PROP_FPS))
-
- ## Recording
- codec = cv.VideoWriter_fourcc('a','v','c','1')
- out = cv.VideoWriter('output1.mp4', codec, fps_input, (width,height))
-
- st.sidebar.text('Input Video')
- # st.sidebar.video(temp_file.name)
-
- fps = 0
- i = 0
-
- drawing_spec = mp.solutions.drawing_utils.DrawingSpec(thickness=2, circle_radius=1)
-
- kpil, kpil2, kpil3,kpil4,kpil5, kpil6 = st.columns(6)
-
- with kpil:
- st.markdown('**Frame Rate**')
- kpil_text = st.markdown('0')
-
- with kpil2:
- st.markdown('**detection ID**')
- kpil2_text = st.markdown('0')
-
- with kpil3:
- st.markdown('**Mobile**')
- kpil3_text = st.markdown('0')
- with kpil4:
- st.markdown('**Watch**')
- kpil4_text = st.markdown('0')
- with kpil5:
- st.markdown('**Count**')
- kpil5_text = st.markdown('0')
- with kpil6:
- st.markdown('**Img Res**')
- kpil6_text = st.markdown('0')
-
-
-
- st.markdown('', unsafe_allow_html=True)
- # try:
- def main():
- db = {}
-
- # cap = cv2.VideoCapture('//home//anas//PersonTracking//WebUI//movement.mp4')
- path='/usr/local/lib/python3.10/dist-packages/yolo0vs5/yolov5s-int8.tflite'
- #count=0
- custom = 'yolov5s'
-
- model = torch.hub.load('/usr/local/lib/python3.10/dist-packages/yolovs5', custom, path,source='local',force_reload=True)
-
- b=model.names[0] = 'person'
- mobile = model.names[67] = 'cell phone'
- watch = model.names[75] = 'clock'
-
- fps_start_time = datetime.datetime.now()
- fps = 0
- size=416
-
- count=0
- counter=0
-
-
- color=(0,0,255)
-
- cy1=250
- offset=6
-
-
- pt1 = (120, 100)
- pt2 = (980, 1150)
- color = (0, 255, 0)
-
- pt3 = (283, 103)
- pt4 = (1500, 1150)
-
- cy2 = 500
- color = (0, 255, 0)
- total_frames = 0
- prevTime = 0
- cur_frame = 0
- count=0
- counter=0
- fps_start_time = datetime.datetime.now()
- fps = 0
- total_frames = 0
- lpc_count = 0
- opc_count = 0
- object_id_list = []
- # success = True
- if st.button("Detect"):
- try:
- while video.isOpened():
-
- ret, frame = video.read()
- frame = imutils.resize(frame, width=600)
- total_frames = total_frames + 1
-
- (H, W) = frame.shape[:2]
-
- blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5)
-
- detector.setInput(blob)
- person_detections = detector.forward()
- rects = []
- for i in np.arange(0, person_detections.shape[2]):
- confidence = person_detections[0, 0, i, 2]
- if confidence > 0.5:
- idx = int(person_detections[0, 0, i, 1])
-
- if CLASSES[idx] != "person":
- continue
-
- person_box = person_detections[0, 0, i, 3:7] * np.array([W, H, W, H])
- (startX, startY, endX, endY) = person_box.astype("int")
- rects.append(person_box)
-
- boundingboxes = np.array(rects)
- boundingboxes = boundingboxes.astype(int)
- rects = non_max_suppression_fast(boundingboxes, 0.3)
-
- objects = tracker.update(rects)
- for (objectId, bbox) in objects.items():
- x1, y1, x2, y2 = bbox
- x1 = int(x1)
- y1 = int(y1)
- x2 = int(x2)
- y2 = int(y2)
-
- cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
- text = "ID: {}".format(objectId)
- # print(text)
- cv2.putText(frame, text, (x1, y1-5), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
- if objectId not in object_id_list:
- object_id_list.append(objectId)
- fps_end_time = datetime.datetime.now()
- time_diff = fps_end_time - fps_start_time
- if time_diff.seconds == 0:
- fps = 0.0
- else:
- fps = (total_frames / time_diff.seconds)
-
- fps_text = "FPS: {:.2f}".format(fps)
-
- cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
- lpc_count = len(objects)
- opc_count = len(object_id_list)
-
- lpc_txt = "LPC: {}".format(lpc_count)
- opc_txt = "OPC: {}".format(opc_count)
-
- count += 1
- if count % 4 != 0:
- continue
- # frame=cv.resize(frame, (600,500))
- # cv2.line(frame, pt1, pt2,color,2)
- # cv2.line(frame, pt3, pt4,color,2)
- results = model(frame,size)
- components = results.pandas().xyxy[0]
- for index, row in results.pandas().xyxy[0].iterrows():
- x1 = int(row['xmin'])
- y1 = int(row['ymin'])
- x2 = int(row['xmax'])
- y2 = int(row['ymax'])
- confidence = (row['confidence'])
- obj = (row['class'])
-
-
- # min':x1,'ymin':y1,'xmax':x2,'ymax':y2,'confidence':confidence,'Object':obj}
- # if lpc_txt is not None:
- # try:
- # db["student Count"] = [lpc_txt]
- # except:
- # db["student Count"] = ['N/A']
- if obj == 0:
- cv2.rectangle(frame,(x1,y1),(x2,y2),(0,0,255),2)
- rectx1,recty1 = ((x1+x2)/2,(y1+y2)/2)
- rectcenter = int(rectx1),int(recty1)
- cx = rectcenter[0]
- cy = rectcenter[1]
- cv2.circle(frame,(cx,cy),3,(0,255,0),-1)
- cv2.putText(frame,str(b), (x1,y1), cv2.FONT_HERSHEY_PLAIN,2,(255,255,255),2)
-
- db["student Count"] = [lpc_txt]
- db['Date'] = [date_time]
- db['id'] = ['N/A']
- db['Mobile']=['N/A']
- db['Watch'] = ['N/A']
- if cy<(cy1+offset) and cy>(cy1-offset):
- DB = []
- counter+=1
- DB.append(counter)
-
- ff = DB[-1]
- fx = str(ff)
- # cv2.line(frame, pt1, pt2,(0, 0, 255),2)
- # if cy<(cy2+offset) and cy>(cy2-offset):
-
- # cv2.line(frame, pt3, pt4,(0, 0, 255),2)
- font = cv2.FONT_HERSHEY_TRIPLEX
- cv2.putText(frame,fx,(50, 50),font, 1,(0, 0, 255),2,cv2.LINE_4)
- cv2.putText(frame,"Movement",(70, 70),font, 1,(0, 0, 255),2,cv2.LINE_4)
- kpil2_text.write(f"
- YOLOv5 🚀 is the world's most loved vision AI, representing Ultralytics
- open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
-
- To request a commercial license please complete the form at Ultralytics Licensing.
-
-
-|Roboflow|ClearML ⭐ NEW|Comet ⭐ NEW|Deci ⭐ NEW|
-|:-:|:-:|:-:|:-:|
-|Label and export your custom datasets directly to YOLOv5 for training with [Roboflow](https://roboflow.com/?ref=ultralytics)|Automatically track, visualize and even remotely train YOLOv5 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!)|Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLOv5 models, resume training, and interactively visualise and debug predictions|Automatically compile and quantize YOLOv5 for better inference performance in one click at [Deci](https://bit.ly/yolov5-deci-platform)|
-
-
-## Ultralytics HUB
-
-[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv5 🚀 models, and deploy to the real world in a seamless experience. Get started for **Free** now!
-
-
-
-
-
-## Why YOLOv5
-
-YOLOv5 has been designed to be super easy to get started with and simple to learn. We prioritize real-world results.
-
-
-
-**YOLOv5-P5 640 Figure**
-
-
-
-
-**Figure Notes**
-
-- **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
-- **GPU Speed** measures average inference time per image on the [COCO val2017](http://cocodataset.org) dataset using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
-- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
-- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
-
-
-
-### Pretrained Checkpoints
-
-| Model | size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
-|------------------------------------------------------------------------------------------------------|-----------------------|-------------------------|--------------------|------------------------------|-------------------------------|--------------------------------|--------------------|------------------------|
-| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
-| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
-| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
-| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
-| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
-| | | | | | | | | |
-| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
-| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
-| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
-| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
-| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x6.pt) + [TTA][TTA] | 1280 / 1536 | 55.0 / **55.8** | 72.7 / **72.7** | 3136 / - | 26.2 / - | 19.4 / - | 140.7 / - | 209.8 / - |
-
-
-**Table Notes**
-
-- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
-- **mAPval** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset. Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
-- **Speed** averaged over COCO val images using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included. Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
-- **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations. Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
-
-
-
-## Classification ⭐ NEW
-
-YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases) brings support for classification model training, validation, prediction and export! We've made training classifier models super simple. Click below to get started.
-
-
-**Classification Checkpoints**
-
-
-
-We trained YOLOv5-cls classification models on ImageNet for 90 epochs using a 4xA100 instance, and we trained ResNet and EfficientNet models alongside, with the same default training settings, for comparison. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) for easy reproducibility.
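The ONNX CPU numbers in the table below are per-image inference times for the exported FP32 classifiers. A rough, hedged sketch of how such a CPU timing could be taken with onnxruntime; the `.onnx` file name assumes a model exported with the `export.py` command shown later in this section, and the batch size, warm-up count, and iteration count are arbitrary choices:

```python
import time
import numpy as np
import onnxruntime as ort

# Assumes `python export.py --weights yolov5s-cls.pt --include onnx --imgsz 224` has been run.
sess = ort.InferenceSession("yolov5s-cls.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # one 224x224 RGB image

for _ in range(5):                       # warm-up runs
    sess.run(None, {inp.name: x})

n = 50
t0 = time.perf_counter()
for _ in range(n):
    sess.run(None, {inp.name: x})
print(f"{(time.perf_counter() - t0) / n * 1000:.1f} ms per image (CPU, FP32)")
```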
-
-| Model | size (pixels) | acc top1 | acc top5 | Training 90 epochs 4xA100 (hours) | Speed ONNX CPU (ms) | Speed TensorRT V100 (ms) | params (M) | FLOPs @224 (B) |
-|----------------------------------------------------------------------------------------------------|-----------------------|------------------|------------------|----------------------------------------------|--------------------------------|-------------------------------------|--------------------|------------------------|
-| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** |
-| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 |
-| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 |
-| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 |
-| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 |
-| |
-| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 |
-| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 |
-| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 |
-| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v6.2/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 |
-| |
-| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 |
-| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b1.pt) | 224 | 76.4 | 93.2 | 17:04 | 14.9 | 1.6 | 7.8 | 1.5 |
-| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 |
-| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 |
-
-
-**Table Notes**
-
-- All checkpoints are trained to 90 epochs with SGD optimizer with `lr0=0.001` and `weight_decay=5e-5` at image size 224 and all default settings. Runs logged to https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2
-- **Accuracy** values are for single-model single-scale on [ImageNet-1k](https://www.image-net.org/index.php) dataset. Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224`
-- **Speed** averaged over 100 inference images using a Google [Colab Pro](https://colab.research.google.com/signup) V100 High-RAM instance. Reproduce by `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
-- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`. Reproduce by `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`
-
-
-
-
-**Classification Usage Examples**
-
-### Train
-YOLOv5 classification training supports auto-download of MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof, and ImageNet datasets with the `--data` argument. To start training on MNIST for example use `--data mnist`.
-
-```bash
-# Single-GPU
-python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128
-
-# Multi-GPU DDP
-python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
-```
-
-### Val
-Validate YOLOv5m-cls accuracy on ImageNet-1k dataset:
-```bash
-bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
-python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate
-```
-
-### Predict
-Use pretrained YOLOv5s-cls.pt to predict bus.jpg:
-```bash
-python classify/predict.py --weights yolov5s-cls.pt --data data/images/bus.jpg
-```
-```python
-model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s-cls.pt') # load from PyTorch Hub
-```
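Continuing the PyTorch Hub snippet above, a minimal sketch of scoring an image with the loaded classifier. It assumes the returned object behaves like a plain `torch.nn.Module` mapping a normalized `1x3x224x224` tensor to class logits, and it uses the standard ImageNet preprocessing statistics; both are assumptions rather than details stated here:

```python
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s-cls.pt')  # as above
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('bus.jpg')).unsqueeze(0)  # 1x3x224x224
with torch.no_grad():
    probs = model(img).softmax(dim=1)  # assumed: raw class logits come back from the model
top5 = probs.topk(5)
print(top5.indices.tolist(), top5.values.tolist())
```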
-
-### Export
-Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT:
-```bash
-python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
-```
-
-
-
-## Environments
-
-Get started in seconds with our verified environments.
-
-
-
-We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv5 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors!
-
-
-
-
-## Contact
-
-For YOLOv5 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For professional support please [Contact Us](https://ultralytics.com/contact). To request a commercial license please complete the form at [Ultralytics Licensing](https://ultralytics.com/license).
-
-
-
"
-
-examples=[["FreeVC", 'p225_001.wav', 'p226_002.wav'], ["FreeVC-s", 'p226_002.wav', 'p225_001.wav'], ["FreeVC (24kHz)", 'p225_001.wav', 'p226_002.wav']]
-
-gr.Interface(convert, inputs, outputs, title=title, description=description, article=article, examples=examples, enable_queue=True).launch()
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp
deleted file mode 100644
index c9a2cd4f20e6f58be1c5783d67c64232dd59b560..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp
+++ /dev/null
@@ -1,117 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-
-#include <torch/extension.h>
-#include "ROIAlignRotated/ROIAlignRotated.h"
-#include "box_iou_rotated/box_iou_rotated.h"
-#include "cocoeval/cocoeval.h"
-#include "deformable/deform_conv.h"
-#include "nms_rotated/nms_rotated.h"
-
-namespace detectron2 {
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-extern int get_cudart_version();
-#endif
-
-std::string get_cuda_version() {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- std::ostringstream oss;
-
-#if defined(WITH_CUDA)
- oss << "CUDA ";
-#else
- oss << "HIP ";
-#endif
-
- // copied from
- // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231
- auto printCudaStyleVersion = [&](int v) {
- oss << (v / 1000) << "." << (v / 10 % 100);
- if (v % 10 != 0) {
- oss << "." << (v % 10);
- }
- };
- printCudaStyleVersion(get_cudart_version());
- return oss.str();
-#else // neither CUDA nor HIP
- return std::string("not available");
-#endif
-}
-
-bool has_cuda() {
-#if defined(WITH_CUDA)
- return true;
-#else
- return false;
-#endif
-}
-
-// similar to
-// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp
-std::string get_compiler_version() {
- std::ostringstream ss;
-#if defined(__GNUC__)
-#ifndef __clang__
-
-#if ((__GNUC__ <= 4) && (__GNUC_MINOR__ <= 8))
-#error "GCC >= 4.9 is required!"
-#endif
-
- { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; }
-#endif
-#endif
-
-#if defined(__clang_major__)
- {
- ss << "clang " << __clang_major__ << "." << __clang_minor__ << "."
- << __clang_patchlevel__;
- }
-#endif
-
-#if defined(_MSC_VER)
- { ss << "MSVC " << _MSC_FULL_VER; }
-#endif
- return ss.str();
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("get_compiler_version", &get_compiler_version, "get_compiler_version");
- m.def("get_cuda_version", &get_cuda_version, "get_cuda_version");
- m.def("has_cuda", &has_cuda, "has_cuda");
-
- m.def("deform_conv_forward", &deform_conv_forward, "deform_conv_forward");
- m.def(
- "deform_conv_backward_input",
- &deform_conv_backward_input,
- "deform_conv_backward_input");
- m.def(
- "deform_conv_backward_filter",
- &deform_conv_backward_filter,
- "deform_conv_backward_filter");
- m.def(
- "modulated_deform_conv_forward",
- &modulated_deform_conv_forward,
- "modulated_deform_conv_forward");
- m.def(
- "modulated_deform_conv_backward",
- &modulated_deform_conv_backward,
- "modulated_deform_conv_backward");
-
- m.def("COCOevalAccumulate", &COCOeval::Accumulate, "COCOeval::Accumulate");
- m.def(
- "COCOevalEvaluateImages",
- &COCOeval::EvaluateImages,
- "COCOeval::EvaluateImages");
- pybind11::class_<COCOeval::InstanceAnnotation>(m, "InstanceAnnotation")
- .def(pybind11::init<uint64_t, double, double, bool, bool>());
- pybind11::class_<COCOeval::ImageEvaluation>(m, "ImageEvaluation")
- .def(pybind11::init<>());
-}
-
-TORCH_LIBRARY(detectron2, m) {
- m.def("nms_rotated", &nms_rotated);
- m.def("box_iou_rotated", &box_iou_rotated);
- m.def("roi_align_rotated_forward", &ROIAlignRotated_forward);
- m.def("roi_align_rotated_backward", &ROIAlignRotated_backward);
-}
-} // namespace detectron2
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/roi_heads.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/roi_heads.py
deleted file mode 100644
index 13dd57a0478917001841f6c6299f380e1198e63a..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/roi_heads.py
+++ /dev/null
@@ -1,877 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import inspect
-import logging
-import numpy as np
-from typing import Dict, List, Optional, Tuple
-import torch
-from torch import nn
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, nonzero_tuple
-from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
-from detectron2.utils.events import get_event_storage
-from detectron2.utils.registry import Registry
-
-from ..backbone.resnet import BottleneckBlock, ResNet
-from ..matcher import Matcher
-from ..poolers import ROIPooler
-from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals
-from ..sampling import subsample_labels
-from .box_head import build_box_head
-from .fast_rcnn import FastRCNNOutputLayers
-from .keypoint_head import build_keypoint_head
-from .mask_head import build_mask_head
-
-ROI_HEADS_REGISTRY = Registry("ROI_HEADS")
-ROI_HEADS_REGISTRY.__doc__ = """
-Registry for ROI heads in a generalized R-CNN model.
-ROIHeads take feature maps and region proposals, and
-perform per-region computation.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-The call is expected to return an :class:`ROIHeads`.
-"""
-
-logger = logging.getLogger(__name__)
-
-
-def build_roi_heads(cfg, input_shape):
- """
- Build ROIHeads defined by `cfg.MODEL.ROI_HEADS.NAME`.
- """
- name = cfg.MODEL.ROI_HEADS.NAME
- return ROI_HEADS_REGISTRY.get(name)(cfg, input_shape)
-
-
-def select_foreground_proposals(
- proposals: List[Instances], bg_label: int
-) -> Tuple[List[Instances], List[torch.Tensor]]:
- """
- Given a list of N Instances (for N images), each containing a `gt_classes` field,
- return a list of Instances that contain only instances with `gt_classes != -1 &&
- gt_classes != bg_label`.
-
- Args:
- proposals (list[Instances]): A list of N Instances, where N is the number of
- images in the batch.
- bg_label: label index of background class.
-
- Returns:
- list[Instances]: N Instances, each contains only the selected foreground instances.
- list[Tensor]: N boolean vectors, corresponding to the selection mask of
- each Instances object. True for selected instances.
- """
- assert isinstance(proposals, (list, tuple))
- assert isinstance(proposals[0], Instances)
- assert proposals[0].has("gt_classes")
- fg_proposals = []
- fg_selection_masks = []
- for proposals_per_image in proposals:
- gt_classes = proposals_per_image.gt_classes
- fg_selection_mask = (gt_classes != -1) & (gt_classes != bg_label)
- fg_idxs = fg_selection_mask.nonzero().squeeze(1)
- fg_proposals.append(proposals_per_image[fg_idxs])
- fg_selection_masks.append(fg_selection_mask)
- return fg_proposals, fg_selection_masks
-
-
-def select_proposals_with_visible_keypoints(proposals: List[Instances]) -> List[Instances]:
- """
- Args:
- proposals (list[Instances]): a list of N Instances, where N is the
- number of images.
-
- Returns:
- proposals: only contains proposals with at least one visible keypoint.
-
- Note that this is still slightly different from Detectron.
- In Detectron, proposals for training keypoint head are re-sampled from
- all the proposals with IOU>threshold & >=1 visible keypoint.
-
- Here, the proposals are first sampled from all proposals with
- IOU>threshold, then proposals with no visible keypoint are filtered out.
- This strategy seems to make no difference on Detectron and is easier to implement.
- """
- ret = []
- all_num_fg = []
- for proposals_per_image in proposals:
- # If empty/unannotated image (hard negatives), skip filtering for train
- if len(proposals_per_image) == 0:
- ret.append(proposals_per_image)
- continue
- gt_keypoints = proposals_per_image.gt_keypoints.tensor
- # #fg x K x 3
- vis_mask = gt_keypoints[:, :, 2] >= 1
- xs, ys = gt_keypoints[:, :, 0], gt_keypoints[:, :, 1]
- proposal_boxes = proposals_per_image.proposal_boxes.tensor.unsqueeze(dim=1) # #fg x 1 x 4
- kp_in_box = (
- (xs >= proposal_boxes[:, :, 0])
- & (xs <= proposal_boxes[:, :, 2])
- & (ys >= proposal_boxes[:, :, 1])
- & (ys <= proposal_boxes[:, :, 3])
- )
- selection = (kp_in_box & vis_mask).any(dim=1)
- selection_idxs = nonzero_tuple(selection)[0]
- all_num_fg.append(selection_idxs.numel())
- ret.append(proposals_per_image[selection_idxs])
-
- storage = get_event_storage()
- storage.put_scalar("keypoint_head/num_fg_samples", np.mean(all_num_fg))
- return ret
-
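The docstring of `select_proposals_with_visible_keypoints` above spells out the selection rule; a tiny self-contained check of that behaviour, assuming detectron2 is installed so the function is importable from this module (the image size, boxes, and keypoints are made up for illustration):

```python
import torch
from detectron2.structures import Boxes, Instances, Keypoints
from detectron2.utils.events import EventStorage
from detectron2.modeling.roi_heads.roi_heads import select_proposals_with_visible_keypoints

# One image with two proposals; only the first proposal contains its (visible) keypoint.
inst = Instances((480, 640))
inst.proposal_boxes = Boxes(torch.tensor([[0.0, 0.0, 100.0, 100.0],
                                          [200.0, 200.0, 300.0, 300.0]]))
inst.gt_keypoints = Keypoints(torch.tensor([[[50.0, 50.0, 2.0]],
                                            [[50.0, 50.0, 2.0]]]))  # visible, but outside box 2

with EventStorage():  # the helper logs keypoint_head/num_fg_samples to the event storage
    kept = select_proposals_with_visible_keypoints([inst])
print(len(kept[0]))  # -> 1, the second proposal is filtered out
```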
-
-class ROIHeads(torch.nn.Module):
- """
- ROIHeads perform all per-region computation in an R-CNN.
-
- It typically contains logic to
-
- 1. (in training only) match proposals with ground truth and sample them
- 2. crop the regions and extract per-region features using proposals
- 3. make per-region predictions with different heads
-
- It can have many variants, implemented as subclasses of this class.
- This base class contains the logic to match/sample proposals.
- But it is not necessary to inherit this class if the sampling logic is not needed.
- """
-
- @configurable
- def __init__(
- self,
- *,
- num_classes,
- batch_size_per_image,
- positive_fraction,
- proposal_matcher,
- proposal_append_gt=True,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- num_classes (int): number of foreground classes (i.e. background is not included)
- batch_size_per_image (int): number of proposals to sample for training
- positive_fraction (float): fraction of positive (foreground) proposals
- to sample for training.
- proposal_matcher (Matcher): matcher that matches proposals and ground truth
- proposal_append_gt (bool): whether to include ground truth as proposals as well
- """
- super().__init__()
- self.batch_size_per_image = batch_size_per_image
- self.positive_fraction = positive_fraction
- self.num_classes = num_classes
- self.proposal_matcher = proposal_matcher
- self.proposal_append_gt = proposal_append_gt
-
- @classmethod
- def from_config(cls, cfg):
- return {
- "batch_size_per_image": cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE,
- "positive_fraction": cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION,
- "num_classes": cfg.MODEL.ROI_HEADS.NUM_CLASSES,
- "proposal_append_gt": cfg.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT,
- # Matcher to assign box proposals to gt boxes
- "proposal_matcher": Matcher(
- cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS,
- cfg.MODEL.ROI_HEADS.IOU_LABELS,
- allow_low_quality_matches=False,
- ),
- }
-
- def _sample_proposals(
- self, matched_idxs: torch.Tensor, matched_labels: torch.Tensor, gt_classes: torch.Tensor
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- """
- Based on the matching between N proposals and M groundtruth,
- sample the proposals and set their classification labels.
-
- Args:
- matched_idxs (Tensor): a vector of length N, each is the best-matched
- gt index in [0, M) for each proposal.
- matched_labels (Tensor): a vector of length N, the matcher's label
- (one of cfg.MODEL.ROI_HEADS.IOU_LABELS) for each proposal.
- gt_classes (Tensor): a vector of length M.
-
- Returns:
- Tensor: a vector of indices of sampled proposals. Each is in [0, N).
- Tensor: a vector of the same length, the classification label for
- each sampled proposal. Each sample is labeled as either a category in
- [0, num_classes) or the background (num_classes).
- """
- has_gt = gt_classes.numel() > 0
- # Get the corresponding GT for each proposal
- if has_gt:
- gt_classes = gt_classes[matched_idxs]
- # Label unmatched proposals (0 label from matcher) as background (label=num_classes)
- gt_classes[matched_labels == 0] = self.num_classes
- # Label ignore proposals (-1 label)
- gt_classes[matched_labels == -1] = -1
- else:
- gt_classes = torch.zeros_like(matched_idxs) + self.num_classes
-
- sampled_fg_idxs, sampled_bg_idxs = subsample_labels(
- gt_classes, self.batch_size_per_image, self.positive_fraction, self.num_classes
- )
-
- sampled_idxs = torch.cat([sampled_fg_idxs, sampled_bg_idxs], dim=0)
- return sampled_idxs, gt_classes[sampled_idxs]
-
- @torch.no_grad()
- def label_and_sample_proposals(
- self, proposals: List[Instances], targets: List[Instances]
- ) -> List[Instances]:
- """
- Prepare some proposals to be used to train the ROI heads.
- It performs box matching between `proposals` and `targets`, and assigns
- training labels to the proposals.
- It returns ``self.batch_size_per_image`` random samples from proposals and groundtruth
- boxes, with a fraction of positives that is no larger than
- ``self.positive_fraction``.
-
- Args:
- See :meth:`ROIHeads.forward`
-
- Returns:
- list[Instances]:
- length `N` list of `Instances`s containing the proposals
- sampled for training. Each `Instances` has the following fields:
-
- - proposal_boxes: the proposal boxes
- - gt_boxes: the ground-truth box that the proposal is assigned to
- (this is only meaningful if the proposal has a label > 0; if label = 0
- then the ground-truth box is random)
-
- Other fields from `targets`, such as "gt_classes" and "gt_masks", are also included.
- """
- # Augment proposals with ground-truth boxes.
- # In the case of learned proposals (e.g., RPN), when training starts
- # the proposals will be low quality due to random initialization.
- # It's possible that none of these initial
- # proposals have high enough overlap with the gt objects to be used
- # as positive examples for the second stage components (box head,
- # cls head, mask head). Adding the gt boxes to the set of proposals
- # ensures that the second stage components will have some positive
- # examples from the start of training. For RPN, this augmentation improves
- # convergence and empirically improves box AP on COCO by about 0.5
- # points (under one tested configuration).
- if self.proposal_append_gt:
- proposals = add_ground_truth_to_proposals(targets, proposals)
-
- proposals_with_gt = []
-
- num_fg_samples = []
- num_bg_samples = []
- for proposals_per_image, targets_per_image in zip(proposals, targets):
- has_gt = len(targets_per_image) > 0
- match_quality_matrix = pairwise_iou(
- targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
- )
- matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
- sampled_idxs, gt_classes = self._sample_proposals(
- matched_idxs, matched_labels, targets_per_image.gt_classes
- )
-
- # Set target attributes of the sampled proposals:
- proposals_per_image = proposals_per_image[sampled_idxs]
- proposals_per_image.gt_classes = gt_classes
-
- if has_gt:
- sampled_targets = matched_idxs[sampled_idxs]
- # We index all the attributes of targets that start with "gt_"
- # and have not been added to proposals yet (="gt_classes").
- # NOTE: here the indexing wastes some compute, because heads
- # like masks, keypoints, etc, will filter the proposals again,
- # (by foreground/background, or number of keypoints in the image, etc)
- # so we essentially index the data twice.
- for (trg_name, trg_value) in targets_per_image.get_fields().items():
- if trg_name.startswith("gt_") and not proposals_per_image.has(trg_name):
- proposals_per_image.set(trg_name, trg_value[sampled_targets])
- # If no GT is given in the image, we don't know what a dummy gt value can be.
- # Therefore the returned proposals won't have any gt_* fields, except for a
- # gt_classes full of background label.
-
- num_bg_samples.append((gt_classes == self.num_classes).sum().item())
- num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1])
- proposals_with_gt.append(proposals_per_image)
-
- # Log the number of fg/bg samples that are selected for training ROI heads
- storage = get_event_storage()
- storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples))
- storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples))
-
- return proposals_with_gt
-
- def forward(
- self,
- images: ImageList,
- features: Dict[str, torch.Tensor],
- proposals: List[Instances],
- targets: Optional[List[Instances]] = None,
- ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]:
- """
- Args:
- images (ImageList):
- features (dict[str,Tensor]): input data as a mapping from feature
- map name to tensor. Axis 0 represents the number of images `N` in
- the input data; axes 1-3 are channels, height, and width, which may
- vary between feature maps (e.g., if a feature pyramid is used).
- proposals (list[Instances]): length `N` list of `Instances`. The i-th
- `Instances` contains object proposals for the i-th input image,
- with fields "proposal_boxes" and "objectness_logits".
- targets (list[Instances], optional): length `N` list of `Instances`. The i-th
- `Instances` contains the ground-truth per-instance annotations
- for the i-th input image. Specify `targets` during training only.
- It may have the following fields:
-
- - gt_boxes: the bounding box of each instance.
- - gt_classes: the label for each instance with a category ranging in [0, #class].
- - gt_masks: PolygonMasks or BitMasks, the ground-truth masks of each instance.
- - gt_keypoints: NxKx3, the ground-truth keypoints for each instance.
-
- Returns:
- list[Instances]: length `N` list of `Instances` containing the
- detected instances. Returned during inference only; may be [] during training.
-
- dict[str->Tensor]:
- mapping from a named loss to a tensor storing the loss. Used during training only.
- """
- raise NotImplementedError()
-
-
-@ROI_HEADS_REGISTRY.register()
-class Res5ROIHeads(ROIHeads):
- """
- The ROIHeads in a typical "C4" R-CNN model, where
- the box and mask head share the cropping and
- the per-region feature computation by a Res5 block.
- See :paper:`ResNet` Appendix A.
- """
-
- @configurable
- def __init__(
- self,
- *,
- in_features: List[str],
- pooler: ROIPooler,
- res5: nn.Module,
- box_predictor: nn.Module,
- mask_head: Optional[nn.Module] = None,
- **kwargs,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_features (list[str]): list of backbone feature map names to use for
- feature extraction
- pooler (ROIPooler): pooler to extract region features from the backbone
- res5 (nn.Sequential): a CNN to compute per-region features, to be used by
- ``box_predictor`` and ``mask_head``. Typically this is a "res5"
- block from a ResNet.
- box_predictor (nn.Module): make box predictions from the feature.
- Should have the same interface as :class:`FastRCNNOutputLayers`.
- mask_head (nn.Module): transform features to make mask predictions
- """
- super().__init__(**kwargs)
- self.in_features = in_features
- self.pooler = pooler
- if isinstance(res5, (list, tuple)):
- res5 = nn.Sequential(*res5)
- self.res5 = res5
- self.box_predictor = box_predictor
- self.mask_on = mask_head is not None
- if self.mask_on:
- self.mask_head = mask_head
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- # fmt: off
- ret = super().from_config(cfg)
- in_features = ret["in_features"] = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- pooler_scales = (1.0 / input_shape[in_features[0]].stride, )
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- mask_on = cfg.MODEL.MASK_ON
- # fmt: on
- assert not cfg.MODEL.KEYPOINT_ON
- assert len(in_features) == 1
-
- ret["pooler"] = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
-
- # Compatibility with old moco code. Might be useful.
- # See notes in StandardROIHeads.from_config
- if not inspect.ismethod(cls._build_res5_block):
- logger.warning(
- "The behavior of _build_res5_block may change. "
- "Please do not depend on private methods."
- )
- cls._build_res5_block = classmethod(cls._build_res5_block)
-
- ret["res5"], out_channels = cls._build_res5_block(cfg)
- ret["box_predictor"] = FastRCNNOutputLayers(
- cfg, ShapeSpec(channels=out_channels, height=1, width=1)
- )
-
- if mask_on:
- ret["mask_head"] = build_mask_head(
- cfg,
- ShapeSpec(channels=out_channels, width=pooler_resolution, height=pooler_resolution),
- )
- return ret
-
- @classmethod
- def _build_res5_block(cls, cfg):
- # fmt: off
- stage_channel_factor = 2 ** 3 # res5 is 8x res2
- num_groups = cfg.MODEL.RESNETS.NUM_GROUPS
- width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
- bottleneck_channels = num_groups * width_per_group * stage_channel_factor
- out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * stage_channel_factor
- stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1
- norm = cfg.MODEL.RESNETS.NORM
- assert not cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE[-1], \
- "Deformable conv is not yet supported in res5 head."
- # fmt: on
-
- blocks = ResNet.make_stage(
- BottleneckBlock,
- 3,
- stride_per_block=[2, 1, 1],
- in_channels=out_channels // 2,
- bottleneck_channels=bottleneck_channels,
- out_channels=out_channels,
- num_groups=num_groups,
- norm=norm,
- stride_in_1x1=stride_in_1x1,
- )
- return nn.Sequential(*blocks), out_channels
-
- def _shared_roi_transform(self, features: List[torch.Tensor], boxes: List[Boxes]):
- x = self.pooler(features, boxes)
- return self.res5(x)
-
- def forward(
- self,
- images: ImageList,
- features: Dict[str, torch.Tensor],
- proposals: List[Instances],
- targets: Optional[List[Instances]] = None,
- ):
- """
- See :meth:`ROIHeads.forward`.
- """
- del images
-
- if self.training:
- assert targets
- proposals = self.label_and_sample_proposals(proposals, targets)
- del targets
-
- proposal_boxes = [x.proposal_boxes for x in proposals]
- box_features = self._shared_roi_transform(
- [features[f] for f in self.in_features], proposal_boxes
- )
- predictions = self.box_predictor(box_features.mean(dim=[2, 3]))
-
- if self.training:
- del features
- losses = self.box_predictor.losses(predictions, proposals)
- if self.mask_on:
- proposals, fg_selection_masks = select_foreground_proposals(
- proposals, self.num_classes
- )
- # Since the ROI feature transform is shared between boxes and masks,
- # we don't need to recompute features. The mask loss is only defined
- # on foreground proposals, so we need to select out the foreground
- # features.
- mask_features = box_features[torch.cat(fg_selection_masks, dim=0)]
- del box_features
- losses.update(self.mask_head(mask_features, proposals))
- return [], losses
- else:
- pred_instances, _ = self.box_predictor.inference(predictions, proposals)
- pred_instances = self.forward_with_given_boxes(features, pred_instances)
- return pred_instances, {}
-
- def forward_with_given_boxes(
- self, features: Dict[str, torch.Tensor], instances: List[Instances]
- ) -> List[Instances]:
- """
- Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
- Args:
- features: same as in `forward()`
- instances (list[Instances]): instances to predict other outputs. Expect the keys
- "pred_boxes" and "pred_classes" to exist.
-
- Returns:
- instances (Instances):
- the same `Instances` object, with extra
- fields such as `pred_masks` or `pred_keypoints`.
- """
- assert not self.training
- assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
-
- if self.mask_on:
- feature_list = [features[f] for f in self.in_features]
- x = self._shared_roi_transform(feature_list, [x.pred_boxes for x in instances])
- return self.mask_head(x, instances)
- else:
- return instances
-
-
-@ROI_HEADS_REGISTRY.register()
-class StandardROIHeads(ROIHeads):
- """
- It's "standard" in a sense that there is no ROI transform sharing
- or feature sharing between tasks.
- Each head independently processes the input features by each head's
- own pooler and head.
-
- This class is used by most models, such as FPN and C5.
- To implement more models, you can subclass it and implement a different
- :meth:`forward()` or a head.
- """
-
- @configurable
- def __init__(
- self,
- *,
- box_in_features: List[str],
- box_pooler: ROIPooler,
- box_head: nn.Module,
- box_predictor: nn.Module,
- mask_in_features: Optional[List[str]] = None,
- mask_pooler: Optional[ROIPooler] = None,
- mask_head: Optional[nn.Module] = None,
- keypoint_in_features: Optional[List[str]] = None,
- keypoint_pooler: Optional[ROIPooler] = None,
- keypoint_head: Optional[nn.Module] = None,
- train_on_pred_boxes: bool = False,
- **kwargs,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- box_in_features (list[str]): list of feature names to use for the box head.
- box_pooler (ROIPooler): pooler to extract region features for the box head
- box_head (nn.Module): transform features to make box predictions
- box_predictor (nn.Module): make box predictions from the feature.
- Should have the same interface as :class:`FastRCNNOutputLayers`.
- mask_in_features (list[str]): list of feature names to use for the mask
- pooler or mask head. None if not using mask head.
- mask_pooler (ROIPooler): pooler to extract region features from image features.
- The mask head will then take region features to make predictions.
- If None, the mask head will directly take the dict of image features
- defined by `mask_in_features`
- mask_head (nn.Module): transform features to make mask predictions
- keypoint_in_features, keypoint_pooler, keypoint_head: similar to ``mask_*``.
- train_on_pred_boxes (bool): whether to use proposal boxes or
- predicted boxes from the box head to train other heads.
- """
- super().__init__(**kwargs)
- # keep self.in_features for backward compatibility
- self.in_features = self.box_in_features = box_in_features
- self.box_pooler = box_pooler
- self.box_head = box_head
- self.box_predictor = box_predictor
-
- self.mask_on = mask_in_features is not None
- if self.mask_on:
- self.mask_in_features = mask_in_features
- self.mask_pooler = mask_pooler
- self.mask_head = mask_head
-
- self.keypoint_on = keypoint_in_features is not None
- if self.keypoint_on:
- self.keypoint_in_features = keypoint_in_features
- self.keypoint_pooler = keypoint_pooler
- self.keypoint_head = keypoint_head
-
- self.train_on_pred_boxes = train_on_pred_boxes
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- ret = super().from_config(cfg)
- ret["train_on_pred_boxes"] = cfg.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES
- # Subclasses that have not been updated to use from_config style construction
- # may have overridden _init_*_head methods. In this case, those overridden methods
- # will not be classmethods and we need to avoid trying to call them here.
- # We test for this with ismethod which only returns True for bound methods of cls.
- # Such subclasses will need to handle calling their overridden _init_*_head methods.
- if inspect.ismethod(cls._init_box_head):
- ret.update(cls._init_box_head(cfg, input_shape))
- if inspect.ismethod(cls._init_mask_head):
- ret.update(cls._init_mask_head(cfg, input_shape))
- if inspect.ismethod(cls._init_keypoint_head):
- ret.update(cls._init_keypoint_head(cfg, input_shape))
- return ret
-
- @classmethod
- def _init_box_head(cls, cfg, input_shape):
- # fmt: off
- in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- # fmt: on
-
- # If StandardROIHeads is applied on multiple feature maps (as in FPN),
- # then we share the same predictors and therefore the channel counts must be the same
- in_channels = [input_shape[f].channels for f in in_features]
- # Check all channel counts are equal
- assert len(set(in_channels)) == 1, in_channels
- in_channels = in_channels[0]
-
- box_pooler = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
- # Here we split "box head" and "box predictor", which is mainly due to historical reasons.
- # They are used together so the "box predictor" layers should be part of the "box head".
- # New subclasses of ROIHeads do not need "box predictor"s.
- box_head = build_box_head(
- cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution)
- )
- box_predictor = FastRCNNOutputLayers(cfg, box_head.output_shape)
- return {
- "box_in_features": in_features,
- "box_pooler": box_pooler,
- "box_head": box_head,
- "box_predictor": box_predictor,
- }
-
- @classmethod
- def _init_mask_head(cls, cfg, input_shape):
- if not cfg.MODEL.MASK_ON:
- return {}
- # fmt: off
- in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION
- pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
- sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO
- pooler_type = cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE
- # fmt: on
-
- in_channels = [input_shape[f].channels for f in in_features][0]
-
- ret = {"mask_in_features": in_features}
- ret["mask_pooler"] = (
- ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
- if pooler_type
- else None
- )
- if pooler_type:
- shape = ShapeSpec(
- channels=in_channels, width=pooler_resolution, height=pooler_resolution
- )
- else:
- shape = {f: input_shape[f] for f in in_features}
- ret["mask_head"] = build_mask_head(cfg, shape)
- return ret
-
- @classmethod
- def _init_keypoint_head(cls, cfg, input_shape):
- if not cfg.MODEL.KEYPOINT_ON:
- return {}
- # fmt: off
- in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION
- pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) # noqa
- sampling_ratio = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO
- pooler_type = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE
- # fmt: on
-
- in_channels = [input_shape[f].channels for f in in_features][0]
-
- ret = {"keypoint_in_features": in_features}
- ret["keypoint_pooler"] = (
- ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
- if pooler_type
- else None
- )
- if pooler_type:
- shape = ShapeSpec(
- channels=in_channels, width=pooler_resolution, height=pooler_resolution
- )
- else:
- shape = {f: input_shape[f] for f in in_features}
- ret["keypoint_head"] = build_keypoint_head(cfg, shape)
- return ret
-
- def forward(
- self,
- images: ImageList,
- features: Dict[str, torch.Tensor],
- proposals: List[Instances],
- targets: Optional[List[Instances]] = None,
- ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]:
- """
- See :class:`ROIHeads.forward`.
- """
- del images
- if self.training:
- assert targets, "'targets' argument is required during training"
- proposals = self.label_and_sample_proposals(proposals, targets)
- del targets
-
- if self.training:
- losses = self._forward_box(features, proposals)
- # Usually the original proposals used by the box head are used by the mask, keypoint
- # heads. But when `self.train_on_pred_boxes is True`, proposals will contain boxes
- # predicted by the box head.
- losses.update(self._forward_mask(features, proposals))
- losses.update(self._forward_keypoint(features, proposals))
- return proposals, losses
- else:
- pred_instances = self._forward_box(features, proposals)
- # During inference cascaded prediction is used: the mask and keypoints heads are only
- # applied to the top scoring box detections.
- pred_instances = self.forward_with_given_boxes(features, pred_instances)
- return pred_instances, {}
-
- def forward_with_given_boxes(
- self, features: Dict[str, torch.Tensor], instances: List[Instances]
- ) -> List[Instances]:
- """
- Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
- This is useful for downstream tasks where a box is known, but other
- attributes (outputs of other heads) still need to be computed.
- Test-time augmentation also uses this.
-
- Args:
- features: same as in `forward()`
- instances (list[Instances]): instances to predict other outputs. Expect the keys
- "pred_boxes" and "pred_classes" to exist.
-
- Returns:
- list[Instances]:
- the same `Instances` objects, with extra
- fields such as `pred_masks` or `pred_keypoints`.
- """
- assert not self.training
- assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
-
- instances = self._forward_mask(features, instances)
- instances = self._forward_keypoint(features, instances)
- return instances
-
- def _forward_box(self, features: Dict[str, torch.Tensor], proposals: List[Instances]):
- """
- Forward logic of the box prediction branch. If `self.train_on_pred_boxes is True`,
- the function puts predicted boxes in the `proposal_boxes` field of `proposals` argument.
-
- Args:
- features (dict[str, Tensor]): mapping from feature map names to tensor.
- Same as in :meth:`ROIHeads.forward`.
- proposals (list[Instances]): the per-image object proposals with
- their matching ground truth.
- Each has fields "proposal_boxes", and "objectness_logits",
- "gt_classes", "gt_boxes".
-
- Returns:
- In training, a dict of losses.
- In inference, a list of `Instances`, the predicted instances.
- """
- features = [features[f] for f in self.box_in_features]
- box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals])
- box_features = self.box_head(box_features)
- predictions = self.box_predictor(box_features)
- del box_features
-
- if self.training:
- losses = self.box_predictor.losses(predictions, proposals)
- # proposals is modified in-place below, so losses must be computed first.
- if self.train_on_pred_boxes:
- with torch.no_grad():
- pred_boxes = self.box_predictor.predict_boxes_for_gt_classes(
- predictions, proposals
- )
- for proposals_per_image, pred_boxes_per_image in zip(proposals, pred_boxes):
- proposals_per_image.proposal_boxes = Boxes(pred_boxes_per_image)
- return losses
- else:
- pred_instances, _ = self.box_predictor.inference(predictions, proposals)
- return pred_instances
-
- def _forward_mask(self, features: Dict[str, torch.Tensor], instances: List[Instances]):
- """
- Forward logic of the mask prediction branch.
-
- Args:
- features (dict[str, Tensor]): mapping from feature map names to tensor.
- Same as in :meth:`ROIHeads.forward`.
- instances (list[Instances]): the per-image instances to train/predict masks.
- In training, they can be the proposals.
- In inference, they can be the boxes predicted by R-CNN box head.
-
- Returns:
- In training, a dict of losses.
- In inference, update `instances` with new fields "pred_masks" and return it.
- """
- if not self.mask_on:
- return {} if self.training else instances
-
- if self.training:
- # head is only trained on positive proposals.
- instances, _ = select_foreground_proposals(instances, self.num_classes)
-
- if self.mask_pooler is not None:
- features = [features[f] for f in self.mask_in_features]
- boxes = [x.proposal_boxes if self.training else x.pred_boxes for x in instances]
- features = self.mask_pooler(features, boxes)
- else:
- features = {f: features[f] for f in self.mask_in_features}
- return self.mask_head(features, instances)
-
- def _forward_keypoint(self, features: Dict[str, torch.Tensor], instances: List[Instances]):
- """
- Forward logic of the keypoint prediction branch.
-
- Args:
- features (dict[str, Tensor]): mapping from feature map names to tensor.
- Same as in :meth:`ROIHeads.forward`.
- instances (list[Instances]): the per-image instances to train/predict keypoints.
- In training, they can be the proposals.
- In inference, they can be the boxes predicted by R-CNN box head.
-
- Returns:
- In training, a dict of losses.
- In inference, update `instances` with new fields "pred_keypoints" and return it.
- """
- if not self.keypoint_on:
- return {} if self.training else instances
-
- if self.training:
- # head is only trained on positive proposals with >=1 visible keypoints.
- instances, _ = select_foreground_proposals(instances, self.num_classes)
- instances = select_proposals_with_visible_keypoints(instances)
-
- if self.keypoint_pooler is not None:
- features = [features[f] for f in self.keypoint_in_features]
- boxes = [x.proposal_boxes if self.training else x.pred_boxes for x in instances]
- features = self.keypoint_pooler(features, boxes)
- else:
- features = {f: features[f] for f in self.keypoint_in_features}
- return self.keypoint_head(features, instances)
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/plain_train_net.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/plain_train_net.py
deleted file mode 100644
index 4851a8398e128bdce1986feccf0f1cca4a12f704..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/plain_train_net.py
+++ /dev/null
@@ -1,223 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-Detectron2 training script with a plain training loop.
-
-This script reads a given config file and runs the training or evaluation.
-It is an entry point that is able to train standard models in detectron2.
-
-In order to let one script support training of many models,
-this script contains logic that is specific to these built-in models and therefore
-may not be suitable for your own project.
-For example, your research project perhaps only needs a single "evaluator".
-
-Therefore, we recommend using detectron2 as a library and taking
-this file as an example of how to use the library.
-You may want to write your own script with your datasets and other customizations.
-
-Compared to "train_net.py", this script supports fewer default features.
-It also includes fewer abstractions and is therefore easier to extend with custom logic.
-"""
-
-import logging
-import os
-from collections import OrderedDict
-import torch
-from torch.nn.parallel import DistributedDataParallel
-
-import detectron2.utils.comm as comm
-from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import (
- MetadataCatalog,
- build_detection_test_loader,
- build_detection_train_loader,
-)
-from detectron2.engine import default_argument_parser, default_setup, default_writers, launch
-from detectron2.evaluation import (
- CityscapesInstanceEvaluator,
- CityscapesSemSegEvaluator,
- COCOEvaluator,
- COCOPanopticEvaluator,
- DatasetEvaluators,
- LVISEvaluator,
- PascalVOCDetectionEvaluator,
- SemSegEvaluator,
- inference_on_dataset,
- print_csv_format,
-)
-from detectron2.modeling import build_model
-from detectron2.solver import build_lr_scheduler, build_optimizer
-from detectron2.utils.events import EventStorage
-
-logger = logging.getLogger("detectron2")
-
-
-def get_evaluator(cfg, dataset_name, output_folder=None):
- """
- Create evaluator(s) for a given dataset.
- This uses the special metadata "evaluator_type" associated with each builtin dataset.
- For your own dataset, you can simply create an evaluator manually in your
- script and do not have to worry about the hacky if-else logic here.
- """
- if output_folder is None:
- output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
- evaluator_list = []
- evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
- if evaluator_type in ["sem_seg", "coco_panoptic_seg"]:
- evaluator_list.append(
- SemSegEvaluator(
- dataset_name,
- distributed=True,
- output_dir=output_folder,
- )
- )
- if evaluator_type in ["coco", "coco_panoptic_seg"]:
- evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder))
- if evaluator_type == "coco_panoptic_seg":
- evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder))
- if evaluator_type == "cityscapes_instance":
- assert (
- torch.cuda.device_count() > comm.get_rank()
-        ), "CityscapesEvaluator currently does not work with multiple machines."
- return CityscapesInstanceEvaluator(dataset_name)
- if evaluator_type == "cityscapes_sem_seg":
- assert (
- torch.cuda.device_count() > comm.get_rank()
-        ), "CityscapesEvaluator currently does not work with multiple machines."
- return CityscapesSemSegEvaluator(dataset_name)
- if evaluator_type == "pascal_voc":
- return PascalVOCDetectionEvaluator(dataset_name)
- if evaluator_type == "lvis":
- return LVISEvaluator(dataset_name, cfg, True, output_folder)
- if len(evaluator_list) == 0:
- raise NotImplementedError(
- "no Evaluator for the dataset {} with the type {}".format(dataset_name, evaluator_type)
- )
- if len(evaluator_list) == 1:
- return evaluator_list[0]
- return DatasetEvaluators(evaluator_list)
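
As the docstring above suggests, a custom dataset would normally bypass this if-else logic and build an evaluator directly. A hedged sketch, where the dataset name, `cfg`, and `model` are placeholders:

```python
# Hedged sketch of the manual route for a custom dataset ("my_dataset_val", `cfg`,
# and `model` are assumed to be defined elsewhere).
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

evaluator = COCOEvaluator("my_dataset_val", output_dir="./output/inference")
loader = build_detection_test_loader(cfg, "my_dataset_val")
print(inference_on_dataset(model, loader, evaluator))
```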
-
-
-def do_test(cfg, model):
- results = OrderedDict()
- for dataset_name in cfg.DATASETS.TEST:
- data_loader = build_detection_test_loader(cfg, dataset_name)
- evaluator = get_evaluator(
- cfg, dataset_name, os.path.join(cfg.OUTPUT_DIR, "inference", dataset_name)
- )
- results_i = inference_on_dataset(model, data_loader, evaluator)
- results[dataset_name] = results_i
- if comm.is_main_process():
- logger.info("Evaluation results for {} in csv format:".format(dataset_name))
- print_csv_format(results_i)
- if len(results) == 1:
- results = list(results.values())[0]
- return results
-
-
-def do_train(cfg, model, resume=False):
- model.train()
- optimizer = build_optimizer(cfg, model)
- scheduler = build_lr_scheduler(cfg, optimizer)
-
- checkpointer = DetectionCheckpointer(
- model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler
- )
- start_iter = (
- checkpointer.resume_or_load(cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1
- )
- max_iter = cfg.SOLVER.MAX_ITER
-
- periodic_checkpointer = PeriodicCheckpointer(
- checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter
- )
-
- writers = default_writers(cfg.OUTPUT_DIR, max_iter) if comm.is_main_process() else []
-
- # compared to "train_net.py", we do not support accurate timing and
- # precise BN here, because they are not trivial to implement in a small training loop
- data_loader = build_detection_train_loader(cfg)
- logger.info("Starting training from iteration {}".format(start_iter))
- with EventStorage(start_iter) as storage:
- for data, iteration in zip(data_loader, range(start_iter, max_iter)):
- storage.iter = iteration
-
- loss_dict = model(data)
- losses = sum(loss_dict.values())
- assert torch.isfinite(losses).all(), loss_dict
-
- loss_dict_reduced = {k: v.item() for k, v in comm.reduce_dict(loss_dict).items()}
- losses_reduced = sum(loss for loss in loss_dict_reduced.values())
- if comm.is_main_process():
- storage.put_scalars(total_loss=losses_reduced, **loss_dict_reduced)
-
- optimizer.zero_grad()
- losses.backward()
- optimizer.step()
- storage.put_scalar("lr", optimizer.param_groups[0]["lr"], smoothing_hint=False)
- scheduler.step()
-
- if (
- cfg.TEST.EVAL_PERIOD > 0
- and (iteration + 1) % cfg.TEST.EVAL_PERIOD == 0
- and iteration != max_iter - 1
- ):
- do_test(cfg, model)
- # Compared to "train_net.py", the test results are not dumped to EventStorage
- comm.synchronize()
-
- if iteration - start_iter > 5 and (
- (iteration + 1) % 20 == 0 or iteration == max_iter - 1
- ):
- for writer in writers:
- writer.write()
- periodic_checkpointer.step(iteration)
-
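
The loop above is the standard plain PyTorch pattern: sum the loss dict, check finiteness, backward, then step the optimizer and scheduler. A minimal self-contained toy version of the same pattern, without any detectron2 components:

```python
# Toy version of the plain training-loop pattern used in do_train.
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

for iteration in range(30):
    data = torch.randn(8, 4)
    target = torch.randn(8, 2)
    loss_dict = {"l2": torch.nn.functional.mse_loss(model(data), target)}
    losses = sum(loss_dict.values())              # total loss, as in do_train
    assert torch.isfinite(losses).all(), loss_dict

    optimizer.zero_grad()
    losses.backward()
    optimizer.step()
    scheduler.step()
```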
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(
- cfg, args
- ) # if you don't like any of the default setup, write your own setup code
- return cfg
-
-
-def main(args):
- cfg = setup(args)
-
- model = build_model(cfg)
- logger.info("Model:\n{}".format(model))
- if args.eval_only:
- DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
- cfg.MODEL.WEIGHTS, resume=args.resume
- )
- return do_test(cfg, model)
-
- distributed = comm.get_world_size() > 1
- if distributed:
- model = DistributedDataParallel(
- model, device_ids=[comm.get_local_rank()], broadcast_buffers=False
- )
-
- do_train(cfg, model, resume=args.resume)
- return do_test(cfg, model)
-
-
-if __name__ == "__main__":
- args = default_argument_parser().parse_args()
- print("Command Line Args:", args)
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/masks/countless/test.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/masks/countless/test.py
deleted file mode 100644
index 7809beb7aeeb3bcb10d03093a564917b1f2b4786..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/masks/countless/test.py
+++ /dev/null
@@ -1,195 +0,0 @@
-from copy import deepcopy
-
-import numpy as np
-
-import countless2d
-import countless3d
-
-def test_countless2d():
- def test_all_cases(fn, test_zero):
- case1 = np.array([ [ 1, 2 ], [ 3, 4 ] ]).reshape((2,2,1,1)) # all different
- case2 = np.array([ [ 1, 1 ], [ 2, 3 ] ]).reshape((2,2,1,1)) # two are same
- case1z = np.array([ [ 0, 1 ], [ 2, 3 ] ]).reshape((2,2,1,1)) # all different
- case2z = np.array([ [ 0, 0 ], [ 2, 3 ] ]).reshape((2,2,1,1)) # two are same
- case3 = np.array([ [ 1, 1 ], [ 2, 2 ] ]).reshape((2,2,1,1)) # two groups are same
- case4 = np.array([ [ 1, 2 ], [ 2, 2 ] ]).reshape((2,2,1,1)) # 3 are the same
- case5 = np.array([ [ 5, 5 ], [ 5, 5 ] ]).reshape((2,2,1,1)) # all are the same
-
- is_255_handled = np.array([ [ 255, 255 ], [ 1, 2 ] ], dtype=np.uint8).reshape((2,2,1,1))
-
- test = lambda case: fn(case)
-
- if test_zero:
- assert test(case1z) == [[[[3]]]] # d
- assert test(case2z) == [[[[0]]]] # a==b
- else:
- assert test(case1) == [[[[4]]]] # d
- assert test(case2) == [[[[1]]]] # a==b
-
- assert test(case3) == [[[[1]]]] # a==b
- assert test(case4) == [[[[2]]]] # b==c
- assert test(case5) == [[[[5]]]] # a==b
-
- assert test(is_255_handled) == [[[[255]]]]
-
- assert fn(case1).dtype == case1.dtype
-
- test_all_cases(countless2d.simplest_countless, False)
- test_all_cases(countless2d.quick_countless, False)
- test_all_cases(countless2d.quickest_countless, False)
- test_all_cases(countless2d.stippled_countless, False)
-
-
-
- methods = [
- countless2d.zero_corrected_countless,
- countless2d.countless,
- countless2d.countless_if,
- # countless2d.counting, # counting doesn't respect order so harder to write a test
- ]
-
- for fn in methods:
- print(fn.__name__)
- test_all_cases(fn, True)
-
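
The assertions above encode the 2x2 COUNTLESS decision rule. A scalar restatement of that rule (not the vectorized implementation in `countless2d`), which reproduces the expected values used in these tests:

```python
# Scalar restatement of the 2x2 rule: if any pair among the first three pixels
# matches, that value wins; otherwise the last pixel is kept.
def countless_2x2(a, b, c, d):
    if a == b or a == c:
        return a
    if b == c:
        return b
    return d

assert countless_2x2(1, 2, 3, 4) == 4   # all different -> d
assert countless_2x2(1, 1, 2, 3) == 1   # a == b
assert countless_2x2(1, 1, 2, 2) == 1   # two matching groups
assert countless_2x2(1, 2, 2, 2) == 2   # b == c
assert countless_2x2(5, 5, 5, 5) == 5   # all the same
```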
-def test_stippled_countless2d():
- a = np.array([ [ 1, 2 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- b = np.array([ [ 0, 2 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- c = np.array([ [ 1, 0 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- d = np.array([ [ 1, 2 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- e = np.array([ [ 1, 2 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- f = np.array([ [ 0, 0 ], [ 3, 4 ] ]).reshape((2,2,1,1))
- g = np.array([ [ 0, 2 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- h = np.array([ [ 0, 2 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- i = np.array([ [ 1, 0 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- j = np.array([ [ 1, 2 ], [ 0, 0 ] ]).reshape((2,2,1,1))
- k = np.array([ [ 1, 0 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- l = np.array([ [ 1, 0 ], [ 0, 0 ] ]).reshape((2,2,1,1))
- m = np.array([ [ 0, 2 ], [ 0, 0 ] ]).reshape((2,2,1,1))
- n = np.array([ [ 0, 0 ], [ 3, 0 ] ]).reshape((2,2,1,1))
- o = np.array([ [ 0, 0 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- z = np.array([ [ 0, 0 ], [ 0, 0 ] ]).reshape((2,2,1,1))
-
- test = countless2d.stippled_countless
-
- # Note: We only tested non-matching cases above,
- # cases f,g,h,i,j,k prove their duals work as well
- # b/c if two pixels are black, either one can be chosen
- # if they are different or the same.
-
- assert test(a) == [[[[4]]]]
- assert test(b) == [[[[4]]]]
- assert test(c) == [[[[4]]]]
- assert test(d) == [[[[4]]]]
- assert test(e) == [[[[1]]]]
- assert test(f) == [[[[4]]]]
- assert test(g) == [[[[4]]]]
- assert test(h) == [[[[2]]]]
- assert test(i) == [[[[4]]]]
- assert test(j) == [[[[1]]]]
- assert test(k) == [[[[1]]]]
- assert test(l) == [[[[1]]]]
- assert test(m) == [[[[2]]]]
- assert test(n) == [[[[3]]]]
- assert test(o) == [[[[4]]]]
- assert test(z) == [[[[0]]]]
-
- bc = np.array([ [ 0, 2 ], [ 2, 4 ] ]).reshape((2,2,1,1))
- bd = np.array([ [ 0, 2 ], [ 3, 2 ] ]).reshape((2,2,1,1))
- cd = np.array([ [ 0, 2 ], [ 3, 3 ] ]).reshape((2,2,1,1))
-
- assert test(bc) == [[[[2]]]]
- assert test(bd) == [[[[2]]]]
- assert test(cd) == [[[[3]]]]
-
- ab = np.array([ [ 1, 1 ], [ 0, 4 ] ]).reshape((2,2,1,1))
- ac = np.array([ [ 1, 2 ], [ 1, 0 ] ]).reshape((2,2,1,1))
- ad = np.array([ [ 1, 0 ], [ 3, 1 ] ]).reshape((2,2,1,1))
-
- assert test(ab) == [[[[1]]]]
- assert test(ac) == [[[[1]]]]
- assert test(ad) == [[[[1]]]]
-
-def test_countless3d():
- def test_all_cases(fn):
- alldifferent = [
- [
- [1,2],
- [3,4],
- ],
- [
- [5,6],
- [7,8]
- ]
- ]
- allsame = [
- [
- [1,1],
- [1,1],
- ],
- [
- [1,1],
- [1,1]
- ]
- ]
-
- assert fn(np.array(alldifferent)) == [[[8]]]
- assert fn(np.array(allsame)) == [[[1]]]
-
- twosame = deepcopy(alldifferent)
- twosame[1][1][0] = 2
-
- assert fn(np.array(twosame)) == [[[2]]]
-
- threemixed = [
- [
- [3,3],
- [1,2],
- ],
- [
- [2,4],
- [4,3]
- ]
- ]
- assert fn(np.array(threemixed)) == [[[3]]]
-
- foursame = [
- [
- [4,4],
- [1,2],
- ],
- [
- [2,4],
- [4,3]
- ]
- ]
-
- assert fn(np.array(foursame)) == [[[4]]]
-
- fivesame = [
- [
- [5,4],
- [5,5],
- ],
- [
- [2,4],
- [5,5]
- ]
- ]
-
- assert fn(np.array(fivesame)) == [[[5]]]
-
- def countless3d_generalized(img):
- return countless3d.countless_generalized(img, (2,2,2))
- def countless3d_dynamic_generalized(img):
- return countless3d.dynamic_countless_generalized(img, (2,2,2))
-
- methods = [
- countless3d.countless3d,
- countless3d.dynamic_countless3d,
- countless3d_generalized,
- countless3d_dynamic_generalized,
- ]
-
- for fn in methods:
- test_all_cases(fn)
\ No newline at end of file
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/datasets/__init__.py b/spaces/OptimalScale/Robin-33b/lmflow/datasets/__init__.py
deleted file mode 100644
index a0342a0fd34525ffa7731ddbed4015bb3555651c..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-33b/lmflow/datasets/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-"""This Python code defines a class Dataset with methods for initializing, loading,
-and manipulating datasets from different backends such as Hugging Face and JSON.
-
-The `Dataset` class includes methods for loading datasets from a dictionary and a Hugging
-Face dataset, mapping datasets, and retrieving the backend dataset and arguments.
-"""
-from lmflow.datasets.dataset import Dataset
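
A hedged usage sketch, based only on how this `Dataset` class is used elsewhere in this repository (`create_from_dict` and `get_backend_dataset` in `raft_aligner.py`); treat the exact API as an assumption:

```python
# Hedged sketch; the method names mirror their usage in raft_aligner.py.
from lmflow.datasets.dataset import Dataset

ds = Dataset.create_from_dict({
    "type": "text_only",
    "instances": [{"text": "An example prompt."}],
})
backend_ds = ds.get_backend_dataset()   # underlying Hugging Face datasets object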
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/raft_aligner.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/raft_aligner.py
deleted file mode 100644
index ba36f512c0675f795782971904aa0d20449b12ec..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/raft_aligner.py
+++ /dev/null
@@ -1,456 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-"""
-The Aligner class simplifies the process of running alignment.
-"""
-
-import logging
-import numpy as np
-import os
-import sys
-import time
-from itertools import chain
-
-import torch
-import torch.distributed as dist
-import transformers
-from datasets import (
- set_caching_enabled,
- Dataset,
- DatasetDict,
-)
-from transformers import (
- default_data_collator,
- pipeline,
- set_seed,
-)
-from transformers.testing_utils import CaptureLogger
-
-from lmflow.args import DatasetArguments
-from lmflow.datasets.dataset import Dataset as LMFlowDataset
-from lmflow.pipeline.base_aligner import BaseAligner
-from lmflow.pipeline.utils.raft_trainer import RaftTrainer
-
-logger = logging.getLogger(__name__)
-
-
-class RaftAligner(BaseAligner):
- """
- Initializes the `RaftAligner` class with given arguments.
-
- Parameters
- ------------
- model_args : ModelArguments object.
- Contains the arguments required to load the model.
-
- data_args : DatasetArguments object.
- Contains the arguments required to load the dataset.
-
- raft_aligner_args : RaftAlignerArguments object.
- Contains the arguments required to perform alignment.
-
- args : Optional.
- Positional arguments.
-
- kwargs : Optional.
- Keyword arguments.
-
- """
- def __init__(self, model_args, data_args, aligner_args, *args, **kwargs):
- self.model_args = model_args
- self.data_args = data_args
- self.aligner_args = aligner_args
-
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- handlers=[logging.StreamHandler(sys.stdout)],
- )
-
- logger.setLevel(logging.INFO)
-
- output_reward_path = aligner_args.output_reward_path
- if output_reward_path is not None:
- os.makedirs(os.path.dirname(output_reward_path), exist_ok=True)
-            # Delete the file if it already exists
- try:
- os.remove(output_reward_path)
- except OSError:
- pass
-
-
- def _initialize_trainer(self, model, tokenizer, training_args):
- """
-        This function takes the model and tokenizer as input and initializes the trainer.
- """
- trainer = RaftTrainer(
- model=model,
- args=training_args,
- train_dataset=Dataset.from_dict({"text": [ " " ] }),
- eval_dataset=Dataset.from_dict({}),
- tokenizer=tokenizer,
- data_collator=default_data_collator,
- compute_metrics=None,
- preprocess_logits_for_metrics=None,
- )
- return trainer
-
-
- def _load_dataset(
- self,
- selected_dataset,
- model,
- tokenizer,
- model_args,
- data_args,
- training_args,
- ):
- '''
- This function prepares the dataset for every iteration.
- '''
- raw_datasets = selected_dataset
-
- if training_args.do_train:
- column_names = list(raw_datasets["train"].features)
- else:
- column_names = list(raw_datasets["validation"].features)
- text_column_name = "text" if "text" in column_names else column_names[0]
-
-        # Since this will be pickled (to avoid a _LazyModule error in Hasher), force logger loading before tokenize_function
- tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")
-
- def tokenize_function(examples):
- with CaptureLogger(tok_logger) as cl:
- output = tokenizer(examples[text_column_name])
- # clm input could be much much longer than block_size
- if "Token indices sequence length is longer than the" in cl.out:
- tok_logger.warning(
- "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits"
- " before being passed to the model."
- )
- return output
-
- with training_args.main_process_first(desc="dataset map tokenization"):
- if not data_args.streaming:
- tokenized_datasets = raw_datasets.map(
- tokenize_function,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not data_args.overwrite_cache,
- desc="Running tokenizer on dataset",
- )
- else:
- tokenized_datasets = raw_datasets.map(
- tokenize_function,
- batched=True,
- remove_columns=column_names,
- )
-
- if data_args.block_size is None:
- block_size = tokenizer.model_max_length
- if block_size > 1024:
- logger.warning(
- "The chosen tokenizer supports a `model_max_length` that is longer than the default `block_size` value"
- " of 1024. If you would like to use a longer `block_size` up to `tokenizer.model_max_length` you can"
- " override this default with `--block_size xxx`."
- )
- block_size = 512
- else:
- if data_args.block_size > tokenizer.model_max_length:
- logger.warning(
-                    f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model "
- f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
- )
- block_size = min(data_args.block_size, tokenizer.model_max_length)
-
- # Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
- def group_texts(examples):
- # Concatenate all texts.
- concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
- total_length = len(concatenated_examples[list(examples.keys())[0]])
-            # We drop the small remainder; we could add padding instead of dropping if the model supported it.
-            # You can customize this part to your needs.
- if total_length >= block_size:
- total_length = (total_length // block_size) * block_size
- # Split by chunks of max_len.
- result = {
- k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
- for k, t in concatenated_examples.items()
- }
- result["labels"] = result["input_ids"].copy()
- return result
-
- # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder
- # for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower
- # to preprocess.
- #
- # To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
- # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
-
- with training_args.main_process_first(desc="grouping texts together"):
- group_batch_size = 1000
- if data_args.disable_group_texts:
- group_batch_size = 1
- if not data_args.streaming:
- lm_datasets = tokenized_datasets.map(
- group_texts,
- batched=True,
- batch_size=group_batch_size,
- num_proc=data_args.preprocessing_num_workers,
- load_from_cache_file=not data_args.overwrite_cache,
- desc=f"Grouping texts in chunks of {block_size}",
- )
- else:
- lm_datasets = tokenized_datasets.map(
- group_texts,
- batched=True,
- batch_size=group_batch_size,
- )
-
- if training_args.do_train:
- if "train" not in tokenized_datasets:
- raise ValueError("--do_train requires a train dataset")
- train_dataset = lm_datasets["train"]
- if data_args.max_train_samples is not None:
- max_train_samples = min(len(train_dataset), data_args.max_train_samples)
- train_dataset = train_dataset.select(range(max_train_samples))
-
- return train_dataset
-
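
The `group_texts` helper above concatenates all tokenized texts, drops the remainder, and cuts the result into `block_size` chunks whose labels mirror the inputs. A standalone toy version, with a block size of 4 assumed for illustration:

```python
# Standalone toy version of the group_texts chunking logic.
from itertools import chain

def group_texts_demo(examples, block_size=4):
    concatenated = {k: list(chain(*examples[k])) for k in examples}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {k: [t[i:i + block_size] for i in range(0, total_length, block_size)]
              for k, t in concatenated.items()}
    result["labels"] = result["input_ids"].copy()
    return result

demo = group_texts_demo({"input_ids": [[1, 2, 3], [4, 5, 6, 7, 8, 9]]})
assert demo["input_ids"] == [[1, 2, 3, 4], [5, 6, 7, 8]]   # remainder (9) is dropped
assert demo["labels"] == demo["input_ids"]
```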
-
- def _load_input_dataset(self, dataset, tokenizer):
- """
- Load input dataset (i.e. prompt/question dataset) for training.
-
- Args:
- dataset: A Dataset object.
- The dataset to be loaded.
-
- Returns:
- dataloader (`torch.utils.data.DataLoader`):
- The dataloader for the dataset.
- """
- ds = dataset.get_backend_dataset()
-
- def tokenize(sample):
- input_size = 16
- review_encode = tokenizer.encode(sample["text"])
- sample["input_ids"] = review_encode[:input_size]
- sample['input'] = tokenizer.decode(sample["input_ids"])
- return sample
-
- ds = ds.map(tokenize, batched=False)
- ds.set_format(type='torch')
-
- return ds
-
-
- def _get_batch_dataset_top(
- self,
- model,
- batch_input,
- alpha=0.2,
- iter_id=0,
- local_rank=0,
- output_min_length=16,
- output_max_length=48,
- infer_batch_size=8,
- generation_kwargs={},
- tokenizer=None,
- training_args=None,
- reward_model=None,
- output_reward_path=None,
- ):
- """
- :param batch_input: input prompts
- """
- # we will get the batch dataset via Dataset.from_dict
- start_time = time.time()
- output_data = []
- query_tensors = batch_input['input_ids']
- querys = batch_input['input']
- data_size = len(querys)
- cnt = 0
- reward_eva = []
- reward_train = []
- out_put_dataset_eval = {}
- data_eval = []
- input_texts = []
- responses = []
- for i, query_tensor in enumerate(query_tensors):
- query = querys[i]
- input_texts.append(query)
- if (i + 1) % infer_batch_size == 0:
- gen_len = np.random.randint(output_min_length, output_max_length)
- generation_kwargs["max_new_tokens"] = gen_len
- inputs = tokenizer(input_texts, return_tensors="pt", padding=True).to(training_args.device)
- with torch.no_grad():
- outputs = model.generate(**inputs, **generation_kwargs)
- generated_texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
- generated_texts = [
- generated_text.replace(input_texts[i], "") for i, generated_text in enumerate(generated_texts)
- ]
- texts_for_rewards = [q + r for q, r in zip(input_texts, generated_texts)]
-
- texts_for_reward_dataset = LMFlowDataset.create_from_dict({
- "type": "text_only",
- "instances": [
- { "text": text } for text in texts_for_rewards
- ],
- })
-
- reward_dataset = reward_model.inference(texts_for_reward_dataset)
- rewards = [ sample["value"] for sample in reward_dataset.to_dict()["instances"] ]
-
- reward_eva.extend(rewards)
- responses.extend(generated_texts)
- input_texts = []
-
- data = []
- idx = np.argsort(reward_eva)[::-1][:int(data_size * alpha)]
- for j in range(len(reward_eva)):
- sample = {}
- sample["input"] = querys[j]
- sample["output"] = [responses[j]]
- data.append(sample)
- output_data = [data[j] for j in idx]
-        logger.info(f"collected {len(output_data)} samples")
-
- world_size = int(os.getenv("WORLD_SIZE", "1"))
-        all_process_list = [{}] * world_size
- dist.all_gather_object(all_process_list, output_data)
-
- gathered_data = []
- for i in range(world_size):
- gathered_data.extend(all_process_list[i])
-
- reward_train = [reward_eva[j] for j in idx]
-
- reward_to_send = [np.mean(reward_eva), np.mean(reward_train)]
- all_process_rewards = [{}] * world_size
- dist.all_gather_object(all_process_rewards, reward_to_send)
- logger.info(all_process_rewards)
-
- if training_args.local_rank == 0 and output_reward_path is not None:
- with open(output_reward_path, mode='a') as fout:
-                fout.write('mean reward: ' + str(np.mean([all_process_rewards[i][0] for i in range(world_size)])) + '; mean reward in training set: ' + str([all_process_rewards[i][1] for i in range(world_size)]))
- fout.write("\n")
-
- prompt_structure = "{definition}{input}{output}"
- output_dataset = {
- "text": [ prompt_structure.format(
- definition="", input=sample["input"], output=sample["output"][0]
- ) for sample in gathered_data
- ]
- }
-
- return DatasetDict({ "train": Dataset.from_dict(output_dataset) })
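
The core of the data selection above is RAFT's "keep the top `alpha` fraction of generated samples by reward". A minimal standalone illustration of that selection step:

```python
# Standalone illustration of the top-alpha selection used in _get_batch_dataset_top.
import numpy as np

rewards = np.array([0.1, 0.9, 0.4, 0.7])
samples = ["s0", "s1", "s2", "s3"]
alpha = 0.5                                    # keep the best 50%

idx = np.argsort(rewards)[::-1][: int(len(rewards) * alpha)]
selected = [samples[j] for j in idx]           # -> ["s1", "s3"]
mean_train_reward = rewards[idx].mean()        # mean reward of the kept samples
```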
-
-
- def align(self, model, dataset, reward_model):
- """
- Perform alignment for a model
-
- Parameters
- ------------
- model : BaseModel object.
- dataset: Dataset object.
-            Input dataset for the model to generate outputs. The input and output
-            will then be fed into the reward model to get the reward for
-            alignment.
- reward_model: RegressionModel object.
- """
- tokenizer = model.get_tokenizer()
- tokenizer.pad_token = tokenizer.eos_token
- tokenizer.pad_token_id = tokenizer.eos_token_id
- tokenizer.padding_side = "left"
-
- dataset = self._load_input_dataset(dataset, tokenizer)
- set_caching_enabled(False)
-
- wrapped_model = model
- model = model.get_backend_model()
-
- generation_kwargs = {
- "min_length": -1,
- "top_k": 0.0,
- "top_p": 1.0,
- "do_sample": True,
- "pad_token_id": tokenizer.eos_token_id,
-            "temperature": 0.7
- }
-
- aligner_args = self.aligner_args
- training_args = aligner_args
- model_args = self.model_args
- data_args = self.data_args
-
- set_seed(42 + training_args.local_rank)
-
- ITERATION = aligner_args.num_raft_iteration
- M = aligner_args.raft_batch_size
-
- alpha = aligner_args.top_reward_percentage
- data_size = len(dataset['input'])
- reward_seq = []
- lr = training_args.learning_rate
-
- raft_trainer = self._initialize_trainer(model, tokenizer, training_args)
- raft_trainer.train(resume_from_checkpoint=False, is_first_time=True)
-
- ##############
- for iteration in range(ITERATION):
- set_seed(88 + training_args.local_rank + 4 * (iteration+1))
-
- batch_input = dataset.select(np.random.randint(low=0, high=data_size, size=M))
-
- selected_dataset = self._get_batch_dataset_top(
- raft_trainer.tmp_model,
- batch_input,
- alpha,
- iteration,
- training_args.local_rank,
- output_min_length=aligner_args.output_min_length,
- output_max_length=aligner_args.output_max_length,
- infer_batch_size=aligner_args.inference_batch_size_per_device,
- generation_kwargs=generation_kwargs,
- tokenizer=tokenizer,
- training_args=training_args,
- reward_model=reward_model,
- output_reward_path=aligner_args.output_reward_path,
- )
- raft_trainer.train_dataset = self._load_dataset(
- selected_dataset,
- raft_trainer.tmp_model,
- tokenizer,
- model_args,
- data_args,
- training_args,
- )
-
- logger.info(f"iter {iteration}")
- start_time = time.time()
- train_result = raft_trainer.train(resume_from_checkpoint=False)
- end_time = time.time()
- logger.info("It takes %.2f s to train one stage", end_time - start_time)
-
- self._get_batch_dataset_top(
- raft_trainer.tmp_model,
- batch_input, alpha,
- iteration,
- training_args.local_rank,
- output_min_length=aligner_args.output_min_length,
- output_max_length=aligner_args.output_max_length,
- infer_batch_size=aligner_args.inference_batch_size_per_device,
- generation_kwargs=generation_kwargs,
- tokenizer=tokenizer,
- training_args=training_args,
- reward_model=reward_model,
- output_reward_path=aligner_args.output_reward_path,
- )
-
- if aligner_args.output_dir is not None:
- wrapped_model.save(aligner_args.output_dir)
-
- return wrapped_model
diff --git a/spaces/PKaushik/humandetect/yolov6/models/yolo.py b/spaces/PKaushik/humandetect/yolov6/models/yolo.py
deleted file mode 100644
index 5d3d86be4fa6e9ceab089bbf1c655f5bf86163bf..0000000000000000000000000000000000000000
--- a/spaces/PKaushik/humandetect/yolov6/models/yolo.py
+++ /dev/null
@@ -1,83 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import math
-import torch.nn as nn
-from yolov6.layers.common import *
-from yolov6.utils.torch_utils import initialize_weights
-from yolov6.models.efficientrep import EfficientRep
-from yolov6.models.reppan import RepPANNeck
-from yolov6.models.effidehead import Detect, build_effidehead_layer
-
-
-class Model(nn.Module):
- '''YOLOv6 model with backbone, neck and head.
- The default parts are EfficientRep Backbone, Rep-PAN and
- Efficient Decoupled Head.
- '''
- def __init__(self, config, channels=3, num_classes=None, anchors=None): # model, input channels, number of classes
- super().__init__()
- # Build network
- num_layers = config.model.head.num_layers
- self.backbone, self.neck, self.detect = build_network(config, channels, num_classes, anchors, num_layers)
-
- # Init Detect head
- begin_indices = config.model.head.begin_indices
- out_indices_head = config.model.head.out_indices
- self.stride = self.detect.stride
- self.detect.i = begin_indices
- self.detect.f = out_indices_head
- self.detect.initialize_biases()
-
- # Init weights
- initialize_weights(self)
-
- def forward(self, x):
- x = self.backbone(x)
- x = self.neck(x)
- x = self.detect(x)
- return x
-
- def _apply(self, fn):
- self = super()._apply(fn)
- self.detect.stride = fn(self.detect.stride)
- self.detect.grid = list(map(fn, self.detect.grid))
- return self
-
-
-def make_divisible(x, divisor):
-    # Revise the value x upward so that it is evenly divisible by the divisor.
- return math.ceil(x / divisor) * divisor
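
For example, this is how channel widths scaled by `width_multiple` in `build_network` get rounded up to a hardware-friendly multiple of 8 (assuming the `make_divisible` definition above):

```python
# Illustrative values only.
assert make_divisible(0.25 * 64, 8) == 16   # 16 is already a multiple of 8
assert make_divisible(0.33 * 64, 8) == 24   # 21.12 rounds up to 24
```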
-
-
-def build_network(config, channels, num_classes, anchors, num_layers):
- depth_mul = config.model.depth_multiple
- width_mul = config.model.width_multiple
- num_repeat_backbone = config.model.backbone.num_repeats
- channels_list_backbone = config.model.backbone.out_channels
- num_repeat_neck = config.model.neck.num_repeats
- channels_list_neck = config.model.neck.out_channels
- num_anchors = config.model.head.anchors
- num_repeat = [(max(round(i * depth_mul), 1) if i > 1 else i) for i in (num_repeat_backbone + num_repeat_neck)]
- channels_list = [make_divisible(i * width_mul, 8) for i in (channels_list_backbone + channels_list_neck)]
-
- backbone = EfficientRep(
- in_channels=channels,
- channels_list=channels_list,
- num_repeats=num_repeat
- )
-
- neck = RepPANNeck(
- channels_list=channels_list,
- num_repeats=num_repeat
- )
-
- head_layers = build_effidehead_layer(channels_list, num_anchors, num_classes)
-
- head = Detect(num_classes, anchors, num_layers, head_layers=head_layers)
-
- return backbone, neck, head
-
-
-def build_model(cfg, num_classes, device):
- model = Model(cfg, channels=3, num_classes=num_classes, anchors=cfg.model.head.anchors).to(device)
- return model
diff --git a/spaces/Plurigrid/LifeSim/src/app/page.tsx b/spaces/Plurigrid/LifeSim/src/app/page.tsx
deleted file mode 100644
index 8e85e9927e68c7fb60213bc02e79cc761721aaed..0000000000000000000000000000000000000000
--- a/spaces/Plurigrid/LifeSim/src/app/page.tsx
+++ /dev/null
@@ -1,18 +0,0 @@
-"use server"
-
-import Head from "next/head"
-
-import Main from "./main"
-
-export default async function IndexPage({ params: { ownerId } }: { params: { ownerId: string }}) {
- return (
-    <div>
-      <Head />
-      <Main />
-    </div>
- )
-}
\ No newline at end of file
diff --git a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/training_arguments.py b/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/training_arguments.py
deleted file mode 100644
index 81fbed0b4c0a3a8c1d56a16bc161978638f9afbd..0000000000000000000000000000000000000000
--- a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/training_arguments.py
+++ /dev/null
@@ -1,343 +0,0 @@
-from typing import Optional
-from dataclasses import dataclass, field
-from .constants import task_to_keys
-from transformers import TrainingArguments
-
-
-@dataclass
-class CustomTrainingArguments(TrainingArguments):
- output_learning_rate: Optional[float] = field(
- default=5e-5,
-        metadata={"help": "The learning rate for the output encoder of the model."}
- )
- place_model_on_device: Optional[bool] = field(
- default=True,
- metadata={"help" : "Useful if doing hyperparam search"}
- )
- scenario: Optional[str] = field(
- default="seen", # Options: seen, unseen_labels
- metadata={"help": "The scenario to use for training."}
- )
-
- one_hour_job : Optional[bool] = field(
- default = False,
-        metadata = {"help" : "In case it is a sequence of jobs, we will do advanced management of checkpoints."}
- )
-
-
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
-
- Using `HfArgumentParser` we can turn this class
- into argparse arguments to be able to specify them on
- the command line.
- """
-
- all_labels : Optional[str] = field(
- default=None,
- metadata={"help": "The file containing all the labels. Mandatory if doing unseen labels"}
- )
-
- test_labels : Optional[str] = field(
- default=None,
- metadata={"help": "The file containing all the test labels."}
- )
-
- max_descs_per_label : Optional[int] = field(
- default = 999999,
- metadata={"help": "Restrict number of descriptions to be included per label"}
- )
-
- task_name: Optional[str] = field(
- default=None,
- metadata={"help": "The name of the task to train on: " + ", ".join(task_to_keys.keys())},
- )
- dataset_name: Optional[str] = field(
- default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
- )
- dataset_config_name: Optional[str] = field(
- default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
- )
- max_seq_length: int = field(
- default=128,
- metadata={
- "help": (
- "The maximum total input sequence length after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- )
- },
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."}
- )
- pad_to_max_length: bool = field(
- default=True,
- metadata={
- "help": (
- "Whether to pad all samples to `max_seq_length`. "
- "If False, will pad the samples dynamically when batching to the maximum length in the batch."
- )
- },
- )
- load_from_local: bool = field(
- default=False,
- metadata={"help": "Whether to load the dataset from local or not."},
- )
- max_train_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of training examples to this "
- "value if set."
- )
- },
- )
- max_eval_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
- "value if set."
- )
- },
- )
- max_predict_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of prediction examples to this "
- "value if set."
- )
- },
- )
- train_file: Optional[str] = field(
- default=None, metadata={"help": "A csv or a json file containing the training data."}
- )
- validation_file: Optional[str] = field(
- default=None, metadata={"help": "A csv or a json file containing the validation data."}
- )
- test_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the test data."})
- label_max_seq_length: int = field(default=32)
- contrastive_learning_samples : Optional[int] = field(
- default=-1,
- metadata={"help": "Number of samples to use for contrastive learning."},
- )
- cl_min_positive_descs : Optional[int] = field(
- default=20,
- metadata={"help": "Minimum number of positive descriptions to use for contrastive learning."},
- )
- descriptions_file : Optional[str] = field(
- # default='datasets/EUR-Lex/all_descriptions.json',
- default='datasets/EUR-Lex/eurlex-4k-class_descriptions_v1.json',
- metadata={"help": "A json file containing the descriptions."},
- )
- test_descriptions_file : Optional[str] = field(
- default='', # If empty, automatically make equal to descriptions_file
- metadata={"help": "A json file containing the test descriptions."},
- )
-
-
- cluster_path: Optional[str] = field(
- default='datasets/EUR-Lex/label_group_lightxml_0.npy',
- metadata={"help": "Path to the cluster file."},
- )
- num_clusters: int = field(
- default=64,
- metadata={"help": "Number of clusters in the cluster file."},
- )
- hyper_search: bool = field(
- default=False,
- metadata={"help": "Perform Hp Search"},
- )
-
- bm_short_file: str = field(
- default = '',
- metadata = {"help": "BM Shortlist File to use for contrastive sampling"}
- )
-
- large_dset: bool = field(
- default = False,
-        metadata = {"help" : "Dataset is modified in a way such that the whole train set is not loaded"}
- )
-
- tokenized_descs_file: bool = field(
- default = False,
- metadata = {"help" : "Load the precomputed tokenized descriptions to speed up the process"}
- )
-
- train_tfidf_short: str = field(
- default = '',
- metadata = {"help" : "Shortlists based on the tf-idf values"}
- )
-
- test_tfidf_short: str = field(
- default = '',
- metadata = {"help" : "Shortlists based on the tf-idf values"}
- )
-
- ignore_pos_labels_file : str = field(
- default = '',
-        metadata = {"help" : "Useful in the few-shot setting"}
- )
-
- tok_format: int = field(
- default = -1,
- metadata = {"help" : "Tokenized Format for large datasets"}
- )
-
- coil_cluster_mapping_path : str = field(
- default = '',
- metadata = {"help" : "Clustering for coil matching based on BERT"}
- )
-
- random_sample_seed: int = field(
- default=-1,
- metadata={"help": "Random seed for eval sampling"},
- )
-
- def __post_init__(self):
- if self.task_name is not None:
- self.task_name = self.task_name.lower()
- if self.task_name not in task_to_keys.keys():
- raise ValueError("Unknown task, you should pick one in " + ",".join(task_to_keys.keys()))
- elif self.dataset_name is not None:
- pass
- elif self.train_file is None or self.validation_file is None:
- raise ValueError("Need either a GLUE task, a training/validation file or a dataset name.")
- else:
- train_extension = self.train_file.split(".")[-1]
- assert train_extension in ["csv", "json"], "`train_file` should be a csv or a json file."
- validation_extension = self.validation_file.split(".")[-1]
- assert (
- validation_extension == train_extension
- ), "`validation_file` should have the same extension (csv or json) as `train_file`."
-
-
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
- """
-
- model_name_or_path: str = field(
- metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- tokenizer_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
- )
- use_fast_tokenizer: bool = field(
- default=True,
- metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
- )
- model_revision: str = field(
- default="main",
- metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
- )
- use_auth_token: bool = field(
- default=False,
- metadata={
- "help": (
- "Will use the token generated when running `transformers-cli login` (necessary to use this script "
- "with private models)."
- )
- },
- )
- ignore_mismatched_sizes: bool = field(
- default=False,
- metadata={"help": "Will enable to load a pretrained model whose head dimensions are different."},
- )
- negative_sampling: Optional[str] = field(
- default="none",
- metadata={"help": "Whether to use negative sampling or not. Can be either `lightxml` or `none`."},
- )
- semsup : Optional[bool] = field(
- default=False,
- metadata={"help": "Whether to use semantic supervision or not."},
- )
- label_model_name_or_path: Optional[str] = field(
- default='bert-base-uncased',
- metadata={"help": "The name or path of the label model to use."},
- )
- encoder_model_type: Optional[str] = field(
- default = 'bert',
- metadata={"help": "Type of encoder to use. Options: bert, roberta, xlnet"},
- )
- use_custom_optimizer: Optional[str] = field(
- default=None,
- metadata={"help": "Custom optimizer to use. Options: adamw"},
- )
- arch_type: Optional[int] = field(
- default=2,
- metadata={"help": '''Model architecture to use. Options: 1,2,3.\n1 -> LightXML Based\n2 -> No hidden layer\n3 -> Smaller Embedding Size'''},
- )
- devise: Optional[bool] = field(
- default = False,
-        metadata = {"help" : 'Use DeViSE baseline'}
- )
- add_label_name : Optional[bool] = field(
- default = False,
- metadata = {"help" : "Adds label name in beginning of all descriptions"}
- )
-
- normalize_embeddings : Optional[bool] = field(
- default = False,
- metadata = {"help" : "Normalize Embeddings of input and output encoders before inner product."}
- )
-
- tie_weights : Optional[bool] = field(
- default = False,
- metadata = {"help" : "Tie the Input & Label Transformer Weights(First 11 Layers) ."}
- )
-
- coil : Optional[bool] = field(
- default = False,
- metadata = {"help" : "Use COILBert Variant"}
- )
-
- colbert: Optional[bool] = field(
- default = False,
- metadata = {"help" : "Use COLBert, Note: coil must be set true"}
- )
-
- use_precomputed_embeddings : Optional[str] = field(
- default = '',
-        metadata = {"help" : "Precomputed embeddings up to level 9 of BERT for the descriptions"}
- )
-
- token_dim : Optional[int] = field(
- default = 16,
- metadata = {"help": "Token Dimension for COILBert"}
- )
-
- pretrained_model_path : Optional[str] = field(
- default = '',
- metadata = {"help" : "Use Pretrained Model for Finetuning (few shot setting)"}
- )
- pretrained_label_model_path : Optional[str] = field(
- default = '',
- metadata = {"help" : "Use Pretrained Label Model for Finetuning (few shot setting)"}
- )
-
-
- num_frozen_layers : Optional[int] = field(
- default = 0,
- metadata = {
- "help" : "Freeze Input Encoder Layer"
- }
- )
-
- label_frozen_layers : Optional[int] = field(
- default = 0,
- metadata = {
-            "help" : "Freeze Label Encoder Layers"
- }
- )
\ No newline at end of file
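
A hedged sketch of how dataclass argument groups like these are typically consumed; the repository's actual entry point is not shown here, so the import path and flags are assumptions:

```python
# Hedged sketch; the import path below is an assumption about the module layout.
from transformers import HfArgumentParser

from training_arguments import (  # assumed module path
    CustomTrainingArguments, DataTrainingArguments, ModelArguments)

parser = HfArgumentParser((ModelArguments, DataTrainingArguments, CustomTrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
print(model_args.model_name_or_path, data_args.max_seq_length, training_args.output_learning_rate)
```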
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/compression.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/compression.py
deleted file mode 100644
index b757503472a3bfbf90e1636999e64913848a7474..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/compression.py
+++ /dev/null
@@ -1,328 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import multiprocessing
-from pathlib import Path
-import typing as tp
-
-import flashy
-import omegaconf
-import torch
-from torch import nn
-
-from . import base, builders
-from .. import models, quantization
-from ..utils import checkpoint
-from ..utils.samples.manager import SampleManager
-from ..utils.utils import get_pool_executor
-
-
-logger = logging.getLogger(__name__)
-
-
-class CompressionSolver(base.StandardSolver):
- """Solver for compression task.
-
- The compression task combines a set of perceptual and objective losses
- to train an EncodecModel (composed of an encoder-decoder and a quantizer)
- to perform high fidelity audio reconstruction.
- """
- def __init__(self, cfg: omegaconf.DictConfig):
- super().__init__(cfg)
- self.rng: torch.Generator # set at each epoch
- self.adv_losses = builders.get_adversarial_losses(self.cfg)
- self.aux_losses = nn.ModuleDict()
- self.info_losses = nn.ModuleDict()
- assert not cfg.fsdp.use, "FSDP not supported by CompressionSolver."
- loss_weights = dict()
- for loss_name, weight in self.cfg.losses.items():
- if loss_name in ['adv', 'feat']:
- for adv_name, _ in self.adv_losses.items():
- loss_weights[f'{loss_name}_{adv_name}'] = weight
- elif weight > 0:
- self.aux_losses[loss_name] = builders.get_loss(loss_name, self.cfg)
- loss_weights[loss_name] = weight
- else:
- self.info_losses[loss_name] = builders.get_loss(loss_name, self.cfg)
- self.balancer = builders.get_balancer(loss_weights, self.cfg.balancer)
- self.register_stateful('adv_losses')
-
- @property
- def best_metric_name(self) -> tp.Optional[str]:
- # best model is the last for the compression model
- return None
-
- def build_model(self):
- """Instantiate model and optimizer."""
- # Model and optimizer
- self.model = models.builders.get_compression_model(self.cfg).to(self.device)
- self.optimizer = builders.get_optimizer(self.model.parameters(), self.cfg.optim)
- self.register_stateful('model', 'optimizer')
- self.register_best_state('model')
- self.register_ema('model')
-
- def build_dataloaders(self):
- """Instantiate audio dataloaders for each stage."""
- self.dataloaders = builders.get_audio_datasets(self.cfg)
-
- def show(self):
- """Show the compression model and employed adversarial loss."""
- self.logger.info(f"Compression model with {self.model.quantizer.total_codebooks} codebooks:")
- self.log_model_summary(self.model)
- self.logger.info("Adversarial loss:")
- self.log_model_summary(self.adv_losses)
- self.logger.info("Auxiliary losses:")
- self.logger.info(self.aux_losses)
- self.logger.info("Info losses:")
- self.logger.info(self.info_losses)
-
- def run_step(self, idx: int, batch: torch.Tensor, metrics: dict):
- """Perform one training or valid step on a given batch."""
- x = batch.to(self.device)
- y = x.clone()
-
- qres = self.model(x)
- assert isinstance(qres, quantization.QuantizedResult)
- y_pred = qres.x
- # Log bandwidth in kb/s
- metrics['bandwidth'] = qres.bandwidth.mean()
-
- if self.is_training:
- d_losses: dict = {}
- if len(self.adv_losses) > 0 and torch.rand(1, generator=self.rng).item() <= 1 / self.cfg.adversarial.every:
- for adv_name, adversary in self.adv_losses.items():
- disc_loss = adversary.train_adv(y_pred, y)
- d_losses[f'd_{adv_name}'] = disc_loss
- metrics['d_loss'] = torch.sum(torch.stack(list(d_losses.values())))
- metrics.update(d_losses)
-
- balanced_losses: dict = {}
- other_losses: dict = {}
-
- # penalty from quantization
- if qres.penalty is not None and qres.penalty.requires_grad:
- other_losses['penalty'] = qres.penalty # penalty term from the quantizer
-
- # adversarial losses
- for adv_name, adversary in self.adv_losses.items():
- adv_loss, feat_loss = adversary(y_pred, y)
- balanced_losses[f'adv_{adv_name}'] = adv_loss
- balanced_losses[f'feat_{adv_name}'] = feat_loss
-
- # auxiliary losses
- for loss_name, criterion in self.aux_losses.items():
- loss = criterion(y_pred, y)
- balanced_losses[loss_name] = loss
-
- # weighted losses
- metrics.update(balanced_losses)
- metrics.update(other_losses)
- metrics.update(qres.metrics)
-
- if self.is_training:
- # backprop losses that are not handled by balancer
- other_loss = torch.tensor(0., device=self.device)
- if 'penalty' in other_losses:
- other_loss += other_losses['penalty']
- if other_loss.requires_grad:
- other_loss.backward(retain_graph=True)
- ratio1 = sum(p.grad.data.norm(p=2).pow(2)
- for p in self.model.parameters() if p.grad is not None)
- assert isinstance(ratio1, torch.Tensor)
- metrics['ratio1'] = ratio1.sqrt()
-
- # balancer losses backward, returns effective training loss
- # with effective weights at the current batch.
- metrics['g_loss'] = self.balancer.backward(balanced_losses, y_pred)
- # add metrics corresponding to weight ratios
- metrics.update(self.balancer.metrics)
- ratio2 = sum(p.grad.data.norm(p=2).pow(2)
- for p in self.model.parameters() if p.grad is not None)
- assert isinstance(ratio2, torch.Tensor)
- metrics['ratio2'] = ratio2.sqrt()
-
- # optim
- flashy.distrib.sync_model(self.model)
- if self.cfg.optim.max_norm:
- torch.nn.utils.clip_grad_norm_(
- self.model.parameters(), self.cfg.optim.max_norm
- )
- self.optimizer.step()
- self.optimizer.zero_grad()
-
- # informative losses only
- info_losses: dict = {}
- with torch.no_grad():
- for loss_name, criterion in self.info_losses.items():
- loss = criterion(y_pred, y)
- info_losses[loss_name] = loss
-
- metrics.update(info_losses)
-
- # aggregated GAN losses: this is useful to report adv and feat across different adversarial loss setups
- adv_losses = [loss for loss_name, loss in metrics.items() if loss_name.startswith('adv')]
- if len(adv_losses) > 0:
- metrics['adv'] = torch.sum(torch.stack(adv_losses))
- feat_losses = [loss for loss_name, loss in metrics.items() if loss_name.startswith('feat')]
- if len(feat_losses) > 0:
- metrics['feat'] = torch.sum(torch.stack(feat_losses))
-
- return metrics
-
- def run_epoch(self):
- # reset random seed at the beginning of the epoch
- self.rng = torch.Generator()
- self.rng.manual_seed(1234 + self.epoch)
- # run epoch
- super().run_epoch()
-
- def evaluate(self):
- """Evaluate stage. Runs audio reconstruction evaluation."""
- self.model.eval()
- evaluate_stage_name = str(self.current_stage)
-
- loader = self.dataloaders['evaluate']
- updates = len(loader)
- lp = self.log_progress(f'{evaluate_stage_name} inference', loader, total=updates, updates=self.log_updates)
- average = flashy.averager()
-
- pendings = []
- ctx = multiprocessing.get_context('spawn')
- with get_pool_executor(self.cfg.evaluate.num_workers, mp_context=ctx) as pool:
- for idx, batch in enumerate(lp):
- x = batch.to(self.device)
- with torch.no_grad():
- qres = self.model(x)
-
- y_pred = qres.x.cpu()
- y = batch.cpu() # should already be on CPU but just in case
- pendings.append(pool.submit(evaluate_audio_reconstruction, y_pred, y, self.cfg))
-
- metrics_lp = self.log_progress(f'{evaluate_stage_name} metrics', pendings, updates=self.log_updates)
- for pending in metrics_lp:
- metrics = pending.result()
- metrics = average(metrics)
-
- metrics = flashy.distrib.average_metrics(metrics, len(loader))
- return metrics
-
- def generate(self):
- """Generate stage."""
- self.model.eval()
- sample_manager = SampleManager(self.xp, map_reference_to_sample_id=True)
- generate_stage_name = str(self.current_stage)
-
- loader = self.dataloaders['generate']
- updates = len(loader)
- lp = self.log_progress(generate_stage_name, loader, total=updates, updates=self.log_updates)
-
- for batch in lp:
- reference, _ = batch
- reference = reference.to(self.device)
- with torch.no_grad():
- qres = self.model(reference)
- assert isinstance(qres, quantization.QuantizedResult)
-
- reference = reference.cpu()
- estimate = qres.x.cpu()
- sample_manager.add_samples(estimate, self.epoch, ground_truth_wavs=reference)
-
- flashy.distrib.barrier()
-
- def load_from_pretrained(self, name: str) -> dict:
- model = models.CompressionModel.get_pretrained(name)
- if isinstance(model, models.DAC):
- raise RuntimeError("Cannot fine tune a DAC model.")
- elif isinstance(model, models.HFEncodecCompressionModel):
- self.logger.warning('Trying to automatically convert a HuggingFace model '
- 'to AudioCraft, this might fail!')
- state = model.model.state_dict()
- new_state = {}
- for k, v in state.items():
- if k.startswith('decoder.layers') and '.conv.' in k and '.block.' not in k:
-                    # We need to determine if this is a convtr or a regular conv.
- layer = int(k.split('.')[2])
- if isinstance(model.model.decoder.layers[layer].conv, torch.nn.ConvTranspose1d):
-
- k = k.replace('.conv.', '.convtr.')
- k = k.replace('encoder.layers.', 'encoder.model.')
- k = k.replace('decoder.layers.', 'decoder.model.')
- k = k.replace('conv.', 'conv.conv.')
- k = k.replace('convtr.', 'convtr.convtr.')
- k = k.replace('quantizer.layers.', 'quantizer.vq.layers.')
- k = k.replace('.codebook.', '._codebook.')
- new_state[k] = v
- state = new_state
- elif isinstance(model, models.EncodecModel):
- state = model.state_dict()
- else:
- raise RuntimeError(f"Cannot fine tune model type {type(model)}.")
- return {
- 'best_state': {'model': state}
- }
-
- @staticmethod
- def model_from_checkpoint(checkpoint_path: tp.Union[Path, str],
- device: tp.Union[torch.device, str] = 'cpu') -> models.CompressionModel:
- """Instantiate a CompressionModel from a given checkpoint path or dora sig.
- This method is a convenient endpoint to load a CompressionModel to use in other solvers.
-
- Args:
- checkpoint_path (Path or str): Path to checkpoint or dora sig from where the checkpoint is resolved.
- This also supports pre-trained models by using a path of the form //pretrained/NAME.
-                See `models.CompressionModel.get_pretrained` for a list of supported pretrained models.
- device (torch.device or str): Device on which the model is loaded.
- """
- checkpoint_path = str(checkpoint_path)
- if checkpoint_path.startswith('//pretrained/'):
- name = checkpoint_path.split('/', 3)[-1]
- return models.CompressionModel.get_pretrained(name, device)
- logger = logging.getLogger(__name__)
- logger.info(f"Loading compression model from checkpoint: {checkpoint_path}")
- _checkpoint_path = checkpoint.resolve_checkpoint_path(checkpoint_path, use_fsdp=False)
- assert _checkpoint_path is not None, f"Could not resolve compression model checkpoint path: {checkpoint_path}"
- state = checkpoint.load_checkpoint(_checkpoint_path)
- assert state is not None and 'xp.cfg' in state, f"Could not load compression model from ckpt: {checkpoint_path}"
- cfg = state['xp.cfg']
- cfg.device = device
- compression_model = models.builders.get_compression_model(cfg).to(device)
- assert compression_model.sample_rate == cfg.sample_rate, "Compression model sample rate should match"
-
- assert 'best_state' in state and state['best_state'] != {}
- assert 'exported' not in state, "When loading an exported checkpoint, use the //pretrained/ prefix."
- compression_model.load_state_dict(state['best_state']['model'])
- compression_model.eval()
- logger.info("Compression model loaded!")
- return compression_model
-
- @staticmethod
- def wrapped_model_from_checkpoint(cfg: omegaconf.DictConfig,
- checkpoint_path: tp.Union[Path, str],
- device: tp.Union[torch.device, str] = 'cpu') -> models.CompressionModel:
- """Instantiate a wrapped CompressionModel from a given checkpoint path or dora sig.
-
- Args:
- cfg (omegaconf.DictConfig): Configuration to read from for wrapped mode.
- checkpoint_path (Path or str): Path to checkpoint or dora sig from where the checkpoint is resolved.
- device (torch.device or str): Device on which the model is loaded.
- """
- compression_model = CompressionSolver.model_from_checkpoint(checkpoint_path, device)
- compression_model = models.builders.get_wrapped_compression_model(compression_model, cfg)
- return compression_model
-
-
-def evaluate_audio_reconstruction(y_pred: torch.Tensor, y: torch.Tensor, cfg: omegaconf.DictConfig) -> dict:
- """Audio reconstruction evaluation method that can be conveniently pickled."""
- metrics = {}
- if cfg.evaluate.metrics.visqol:
- visqol = builders.get_visqol(cfg.metrics.visqol)
- metrics['visqol'] = visqol(y_pred, y, cfg.sample_rate)
- sisnr = builders.get_loss('sisnr', cfg)
- metrics['sisnr'] = sisnr(y_pred, y)
- return metrics
diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/training/submit_exp_020.sh b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/training/submit_exp_020.sh
deleted file mode 100644
index 2d852f115c5bc6eba4d736ca6108fd2bf5bb48b7..0000000000000000000000000000000000000000
--- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/training/submit_exp_020.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-#SBATCH -p gpu
-#SBATCH --mem=128g
-#SBATCH --gres=gpu:a100:1
-#SBATCH -c 12
-#SBATCH -t 7-00:00:00
-#SBATCH --output=exp_020.out
-
-source activate mlfold-test
-python ./training.py \
- --path_for_outputs "./exp_020" \
- --path_for_training_data "path_to/pdb_2021aug02" \
- --num_examples_per_epoch 1000 \
- --save_model_every_n_epochs 50
diff --git a/spaces/RTL/videomatch/config.py b/spaces/RTL/videomatch/config.py
deleted file mode 100644
index 1ef5cc4a02a3f6f17e992b6b17e9dd74567979c9..0000000000000000000000000000000000000000
--- a/spaces/RTL/videomatch/config.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import tempfile
-
-# Create a temporary directory where all the videos get stored
-VIDEO_DIRECTORY = tempfile.gettempdir()
-
-# Frames per second; only take 5 frames per second to speed up processing.
-# Quite standard for video processing applications.
-FPS = 5
-
-# Min and maximum distance between hashes when searching the videos for matches.
-MIN_DISTANCE = 4
-MAX_DISTANCE = 50
-
-# Rolling window size that is used when calculating the mode of the distances between the videos.
-ROLLING_WINDOW_SIZE = 10
\ No newline at end of file
diff --git a/spaces/Ramse/TTS_Hindi/transformer/Modules.py b/spaces/Ramse/TTS_Hindi/transformer/Modules.py
deleted file mode 100644
index f3855b3aa9c2a422de4bdca9b82804139e7a8401..0000000000000000000000000000000000000000
--- a/spaces/Ramse/TTS_Hindi/transformer/Modules.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import torch
-import torch.nn as nn
-import numpy as np
-
-
-class ScaledDotProductAttention(nn.Module):
- """ Scaled Dot-Product Attention """
-
- def __init__(self, temperature):
- super().__init__()
- self.temperature = temperature
- self.softmax = nn.Softmax(dim=2)
-
- def forward(self, q, k, v, mask=None):
-
- attn = torch.bmm(q, k.transpose(1, 2))
- attn = attn / self.temperature
-
- if mask is not None:
- attn = attn.masked_fill(mask, -np.inf)
-
- attn = self.softmax(attn)
- output = torch.bmm(attn, v)
-
- return output, attn
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/token.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/token.py
deleted file mode 100644
index e3e565ad591485563a93db89609213c00ca16ca3..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/token.py
+++ /dev/null
@@ -1,213 +0,0 @@
-"""
- pygments.token
- ~~~~~~~~~~~~~~
-
- Basic token types and the standard tokens.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-
-class _TokenType(tuple):
- parent = None
-
- def split(self):
- buf = []
- node = self
- while node is not None:
- buf.append(node)
- node = node.parent
- buf.reverse()
- return buf
-
- def __init__(self, *args):
- # no need to call super.__init__
- self.subtypes = set()
-
- def __contains__(self, val):
- return self is val or (
- type(val) is self.__class__ and
- val[:len(self)] == self
- )
-
- def __getattr__(self, val):
- if not val or not val[0].isupper():
- return tuple.__getattribute__(self, val)
- new = _TokenType(self + (val,))
- setattr(self, val, new)
- self.subtypes.add(new)
- new.parent = self
- return new
-
- def __repr__(self):
- return 'Token' + (self and '.' or '') + '.'.join(self)
-
- def __copy__(self):
- # These instances are supposed to be singletons
- return self
-
- def __deepcopy__(self, memo):
- # These instances are supposed to be singletons
- return self
-
-
-Token = _TokenType()
-
-# Special token types
-Text = Token.Text
-Whitespace = Text.Whitespace
-Escape = Token.Escape
-Error = Token.Error
-# Text that doesn't belong to this lexer (e.g. HTML in PHP)
-Other = Token.Other
-
-# Common token types for source code
-Keyword = Token.Keyword
-Name = Token.Name
-Literal = Token.Literal
-String = Literal.String
-Number = Literal.Number
-Punctuation = Token.Punctuation
-Operator = Token.Operator
-Comment = Token.Comment
-
-# Generic types for non-source code
-Generic = Token.Generic
-
-# String and some others are not direct children of Token.
-# alias them:
-Token.Token = Token
-Token.String = String
-Token.Number = Number
-
-
-def is_token_subtype(ttype, other):
- """
- Return True if ``ttype`` is a subtype of ``other``.
-
- Exists for backwards compatibility; use ``ttype in other`` now.
- """
- return ttype in other
-
-
-def string_to_tokentype(s):
- """
- Convert a string into a token type::
-
- >>> string_to_tokentype('String.Double')
- Token.Literal.String.Double
- >>> string_to_tokentype('Token.Literal.Number')
- Token.Literal.Number
- >>> string_to_tokentype('')
- Token
-
- Tokens that are already tokens are returned unchanged:
-
- >>> string_to_tokentype(String)
- Token.Literal.String
- """
- if isinstance(s, _TokenType):
- return s
- if not s:
- return Token
- node = Token
- for item in s.split('.'):
- node = getattr(node, item)
- return node
-
-
-# Map standard token types to short names, used in CSS class naming.
-# If you add a new item, please be sure to run this file to perform
-# a consistency check for duplicate values.
-STANDARD_TYPES = {
- Token: '',
-
- Text: '',
- Whitespace: 'w',
- Escape: 'esc',
- Error: 'err',
- Other: 'x',
-
- Keyword: 'k',
- Keyword.Constant: 'kc',
- Keyword.Declaration: 'kd',
- Keyword.Namespace: 'kn',
- Keyword.Pseudo: 'kp',
- Keyword.Reserved: 'kr',
- Keyword.Type: 'kt',
-
- Name: 'n',
- Name.Attribute: 'na',
- Name.Builtin: 'nb',
- Name.Builtin.Pseudo: 'bp',
- Name.Class: 'nc',
- Name.Constant: 'no',
- Name.Decorator: 'nd',
- Name.Entity: 'ni',
- Name.Exception: 'ne',
- Name.Function: 'nf',
- Name.Function.Magic: 'fm',
- Name.Property: 'py',
- Name.Label: 'nl',
- Name.Namespace: 'nn',
- Name.Other: 'nx',
- Name.Tag: 'nt',
- Name.Variable: 'nv',
- Name.Variable.Class: 'vc',
- Name.Variable.Global: 'vg',
- Name.Variable.Instance: 'vi',
- Name.Variable.Magic: 'vm',
-
- Literal: 'l',
- Literal.Date: 'ld',
-
- String: 's',
- String.Affix: 'sa',
- String.Backtick: 'sb',
- String.Char: 'sc',
- String.Delimiter: 'dl',
- String.Doc: 'sd',
- String.Double: 's2',
- String.Escape: 'se',
- String.Heredoc: 'sh',
- String.Interpol: 'si',
- String.Other: 'sx',
- String.Regex: 'sr',
- String.Single: 's1',
- String.Symbol: 'ss',
-
- Number: 'm',
- Number.Bin: 'mb',
- Number.Float: 'mf',
- Number.Hex: 'mh',
- Number.Integer: 'mi',
- Number.Integer.Long: 'il',
- Number.Oct: 'mo',
-
- Operator: 'o',
- Operator.Word: 'ow',
-
- Punctuation: 'p',
- Punctuation.Marker: 'pm',
-
- Comment: 'c',
- Comment.Hashbang: 'ch',
- Comment.Multiline: 'cm',
- Comment.Preproc: 'cp',
- Comment.PreprocFile: 'cpf',
- Comment.Single: 'c1',
- Comment.Special: 'cs',
-
- Generic: 'g',
- Generic.Deleted: 'gd',
- Generic.Emph: 'ge',
- Generic.Error: 'gr',
- Generic.Heading: 'gh',
- Generic.Inserted: 'gi',
- Generic.Output: 'go',
- Generic.Prompt: 'gp',
- Generic.Strong: 'gs',
- Generic.Subheading: 'gu',
- Generic.Traceback: 'gt',
-}
diff --git a/spaces/Realcat/image-matching-webui/hloc/utils/parsers.py b/spaces/Realcat/image-matching-webui/hloc/utils/parsers.py
deleted file mode 100644
index faaa8f2de952673abdb580abc5754efe1bfc5f40..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/hloc/utils/parsers.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from pathlib import Path
-import logging
-import numpy as np
-from collections import defaultdict
-import pycolmap
-
-logger = logging.getLogger(__name__)
-
-
-def parse_image_list(path, with_intrinsics=False):
- images = []
- with open(path, "r") as f:
- for line in f:
- line = line.strip("\n")
- if len(line) == 0 or line[0] == "#":
- continue
- name, *data = line.split()
- if with_intrinsics:
- model, width, height, *params = data
- params = np.array(params, float)
- cam = pycolmap.Camera(model, int(width), int(height), params)
- images.append((name, cam))
- else:
- images.append(name)
-
- assert len(images) > 0
- logger.info(f"Imported {len(images)} images from {path.name}")
- return images
-
-
-def parse_image_lists(paths, with_intrinsics=False):
- images = []
- files = list(Path(paths.parent).glob(paths.name))
- assert len(files) > 0
- for lfile in files:
- images += parse_image_list(lfile, with_intrinsics=with_intrinsics)
- return images
-
-
-def parse_retrieval(path):
- retrieval = defaultdict(list)
- with open(path, "r") as f:
- for p in f.read().rstrip("\n").split("\n"):
- if len(p) == 0:
- continue
- q, r = p.split()
- retrieval[q].append(r)
- return dict(retrieval)
-
-
-def names_to_pair(name0, name1, separator="/"):
- return separator.join((name0.replace("/", "-"), name1.replace("/", "-")))
-
-
-def names_to_pair_old(name0, name1):
- return names_to_pair(name0, name1, separator="_")
diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/utils/photaug.py b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/utils/photaug.py
deleted file mode 100644
index 29b9130871f8cb96d714228fe22d8c6f4b6526e3..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/utils/photaug.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import cv2
-import numpy as np
-import random
-
-
-def random_brightness_np(image, max_abs_change=50):
- delta = random.uniform(-max_abs_change, max_abs_change)
- return np.clip(image + delta, 0, 255)
-
-
-def random_contrast_np(image, strength_range=[0.3, 1.5]):
- delta = random.uniform(*strength_range)
- mean = image.mean()
- return np.clip((image - mean) * delta + mean, 0, 255)
-
-
-def motion_blur_np(img, max_kernel_size=3):
- # Either vertical, horizontal or diagonal blur
- mode = np.random.choice(["h", "v", "diag_down", "diag_up"])
- ksize = np.random.randint(0, (max_kernel_size + 1) // 2) * 2 + 1 # make sure the kernel size is odd
- center = int((ksize - 1) / 2)
- kernel = np.zeros((ksize, ksize))
- if mode == "h":
- kernel[center, :] = 1.0
- elif mode == "v":
- kernel[:, center] = 1.0
- elif mode == "diag_down":
- kernel = np.eye(ksize)
- elif mode == "diag_up":
- kernel = np.flip(np.eye(ksize), 0)
- var = ksize * ksize / 16.0
- grid = np.repeat(np.arange(ksize)[:, np.newaxis], ksize, axis=-1)
- gaussian = np.exp(
- -(np.square(grid - center) + np.square(grid.T - center)) / (2.0 * var)
- )
- kernel *= gaussian
- kernel /= np.sum(kernel)
- img = cv2.filter2D(img, -1, kernel)
- return np.clip(img, 0, 255)
-
-
-def additive_gaussian_noise(image, stddev_range=[5, 95]):
- stddev = random.uniform(*stddev_range)
- noise = np.random.normal(size=image.shape, scale=stddev)
- noisy_image = np.clip(image + noise, 0, 255)
- return noisy_image
-
-
-def photaug(img):
- img = random_brightness_np(img)
- img = random_contrast_np(img)
- # img = additive_gaussian_noise(img)
- img = motion_blur_np(img)
- return img
diff --git a/spaces/Ricecake123/RVC-demo/docs/training_tips_ja.md b/spaces/Ricecake123/RVC-demo/docs/training_tips_ja.md
deleted file mode 100644
index c5b06f2fdaa603a690c51ee2b79daecc4305fbd5..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/docs/training_tips_ja.md
+++ /dev/null
@@ -1,64 +0,0 @@
-Instructions and tips for RVC training
-===============================
-These tips explain how data training is done.
-
-# Training flow
-The explanation follows the steps in the training tab of the GUI.
-
-## step1
-Set the experiment name.
-
-You can also set here whether the model should take the pitch guide (pitch) into account. If it does not, the model is lighter, but it becomes unsuitable for singing.
-
-The data of each experiment is placed in `/logs/your-experiment-name/`.
-
-## step2a
-Loads and preprocesses the audio.
-
-### load audio
-If you specify a folder containing audio, the audio files in that folder are read automatically.
-For example, if you specify `C:Users\hoge\voices`, then `C:Users\hoge\voices\voice.mp3` is loaded, but `C:Users\hoge\voices\dir\voice.mp3` is not.
-
-Since ffmpeg is used internally to read the audio, any extension supported by ffmpeg is loaded automatically.
-After conversion to int16 with ffmpeg, the audio is converted to float32 and normalized between -1 and 1.
-
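-A minimal sketch of the int16-to-float32 normalization described above (illustrative only; the actual preprocessing code may differ in detail):
-
-```python
-import numpy as np
-
-pcm16 = np.array([0, 16384, -32768, 32767], dtype=np.int16)  # int16 samples as decoded by ffmpeg
-wav = pcm16.astype(np.float32) / 32768.0                     # float32, roughly in [-1, 1)
-print(wav)  # 0.0, 0.5, -1.0, ~0.99997
-```
-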
-### denoising
-The audio is smoothed with scipy's filtfilt.
-
-### Audio splitting
-The input audio is first split at sections where silence lasts longer than a certain period (max_sil_kept = 5 seconds?). After splitting on silence, the audio is cut every 4 seconds with a 0.3-second overlap. For the audio separated into chunks of at most 4 seconds, after volume normalization the wav files are saved to `/logs/your-experiment-name/0_gt_wavs`, and after conversion to a 16k sampling rate they are saved as wav files to `/logs/your-experiment-name/1_16k_wavs`.
-
-## step2b
-### Pitch extraction
-Pitch information is extracted from the wav files. The pitch information (= f0) is extracted with the methods built into parselmouth or pyworld and saved to `/logs/your-experiment-name/2a_f0`. It is then log-transformed, mapped to integers from 1 to 255, and saved to `/logs/your-experiment-name/2b-f0nsf`.
-
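-For intuition only, here is a rough sketch of this kind of f0 extraction with pyworld; the frame period and the 1-255 mapping are illustrative assumptions, not the exact code RVC runs:
-
-```python
-import numpy as np
-import pyworld as pw
-
-def extract_f0(wav: np.ndarray, sr: int = 16000):
-    x = wav.astype(np.float64)                # pyworld expects float64 audio
-    f0, t = pw.dio(x, sr, frame_period=10.0)  # coarse f0 estimate
-    f0 = pw.stonemask(x, f0, t, sr)           # refined f0
-    # Illustrative mapping of voiced frames to integers 1..255 (not RVC's exact formula)
-    coarse = np.zeros_like(f0, dtype=np.int64)
-    voiced = f0 > 0
-    if voiced.any():
-        logf0 = np.log(f0[voiced])
-        span = logf0.max() - logf0.min() + 1e-8
-        coarse[voiced] = np.rint(1 + 254 * (logf0 - logf0.min()) / span).astype(np.int64)
-    return f0, coarse
-```
-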
-### feature_print extraction
-HuBERT is used to convert the wav files to embeddings in advance. The wav files saved in `/logs/your-experiment-name/1_16k_wavs` are read, converted to 256-dimensional features with HuBERT, and saved in npy format to `/logs/your-experiment-name/3_feature256`.
-
-## step3
-Trains the model.
-### Glossary for beginners
-In deep learning, the dataset is split up and learning proceeds little by little. In one model update (step), batch_size samples are taken out and prediction and error correction are performed. Doing this once over the whole dataset counts as one epoch.
-
-Therefore, training time is: training time per step x (number of samples in the dataset ÷ batch size) x number of epochs. In general, a larger batch size makes training more stable and reduces (training time per step ÷ batch size), but it uses more GPU memory. GPU RAM can be checked with the nvidia-smi command and similar tools. Training finishes sooner if you make the batch size as large as the machine in your environment allows.
-
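-A quick back-of-the-envelope example of the formula above (all numbers are made up):
-
-```python
-seconds_per_step = 0.5  # measured time of one training step (hypothetical)
-num_samples = 2000      # clips in the dataset (hypothetical)
-batch_size = 8
-epochs = 200
-
-steps_per_epoch = num_samples / batch_size                        # 250 steps
-total_hours = seconds_per_step * steps_per_epoch * epochs / 3600
-print(f"~{total_hours:.1f} hours of training")                    # ~6.9 hours
-```
-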
-### Specifying the pretrained model
-RVC starts model training not from scratch but from pretrained weights, so it can be trained on a small dataset.
-
-By default,
-
-- if the pitch guide is taken into account, it loads `rvc-location/pretrained/f0G40k.pth` and `rvc-location/pretrained/f0D40k.pth`;
-- if the pitch guide is not taken into account, it loads `rvc-location/pretrained/G40k.pth` and `rvc-location/pretrained/D40k.pth`.
-
-During training, the model parameters are saved every save_every_epoch to `logs/your-experiment-name/G_{}.pth` and `logs/your-experiment-name/D_{}.pth`. By specifying these paths you can resume training, or start training from the weights of a model trained in a different experiment.
-
-### Training the index
-RVC saves the HuBERT features used during training, and at inference time it searches for features close to those from training and uses them for inference. To make this search fast, the index is trained in advance.
-faiss, an approximate nearest-neighbor search library, is used to train the index. The features in `/logs/your-experiment-name/3_feature256` are read, and the index trained on them is saved as `/logs/your-experiment-name/add_XXX.index`.
-(Since the 20230428 update, total_fea.npy is read from the index and is no longer needed.)
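-
-For intuition, a minimal faiss example of the kind of nearest-neighbor search described above (this is not the index type or parameters RVC actually uses):
-
-```python
-import numpy as np
-import faiss
-
-d = 256  # HuBERT feature dimension
-train_feats = np.random.rand(10000, d).astype(np.float32)  # stand-in for the saved 3_feature256 data
-
-index = faiss.IndexFlatL2(d)  # exact L2 search; the simplest possible faiss index
-index.add(train_feats)
-
-query = np.random.rand(1, d).astype(np.float32)  # a feature produced at inference time
-distances, ids = index.search(query, 4)
-print(ids[0])  # indices of the 4 closest training features
-```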
-
-### Button descriptions
-- Train model: after running everything up to step2b, press this button to train the model.
-- Train feature index: after training the model, trains the index.
-- One-click training: runs everything up to step2b, model training, and feature index training in one go.
-
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/point_rend_roi_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/point_rend_roi_head.py
deleted file mode 100644
index 478cdf5bff6779e9291f94c543205289036ea2c6..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/point_rend_roi_head.py
+++ /dev/null
@@ -1,218 +0,0 @@
-# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa
-
-import torch
-import torch.nn.functional as F
-from mmcv.ops import point_sample, rel_roi_point_to_rel_img_point
-
-from mmdet.core import bbox2roi, bbox_mapping, merge_aug_masks
-from .. import builder
-from ..builder import HEADS
-from .standard_roi_head import StandardRoIHead
-
-
-@HEADS.register_module()
-class PointRendRoIHead(StandardRoIHead):
- """`PointRend <https://arxiv.org/abs/1912.08193>`_."""
-
- def __init__(self, point_head, *args, **kwargs):
- super().__init__(*args, **kwargs)
- assert self.with_bbox and self.with_mask
- self.init_point_head(point_head)
-
- def init_point_head(self, point_head):
- """Initialize ``point_head``"""
- self.point_head = builder.build_head(point_head)
-
- def init_weights(self, pretrained):
- """Initialize the weights in head.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- """
- super().init_weights(pretrained)
- self.point_head.init_weights()
-
- def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks,
- img_metas):
- """Run forward function and calculate loss for mask head and point head
- in training."""
- mask_results = super()._mask_forward_train(x, sampling_results,
- bbox_feats, gt_masks,
- img_metas)
- if mask_results['loss_mask'] is not None:
- loss_point = self._mask_point_forward_train(
- x, sampling_results, mask_results['mask_pred'], gt_masks,
- img_metas)
- mask_results['loss_mask'].update(loss_point)
-
- return mask_results
-
- def _mask_point_forward_train(self, x, sampling_results, mask_pred,
- gt_masks, img_metas):
- """Run forward function and calculate loss for point head in
- training."""
- pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
- rel_roi_points = self.point_head.get_roi_rel_points_train(
- mask_pred, pos_labels, cfg=self.train_cfg)
- rois = bbox2roi([res.pos_bboxes for res in sampling_results])
-
- fine_grained_point_feats = self._get_fine_grained_point_feats(
- x, rois, rel_roi_points, img_metas)
- coarse_point_feats = point_sample(mask_pred, rel_roi_points)
- mask_point_pred = self.point_head(fine_grained_point_feats,
- coarse_point_feats)
- mask_point_target = self.point_head.get_targets(
- rois, rel_roi_points, sampling_results, gt_masks, self.train_cfg)
- loss_mask_point = self.point_head.loss(mask_point_pred,
- mask_point_target, pos_labels)
-
- return loss_mask_point
-
- def _get_fine_grained_point_feats(self, x, rois, rel_roi_points,
- img_metas):
- """Sample fine grained feats from each level feature map and
- concatenate them together."""
- num_imgs = len(img_metas)
- fine_grained_feats = []
- for idx in range(self.mask_roi_extractor.num_inputs):
- feats = x[idx]
- spatial_scale = 1. / float(
- self.mask_roi_extractor.featmap_strides[idx])
- point_feats = []
- for batch_ind in range(num_imgs):
- # unravel batch dim
- feat = feats[batch_ind].unsqueeze(0)
- inds = (rois[:, 0].long() == batch_ind)
- if inds.any():
- rel_img_points = rel_roi_point_to_rel_img_point(
- rois[inds], rel_roi_points[inds], feat.shape[2:],
- spatial_scale).unsqueeze(0)
- point_feat = point_sample(feat, rel_img_points)
- point_feat = point_feat.squeeze(0).transpose(0, 1)
- point_feats.append(point_feat)
- fine_grained_feats.append(torch.cat(point_feats, dim=0))
- return torch.cat(fine_grained_feats, dim=1)
-
- def _mask_point_forward_test(self, x, rois, label_pred, mask_pred,
- img_metas):
- """Mask refining process with point head in testing."""
- refined_mask_pred = mask_pred.clone()
- for subdivision_step in range(self.test_cfg.subdivision_steps):
- refined_mask_pred = F.interpolate(
- refined_mask_pred,
- scale_factor=self.test_cfg.scale_factor,
- mode='bilinear',
- align_corners=False)
- # If `subdivision_num_points` is larger or equal to the
- # resolution of the next step, then we can skip this step
- num_rois, channels, mask_height, mask_width = \
- refined_mask_pred.shape
- if (self.test_cfg.subdivision_num_points >=
- self.test_cfg.scale_factor**2 * mask_height * mask_width
- and
- subdivision_step < self.test_cfg.subdivision_steps - 1):
- continue
- point_indices, rel_roi_points = \
- self.point_head.get_roi_rel_points_test(
- refined_mask_pred, label_pred, cfg=self.test_cfg)
- fine_grained_point_feats = self._get_fine_grained_point_feats(
- x, rois, rel_roi_points, img_metas)
- coarse_point_feats = point_sample(mask_pred, rel_roi_points)
- mask_point_pred = self.point_head(fine_grained_point_feats,
- coarse_point_feats)
-
- point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1)
- refined_mask_pred = refined_mask_pred.reshape(
- num_rois, channels, mask_height * mask_width)
- refined_mask_pred = refined_mask_pred.scatter_(
- 2, point_indices, mask_point_pred)
- refined_mask_pred = refined_mask_pred.view(num_rois, channels,
- mask_height, mask_width)
-
- return refined_mask_pred
-
- def simple_test_mask(self,
- x,
- img_metas,
- det_bboxes,
- det_labels,
- rescale=False):
- """Obtain mask prediction without augmentation."""
- ori_shapes = tuple(meta['ori_shape'] for meta in img_metas)
- scale_factors = tuple(meta['scale_factor'] for meta in img_metas)
- num_imgs = len(det_bboxes)
- if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes):
- segm_results = [[[] for _ in range(self.mask_head.num_classes)]
- for _ in range(num_imgs)]
- else:
- # if det_bboxes is rescaled to the original image size, we need to
- # rescale it back to the testing scale to obtain RoIs.
- if rescale and not isinstance(scale_factors[0], float):
- scale_factors = [
- torch.from_numpy(scale_factor).to(det_bboxes[0].device)
- for scale_factor in scale_factors
- ]
- _bboxes = [
- det_bboxes[i][:, :4] *
- scale_factors[i] if rescale else det_bboxes[i][:, :4]
- for i in range(len(det_bboxes))
- ]
- mask_rois = bbox2roi(_bboxes)
- mask_results = self._mask_forward(x, mask_rois)
- # split batch mask prediction back to each image
- mask_pred = mask_results['mask_pred']
- num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes]
- mask_preds = mask_pred.split(num_mask_roi_per_img, 0)
- mask_rois = mask_rois.split(num_mask_roi_per_img, 0)
-
- # apply mask post-processing to each image individually
- segm_results = []
- for i in range(num_imgs):
- if det_bboxes[i].shape[0] == 0:
- segm_results.append(
- [[] for _ in range(self.mask_head.num_classes)])
- else:
- x_i = [xx[[i]] for xx in x]
- mask_rois_i = mask_rois[i]
- mask_rois_i[:, 0] = 0 # TODO: remove this hack
- mask_pred_i = self._mask_point_forward_test(
- x_i, mask_rois_i, det_labels[i], mask_preds[i],
- [img_metas])
- segm_result = self.mask_head.get_seg_masks(
- mask_pred_i, _bboxes[i], det_labels[i], self.test_cfg,
- ori_shapes[i], scale_factors[i], rescale)
- segm_results.append(segm_result)
- return segm_results
-
- def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels):
- """Test for mask head with test time augmentation."""
- if det_bboxes.shape[0] == 0:
- segm_result = [[] for _ in range(self.mask_head.num_classes)]
- else:
- aug_masks = []
- for x, img_meta in zip(feats, img_metas):
- img_shape = img_meta[0]['img_shape']
- scale_factor = img_meta[0]['scale_factor']
- flip = img_meta[0]['flip']
- _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape,
- scale_factor, flip)
- mask_rois = bbox2roi([_bboxes])
- mask_results = self._mask_forward(x, mask_rois)
- mask_results['mask_pred'] = self._mask_point_forward_test(
- x, mask_rois, det_labels, mask_results['mask_pred'],
- img_metas)
- # convert to numpy array to save memory
- aug_masks.append(
- mask_results['mask_pred'].sigmoid().cpu().numpy())
- merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg)
-
- ori_shape = img_metas[0][0]['ori_shape']
- segm_result = self.mask_head.get_seg_masks(
- merged_masks,
- det_bboxes,
- det_labels,
- self.test_cfg,
- ori_shape,
- scale_factor=1.0,
- rescale=False)
- return segm_result
diff --git a/spaces/Rongjiehuang/ProDiff/data_gen/tts/wav_processors/common_processors.py b/spaces/Rongjiehuang/ProDiff/data_gen/tts/wav_processors/common_processors.py
deleted file mode 100644
index de0b49f4a31cb6737f2cffc6c8d010d88d11c853..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/data_gen/tts/wav_processors/common_processors.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import os
-import subprocess
-import librosa
-import numpy as np
-from data_gen.tts.wav_processors.base_processor import BaseWavProcessor, register_wav_processors
-from data_gen.tts.data_gen_utils import trim_long_silences
-from utils.audio import save_wav
-from utils.rnnoise import rnnoise
-from utils.hparams import hparams
-
-
-@register_wav_processors(name='sox_to_wav')
-class ConvertToWavProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'ToWav'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- if input_fn[-4:] == '.wav':
- return input_fn, sr
- else:
- output_fn = self.output_fn(input_fn)
- subprocess.check_call(f'sox -v 0.95 "{input_fn}" -t wav "{output_fn}"', shell=True)
- return output_fn, sr
-
-
-@register_wav_processors(name='sox_resample')
-class ResampleProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'Resample'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- sr_file = librosa.core.get_samplerate(input_fn)
- if sr != sr_file:
- subprocess.check_call(f'sox -v 0.95 "{input_fn}" -r{sr} "{output_fn}"', shell=True)
- y, _ = librosa.core.load(input_fn, sr=sr)
- y, _ = librosa.effects.trim(y)
- save_wav(y, output_fn, sr)
- return output_fn, sr
- else:
- return input_fn, sr
-
-
-@register_wav_processors(name='trim_sil')
-class TrimSILProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'TrimSIL'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- y, _ = librosa.core.load(input_fn, sr=sr)
- y, _ = librosa.effects.trim(y)
- save_wav(y, output_fn, sr)
- return output_fn, sr
-
-
-@register_wav_processors(name='trim_all_sil')
-class TrimAllSILProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'TrimAllSIL'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- y, audio_mask, _ = trim_long_silences(
- input_fn, vad_max_silence_length=preprocess_args.get('vad_max_silence_length', 12))
- save_wav(y, output_fn, sr)
- if preprocess_args['save_sil_mask']:
- os.makedirs(f'{processed_dir}/sil_mask', exist_ok=True)
- np.save(f'{processed_dir}/sil_mask/{item_name}.npy', audio_mask)
- return output_fn, sr
-
-
-@register_wav_processors(name='denoise')
-class DenoiseProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'Denoise'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- rnnoise(input_fn, output_fn, out_sample_rate=sr)
- return output_fn, sr
diff --git a/spaces/ShotaA/TalkTuner/main.py b/spaces/ShotaA/TalkTuner/main.py
deleted file mode 100644
index 0a135eaa4f2a0fbf1cc181ba118c566d6c9d8206..0000000000000000000000000000000000000000
--- a/spaces/ShotaA/TalkTuner/main.py
+++ /dev/null
@@ -1,166 +0,0 @@
-from elements.voicerecorder import VoiceRecorder
-from pathlib import Path
-from nicegui import ui
-import base64
-import tempfile
-import openai_utils
-
-
-class SpeechAnalyzerData:
- def __init__(self):
- self.transcript_text = ""
- self.evaluation_text = ""
- self.speech_type = "Informative"
- self.speech_base64 = None
- self.is_inprogress = False
- self.is_audio_input_microphone = True
- self.is_audio_input_file = False
-
-
-@ui.page("/")
-def index_page():
- def handle_evaluate():
- try:
- if speech_data.speech_base64 is None:
- ui.notify("No audio data", type="negative")
- return
- decoded_audio_bytes = base64.b64decode(
- speech_data.speech_base64.split(",")[1]
- )
- with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as f:
- f.write(decoded_audio_bytes)
- f.flush()
- audio_file_name = f.name
- with open(audio_file_name, "rb") as f:
- transcript = openai_utils.transcribe("whisper-1", f)
- speech_data.transcript_text = transcript
- prompt_feedback, _ = openai_utils.analyze_prompt(
- transcript, speech_data.speech_type, model="gpt-3.5-turbo"
- )
- speech_data.evaluation_text = prompt_feedback
- ui.notify("Evaluation complete", type="positive")
- finally:
- # eval_button.props("disabled")
- eval_button.props(remove="loading")
- speech_data.is_inprogress = False
-
- def reset_page():
- speech_data.transcript_text = ""
- speech_data.evaluation_text = ""
- speech_data.speech_base64 = None
- audio_uploader.reset()
-
- def handle_uploaded_audio(event):
- # convert to base64_audio and create audiocard
- content = event.content
- content.seek(0)
- base64_audio = base64.b64encode(content.read()).decode("utf-8")
- base64_audio = f"data:{event.type};base64,{base64_audio}"
- speech_data.speech_base64 = base64_audio
-
- with card_audio:
- card_audio.clear()
- ui.audio(src=base64_audio)
-
- def handle_audio(event):
- base64_audio = event["args"]
- speech_data.speech_base64 = base64_audio
- # Process the Base64-encoded audio data
- # For example, you can save it to a file, send it to another service, etc.
- # Update the audio player's source with the recorded audio data
- with card_audio:
- card_audio.clear()
- ui.audio(src=base64_audio)
-
- def handle_input_radio(event):
- if event.value == "Microphone":
- speech_data.is_audio_input_microphone = True
- speech_data.is_audio_input_file = False
- elif event.value == "File Upload":
- speech_data.is_audio_input_microphone = False
- speech_data.is_audio_input_file = True
- else:
- raise Exception("Invalid input radio")
-
- speech_data = SpeechAnalyzerData()
-
- with ui.row().classes("justify-center w-full flex"):
- # Audio Inputs
- with ui.row().classes("justify-center w-full flex"):
- ui.radio(
- ["Microphone", "File Upload"],
- value="Microphone",
- on_change=handle_input_radio,
- ).props("inline").classes(
- "w-full justify-center flex",
- )
- with ui.row().classes("justify-center w-full flex"):
- # Voice Recorder
- with ui.column().bind_visibility_from(
- speech_data, "is_audio_input_microphone"
- ):
- VoiceRecorder("Start/Stop recording").on("audio-recorded", handle_audio)
- # File Upload
- with ui.column().bind_visibility_from(speech_data, "is_audio_input_file"):
- audio_uploader = ui.upload(
- on_upload=handle_uploaded_audio, auto_upload=True
- ).props("max-files=1")
- # Audio Player
- with ui.row().classes("justify-center w-full flex"):
- with ui.column():
- with ui.card() as card_audio:
- ui.markdown("Recorded Audio or Upload a file to evaluate")
- # Speech Type
- with ui.row().classes("justify-center w-full flex"):
- with ui.column():
- ui.select(
- ["Informative", "Persuasive"], label="Speech Type"
- ).bind_value(speech_data, "speech_type").classes("w-full").style(
- "font-size: 1.2rem;"
- ).props(
- "outlined"
- )
- # Buttons
- with ui.row().classes("justify-center w-full flex"):
- # Submit
- with ui.column():
- eval_button = (
- ui.button("Evaluate")
- .on("click", lambda: eval_button.props("loading"))
- .on("click", handle_evaluate)
- )
- # Clear
- with ui.column():
- ui.button("Clear", on_click=reset_page).props("color=red")
- # Text
- with ui.row().classes("justify-center w-full flex"):
- # Transcript
- with ui.column().classes("w-1/3"):
- ui.label("Transcript").classes("w-full justify-center").style(
- "font-size: 1.3rem; text-align: center;"
- )
- ui.textarea().props("readonly").bind_value(
- speech_data, "transcript_text"
- ).classes("w-full").props("rows=30 outlined ").style(
- "height: 100%; font-size: 1.1rem;"
- )
- # Evaluation
- with ui.column().classes("w-1/3"):
- ui.label("Evaluation").classes("w-full justify-center").style(
- "font-size: 1.3rem; text-align: center;"
- )
- ui.textarea().props("readonly").bind_value(
- speech_data, "evaluation_text"
- ).classes("w-full").props("rows=30 outlined").style(
- "height: 100%; font-size: 1.1rem;"
- )
-
- ui.add_head_html(
- f""
- )
- ui.add_head_html(
- ''
- )
-
-
-ui.run(title="Speech Evaluation Demo", favicon="./static/SpeechTron.ico", port=7860)
diff --git a/spaces/Shubham89/Meshwork-chatbot/README.md b/spaces/Shubham89/Meshwork-chatbot/README.md
deleted file mode 100644
index 04eeaff064ea5d4c500c062254b9ac430174fffe..0000000000000000000000000000000000000000
--- a/spaces/Shubham89/Meshwork-chatbot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Meshwork Chatbot
-emoji: ⚡
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Spico/writing-comrade/app.py b/spaces/Spico/writing-comrade/app.py
deleted file mode 100644
index 69067870cc115c23211dc8df3c53c3939bd8fa76..0000000000000000000000000000000000000000
--- a/spaces/Spico/writing-comrade/app.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import openai
-import gradio as gr
-
-
-instructions = {
- "completion": "Please help me complete the text",
- "correction": "Please help me correct mistakes in the text",
- "polishing": "Please help me polish the language and improve my writing",
- "paraphrase": "Please help me paraphrase the text",
- "translation": "Please help me translate the text",
- "freestyle": "",
-}
-
-template = "{instruction}:\n\nText: {text}"
-
-
-def chat(task_type: str, text: str, api_key: str, tgt_lang: str = "") -> str:
- openai.api_key = api_key
-
- prompt = ""
- task_type = task_type[1:].strip().lower()
- if task_type == "freestyle":
- prompt = text
- else:
- instruction = instructions[task_type]
- if task_type == "translation":
- if tgt_lang:
- instruction += f" into {tgt_lang.strip()}"
- else:
- raise ValueError("Target language cannot be empty when translating")
- prompt = template.format(instruction=instruction, text=text)
-
- messages = [
- {
- "role": "system",
- "content": f"You are a helpful writing assistant who can do {task_type}.",
- },
- {"role": "user", "content": prompt},
- ]
- finish_reason = None
- while finish_reason != "stop":
- if len(messages) > 2 and messages[-1]["role"] == "assistant":
- messages.append({"role": "user", "content": "please continue"})
- res = openai.ChatCompletion.create(
- model="gpt-3.5-turbo-0301",
- messages=messages,
- )
- messages.append(res["choices"][0]["message"])
- finish_reason = res["choices"][0]["finish_reason"]
- if len(messages) >= 5:
- break
- response_text = " ".join(
- [msg["content"] for msg in messages if msg["role"] == "assistant"]
- ).strip()
-
- return response_text
-
-
-with gr.Blocks(css="") as demo:
- gr.Markdown("# ✒️ Writing Comrade")
- gr.Markdown("Comrade, I'm your faithful writing fellow powered by ChatGPT. Destination, commander?")
- gr.Markdown(
- "🎮 This demo is hosted on: [Huggingface Spaces](https://huggingface.co/spaces/Spico/writing-comrade) "
- "⭐ Star me on GitHub: [Spico197/writing-comrade](https://github.com/Spico197/writing-comrade) "
- "You may want to follow [this instruction](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key) to get an API key."
- )
-
- with gr.Row():
- api_key = gr.Textbox(label='OpenAI API Key', type="password")
-
- with gr.Row().style(equal_height=True):
- with gr.Column(scale=3):
- emojis = "📝🥊💎🍦🚌🎤"
- task_type = gr.Radio([f"{emojis[i]}{k.title()}" for i, k in enumerate(instructions.keys())], label="Task")
- with gr.Column(min_width=100):
- tgt_lang = gr.Textbox(label="Target language in translation")
- with gr.Column():
- text_button = gr.Button("Can~ do!", variant="primary")
-
- with gr.Row():
- with gr.Column():
- text_input = gr.TextArea(lines=15, label="Input")
- with gr.Column():
- text_output = gr.TextArea(lines=15, label="Output")
-
- text_button.click(
- chat, inputs=[task_type, text_input, api_key, tgt_lang], outputs=text_output
- )
-
-demo.launch(show_error=True)
diff --git a/spaces/SuYuanS/AudioCraft_Plus/docs/METRICS.md b/spaces/SuYuanS/AudioCraft_Plus/docs/METRICS.md
deleted file mode 100644
index e2ae9a184cbccb8bfefb4ce77afa5ddab743a051..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/docs/METRICS.md
+++ /dev/null
@@ -1,127 +0,0 @@
-# AudioCraft objective metrics
-
-In addition to training losses, AudioCraft provides a set of objective metrics
-for audio synthesis and audio generation. As these metrics may require
-extra dependencies and can be costly to train, they are often disabled by default.
-This section provides guidance for setting up and using these metrics in
-the AudioCraft training pipelines.
-
-## Available metrics
-
-### Audio synthesis quality metrics
-
-#### SI-SNR
-
-We provide an implementation of the Scale-Invariant Signal-to-Noise Ratio in PyTorch.
-No specific requirement is needed for this metric. Please activate the metric at the
-evaluation stage with the appropriate flag:
-
-```shell
-dora run <...> evaluate.metrics.sisnr=true
-```
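-
-For reference, a self-contained SI-SNR sketch in PyTorch (a simplified illustration of the quantity, not AudioCraft's implementation):
-
-```python
-import torch
-
-def si_snr(estimate: torch.Tensor, reference: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
-    """Scale-invariant SNR in dB for two 1D signals of the same length."""
-    estimate = estimate - estimate.mean()
-    reference = reference - reference.mean()
-    # Project the estimate onto the reference to isolate the "target" component.
-    target = (torch.dot(estimate, reference) / (reference.pow(2).sum() + eps)) * reference
-    noise = estimate - target
-    return 10 * torch.log10(target.pow(2).sum() / (noise.pow(2).sum() + eps))
-
-ref = torch.randn(16000)
-est = ref + 0.1 * torch.randn(16000)
-print(si_snr(est, ref))  # higher is better
-```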
-
-#### ViSQOL
-
-We provide a Python wrapper around the ViSQOL [official implementation](https://github.com/google/visqol)
-to conveniently run ViSQOL within the training pipelines.
-
-One must specify the path to the ViSQOL installation through the configuration in order
-to enable ViSQOL computations in AudioCraft:
-
-```shell
-# the first parameter is used to activate visqol computation while the second specify
-# the path to visqol's library to be used by our python wrapper
-dora run <...> evaluate.metrics.visqol=true metrics.visqol.bin=<path-to-visqol>
-```
-
-See an example grid: [Compression with ViSQOL](../audiocraft/grids/compression/encodec_musicgen_32khz.py)
-
-To learn more about ViSQOL and how to build ViSQOL binary using bazel, please refer to the
-instructions available in the [open source repository](https://github.com/google/visqol).
-
-### Audio generation metrics
-
-#### Frechet Audio Distance
-
-Similarly to ViSQOL, we use a Python wrapper around the Frechet Audio Distance
-[official implementation](https://github.com/google-research/google-research/tree/master/frechet_audio_distance)
-in TensorFlow.
-
-Note that we had to make several changes to the actual code in order to make it work.
-Please refer to the [FrechetAudioDistanceMetric](../audiocraft/metrics/fad.py) class documentation
-for more details. We do not plan to provide further support in obtaining a working setup for the
-Frechet Audio Distance at this stage.
-
-```shell
-# the first parameter is used to activate FAD metric computation while the second specify
-# the path to FAD library to be used by our python wrapper
-dora run <...> evaluate.metrics.fad=true metrics.fad.bin=<path-to-frechet_audio_distance>
-```
-
-See an example grid: [Evaluation with FAD](../audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py)
-
-#### Kullback-Leibler Divergence
-
-We provide a PyTorch implementation of the Kullback-Leibler Divergence computed over the probabilities
-of the labels obtained by a state-of-the-art audio classifier. We provide our implementation of the KLD
-using the [PaSST classifier](https://github.com/kkoutini/PaSST).
-
-In order to use the KLD metric over PaSST, you must install the PaSST library as an extra dependency:
-```shell
-pip install 'git+https://github.com/kkoutini/passt_hear21@0.0.19#egg=hear21passt'
-```
-
-Then similarly, you can use the metric activating the corresponding flag:
-
-```shell
-# one could extend the kld metric with additional audio classifier models that can then be picked through the configuration
-dora run <...> evaluate.metrics.kld=true metrics.kld.model=passt
-```
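-
-As a toy illustration of the quantity being measured, with random stand-ins for the classifier's label probabilities (not actual PaSST outputs):
-
-```python
-import torch
-import torch.nn.functional as F
-
-num_labels = 527  # e.g. an AudioSet-style label space
-p_ref = torch.softmax(torch.randn(num_labels), dim=0)  # label probs for the reference audio
-p_gen = torch.softmax(torch.randn(num_labels), dim=0)  # label probs for the generated audio
-
-# KL(p_ref || p_gen): how much the generated label distribution diverges from the reference one.
-kld = F.kl_div(p_gen.log(), p_ref, reduction="sum")
-print(kld.item())
-```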
-
-#### Text consistency
-
-We provide a text-consistency metric, similarly to the MuLan Cycle Consistency from
-[MusicLM](https://arxiv.org/pdf/2301.11325.pdf) or the CLAP score used in
-[Make-An-Audio](https://arxiv.org/pdf/2301.12661v1.pdf).
-More specifically, we provide a PyTorch implementation of a Text consistency metric
-relying on a pre-trained [Contrastive Language-Audio Pretraining (CLAP)](https://github.com/LAION-AI/CLAP).
-
-Please install the CLAP library as an extra dependency prior to using the metric:
-```shell
-pip install laion_clap
-```
-
-Then similarly, you can use the metric activating the corresponding flag:
-
-```shell
-# one could extend the text consistency metric with additional audio classifier models that can then be picked through the configuration
-dora run ... evaluate.metrics.text_consistency=true metrics.text_consistency.model=clap
-```
-
-Note that the text consistency metric based on CLAP will require the CLAP checkpoint to be
-provided in the configuration.
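-
-Conceptually, the score boils down to a similarity between text and audio embeddings; here is a toy sketch with random stand-ins for the CLAP embeddings:
-
-```python
-import torch
-import torch.nn.functional as F
-
-# Random stand-ins for the CLAP embeddings of the text prompt and of the generated audio.
-text_emb = torch.randn(1, 512)
-audio_emb = torch.randn(1, 512)
-
-score = F.cosine_similarity(text_emb, audio_emb, dim=-1)  # in [-1, 1]; higher = more consistent
-print(score.item())
-```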
-
-#### Chroma cosine similarity
-
-Finally, as introduced in MusicGen, we provide a Chroma Cosine Similarity metric in PyTorch.
-No specific requirement is needed for this metric. Please activate the metric at the
-evaluation stage with the appropriate flag:
-
-```shell
-dora run ... evaluate.metrics.chroma_cosine=true
-```
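-
-As a toy sketch of the underlying idea, using librosa's chroma features as a stand-in for the exact chroma extraction used in AudioCraft:
-
-```python
-import numpy as np
-import librosa
-
-sr = 32000
-reference = np.random.randn(2 * sr).astype(np.float32)  # stand-in for the reference audio
-generated = np.random.randn(2 * sr).astype(np.float32)  # stand-in for the generated audio
-
-c_ref = librosa.feature.chroma_stft(y=reference, sr=sr).mean(axis=1)  # 12-dim averaged chroma
-c_gen = librosa.feature.chroma_stft(y=generated, sr=sr).mean(axis=1)
-
-cosine = float(np.dot(c_ref, c_gen) / (np.linalg.norm(c_ref) * np.linalg.norm(c_gen) + 1e-8))
-print(cosine)
-```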
-
-#### Comparing against reconstructed audio
-
-For all the above audio generation metrics, we offer the option to compute the metric on the reconstructed audio
-fed into EnCodec instead of the generated sample, using the flag `<metric>.use_gt=true`.
-
-## Example usage
-
-You will find example of configuration for the different metrics introduced above in:
-* The [musicgen's default solver](../config/solver/musicgen/default.yaml) for all audio generation metrics
-* The [compression's default solver](../config/solver/compression/default.yaml) for all audio synthesis metrics
-
-Similarly, we provide different examples in our grids:
-* [Evaluation with ViSQOL](../audiocraft/grids/compression/encodec_musicgen_32khz.py)
-* [Evaluation with FAD and others](../audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/md.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/md.py
deleted file mode 100644
index 56e9321a9c2ba6e1a72c40dce0122a0a352ffe90..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/md.py
+++ /dev/null
@@ -1,571 +0,0 @@
-from functools import lru_cache
-from logging import getLogger
-from typing import List, Optional
-
-from .constant import (
- COMMON_SAFE_ASCII_CHARACTERS,
- TRACE,
- UNICODE_SECONDARY_RANGE_KEYWORD,
-)
-from .utils import (
- is_accentuated,
- is_ascii,
- is_case_variable,
- is_cjk,
- is_emoticon,
- is_hangul,
- is_hiragana,
- is_katakana,
- is_latin,
- is_punctuation,
- is_separator,
- is_symbol,
- is_thai,
- is_unprintable,
- remove_accent,
- unicode_range,
-)
-
-
-class MessDetectorPlugin:
- """
- Base abstract class used for mess detection plugins.
- All detectors MUST extend and implement given methods.
- """
-
- def eligible(self, character: str) -> bool:
- """
- Determine if the given character should be fed in.
- """
- raise NotImplementedError # pragma: nocover
-
- def feed(self, character: str) -> None:
- """
- The main routine to be executed upon character.
- Insert the logic in which the text would be considered chaotic.
- """
- raise NotImplementedError # pragma: nocover
-
- def reset(self) -> None: # pragma: no cover
- """
- Permit to reset the plugin to the initial state.
- """
- raise NotImplementedError
-
- @property
- def ratio(self) -> float:
- """
- Compute the chaos ratio based on what your feed() has seen.
- Must NOT be lower than 0.0; there is no upper bound.
- """
- raise NotImplementedError # pragma: nocover
-
-
-class TooManySymbolOrPunctuationPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._punctuation_count: int = 0
- self._symbol_count: int = 0
- self._character_count: int = 0
-
- self._last_printable_char: Optional[str] = None
- self._frenzy_symbol_in_word: bool = False
-
- def eligible(self, character: str) -> bool:
- return character.isprintable()
-
- def feed(self, character: str) -> None:
- self._character_count += 1
-
- if (
- character != self._last_printable_char
- and character not in COMMON_SAFE_ASCII_CHARACTERS
- ):
- if is_punctuation(character):
- self._punctuation_count += 1
- elif (
- character.isdigit() is False
- and is_symbol(character)
- and is_emoticon(character) is False
- ):
- self._symbol_count += 2
-
- self._last_printable_char = character
-
- def reset(self) -> None: # pragma: no cover
- self._punctuation_count = 0
- self._character_count = 0
- self._symbol_count = 0
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- ratio_of_punctuation: float = (
- self._punctuation_count + self._symbol_count
- ) / self._character_count
-
- return ratio_of_punctuation if ratio_of_punctuation >= 0.3 else 0.0
-
-
-class TooManyAccentuatedPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._character_count: int = 0
- self._accentuated_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return character.isalpha()
-
- def feed(self, character: str) -> None:
- self._character_count += 1
-
- if is_accentuated(character):
- self._accentuated_count += 1
-
- def reset(self) -> None: # pragma: no cover
- self._character_count = 0
- self._accentuated_count = 0
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0 or self._character_count < 8:
- return 0.0
- ratio_of_accentuation: float = self._accentuated_count / self._character_count
- return ratio_of_accentuation if ratio_of_accentuation >= 0.35 else 0.0
-
-
-class UnprintablePlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._unprintable_count: int = 0
- self._character_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- if is_unprintable(character):
- self._unprintable_count += 1
- self._character_count += 1
-
- def reset(self) -> None: # pragma: no cover
- self._unprintable_count = 0
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- return (self._unprintable_count * 8) / self._character_count
-
-
-class SuspiciousDuplicateAccentPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._successive_count: int = 0
- self._character_count: int = 0
-
- self._last_latin_character: Optional[str] = None
-
- def eligible(self, character: str) -> bool:
- return character.isalpha() and is_latin(character)
-
- def feed(self, character: str) -> None:
- self._character_count += 1
- if (
- self._last_latin_character is not None
- and is_accentuated(character)
- and is_accentuated(self._last_latin_character)
- ):
- if character.isupper() and self._last_latin_character.isupper():
- self._successive_count += 1
- # Worse if it's the same char duplicated with a different accent.
- if remove_accent(character) == remove_accent(self._last_latin_character):
- self._successive_count += 1
- self._last_latin_character = character
-
- def reset(self) -> None: # pragma: no cover
- self._successive_count = 0
- self._character_count = 0
- self._last_latin_character = None
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- return (self._successive_count * 2) / self._character_count
-
-
-class SuspiciousRange(MessDetectorPlugin):
- def __init__(self) -> None:
- self._suspicious_successive_range_count: int = 0
- self._character_count: int = 0
- self._last_printable_seen: Optional[str] = None
-
- def eligible(self, character: str) -> bool:
- return character.isprintable()
-
- def feed(self, character: str) -> None:
- self._character_count += 1
-
- if (
- character.isspace()
- or is_punctuation(character)
- or character in COMMON_SAFE_ASCII_CHARACTERS
- ):
- self._last_printable_seen = None
- return
-
- if self._last_printable_seen is None:
- self._last_printable_seen = character
- return
-
- unicode_range_a: Optional[str] = unicode_range(self._last_printable_seen)
- unicode_range_b: Optional[str] = unicode_range(character)
-
- if is_suspiciously_successive_range(unicode_range_a, unicode_range_b):
- self._suspicious_successive_range_count += 1
-
- self._last_printable_seen = character
-
- def reset(self) -> None: # pragma: no cover
- self._character_count = 0
- self._suspicious_successive_range_count = 0
- self._last_printable_seen = None
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- ratio_of_suspicious_range_usage: float = (
- self._suspicious_successive_range_count * 2
- ) / self._character_count
-
- if ratio_of_suspicious_range_usage < 0.1:
- return 0.0
-
- return ratio_of_suspicious_range_usage
-
-
-class SuperWeirdWordPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._word_count: int = 0
- self._bad_word_count: int = 0
- self._foreign_long_count: int = 0
-
- self._is_current_word_bad: bool = False
- self._foreign_long_watch: bool = False
-
- self._character_count: int = 0
- self._bad_character_count: int = 0
-
- self._buffer: str = ""
- self._buffer_accent_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- if character.isalpha():
- self._buffer += character
- if is_accentuated(character):
- self._buffer_accent_count += 1
- if (
- self._foreign_long_watch is False
- and (is_latin(character) is False or is_accentuated(character))
- and is_cjk(character) is False
- and is_hangul(character) is False
- and is_katakana(character) is False
- and is_hiragana(character) is False
- and is_thai(character) is False
- ):
- self._foreign_long_watch = True
- return
- if not self._buffer:
- return
- if (
- character.isspace() or is_punctuation(character) or is_separator(character)
- ) and self._buffer:
- self._word_count += 1
- buffer_length: int = len(self._buffer)
-
- self._character_count += buffer_length
-
- if buffer_length >= 4:
- if self._buffer_accent_count / buffer_length > 0.34:
- self._is_current_word_bad = True
- # Words/buffers ending with an upper-case accentuated letter are so rare
- # that we consider them all suspicious. Same weight as foreign_long suspicious.
- if is_accentuated(self._buffer[-1]) and self._buffer[-1].isupper():
- self._foreign_long_count += 1
- self._is_current_word_bad = True
- if buffer_length >= 24 and self._foreign_long_watch:
- self._foreign_long_count += 1
- self._is_current_word_bad = True
-
- if self._is_current_word_bad:
- self._bad_word_count += 1
- self._bad_character_count += len(self._buffer)
- self._is_current_word_bad = False
-
- self._foreign_long_watch = False
- self._buffer = ""
- self._buffer_accent_count = 0
- elif (
- character not in {"<", ">", "-", "=", "~", "|", "_"}
- and character.isdigit() is False
- and is_symbol(character)
- ):
- self._is_current_word_bad = True
- self._buffer += character
-
- def reset(self) -> None: # pragma: no cover
- self._buffer = ""
- self._is_current_word_bad = False
- self._foreign_long_watch = False
- self._bad_word_count = 0
- self._word_count = 0
- self._character_count = 0
- self._bad_character_count = 0
- self._foreign_long_count = 0
-
- @property
- def ratio(self) -> float:
- if self._word_count <= 10 and self._foreign_long_count == 0:
- return 0.0
-
- return self._bad_character_count / self._character_count
-
-
-class CjkInvalidStopPlugin(MessDetectorPlugin):
- """
- GB (Chinese) based encodings often render the full stop incorrectly when the content does not fit,
- which can be easily detected. We search for the overuse of '丅' and '丄'.
- """
-
- def __init__(self) -> None:
- self._wrong_stop_count: int = 0
- self._cjk_character_count: int = 0
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- if character in {"丅", "丄"}:
- self._wrong_stop_count += 1
- return
- if is_cjk(character):
- self._cjk_character_count += 1
-
- def reset(self) -> None: # pragma: no cover
- self._wrong_stop_count = 0
- self._cjk_character_count = 0
-
- @property
- def ratio(self) -> float:
- if self._cjk_character_count < 16:
- return 0.0
- return self._wrong_stop_count / self._cjk_character_count
-
-
-class ArchaicUpperLowerPlugin(MessDetectorPlugin):
- def __init__(self) -> None:
- self._buf: bool = False
-
- self._character_count_since_last_sep: int = 0
-
- self._successive_upper_lower_count: int = 0
- self._successive_upper_lower_count_final: int = 0
-
- self._character_count: int = 0
-
- self._last_alpha_seen: Optional[str] = None
- self._current_ascii_only: bool = True
-
- def eligible(self, character: str) -> bool:
- return True
-
- def feed(self, character: str) -> None:
- is_concerned = character.isalpha() and is_case_variable(character)
- chunk_sep = is_concerned is False
-
- if chunk_sep and self._character_count_since_last_sep > 0:
- if (
- self._character_count_since_last_sep <= 64
- and character.isdigit() is False
- and self._current_ascii_only is False
- ):
- self._successive_upper_lower_count_final += (
- self._successive_upper_lower_count
- )
-
- self._successive_upper_lower_count = 0
- self._character_count_since_last_sep = 0
- self._last_alpha_seen = None
- self._buf = False
- self._character_count += 1
- self._current_ascii_only = True
-
- return
-
- if self._current_ascii_only is True and is_ascii(character) is False:
- self._current_ascii_only = False
-
- if self._last_alpha_seen is not None:
- if (character.isupper() and self._last_alpha_seen.islower()) or (
- character.islower() and self._last_alpha_seen.isupper()
- ):
- if self._buf is True:
- self._successive_upper_lower_count += 2
- self._buf = False
- else:
- self._buf = True
- else:
- self._buf = False
-
- self._character_count += 1
- self._character_count_since_last_sep += 1
- self._last_alpha_seen = character
-
- def reset(self) -> None: # pragma: no cover
- self._character_count = 0
- self._character_count_since_last_sep = 0
- self._successive_upper_lower_count = 0
- self._successive_upper_lower_count_final = 0
- self._last_alpha_seen = None
- self._buf = False
- self._current_ascii_only = True
-
- @property
- def ratio(self) -> float:
- if self._character_count == 0:
- return 0.0
-
- return self._successive_upper_lower_count_final / self._character_count
-
-
-@lru_cache(maxsize=1024)
-def is_suspiciously_successive_range(
- unicode_range_a: Optional[str], unicode_range_b: Optional[str]
-) -> bool:
- """
- Determine if two Unicode ranges seen next to each other can be considered suspicious.
- """
- if unicode_range_a is None or unicode_range_b is None:
- return True
-
- if unicode_range_a == unicode_range_b:
- return False
-
- if "Latin" in unicode_range_a and "Latin" in unicode_range_b:
- return False
-
- if "Emoticons" in unicode_range_a or "Emoticons" in unicode_range_b:
- return False
-
- # Latin characters can be accompanied with a combining diacritical mark
- # eg. Vietnamese.
- if ("Latin" in unicode_range_a or "Latin" in unicode_range_b) and (
- "Combining" in unicode_range_a or "Combining" in unicode_range_b
- ):
- return False
-
- keywords_range_a, keywords_range_b = unicode_range_a.split(
- " "
- ), unicode_range_b.split(" ")
-
- for el in keywords_range_a:
- if el in UNICODE_SECONDARY_RANGE_KEYWORD:
- continue
- if el in keywords_range_b:
- return False
-
- # Japanese Exception
- range_a_jp_chars, range_b_jp_chars = (
- unicode_range_a
- in (
- "Hiragana",
- "Katakana",
- ),
- unicode_range_b in ("Hiragana", "Katakana"),
- )
- if (range_a_jp_chars or range_b_jp_chars) and (
- "CJK" in unicode_range_a or "CJK" in unicode_range_b
- ):
- return False
- if range_a_jp_chars and range_b_jp_chars:
- return False
-
- if "Hangul" in unicode_range_a or "Hangul" in unicode_range_b:
- if "CJK" in unicode_range_a or "CJK" in unicode_range_b:
- return False
- if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":
- return False
-
- # Chinese/Japanese use dedicated range for punctuation and/or separators.
- if ("CJK" in unicode_range_a or "CJK" in unicode_range_b) or (
- unicode_range_a in ["Katakana", "Hiragana"]
- and unicode_range_b in ["Katakana", "Hiragana"]
- ):
- if "Punctuation" in unicode_range_a or "Punctuation" in unicode_range_b:
- return False
- if "Forms" in unicode_range_a or "Forms" in unicode_range_b:
- return False
-
- return True
-
-
-@lru_cache(maxsize=2048)
-def mess_ratio(
- decoded_sequence: str, maximum_threshold: float = 0.2, debug: bool = False
-) -> float:
- """
- Compute a mess ratio given a decoded bytes sequence. The maximum threshold stops the computation early.
- """
-
- detectors: List[MessDetectorPlugin] = [
- md_class() for md_class in MessDetectorPlugin.__subclasses__()
- ]
-
- length: int = len(decoded_sequence) + 1
-
- mean_mess_ratio: float = 0.0
-
- if length < 512:
- intermediary_mean_mess_ratio_calc: int = 32
- elif length <= 1024:
- intermediary_mean_mess_ratio_calc = 64
- else:
- intermediary_mean_mess_ratio_calc = 128
-
- for character, index in zip(decoded_sequence + "\n", range(length)):
- for detector in detectors:
- if detector.eligible(character):
- detector.feed(character)
-
- if (
- index > 0 and index % intermediary_mean_mess_ratio_calc == 0
- ) or index == length - 1:
- mean_mess_ratio = sum(dt.ratio for dt in detectors)
-
- if mean_mess_ratio >= maximum_threshold:
- break
-
- if debug:
- logger = getLogger("charset_normalizer")
-
- logger.log(
- TRACE,
- "Mess-detector extended-analysis start. "
- f"intermediary_mean_mess_ratio_calc={intermediary_mean_mess_ratio_calc} mean_mess_ratio={mean_mess_ratio} "
- f"maximum_threshold={maximum_threshold}",
- )
-
- if len(decoded_sequence) > 16:
- logger.log(TRACE, f"Starting with: {decoded_sequence[:16]}")
- logger.log(TRACE, f"Ending with: {decoded_sequence[-16::]}")
-
- for dt in detectors: # pragma: nocover
- logger.log(TRACE, f"{dt.__class__}: {dt.ratio}")
-
- return round(mean_mess_ratio, 3)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_concrete.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_concrete.py
deleted file mode 100644
index 510f6cd55afe3493ed1206b547d5edccb32dc792..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_concrete.py
+++ /dev/null
@@ -1,1513 +0,0 @@
-
-import pytest
-from tests_python.debugger_unittest import IS_PY36_OR_GREATER, IS_CPYTHON
-from tests_python.debug_constants import TEST_CYTHON
-pytestmark = pytest.mark.skipif(not IS_PY36_OR_GREATER or not IS_CPYTHON or not TEST_CYTHON, reason='Requires CPython >= 3.6')
-#!/usr/bin/env python3
-import opcode
-import sys
-import textwrap
-import types
-import unittest
-
-from _pydevd_frame_eval.vendored.bytecode import (
- UNSET,
- Label,
- Instr,
- SetLineno,
- Bytecode,
- CellVar,
- FreeVar,
- CompilerFlags,
- ConcreteInstr,
- ConcreteBytecode,
-)
-from _pydevd_frame_eval.vendored.bytecode.concrete import OFFSET_AS_INSTRUCTION
-from _pydevd_frame_eval.vendored.bytecode.tests import get_code, TestCase
-
-
-class ConcreteInstrTests(TestCase):
- def test_constructor(self):
- with self.assertRaises(ValueError):
- # need an argument
- ConcreteInstr("LOAD_CONST")
- with self.assertRaises(ValueError):
- # must not have an argument
- ConcreteInstr("ROT_TWO", 33)
-
- # invalid argument
- with self.assertRaises(TypeError):
- ConcreteInstr("LOAD_CONST", 1.0)
- with self.assertRaises(ValueError):
- ConcreteInstr("LOAD_CONST", -1)
- with self.assertRaises(TypeError):
- ConcreteInstr("LOAD_CONST", 5, lineno=1.0)
- with self.assertRaises(ValueError):
- ConcreteInstr("LOAD_CONST", 5, lineno=-1)
-
- # test maximum argument
- with self.assertRaises(ValueError):
- ConcreteInstr("LOAD_CONST", 2147483647 + 1)
- instr = ConcreteInstr("LOAD_CONST", 2147483647)
- self.assertEqual(instr.arg, 2147483647)
-
- # test meaningless extended args
- instr = ConcreteInstr("LOAD_FAST", 8, lineno=3, extended_args=1)
- self.assertEqual(instr.name, "LOAD_FAST")
- self.assertEqual(instr.arg, 8)
- self.assertEqual(instr.lineno, 3)
- self.assertEqual(instr.size, 4)
-
- def test_attr(self):
- instr = ConcreteInstr("LOAD_CONST", 5, lineno=12)
- self.assertEqual(instr.name, "LOAD_CONST")
- self.assertEqual(instr.opcode, 100)
- self.assertEqual(instr.arg, 5)
- self.assertEqual(instr.lineno, 12)
- self.assertEqual(instr.size, 2)
-
- def test_set(self):
- instr = ConcreteInstr("LOAD_CONST", 5, lineno=3)
-
- instr.set("NOP")
- self.assertEqual(instr.name, "NOP")
- self.assertIs(instr.arg, UNSET)
- self.assertEqual(instr.lineno, 3)
-
- instr.set("LOAD_FAST", 8)
- self.assertEqual(instr.name, "LOAD_FAST")
- self.assertEqual(instr.arg, 8)
- self.assertEqual(instr.lineno, 3)
-
- # invalid
- with self.assertRaises(ValueError):
- instr.set("LOAD_CONST")
- with self.assertRaises(ValueError):
- instr.set("NOP", 5)
-
- def test_set_attr(self):
- instr = ConcreteInstr("LOAD_CONST", 5, lineno=12)
-
- # operator name
- instr.name = "LOAD_FAST"
- self.assertEqual(instr.name, "LOAD_FAST")
- self.assertEqual(instr.opcode, 124)
- self.assertRaises(TypeError, setattr, instr, "name", 3)
- self.assertRaises(ValueError, setattr, instr, "name", "xxx")
-
- # operator code
- instr.opcode = 100
- self.assertEqual(instr.name, "LOAD_CONST")
- self.assertEqual(instr.opcode, 100)
- self.assertRaises(ValueError, setattr, instr, "opcode", -12)
- self.assertRaises(TypeError, setattr, instr, "opcode", "abc")
-
- # extended argument
- instr.arg = 0x1234ABCD
- self.assertEqual(instr.arg, 0x1234ABCD)
- self.assertEqual(instr.size, 8)
-
- # small argument
- instr.arg = 0
- self.assertEqual(instr.arg, 0)
- self.assertEqual(instr.size, 2)
-
- # invalid argument
- self.assertRaises(ValueError, setattr, instr, "arg", -1)
- self.assertRaises(ValueError, setattr, instr, "arg", 2147483647 + 1)
-
- # size attribute is read-only
- self.assertRaises(AttributeError, setattr, instr, "size", 3)
-
- # lineno
- instr.lineno = 33
- self.assertEqual(instr.lineno, 33)
- self.assertRaises(TypeError, setattr, instr, "lineno", 1.0)
- self.assertRaises(ValueError, setattr, instr, "lineno", -1)
-
- def test_size(self):
- self.assertEqual(ConcreteInstr("ROT_TWO").size, 2)
- self.assertEqual(ConcreteInstr("LOAD_CONST", 3).size, 2)
- self.assertEqual(ConcreteInstr("LOAD_CONST", 0x1234ABCD).size, 8)
-
- def test_disassemble(self):
- code = b"\t\x00d\x03"
- instr = ConcreteInstr.disassemble(1, code, 0)
- self.assertEqual(instr, ConcreteInstr("NOP", lineno=1))
-
- instr = ConcreteInstr.disassemble(2, code, 1 if OFFSET_AS_INSTRUCTION else 2)
- self.assertEqual(instr, ConcreteInstr("LOAD_CONST", 3, lineno=2))
-
- code = b"\x90\x12\x904\x90\xabd\xcd"
-
- instr = ConcreteInstr.disassemble(3, code, 0)
- self.assertEqual(instr, ConcreteInstr("EXTENDED_ARG", 0x12, lineno=3))
-
- def test_assemble(self):
- instr = ConcreteInstr("NOP")
- self.assertEqual(instr.assemble(), b"\t\x00")
-
- instr = ConcreteInstr("LOAD_CONST", 3)
- self.assertEqual(instr.assemble(), b"d\x03")
-
- instr = ConcreteInstr("LOAD_CONST", 0x1234ABCD)
- self.assertEqual(
- instr.assemble(),
- (b"\x90\x12\x904\x90\xabd\xcd"),
- )
-
- instr = ConcreteInstr("LOAD_CONST", 3, extended_args=1)
- self.assertEqual(
- instr.assemble(),
- (b"\x90\x00d\x03"),
- )
-
- def test_get_jump_target(self):
- jump_abs = ConcreteInstr("JUMP_ABSOLUTE", 3)
- self.assertEqual(jump_abs.get_jump_target(100), 3)
-
- jump_forward = ConcreteInstr("JUMP_FORWARD", 5)
- self.assertEqual(
- jump_forward.get_jump_target(10), 16 if OFFSET_AS_INSTRUCTION else 17
- )
-
-
-class ConcreteBytecodeTests(TestCase):
- def test_repr(self):
- r = repr(ConcreteBytecode())
- self.assertIn("ConcreteBytecode", r)
- self.assertIn("0", r)
-
- def test_eq(self):
- code = ConcreteBytecode()
- self.assertFalse(code == 1)
-
- for name, val in (
- ("names", ["a"]),
- ("varnames", ["a"]),
- ("consts", [1]),
- ("argcount", 1),
- ("kwonlyargcount", 2),
- ("flags", CompilerFlags(CompilerFlags.GENERATOR)),
- ("first_lineno", 10),
- ("filename", "xxxx.py"),
- ("name", "__x"),
- ("docstring", "x-x-x"),
- ("cellvars", [CellVar("x")]),
- ("freevars", [FreeVar("x")]),
- ):
- c = ConcreteBytecode()
- setattr(c, name, val)
- # For obscure reasons, using assertNotEqual here fails
- self.assertFalse(code == c)
-
- if sys.version_info > (3, 8):
- c = ConcreteBytecode()
- c.posonlyargcount = 10
- self.assertFalse(code == c)
-
- c = ConcreteBytecode()
- c.consts = [1]
- code.consts = [1]
- c.append(ConcreteInstr("LOAD_CONST", 0))
- self.assertFalse(code == c)
-
- def test_attr(self):
- code_obj = get_code("x = 5")
- code = ConcreteBytecode.from_code(code_obj)
- self.assertEqual(code.consts, [5, None])
- self.assertEqual(code.names, ["x"])
- self.assertEqual(code.varnames, [])
- self.assertEqual(code.freevars, [])
- self.assertListEqual(
- list(code),
- [
- ConcreteInstr("LOAD_CONST", 0, lineno=1),
- ConcreteInstr("STORE_NAME", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 1, lineno=1),
- ConcreteInstr("RETURN_VALUE", lineno=1),
- ],
- )
- # FIXME: test other attributes
-
- def test_invalid_types(self):
- code = ConcreteBytecode()
- code.append(Label())
- with self.assertRaises(ValueError):
- list(code)
- with self.assertRaises(ValueError):
- code.legalize()
- with self.assertRaises(ValueError):
- ConcreteBytecode([Label()])
-
- def test_to_code_lnotab(self):
-
- # We use an actual function for the simple case to
- # ensure we get lnotab right
- def f():
- #
- #
- x = 7 # noqa
- y = 8 # noqa
- z = 9 # noqa
-
- fl = f.__code__.co_firstlineno
- concrete = ConcreteBytecode()
- concrete.consts = [None, 7, 8, 9]
- concrete.varnames = ["x", "y", "z"]
- concrete.first_lineno = fl
- concrete.extend(
- [
- SetLineno(fl + 3),
- ConcreteInstr("LOAD_CONST", 1),
- ConcreteInstr("STORE_FAST", 0),
- SetLineno(fl + 4),
- ConcreteInstr("LOAD_CONST", 2),
- ConcreteInstr("STORE_FAST", 1),
- SetLineno(fl + 5),
- ConcreteInstr("LOAD_CONST", 3),
- ConcreteInstr("STORE_FAST", 2),
- ConcreteInstr("LOAD_CONST", 0),
- ConcreteInstr("RETURN_VALUE"),
- ]
- )
-
- code = concrete.to_code()
- self.assertEqual(code.co_code, f.__code__.co_code)
- self.assertEqual(code.co_lnotab, f.__code__.co_lnotab)
- if sys.version_info >= (3, 10):
- self.assertEqual(code.co_linetable, f.__code__.co_linetable)
-
- def test_negative_lnotab(self):
- # x = 7
- # y = 8
- concrete = ConcreteBytecode(
- [
- ConcreteInstr("LOAD_CONST", 0),
- ConcreteInstr("STORE_NAME", 0),
- # line number goes backward!
- SetLineno(2),
- ConcreteInstr("LOAD_CONST", 1),
- ConcreteInstr("STORE_NAME", 1),
- ]
- )
- concrete.consts = [7, 8]
- concrete.names = ["x", "y"]
- concrete.first_lineno = 5
-
- code = concrete.to_code()
- expected = b"d\x00Z\x00d\x01Z\x01"
- self.assertEqual(code.co_code, expected)
- self.assertEqual(code.co_firstlineno, 5)
- self.assertEqual(code.co_lnotab, b"\x04\xfd")
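# What the expected co_lnotab above encodes: (bytecode offset delta, line
# delta) pairs. The two line-5 instructions span 4 bytes, and stepping from
# line 5 back to line 2 is a delta of -3, stored as the signed byte 0xfd.
assert (-3) & 0xFF == 0xFD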
-
- def test_extended_lnotab(self):
- # x = 7
- # 200 blank lines
- # y = 8
- concrete = ConcreteBytecode(
- [
- ConcreteInstr("LOAD_CONST", 0),
- SetLineno(1 + 128),
- ConcreteInstr("STORE_NAME", 0),
- # line number goes backward!
- SetLineno(1 + 129),
- ConcreteInstr("LOAD_CONST", 1),
- SetLineno(1),
- ConcreteInstr("STORE_NAME", 1),
- ]
- )
- concrete.consts = [7, 8]
- concrete.names = ["x", "y"]
- concrete.first_lineno = 1
-
- code = concrete.to_code()
- expected = b"d\x00Z\x00d\x01Z\x01"
- self.assertEqual(code.co_code, expected)
- self.assertEqual(code.co_firstlineno, 1)
- self.assertEqual(code.co_lnotab, b"\x02\x7f\x00\x01\x02\x01\x02\x80\x00\xff")
-
- def test_extended_lnotab2(self):
- # x = 7
- # 200 blank lines
- # y = 8
- base_code = compile("x = 7" + "\n" * 200 + "y = 8", "", "exec")
- concrete = ConcreteBytecode(
- [
- ConcreteInstr("LOAD_CONST", 0),
- ConcreteInstr("STORE_NAME", 0),
- SetLineno(201),
- ConcreteInstr("LOAD_CONST", 1),
- ConcreteInstr("STORE_NAME", 1),
- ConcreteInstr("LOAD_CONST", 2),
- ConcreteInstr("RETURN_VALUE"),
- ]
- )
- concrete.consts = [None, 7, 8]
- concrete.names = ["x", "y"]
- concrete.first_lineno = 1
-
- code = concrete.to_code()
- self.assertEqual(code.co_code, base_code.co_code)
- self.assertEqual(code.co_firstlineno, base_code.co_firstlineno)
- self.assertEqual(code.co_lnotab, base_code.co_lnotab)
- if sys.version_info >= (3, 10):
- self.assertEqual(code.co_linetable, base_code.co_linetable)
-
- def test_to_bytecode_consts(self):
- # x = -0.0
- # x = +0.0
- #
- # code optimized by the CPython 3.6 peephole optimizer, which emits
- # duplicated constants (0.0 appears twice in consts).
- code = ConcreteBytecode()
- code.consts = [0.0, None, -0.0, 0.0]
- code.names = ["x", "y"]
- code.extend(
- [
- ConcreteInstr("LOAD_CONST", 2, lineno=1),
- ConcreteInstr("STORE_NAME", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 3, lineno=2),
- ConcreteInstr("STORE_NAME", 1, lineno=2),
- ConcreteInstr("LOAD_CONST", 1, lineno=2),
- ConcreteInstr("RETURN_VALUE", lineno=2),
- ]
- )
-
- code = code.to_bytecode().to_concrete_bytecode()
- # the conversion changes the constant order: the order comes from
- # the order of LOAD_CONST instructions
- self.assertEqual(code.consts, [-0.0, 0.0, None])
- code.names = ["x", "y"]
- self.assertListEqual(
- list(code),
- [
- ConcreteInstr("LOAD_CONST", 0, lineno=1),
- ConcreteInstr("STORE_NAME", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 1, lineno=2),
- ConcreteInstr("STORE_NAME", 1, lineno=2),
- ConcreteInstr("LOAD_CONST", 2, lineno=2),
- ConcreteInstr("RETURN_VALUE", lineno=2),
- ],
- )
-
- def test_cellvar(self):
- concrete = ConcreteBytecode()
- concrete.cellvars = ["x"]
- concrete.append(ConcreteInstr("LOAD_DEREF", 0))
- code = concrete.to_code()
-
- concrete = ConcreteBytecode.from_code(code)
- self.assertEqual(concrete.cellvars, ["x"])
- self.assertEqual(concrete.freevars, [])
- self.assertEqual(list(concrete), [ConcreteInstr("LOAD_DEREF", 0, lineno=1)])
-
- bytecode = concrete.to_bytecode()
- self.assertEqual(bytecode.cellvars, ["x"])
- self.assertEqual(list(bytecode), [Instr("LOAD_DEREF", CellVar("x"), lineno=1)])
-
- def test_freevar(self):
- concrete = ConcreteBytecode()
- concrete.freevars = ["x"]
- concrete.append(ConcreteInstr("LOAD_DEREF", 0))
- code = concrete.to_code()
-
- concrete = ConcreteBytecode.from_code(code)
- self.assertEqual(concrete.cellvars, [])
- self.assertEqual(concrete.freevars, ["x"])
- self.assertEqual(list(concrete), [ConcreteInstr("LOAD_DEREF", 0, lineno=1)])
-
- bytecode = concrete.to_bytecode()
- self.assertEqual(bytecode.cellvars, [])
- self.assertEqual(list(bytecode), [Instr("LOAD_DEREF", FreeVar("x"), lineno=1)])
-
- def test_cellvar_freevar(self):
- concrete = ConcreteBytecode()
- concrete.cellvars = ["cell"]
- concrete.freevars = ["free"]
- concrete.append(ConcreteInstr("LOAD_DEREF", 0))
- concrete.append(ConcreteInstr("LOAD_DEREF", 1))
- code = concrete.to_code()
-
- concrete = ConcreteBytecode.from_code(code)
- self.assertEqual(concrete.cellvars, ["cell"])
- self.assertEqual(concrete.freevars, ["free"])
- self.assertEqual(
- list(concrete),
- [
- ConcreteInstr("LOAD_DEREF", 0, lineno=1),
- ConcreteInstr("LOAD_DEREF", 1, lineno=1),
- ],
- )
-
- bytecode = concrete.to_bytecode()
- self.assertEqual(bytecode.cellvars, ["cell"])
- self.assertEqual(
- list(bytecode),
- [
- Instr("LOAD_DEREF", CellVar("cell"), lineno=1),
- Instr("LOAD_DEREF", FreeVar("free"), lineno=1),
- ],
- )
-
- def test_load_classderef(self):
- concrete = ConcreteBytecode()
- concrete.cellvars = ["__class__"]
- concrete.freevars = ["__class__"]
- concrete.extend(
- [ConcreteInstr("LOAD_CLASSDEREF", 1), ConcreteInstr("STORE_DEREF", 1)]
- )
-
- bytecode = concrete.to_bytecode()
- self.assertEqual(bytecode.freevars, ["__class__"])
- self.assertEqual(bytecode.cellvars, ["__class__"])
- self.assertEqual(
- list(bytecode),
- [
- Instr("LOAD_CLASSDEREF", FreeVar("__class__"), lineno=1),
- Instr("STORE_DEREF", FreeVar("__class__"), lineno=1),
- ],
- )
-
- concrete = bytecode.to_concrete_bytecode()
- self.assertEqual(concrete.freevars, ["__class__"])
- self.assertEqual(concrete.cellvars, ["__class__"])
- self.assertEqual(
- list(concrete),
- [
- ConcreteInstr("LOAD_CLASSDEREF", 1, lineno=1),
- ConcreteInstr("STORE_DEREF", 1, lineno=1),
- ],
- )
-
- code = concrete.to_code()
- self.assertEqual(code.co_freevars, ("__class__",))
- self.assertEqual(code.co_cellvars, ("__class__",))
- self.assertEqual(
- code.co_code,
- b"\x94\x01\x89\x01",
- )
-
- def test_explicit_stacksize(self):
- # Passing stacksize=... to ConcreteBytecode.to_code should result in a
- # code object with the specified stacksize. We pass some silly values
- # and assert that they are honored.
- code_obj = get_code("print('%s' % (a,b,c))")
- original_stacksize = code_obj.co_stacksize
- concrete = ConcreteBytecode.from_code(code_obj)
-
- # First with something bigger than necessary.
- explicit_stacksize = original_stacksize + 42
- new_code_obj = concrete.to_code(stacksize=explicit_stacksize)
- self.assertEqual(new_code_obj.co_stacksize, explicit_stacksize)
-
- # Then with something bogus. We probably don't want to advertise this
- # in the documentation. If this fails then decide if it's for good
- # reason, and remove if so.
- explicit_stacksize = 0
- new_code_obj = concrete.to_code(stacksize=explicit_stacksize)
- self.assertEqual(new_code_obj.co_stacksize, explicit_stacksize)
-
- def test_legalize(self):
- concrete = ConcreteBytecode()
- concrete.first_lineno = 3
- concrete.consts = [7, 8, 9]
- concrete.names = ["x", "y", "z"]
- concrete.extend(
- [
- ConcreteInstr("LOAD_CONST", 0),
- ConcreteInstr("STORE_NAME", 0),
- ConcreteInstr("LOAD_CONST", 1, lineno=4),
- ConcreteInstr("STORE_NAME", 1),
- SetLineno(5),
- ConcreteInstr("LOAD_CONST", 2, lineno=6),
- ConcreteInstr("STORE_NAME", 2),
- ]
- )
-
- concrete.legalize()
- self.assertListEqual(
- list(concrete),
- [
- ConcreteInstr("LOAD_CONST", 0, lineno=3),
- ConcreteInstr("STORE_NAME", 0, lineno=3),
- ConcreteInstr("LOAD_CONST", 1, lineno=4),
- ConcreteInstr("STORE_NAME", 1, lineno=4),
- ConcreteInstr("LOAD_CONST", 2, lineno=5),
- ConcreteInstr("STORE_NAME", 2, lineno=5),
- ],
- )
-
- def test_slice(self):
- concrete = ConcreteBytecode()
- concrete.first_lineno = 3
- concrete.consts = [7, 8, 9]
- concrete.names = ["x", "y", "z"]
- concrete.extend(
- [
- ConcreteInstr("LOAD_CONST", 0),
- ConcreteInstr("STORE_NAME", 0),
- SetLineno(4),
- ConcreteInstr("LOAD_CONST", 1),
- ConcreteInstr("STORE_NAME", 1),
- SetLineno(5),
- ConcreteInstr("LOAD_CONST", 2),
- ConcreteInstr("STORE_NAME", 2),
- ]
- )
- self.assertEqual(concrete, concrete[:])
-
- def test_copy(self):
- concrete = ConcreteBytecode()
- concrete.first_lineno = 3
- concrete.consts = [7, 8, 9]
- concrete.names = ["x", "y", "z"]
- concrete.extend(
- [
- ConcreteInstr("LOAD_CONST", 0),
- ConcreteInstr("STORE_NAME", 0),
- SetLineno(4),
- ConcreteInstr("LOAD_CONST", 1),
- ConcreteInstr("STORE_NAME", 1),
- SetLineno(5),
- ConcreteInstr("LOAD_CONST", 2),
- ConcreteInstr("STORE_NAME", 2),
- ]
- )
- self.assertEqual(concrete, concrete.copy())
-
-
-class ConcreteFromCodeTests(TestCase):
- def test_extended_arg(self):
- # Create a code object from arbitrary bytecode
- co_code = b"\x90\x12\x904\x90\xabd\xcd"
- code = get_code("x=1")
- args = (
- (code.co_argcount,)
- if sys.version_info < (3, 8)
- else (code.co_argcount, code.co_posonlyargcount)
- )
- args += (
- code.co_kwonlyargcount,
- code.co_nlocals,
- code.co_stacksize,
- code.co_flags,
- co_code,
- code.co_consts,
- code.co_names,
- code.co_varnames,
- code.co_filename,
- code.co_name,
- code.co_firstlineno,
- code.co_linetable if sys.version_info >= (3, 10) else code.co_lnotab,
- code.co_freevars,
- code.co_cellvars,
- )
-
- code = types.CodeType(*args)
-
- # without EXTENDED_ARG opcode
- bytecode = ConcreteBytecode.from_code(code)
- self.assertListEqual(
- list(bytecode), [ConcreteInstr("LOAD_CONST", 0x1234ABCD, lineno=1)]
- )
-
- # with EXTENDED_ARG opcode
- bytecode = ConcreteBytecode.from_code(code, extended_arg=True)
- expected = [
- ConcreteInstr("EXTENDED_ARG", 0x12, lineno=1),
- ConcreteInstr("EXTENDED_ARG", 0x34, lineno=1),
- ConcreteInstr("EXTENDED_ARG", 0xAB, lineno=1),
- ConcreteInstr("LOAD_CONST", 0xCD, lineno=1),
- ]
- self.assertListEqual(list(bytecode), expected)
-
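# Rough illustration of the encoding exercised above: CPython spreads an oparg
# wider than one byte across EXTENDED_ARG prefixes, most significant byte
# first, so 0x1234ABCD becomes EXTENDED_ARG 0x12, 0x34, 0xAB followed by the
# actual instruction carrying 0xCD.
arg = 0x1234ABCD
assert [(arg >> shift) & 0xFF for shift in (24, 16, 8, 0)] == [0x12, 0x34, 0xAB, 0xCD]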
- def test_extended_arg_make_function(self):
- if (3, 9) <= sys.version_info < (3, 10):
- from _pydevd_frame_eval.vendored.bytecode.tests.util_annotation import get_code as get_code_future
-
- code_obj = get_code_future(
- """
- def foo(x: int, y: int):
- pass
- """
- )
- else:
- code_obj = get_code(
- """
- def foo(x: int, y: int):
- pass
- """
- )
-
- # without EXTENDED_ARG
- concrete = ConcreteBytecode.from_code(code_obj)
- if sys.version_info >= (3, 10):
- func_code = concrete.consts[2]
- names = ["int", "foo"]
- consts = ["x", "y", func_code, "foo", None]
- const_offset = 1
- name_offset = 1
- first_instrs = [
- ConcreteInstr("LOAD_CONST", 0, lineno=1),
- ConcreteInstr("LOAD_NAME", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 1, lineno=1),
- ConcreteInstr("LOAD_NAME", 0, lineno=1),
- ConcreteInstr("BUILD_TUPLE", 4, lineno=1),
- ]
- elif (
- sys.version_info >= (3, 7)
- and concrete.flags & CompilerFlags.FUTURE_ANNOTATIONS
- ):
- func_code = concrete.consts[2]
- names = ["foo"]
- consts = ["int", ("x", "y"), func_code, "foo", None]
- const_offset = 1
- name_offset = 0
- first_instrs = [
- ConcreteInstr("LOAD_CONST", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 0 + const_offset, lineno=1),
- ConcreteInstr("BUILD_CONST_KEY_MAP", 2, lineno=1),
- ]
- else:
- func_code = concrete.consts[1]
- names = ["int", "foo"]
- consts = [("x", "y"), func_code, "foo", None]
- const_offset = 0
- name_offset = 1
- first_instrs = [
- ConcreteInstr("LOAD_NAME", 0, lineno=1),
- ConcreteInstr("LOAD_NAME", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 0 + const_offset, lineno=1),
- ConcreteInstr("BUILD_CONST_KEY_MAP", 2, lineno=1),
- ]
-
- self.assertEqual(concrete.names, names)
- self.assertEqual(concrete.consts, consts)
- expected = first_instrs + [
- ConcreteInstr("LOAD_CONST", 1 + const_offset, lineno=1),
- ConcreteInstr("LOAD_CONST", 2 + const_offset, lineno=1),
- ConcreteInstr("MAKE_FUNCTION", 4, lineno=1),
- ConcreteInstr("STORE_NAME", name_offset, lineno=1),
- ConcreteInstr("LOAD_CONST", 3 + const_offset, lineno=1),
- ConcreteInstr("RETURN_VALUE", lineno=1),
- ]
- self.assertListEqual(list(concrete), expected)
-
- # with EXTENDED_ARG
- concrete = ConcreteBytecode.from_code(code_obj, extended_arg=True)
- # With future annotations the int annotation is stringified and
- # stored as a constant; this is the default behavior under Python 3.10.
- if sys.version_info >= (3, 10):
- func_code = concrete.consts[2]
- names = ["int", "foo"]
- consts = ["x", "y", func_code, "foo", None]
- elif concrete.flags & CompilerFlags.FUTURE_ANNOTATIONS:
- func_code = concrete.consts[2]
- names = ["foo"]
- consts = ["int", ("x", "y"), func_code, "foo", None]
- else:
- func_code = concrete.consts[1]
- names = ["int", "foo"]
- consts = [("x", "y"), func_code, "foo", None]
-
- self.assertEqual(concrete.names, names)
- self.assertEqual(concrete.consts, consts)
- self.assertListEqual(list(concrete), expected)
-
- # The next three tests ensure we can round trip ConcreteBytecode generated
- # with extended_args=True
-
- def test_extended_arg_unpack_ex(self):
- def test():
- p = [1, 2, 3, 4, 5, 6]
- q, r, *s, t = p
- return q, r, s, t
-
- cpython_stacksize = test.__code__.co_stacksize
- test.__code__ = ConcreteBytecode.from_code(
- test.__code__, extended_arg=True
- ).to_code()
- self.assertEqual(test.__code__.co_stacksize, cpython_stacksize)
- self.assertEqual(test(), (1, 2, [3, 4, 5], 6))
-
- def test_expected_arg_with_many_consts(self):
- def test():
- var = 0
- var = 1
- var = 2
- var = 3
- var = 4
- var = 5
- var = 6
- var = 7
- var = 8
- var = 9
- var = 10
- var = 11
- var = 12
- var = 13
- var = 14
- var = 15
- var = 16
- var = 17
- var = 18
- var = 19
- var = 20
- var = 21
- var = 22
- var = 23
- var = 24
- var = 25
- var = 26
- var = 27
- var = 28
- var = 29
- var = 30
- var = 31
- var = 32
- var = 33
- var = 34
- var = 35
- var = 36
- var = 37
- var = 38
- var = 39
- var = 40
- var = 41
- var = 42
- var = 43
- var = 44
- var = 45
- var = 46
- var = 47
- var = 48
- var = 49
- var = 50
- var = 51
- var = 52
- var = 53
- var = 54
- var = 55
- var = 56
- var = 57
- var = 58
- var = 59
- var = 60
- var = 61
- var = 62
- var = 63
- var = 64
- var = 65
- var = 66
- var = 67
- var = 68
- var = 69
- var = 70
- var = 71
- var = 72
- var = 73
- var = 74
- var = 75
- var = 76
- var = 77
- var = 78
- var = 79
- var = 80
- var = 81
- var = 82
- var = 83
- var = 84
- var = 85
- var = 86
- var = 87
- var = 88
- var = 89
- var = 90
- var = 91
- var = 92
- var = 93
- var = 94
- var = 95
- var = 96
- var = 97
- var = 98
- var = 99
- var = 100
- var = 101
- var = 102
- var = 103
- var = 104
- var = 105
- var = 106
- var = 107
- var = 108
- var = 109
- var = 110
- var = 111
- var = 112
- var = 113
- var = 114
- var = 115
- var = 116
- var = 117
- var = 118
- var = 119
- var = 120
- var = 121
- var = 122
- var = 123
- var = 124
- var = 125
- var = 126
- var = 127
- var = 128
- var = 129
- var = 130
- var = 131
- var = 132
- var = 133
- var = 134
- var = 135
- var = 136
- var = 137
- var = 138
- var = 139
- var = 140
- var = 141
- var = 142
- var = 143
- var = 144
- var = 145
- var = 146
- var = 147
- var = 148
- var = 149
- var = 150
- var = 151
- var = 152
- var = 153
- var = 154
- var = 155
- var = 156
- var = 157
- var = 158
- var = 159
- var = 160
- var = 161
- var = 162
- var = 163
- var = 164
- var = 165
- var = 166
- var = 167
- var = 168
- var = 169
- var = 170
- var = 171
- var = 172
- var = 173
- var = 174
- var = 175
- var = 176
- var = 177
- var = 178
- var = 179
- var = 180
- var = 181
- var = 182
- var = 183
- var = 184
- var = 185
- var = 186
- var = 187
- var = 188
- var = 189
- var = 190
- var = 191
- var = 192
- var = 193
- var = 194
- var = 195
- var = 196
- var = 197
- var = 198
- var = 199
- var = 200
- var = 201
- var = 202
- var = 203
- var = 204
- var = 205
- var = 206
- var = 207
- var = 208
- var = 209
- var = 210
- var = 211
- var = 212
- var = 213
- var = 214
- var = 215
- var = 216
- var = 217
- var = 218
- var = 219
- var = 220
- var = 221
- var = 222
- var = 223
- var = 224
- var = 225
- var = 226
- var = 227
- var = 228
- var = 229
- var = 230
- var = 231
- var = 232
- var = 233
- var = 234
- var = 235
- var = 236
- var = 237
- var = 238
- var = 239
- var = 240
- var = 241
- var = 242
- var = 243
- var = 244
- var = 245
- var = 246
- var = 247
- var = 248
- var = 249
- var = 250
- var = 251
- var = 252
- var = 253
- var = 254
- var = 255
- var = 256
- var = 257
- var = 258
- var = 259
-
- return var
-
- test.__code__ = ConcreteBytecode.from_code(
- test.__code__, extended_arg=True
- ).to_code()
- self.assertEqual(test.__code__.co_stacksize, 1)
- self.assertEqual(test(), 259)
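# The point of the 260 assignments above: a constant index past 255 no longer
# fits in a one-byte oparg, so an index such as 259 has to be emitted as
# EXTENDED_ARG 0x01 followed by LOAD_CONST 0x03 (0x0103 == 259).
assert divmod(259, 256) == (1, 3)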
-
- if sys.version_info >= (3, 6):
-
- def test_fail_extended_arg_jump(self):
- def test():
- var = None
- for _ in range(0, 1):
- var = 0
- var = 1
- var = 2
- var = 3
- var = 4
- var = 5
- var = 6
- var = 7
- var = 8
- var = 9
- var = 10
- var = 11
- var = 12
- var = 13
- var = 14
- var = 15
- var = 16
- var = 17
- var = 18
- var = 19
- var = 20
- var = 21
- var = 22
- var = 23
- var = 24
- var = 25
- var = 26
- var = 27
- var = 28
- var = 29
- var = 30
- var = 31
- var = 32
- var = 33
- var = 34
- var = 35
- var = 36
- var = 37
- var = 38
- var = 39
- var = 40
- var = 41
- var = 42
- var = 43
- var = 44
- var = 45
- var = 46
- var = 47
- var = 48
- var = 49
- var = 50
- var = 51
- var = 52
- var = 53
- var = 54
- var = 55
- var = 56
- var = 57
- var = 58
- var = 59
- var = 60
- var = 61
- var = 62
- var = 63
- var = 64
- var = 65
- var = 66
- var = 67
- var = 68
- var = 69
- var = 70
- return var
-
- # Generate the bytecode with extended arguments
- bytecode = ConcreteBytecode.from_code(test.__code__, extended_arg=True)
- bytecode.to_code()
-
-
-class BytecodeToConcreteTests(TestCase):
- def test_label(self):
- code = Bytecode()
- label = Label()
- code.extend(
- [
- Instr("LOAD_CONST", "hello", lineno=1),
- Instr("JUMP_FORWARD", label, lineno=1),
- label,
- Instr("POP_TOP", lineno=1),
- ]
- )
-
- code = code.to_concrete_bytecode()
- expected = [
- ConcreteInstr("LOAD_CONST", 0, lineno=1),
- ConcreteInstr("JUMP_FORWARD", 0, lineno=1),
- ConcreteInstr("POP_TOP", lineno=1),
- ]
- self.assertListEqual(list(code), expected)
- self.assertListEqual(code.consts, ["hello"])
-
- def test_label2(self):
- bytecode = Bytecode()
- label = Label()
- bytecode.extend(
- [
- Instr("LOAD_NAME", "test", lineno=1),
- Instr("POP_JUMP_IF_FALSE", label),
- Instr("LOAD_CONST", 5, lineno=2),
- Instr("STORE_NAME", "x"),
- Instr("JUMP_FORWARD", label),
- Instr("LOAD_CONST", 7, lineno=4),
- Instr("STORE_NAME", "x"),
- label,
- Instr("LOAD_CONST", None),
- Instr("RETURN_VALUE"),
- ]
- )
-
- concrete = bytecode.to_concrete_bytecode()
- expected = [
- ConcreteInstr("LOAD_NAME", 0, lineno=1),
- ConcreteInstr(
- "POP_JUMP_IF_FALSE", 7 if OFFSET_AS_INSTRUCTION else 14, lineno=1
- ),
- ConcreteInstr("LOAD_CONST", 0, lineno=2),
- ConcreteInstr("STORE_NAME", 1, lineno=2),
- ConcreteInstr("JUMP_FORWARD", 2 if OFFSET_AS_INSTRUCTION else 4, lineno=2),
- ConcreteInstr("LOAD_CONST", 1, lineno=4),
- ConcreteInstr("STORE_NAME", 1, lineno=4),
- ConcreteInstr("LOAD_CONST", 2, lineno=4),
- ConcreteInstr("RETURN_VALUE", lineno=4),
- ]
- self.assertListEqual(list(concrete), expected)
- self.assertListEqual(concrete.consts, [5, 7, None])
- self.assertListEqual(concrete.names, ["test", "x"])
- self.assertListEqual(concrete.varnames, [])
-
- def test_label3(self):
- """
- CPython generates useless EXTENDED_ARG 0 in some cases. We need to
- properly track them, as otherwise we can end up with broken offsets for
- jumps.
- """
- source = """
- def func(x):
- if x == 1:
- return x + 0
- elif x == 2:
- return x + 1
- elif x == 3:
- return x + 2
- elif x == 4:
- return x + 3
- elif x == 5:
- return x + 4
- elif x == 6:
- return x + 5
- elif x == 7:
- return x + 6
- elif x == 8:
- return x + 7
- elif x == 9:
- return x + 8
- elif x == 10:
- return x + 9
- elif x == 11:
- return x + 10
- elif x == 12:
- return x + 11
- elif x == 13:
- return x + 12
- elif x == 14:
- return x + 13
- elif x == 15:
- return x + 14
- elif x == 16:
- return x + 15
- elif x == 17:
- return x + 16
- return -1
- """
- code = get_code(source, function=True)
- bcode = Bytecode.from_code(code)
- concrete = bcode.to_concrete_bytecode()
- self.assertIsInstance(concrete, ConcreteBytecode)
-
- # Ensure that we do not generate broken code
- loc = {}
- exec(textwrap.dedent(source), loc)
- func = loc["func"]
- func.__code__ = bcode.to_code()
- for i, x in enumerate(range(1, 18)):
- self.assertEqual(func(x), x + i)
- self.assertEqual(func(18), -1)
-
- # Ensure that we properly round trip in such cases
- self.assertEqual(
- ConcreteBytecode.from_code(code).to_code().co_code, code.co_code
- )
-
- def test_setlineno(self):
- # x = 7
- # y = 8
- # z = 9
- concrete = ConcreteBytecode()
- concrete.consts = [7, 8, 9]
- concrete.names = ["x", "y", "z"]
- concrete.first_lineno = 3
- concrete.extend(
- [
- ConcreteInstr("LOAD_CONST", 0),
- ConcreteInstr("STORE_NAME", 0),
- SetLineno(4),
- ConcreteInstr("LOAD_CONST", 1),
- ConcreteInstr("STORE_NAME", 1),
- SetLineno(5),
- ConcreteInstr("LOAD_CONST", 2),
- ConcreteInstr("STORE_NAME", 2),
- ]
- )
-
- code = concrete.to_bytecode()
- self.assertEqual(
- code,
- [
- Instr("LOAD_CONST", 7, lineno=3),
- Instr("STORE_NAME", "x", lineno=3),
- Instr("LOAD_CONST", 8, lineno=4),
- Instr("STORE_NAME", "y", lineno=4),
- Instr("LOAD_CONST", 9, lineno=5),
- Instr("STORE_NAME", "z", lineno=5),
- ],
- )
-
- def test_extended_jump(self):
- NOP = bytes((opcode.opmap["NOP"],))
-
- class BigInstr(ConcreteInstr):
- def __init__(self, size):
- super().__init__("NOP")
- self._size = size
-
- def copy(self):
- return self
-
- def assemble(self):
- return NOP * self._size
-
- # (invalid) code using jumps > 0xffff to test extended arg
- label = Label()
- nb_nop = 2 ** 16
- code = Bytecode(
- [
- Instr("JUMP_ABSOLUTE", label),
- BigInstr(nb_nop),
- label,
- Instr("LOAD_CONST", None),
- Instr("RETURN_VALUE"),
- ]
- )
-
- code_obj = code.to_code()
- if OFFSET_AS_INSTRUCTION:
- expected = b"\x90\x80q\x02" + NOP * nb_nop + b"d\x00S\x00"
- else:
- expected = b"\x90\x01\x90\x00q\x06" + NOP * nb_nop + b"d\x00S\x00"
- self.assertEqual(code_obj.co_code, expected)
-
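# Decoding the non-OFFSET_AS_INSTRUCTION expectation above: EXTENDED_ARG 0x01,
# EXTENDED_ARG 0x00 and JUMP_ABSOLUTE 0x06 combine into the 24-bit argument
# 0x010006, i.e. a target just past the 6-byte jump prologue and the 2**16 NOPs.
assert (0x01 << 16) | (0x00 << 8) | 0x06 == 6 + 2 ** 16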
- def test_jumps(self):
- # if test:
- # x = 12
- # else:
- # x = 37
- code = Bytecode()
- label_else = Label()
- label_return = Label()
- code.extend(
- [
- Instr("LOAD_NAME", "test", lineno=1),
- Instr("POP_JUMP_IF_FALSE", label_else),
- Instr("LOAD_CONST", 12, lineno=2),
- Instr("STORE_NAME", "x"),
- Instr("JUMP_FORWARD", label_return),
- label_else,
- Instr("LOAD_CONST", 37, lineno=4),
- Instr("STORE_NAME", "x"),
- label_return,
- Instr("LOAD_CONST", None, lineno=4),
- Instr("RETURN_VALUE"),
- ]
- )
-
- code = code.to_concrete_bytecode()
- expected = [
- ConcreteInstr("LOAD_NAME", 0, lineno=1),
- ConcreteInstr(
- "POP_JUMP_IF_FALSE", 5 if OFFSET_AS_INSTRUCTION else 10, lineno=1
- ),
- ConcreteInstr("LOAD_CONST", 0, lineno=2),
- ConcreteInstr("STORE_NAME", 1, lineno=2),
- ConcreteInstr("JUMP_FORWARD", 2 if OFFSET_AS_INSTRUCTION else 4, lineno=2),
- ConcreteInstr("LOAD_CONST", 1, lineno=4),
- ConcreteInstr("STORE_NAME", 1, lineno=4),
- ConcreteInstr("LOAD_CONST", 2, lineno=4),
- ConcreteInstr("RETURN_VALUE", lineno=4),
- ]
- self.assertListEqual(list(code), expected)
- self.assertListEqual(code.consts, [12, 37, None])
- self.assertListEqual(code.names, ["test", "x"])
- self.assertListEqual(code.varnames, [])
-
- def test_dont_merge_constants(self):
- # test two constants which are equal but have a different type
- code = Bytecode()
- code.extend(
- [
- Instr("LOAD_CONST", 5, lineno=1),
- Instr("LOAD_CONST", 5.0, lineno=1),
- Instr("LOAD_CONST", -0.0, lineno=1),
- Instr("LOAD_CONST", +0.0, lineno=1),
- ]
- )
-
- code = code.to_concrete_bytecode()
- expected = [
- ConcreteInstr("LOAD_CONST", 0, lineno=1),
- ConcreteInstr("LOAD_CONST", 1, lineno=1),
- ConcreteInstr("LOAD_CONST", 2, lineno=1),
- ConcreteInstr("LOAD_CONST", 3, lineno=1),
- ]
- self.assertListEqual(list(code), expected)
- self.assertListEqual(code.consts, [5, 5.0, -0.0, +0.0])
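# Why the four constants above must stay separate even though they compare
# equal pairwise: merging 5 with 5.0, or -0.0 with +0.0, would silently change
# the type or the sign of the value the compiled code loads.
assert 5 == 5.0 and type(5) is not type(5.0)
assert -0.0 == 0.0 and str(-0.0) != str(0.0)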
-
- def test_cellvars(self):
- code = Bytecode()
- code.cellvars = ["x"]
- code.freevars = ["y"]
- code.extend(
- [
- Instr("LOAD_DEREF", CellVar("x"), lineno=1),
- Instr("LOAD_DEREF", FreeVar("y"), lineno=1),
- ]
- )
- concrete = code.to_concrete_bytecode()
- self.assertEqual(concrete.cellvars, ["x"])
- self.assertEqual(concrete.freevars, ["y"])
- code.extend(
- [
- ConcreteInstr("LOAD_DEREF", 0, lineno=1),
- ConcreteInstr("LOAD_DEREF", 1, lineno=1),
- ]
- )
-
- def test_compute_jumps_convergence(self):
- # Consider the following sequence of instructions:
- #
- # JUMP_ABSOLUTE Label1
- # JUMP_ABSOLUTE Label2
- # ...126 instructions...
- # Label1: Offset 254 on first pass, 256 second pass
- # NOP
- # ... many more instructions ...
- # Label2: Offset > 256 on first pass
- #
- # On the first pass of compute_jumps(), Label1 will be at offset 254, so
- # that value still fits into the single-byte arg of JUMP_ABSOLUTE, while
- # the jump to Label2 (offset > 256) is given an EXTENDED_ARG.
- #
- # On the second pass, that EXTENDED_ARG pushes Label1 to offset 256, so the
- # jump to Label1 must also be given an EXTENDED_ARG.
- #
- # Thus we need to make an additional pass. This test only verifies the
- # case where two passes are insufficient but three are enough.
- #
- # On Python >= 3.10 we need to double the number, since the offset is now
- # in terms of instructions and not bytes.
-
- # Create code from comment above.
- code = Bytecode()
- label1 = Label()
- label2 = Label()
- nop = "NOP"
- code.append(Instr("JUMP_ABSOLUTE", label1))
- code.append(Instr("JUMP_ABSOLUTE", label2))
- # Need 254 * 2 + 2 since the arg will change by 1 instruction rather than 2
- # bytes.
- for x in range(4, 510 if OFFSET_AS_INSTRUCTION else 254, 2):
- code.append(Instr(nop))
- code.append(label1)
- code.append(Instr(nop))
- for x in range(
- 514 if OFFSET_AS_INSTRUCTION else 256,
- 600 if OFFSET_AS_INSTRUCTION else 300,
- 2,
- ):
- code.append(Instr(nop))
- code.append(label2)
- code.append(Instr(nop))
-
- # This should pass by default.
- code.to_code()
-
- # Try with max of two passes: it should raise
- with self.assertRaises(RuntimeError):
- code.to_code(compute_jumps_passes=2)
-
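# The crux of the long comment above, in one line: an oparg of 254 still fits
# in a single byte, 256 does not, and the EXTENDED_ARG that 256 requires can
# push other jump targets past the same 0xFF limit on the next pass.
assert 254 <= 0xFF < 256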
- def test_extreme_compute_jumps_convergence(self):
- """Test of compute_jumps() requiring absurd number of passes.
-
- NOTE: This test also serves to demonstrate that there is no worst
- case: the number of passes can be unlimited (or, actually, limited by
- the size of the provided code).
-
- This is an extension of test_compute_jumps_convergence. Instead of
- two jumps, where the earlier gets extended after the latter, we
- instead generate a series of many jumps. Each pass of compute_jumps()
- extends one more instruction, which in turn causes the one behind it
- to be extended on the next pass.
-
- """
-
- # N: the number of unextended instructions that can be squeezed into a
- # set of bytes addressable by the arg of an unextended instruction.
- # The answer is "128", but here's how we arrive at it.
- max_unextended_offset = 1 << 8
- unextended_branch_instr_size = 2
- N = max_unextended_offset // unextended_branch_instr_size
-
- # When using instructions rather than bytes for offsets, multiply by 2
- if OFFSET_AS_INSTRUCTION:
- N *= 2
-
- nop = "UNARY_POSITIVE" # don't use NOP, dis.stack_effect will raise
-
- # The number of jumps will be equal to the number of labels. The
- # number of passes of compute_jumps() required will be one greater
- # than this.
- labels = [Label() for x in range(0, 3 * N)]
-
- code = Bytecode()
- code.extend(
- Instr("JUMP_FORWARD", labels[len(labels) - x - 1])
- for x in range(0, len(labels))
- )
- end_of_jumps = len(code)
- code.extend(Instr(nop) for x in range(0, N))
-
- # Now insert the labels. The first is N instructions (i.e. 256
- # bytes) after the last jump. Then they proceed to earlier positions
- # 4 bytes at a time. While the targets are in the range of the nop
- # instructions, 4 bytes is two instructions. When the targets are in
- # the range of JUMP_FORWARD instructions we have to allow for the fact
- # that the instructions will have been extended to four bytes each, so
- # working backwards 4 bytes per label means just one instruction per
- # label.
- offset = end_of_jumps + N
- for index in range(0, len(labels)):
- code.insert(offset, labels[index])
- if offset <= end_of_jumps:
- offset -= 1
- else:
- offset -= 2
-
- code.insert(0, Instr("LOAD_CONST", 0))
- del end_of_jumps
- code.append(Instr("RETURN_VALUE"))
-
- code.to_code(compute_jumps_passes=(len(labels) + 1))
-
- def test_general_constants(self):
- """Test if general object could be linked as constants."""
-
- class CustomObject:
- pass
-
- class UnHashableCustomObject:
- __hash__ = None
-
- obj1 = [1, 2, 3]
- obj2 = {1, 2, 3}
- obj3 = CustomObject()
- obj4 = UnHashableCustomObject()
- code = Bytecode(
- [
- Instr("LOAD_CONST", obj1, lineno=1),
- Instr("LOAD_CONST", obj2, lineno=1),
- Instr("LOAD_CONST", obj3, lineno=1),
- Instr("LOAD_CONST", obj4, lineno=1),
- Instr("BUILD_TUPLE", 4, lineno=1),
- Instr("RETURN_VALUE", lineno=1),
- ]
- )
- self.assertEqual(code.to_code().co_consts, (obj1, obj2, obj3, obj4))
-
- def f():
- return # pragma: no cover
-
- f.__code__ = code.to_code()
- self.assertEqual(f(), (obj1, obj2, obj3, obj4))
-
-
-if __name__ == "__main__":
- unittest.main() # pragma: no cover
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/common/messaging.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/common/messaging.py
deleted file mode 100644
index eb9556d2f37017111edd976b1ee334d21c26a4f4..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/common/messaging.py
+++ /dev/null
@@ -1,1505 +0,0 @@
-# Copyright (c) Microsoft Corporation. All rights reserved.
-# Licensed under the MIT License. See LICENSE in the project root
-# for license information.
-
-"""An implementation of the session and presentation layers as used in the Debug
-Adapter Protocol (DAP): channels and their lifetime, JSON messages, requests,
-responses, and events.
-
-https://microsoft.github.io/debug-adapter-protocol/overview#base-protocol
-"""
-
-from __future__ import annotations
-
-import collections
-import contextlib
-import functools
-import itertools
-import os
-import socket
-import sys
-import threading
-
-from debugpy.common import json, log, util
-from debugpy.common.util import hide_thread_from_debugger
-
-
-class JsonIOError(IOError):
- """Indicates that a read or write operation on JsonIOStream has failed."""
-
- def __init__(self, *args, **kwargs):
- stream = kwargs.pop("stream")
- cause = kwargs.pop("cause", None)
- if not len(args) and cause is not None:
- args = [str(cause)]
- super().__init__(*args, **kwargs)
-
- self.stream = stream
- """The stream that couldn't be read or written.
-
- Set by JsonIOStream.read_json() and JsonIOStream.write_json().
-
- JsonMessageChannel relies on this value to decide whether a NoMoreMessages
- instance that bubbles up to the message loop is related to that loop.
- """
-
- self.cause = cause
- """The underlying exception, if any."""
-
-
-class NoMoreMessages(JsonIOError, EOFError):
- """Indicates that there are no more messages that can be read from or written
- to a stream.
- """
-
- def __init__(self, *args, **kwargs):
- args = args if len(args) else ["No more messages"]
- super().__init__(*args, **kwargs)
-
-
-class JsonIOStream(object):
- """Implements a JSON value stream over two byte streams (input and output).
-
- Each value is encoded as a DAP packet, with metadata headers and a JSON payload.
- """
-
- MAX_BODY_SIZE = 0xFFFFFF
-
- json_decoder_factory = json.JsonDecoder
- """Used by read_json() when decoder is None."""
-
- json_encoder_factory = json.JsonEncoder
- """Used by write_json() when encoder is None."""
-
- @classmethod
- def from_stdio(cls, name="stdio"):
- """Creates a new instance that receives messages from sys.stdin, and sends
- them to sys.stdout.
- """
- return cls(sys.stdin.buffer, sys.stdout.buffer, name)
-
- @classmethod
- def from_process(cls, process, name="stdio"):
- """Creates a new instance that receives messages from process.stdin, and sends
- them to process.stdout.
- """
- return cls(process.stdout, process.stdin, name)
-
- @classmethod
- def from_socket(cls, sock, name=None):
- """Creates a new instance that sends and receives messages over a socket."""
- sock.settimeout(None) # make socket blocking
- if name is None:
- name = repr(sock)
-
- # TODO: investigate switching to buffered sockets; readline() on unbuffered
- # sockets is very slow! Although the implementation of readline() itself is
- # native code, it calls read(1) in a loop - and that then ultimately calls
- # SocketIO.readinto(), which is implemented in Python.
- socket_io = sock.makefile("rwb", 0)
-
- # SocketIO.close() doesn't close the underlying socket.
- def cleanup():
- try:
- sock.shutdown(socket.SHUT_RDWR)
- except Exception:
- pass
- sock.close()
-
- return cls(socket_io, socket_io, name, cleanup)
-
- def __init__(self, reader, writer, name=None, cleanup=lambda: None):
- """Creates a new JsonIOStream.
-
- reader must be a BytesIO-like object, from which incoming messages will be
- read by read_json().
-
- writer must be a BytesIO-like object, into which outgoing messages will be
- written by write_json().
-
- cleanup must be a callable; it will be invoked without arguments when the
- stream is closed.
-
- reader.readline() must treat "\n" as the line terminator, and must leave "\r"
- as is - it must not replace "\r\n" with "\n" automatically, as TextIO does.
- """
-
- if name is None:
- name = f"reader={reader!r}, writer={writer!r}"
-
- self.name = name
- self._reader = reader
- self._writer = writer
- self._cleanup = cleanup
- self._closed = False
-
- def close(self):
- """Closes the stream, the reader, and the writer."""
-
- if self._closed:
- return
- self._closed = True
-
- log.debug("Closing {0} message stream", self.name)
- try:
- try:
- # Close the writer first, so that the other end of the connection has
- # its message loop waiting on read() unblocked. If there is an exception
- # while closing the writer, we still want to try to close the reader -
- # only one exception can bubble up, so if both fail, it'll be the one
- # from reader.
- try:
- self._writer.close()
- finally:
- if self._reader is not self._writer:
- self._reader.close()
- finally:
- self._cleanup()
- except Exception:
- log.reraise_exception("Error while closing {0} message stream", self.name)
-
- def _log_message(self, dir, data, logger=log.debug):
- return logger("{0} {1} {2}", self.name, dir, data)
-
- def _read_line(self, reader):
- line = b""
- while True:
- try:
- line += reader.readline()
- except Exception as exc:
- raise NoMoreMessages(str(exc), stream=self)
- if not line:
- raise NoMoreMessages(stream=self)
- if line.endswith(b"\r\n"):
- line = line[0:-2]
- return line
-
- def read_json(self, decoder=None):
- """Read a single JSON value from reader.
-
- Returns JSON value as parsed by decoder.decode(), or raises NoMoreMessages
- if there are no more values to be read.
- """
-
- decoder = decoder if decoder is not None else self.json_decoder_factory()
- reader = self._reader
- read_line = functools.partial(self._read_line, reader)
-
- # If any error occurs while reading and parsing the message, log the original
- # raw message data as is, so that it's possible to diagnose missing or invalid
- # headers, encoding issues, JSON syntax errors etc.
- def log_message_and_reraise_exception(format_string="", *args, **kwargs):
- if format_string:
- format_string += "\n\n"
- format_string += "{name} -->\n{raw_lines}"
-
- raw_lines = b"".join(raw_chunks).split(b"\n")
- raw_lines = "\n".join(repr(line) for line in raw_lines)
-
- log.reraise_exception(
- format_string, *args, name=self.name, raw_lines=raw_lines, **kwargs
- )
-
- raw_chunks = []
- headers = {}
-
- while True:
- try:
- line = read_line()
- except Exception:
- # Only log it if we have already read some headers, and are looking
- # for a blank line terminating them. If this is the very first read,
- # there's no message data to log in any case, and the caller might
- # be anticipating the error - e.g. NoMoreMessages on disconnect.
- if headers:
- log_message_and_reraise_exception(
- "Error while reading message headers:"
- )
- else:
- raise
-
- raw_chunks += [line, b"\n"]
- if line == b"":
- break
-
- key, _, value = line.partition(b":")
- headers[key] = value
-
- try:
- length = int(headers[b"Content-Length"])
- if not (0 <= length <= self.MAX_BODY_SIZE):
- raise ValueError
- except (KeyError, ValueError):
- try:
- raise IOError("Content-Length is missing or invalid:")
- except Exception:
- log_message_and_reraise_exception()
-
- body_start = len(raw_chunks)
- body_remaining = length
- while body_remaining > 0:
- try:
- chunk = reader.read(body_remaining)
- if not chunk:
- raise EOFError
- except Exception as exc:
- # Not logged due to https://github.com/microsoft/ptvsd/issues/1699
- raise NoMoreMessages(str(exc), stream=self)
-
- raw_chunks.append(chunk)
- body_remaining -= len(chunk)
- assert body_remaining == 0
-
- body = b"".join(raw_chunks[body_start:])
- try:
- body = body.decode("utf-8")
- except Exception:
- log_message_and_reraise_exception()
-
- try:
- body = decoder.decode(body)
- except Exception:
- log_message_and_reraise_exception()
-
- # If parsed successfully, log as JSON for readability.
- self._log_message("-->", body)
- return body
-
- def write_json(self, value, encoder=None):
- """Write a single JSON value into writer.
-
- Value is written as encoded by encoder.encode().
- """
-
- if self._closed:
- # Don't log this - it's a common pattern to write to a stream while
- # anticipating EOFError from it in case it got closed concurrently.
- raise NoMoreMessages(stream=self)
-
- encoder = encoder if encoder is not None else self.json_encoder_factory()
- writer = self._writer
-
- # Format the value as a message, and try to log any failures using as much
- # information as we already have at the point of the failure. For example,
- # if it fails after it is serialized to JSON, log that JSON.
-
- try:
- body = encoder.encode(value)
- except Exception:
- self._log_message("<--", repr(value), logger=log.reraise_exception)
- body = body.encode("utf-8")
-
- header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
- data = header + body
- data_written = 0
- try:
- while data_written < len(data):
- written = writer.write(data[data_written:])
- data_written += written
- writer.flush()
- except Exception as exc:
- self._log_message("<--", value, logger=log.swallow_exception)
- raise JsonIOError(stream=self, cause=exc)
-
- self._log_message("<--", value)
-
- def __repr__(self):
- return f"{type(self).__name__}({self.name!r})"
-
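# Minimal sketch of the framing JsonIOStream reads and writes, per the DAP base
# protocol referenced in the module docstring: an ASCII Content-Length header,
# a blank line, then a UTF-8 JSON body of exactly that many bytes. The payload
# fields shown are just an illustrative DAP-style request.
import json as _stdlib_json

payload = {"seq": 1, "type": "request", "command": "initialize", "arguments": {}}
body = _stdlib_json.dumps(payload).encode("utf-8")
packet = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body
# A reader does the inverse: parse headers up to the blank line, then read
# exactly Content-Length bytes and decode them as JSON.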
-
-class MessageDict(collections.OrderedDict):
- """A specialized dict that is used for JSON message payloads - Request.arguments,
- Response.body, and Event.body.
-
- For all members that normally throw KeyError when a requested key is missing, this
- dict raises InvalidMessageError instead. Thus, a message handler can skip checks
- for missing properties, and just work directly with the payload on the assumption
- that it is valid according to the protocol specification; if anything is missing,
- it will be reported automatically in the proper manner.
-
- If the value for the requested key is itself a dict, it is returned as is, and not
- automatically converted to MessageDict. Thus, to enable convenient chaining - e.g.
- d["a"]["b"]["c"] - the dict must consistently use MessageDict instances rather than
- vanilla dicts for all its values, recursively. This is guaranteed for the payload
- of all freshly received messages (unless and until it is mutated), but there is no
- such guarantee for outgoing messages.
- """
-
- def __init__(self, message, items=None):
- assert message is None or isinstance(message, Message)
-
- if items is None:
- super().__init__()
- else:
- super().__init__(items)
-
- self.message = message
- """The Message object that owns this dict.
-
- For any instance exposed via a Message object corresponding to some incoming
- message, it is guaranteed to reference that Message object. There is no similar
- guarantee for outgoing messages.
- """
-
- def __repr__(self):
- try:
- return format(json.repr(self))
- except Exception:
- return super().__repr__()
-
- def __call__(self, key, validate, optional=False):
- """Like get(), but with validation.
-
- The item is first retrieved as if with self.get(key, default=()) - the default
- value is () rather than None, so that JSON nulls are distinguishable from
- missing properties.
-
- If optional=True, and the value is (), it's returned as is. Otherwise, the
- item is validated by invoking validate(item) on it.
-
- If validate=False, it's treated as if it were (lambda x: x) - i.e. any value
- is considered valid, and is returned unchanged. If validate is a type or a
- tuple, it's treated as json.of_type(validate). Otherwise, if validate is not
- callable(), it's treated as json.default(validate).
-
- If validate() returns successfully, the item is substituted with the value
- it returns - thus, the validator can e.g. replace () with a suitable default
- value for the property.
-
- If validate() raises TypeError or ValueError, this raises an InvalidMessageError
- with the same text that applies_to(self.message).
-
- See debugpy.common.json for reusable validators.
- """
-
- if not validate:
- validate = lambda x: x
- elif isinstance(validate, type) or isinstance(validate, tuple):
- validate = json.of_type(validate, optional=optional)
- elif not callable(validate):
- validate = json.default(validate)
-
- value = self.get(key, ())
- try:
- value = validate(value)
- except (TypeError, ValueError) as exc:
- message = Message if self.message is None else self.message
- err = str(exc)
- if not err.startswith("["):
- err = " " + err
- raise message.isnt_valid("{0}{1}", json.repr(key), err)
- return value
-
- def _invalid_if_no_key(func):
- def wrap(self, key, *args, **kwargs):
- try:
- return func(self, key, *args, **kwargs)
- except KeyError:
- message = Message if self.message is None else self.message
- raise message.isnt_valid("missing property {0!r}", key)
-
- return wrap
-
- __getitem__ = _invalid_if_no_key(collections.OrderedDict.__getitem__)
- __delitem__ = _invalid_if_no_key(collections.OrderedDict.__delitem__)
- pop = _invalid_if_no_key(collections.OrderedDict.pop)
-
- del _invalid_if_no_key
-
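# Illustrative-only sketch of what the MessageDict behavior above buys a
# request handler; the property names "target" and "timeout" are made up.
def example_request(request):
    # A missing key raises InvalidMessageError instead of KeyError, so the
    # channel can report a proper protocol error automatically.
    target = request.arguments["target"]
    # Calling the dict applies a validator; a bare type is shorthand for
    # json.of_type(...), and optional=True returns () when the key is absent.
    timeout = request.arguments("timeout", int, optional=True)
    return {"target": target, "timeout": timeout}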
-
-def _payload(value):
- """JSON validator for message payload.
-
- If that value is missing or null, it is treated as if it were {}.
- """
-
- if value is not None and value != ():
- if isinstance(value, dict): # can be int, str, list...
- assert isinstance(value, MessageDict)
- return value
-
- # Missing payload. Construct a dummy MessageDict, and make it look like it was
- # deserialized. See JsonMessageChannel._parse_incoming_message for why it needs
- # to have associate_with().
-
- def associate_with(message):
- value.message = message
-
- value = MessageDict(None)
- value.associate_with = associate_with
- return value
-
-
-class Message(object):
- """Represents a fully parsed incoming or outgoing message.
-
- https://microsoft.github.io/debug-adapter-protocol/specification#protocolmessage
- """
-
- def __init__(self, channel, seq, json=None):
- self.channel = channel
-
- self.seq = seq
- """Sequence number of the message in its channel.
-
- This can be None for synthesized Responses.
- """
-
- self.json = json
- """For incoming messages, the MessageDict containing raw JSON from which
- this message was originally parsed.
- """
-
- def __str__(self):
- return json.repr(self.json) if self.json is not None else repr(self)
-
- def describe(self):
- """A brief description of the message that is enough to identify it.
-
- Examples:
- '#1 request "launch" from IDE'
- '#2 response to #1 request "launch" from IDE'.
- """
- raise NotImplementedError
-
- @property
- def payload(self) -> MessageDict:
- """Payload of the message - self.body or self.arguments, depending on the
- message type.
- """
- raise NotImplementedError
-
- def __call__(self, *args, **kwargs):
- """Same as self.payload(...)."""
- return self.payload(*args, **kwargs)
-
- def __contains__(self, key):
- """Same as (key in self.payload)."""
- return key in self.payload
-
- def is_event(self, *event):
- """Returns True if this message is an Event of one of the specified types."""
- if not isinstance(self, Event):
- return False
- return event == () or self.event in event
-
- def is_request(self, *command):
- """Returns True if this message is a Request of one of the specified types."""
- if not isinstance(self, Request):
- return False
- return command == () or self.command in command
-
- def is_response(self, *command):
- """Returns True if this message is a Response to a request of one of the
- specified types.
- """
- if not isinstance(self, Response):
- return False
- return command == () or self.request.command in command
-
- def error(self, exc_type, format_string, *args, **kwargs):
- """Returns a new exception of the specified type from the point at which it is
- invoked, with the specified formatted message as the reason.
-
- The resulting exception will have its cause set to the Message object on which
- error() was called. Additionally, if that message is a Request, a failure
- response is immediately sent.
- """
-
- assert issubclass(exc_type, MessageHandlingError)
-
- silent = kwargs.pop("silent", False)
- reason = format_string.format(*args, **kwargs)
- exc = exc_type(reason, self, silent) # will log it
-
- if isinstance(self, Request):
- self.respond(exc)
- return exc
-
- def isnt_valid(self, *args, **kwargs):
- """Same as self.error(InvalidMessageError, ...)."""
- return self.error(InvalidMessageError, *args, **kwargs)
-
- def cant_handle(self, *args, **kwargs):
- """Same as self.error(MessageHandlingError, ...)."""
- return self.error(MessageHandlingError, *args, **kwargs)
-
-
-class Event(Message):
- """Represents an incoming event.
-
- https://microsoft.github.io/debug-adapter-protocol/specification#event
-
- It is guaranteed that body is a MessageDict associated with this Event, and so
- are all the nested dicts in it. If "body" was missing or null in JSON, body is
- an empty dict.
-
- To handle the event, JsonMessageChannel tries to find a handler for this event in
- JsonMessageChannel.handlers. Given event="X", if handlers.X_event exists, then it
- is the specific handler for this event. Otherwise, handlers.event must exist, and
- it is the generic handler for this event. A missing handler is a fatal error.
-
- No further incoming messages are processed until the handler returns, except for
- responses to requests that have wait_for_response() invoked on them.
-
- To report failure to handle the event, the handler must raise an instance of
- MessageHandlingError that applies_to() the Event object it was handling. Any such
- failure is logged, after which the message loop moves on to the next message.
-
- Helper methods Message.isnt_valid() and Message.cant_handle() can be used to raise
- the appropriate exception type that applies_to() the Event object.
- """
-
- def __init__(self, channel, seq, event, body, json=None):
- super().__init__(channel, seq, json)
-
- self.event = event
-
- if isinstance(body, MessageDict) and hasattr(body, "associate_with"):
- body.associate_with(self)
- self.body = body
-
- def describe(self):
- return f"#{self.seq} event {json.repr(self.event)} from {self.channel}"
-
- @property
- def payload(self):
- return self.body
-
- @staticmethod
- def _parse(channel, message_dict):
- seq = message_dict("seq", int)
- event = message_dict("event", str)
- body = message_dict("body", _payload)
- message = Event(channel, seq, event, body, json=message_dict)
- channel._enqueue_handlers(message, message._handle)
-
- def _handle(self):
- channel = self.channel
- handler = channel._get_handler_for("event", self.event)
- try:
- try:
- result = handler(self)
- assert (
- result is None
- ), f"Handler {util.srcnameof(handler)} tried to respond to {self.describe()}."
- except MessageHandlingError as exc:
- if not exc.applies_to(self):
- raise
- log.error(
- "Handler {0}\ncouldn't handle {1}:\n{2}",
- util.srcnameof(handler),
- self.describe(),
- str(exc),
- )
- except Exception:
- log.reraise_exception(
- "Handler {0}\ncouldn't handle {1}:",
- util.srcnameof(handler),
- self.describe(),
- )
-
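# Illustrative sketch of the lookup convention described in the Event docstring
# above: for event="stopped", a handlers object with a stopped_event method
# provides the specific handler, otherwise the generic event method is used.
# The handlers object itself is supplied by whoever owns the JsonMessageChannel.
class ExampleHandlers:
    def stopped_event(self, event):
        print("stopped, reason:", event("reason", str))

    def event(self, event):
        print("unhandled event:", event.event)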
-
-NO_RESPONSE = object()
-"""Can be returned from a request handler in lieu of the response body, to indicate
-that no response is to be sent.
-
-Request.respond() must be invoked explicitly at some later point to provide a response.
-"""
-
-
-class Request(Message):
- """Represents an incoming or an outgoing request.
-
- Incoming requests are represented directly by instances of this class.
-
- Outgoing requests are represented by instances of OutgoingRequest, which provides
- additional functionality to handle responses.
-
- For incoming requests, it is guaranteed that arguments is a MessageDict associated
- with this Request, and so are all the nested dicts in it. If "arguments" was missing
- or null in JSON, arguments is an empty dict.
-
- To handle the request, JsonMessageChannel tries to find a handler for this request
- in JsonMessageChannel.handlers. Given command="X", if handlers.X_request exists,
- then it is the specific handler for this request. Otherwise, handlers.request must
- exist, and it is the generic handler for this request. A missing handler is a fatal
- error.
-
- The handler is then invoked with the Request object as its sole argument.
-
- If the handler itself invokes respond() on the Request at any point, then it must
- not return any value.
-
- Otherwise, if the handler returns NO_RESPONSE, no response to the request is sent.
- It must be sent manually at some later point via respond().
-
- Otherwise, a response to the request is sent with the returned value as the body.
-
- To fail the request, the handler can return an instance of MessageHandlingError,
- or respond() with one, or raise one such that it applies_to() the Request object
- being handled.
-
- Helper methods Message.isnt_valid() and Message.cant_handle() can be used to raise
- the appropriate exception type that applies_to() the Request object.
- """
-
- def __init__(self, channel, seq, command, arguments, json=None):
- super().__init__(channel, seq, json)
-
- self.command = command
-
- if isinstance(arguments, MessageDict) and hasattr(arguments, "associate_with"):
- arguments.associate_with(self)
- self.arguments = arguments
-
- self.response = None
- """Response to this request.
-
- For incoming requests, it is set as soon as the request handler returns.
-
- For outgoing requests, it is set as soon as the response is received, and
- before self._handle_response is invoked.
- """
-
- def describe(self):
- return f"#{self.seq} request {json.repr(self.command)} from {self.channel}"
-
- @property
- def payload(self):
- return self.arguments
-
- def respond(self, body):
- assert self.response is None
- d = {"type": "response", "request_seq": self.seq, "command": self.command}
-
- if isinstance(body, Exception):
- d["success"] = False
- d["message"] = str(body)
- else:
- d["success"] = True
- if body is not None and body != {}:
- d["body"] = body
-
- with self.channel._send_message(d) as seq:
- pass
- self.response = Response(self.channel, seq, self, body)
-
- @staticmethod
- def _parse(channel, message_dict):
- seq = message_dict("seq", int)
- command = message_dict("command", str)
- arguments = message_dict("arguments", _payload)
- message = Request(channel, seq, command, arguments, json=message_dict)
- channel._enqueue_handlers(message, message._handle)
-
- def _handle(self):
- channel = self.channel
- handler = channel._get_handler_for("request", self.command)
- try:
- try:
- result = handler(self)
- except MessageHandlingError as exc:
- if not exc.applies_to(self):
- raise
- result = exc
- log.error(
- "Handler {0}\ncouldn't handle {1}:\n{2}",
- util.srcnameof(handler),
- self.describe(),
- str(exc),
- )
-
- if result is NO_RESPONSE:
- assert self.response is None, (
- "Handler {0} for {1} must not return NO_RESPONSE if it has already "
- "invoked request.respond().".format(
- util.srcnameof(handler), self.describe()
- )
- )
- elif self.response is not None:
- assert result is None or result is self.response.body, (
- "Handler {0} for {1} must not return a response body if it has "
- "already invoked request.respond().".format(
- util.srcnameof(handler), self.describe()
- )
- )
- else:
- assert result is not None, (
- "Handler {0} for {1} must either call request.respond() before it "
- "returns, or return the response body, or return NO_RESPONSE.".format(
- util.srcnameof(handler), self.describe()
- )
- )
- try:
- self.respond(result)
- except NoMoreMessages:
- log.warning(
- "Channel was closed before the response from handler {0} to {1} could be sent",
- util.srcnameof(handler),
- self.describe(),
- )
-
- except Exception:
- log.reraise_exception(
- "Handler {0}\ncouldn't handle {1}:",
- util.srcnameof(handler),
- self.describe(),
- )
-
-
-class OutgoingRequest(Request):
- """Represents an outgoing request, for which it is possible to wait for a
- response to be received, and register a response handler.
- """
-
- _parse = _handle = None
-
- def __init__(self, channel, seq, command, arguments):
- super().__init__(channel, seq, command, arguments)
- self._response_handlers = []
-
- def describe(self):
- return f"{self.seq} request {json.repr(self.command)} to {self.channel}"
-
- def wait_for_response(self, raise_if_failed=True):
- """Waits until a response is received for this request, records the Response
- object for it in self.response, and returns response.body.
-
- If no response was received from the other party before the channel closed,
- self.response is a synthesized Response with body=NoMoreMessages().
-
- If raise_if_failed=True and response.success is False, raises response.body
- instead of returning.
- """
-
- with self.channel:
- while self.response is None:
- self.channel._handlers_enqueued.wait()
-
- if raise_if_failed and not self.response.success:
- raise self.response.body
- return self.response.body
-
- def on_response(self, response_handler):
- """Registers a handler to invoke when a response is received for this request.
- The handler is invoked with Response as its sole argument.
-
- If response has already been received, invokes the handler immediately.
-
- It is guaranteed that self.response is set before the handler is invoked.
- If no response was received from the other party before the channel closed,
- self.response is a dummy Response with body=NoMoreMessages().
-
- The handler is always invoked asynchronously on an unspecified background
- thread - thus, the caller of on_response() can never be blocked or deadlocked
- by the handler.
-
- No further incoming messages are processed until the handler returns, except for
- responses to requests that have wait_for_response() invoked on them.
- """
-
- with self.channel:
- self._response_handlers.append(response_handler)
- self._enqueue_response_handlers()
-
- def _enqueue_response_handlers(self):
- response = self.response
- if response is None:
- # Response._parse() will submit the handlers when response is received.
- return
-
- def run_handlers():
- for handler in handlers:
- try:
- try:
- handler(response)
- except MessageHandlingError as exc:
- if not exc.applies_to(response):
- raise
- log.error(
- "Handler {0}\ncouldn't handle {1}:\n{2}",
- util.srcnameof(handler),
- response.describe(),
- str(exc),
- )
- except Exception:
- log.reraise_exception(
- "Handler {0}\ncouldn't handle {1}:",
- util.srcnameof(handler),
- response.describe(),
- )
-
- handlers = self._response_handlers[:]
- self.channel._enqueue_handlers(response, run_handlers)
- del self._response_handlers[:]
-
-
-class Response(Message):
- """Represents an incoming or an outgoing response to a Request.
-
- https://microsoft.github.io/debug-adapter-protocol/specification#response
-
- error_message corresponds to "message" in JSON, and is renamed for clarity.
-
- If success is False, body is an exception describing the failure. Otherwise, it is a MessageDict associated
- with this Response, and so are all the nested dicts in it. If "body" was missing
- or null in JSON, body is an empty dict.
-
- If this is a response to an outgoing request, it will be handled by the handler
- registered via self.request.on_response(), if any.
-
- Regardless of whether there is such a handler, OutgoingRequest.wait_for_response()
- can also be used to retrieve and handle the response. If there is a handler, it is
- executed before wait_for_response() returns.
-
- No further incoming messages are processed until the handler returns, except for
- responses to requests that have wait_for_response() invoked on them.
-
- To report failure to handle the response, the handler must raise an instance of
- MessageHandlingError that applies_to() the Response object it was handling. Any
- such failure is logged, after which the message loop moves on to the next message.
-
- Helper methods Message.isnt_valid() and Message.cant_handle() can be used to raise
- the appropriate exception type that applies_to() the Response object.
- """
-
- def __init__(self, channel, seq, request, body, json=None):
- super().__init__(channel, seq, json)
-
- self.request = request
- """The request to which this is the response."""
-
- if isinstance(body, MessageDict) and hasattr(body, "associate_with"):
- body.associate_with(self)
- self.body = body
- """Body of the response if the request was successful, or an instance
- of some class derived from Exception if it was not.
-
- If a response was received from the other side, but request failed, it is an
- instance of MessageHandlingError containing the received error message. If the
- error message starts with InvalidMessageError.PREFIX, then it's an instance of
- the InvalidMessageError specifically, and that prefix is stripped.
-
- If no response was received from the other party before the channel closed,
- it is an instance of NoMoreMessages.
- """
-
- def describe(self):
- return f"#{self.seq} response to {self.request.describe()}"
-
- @property
- def payload(self):
- return self.body
-
- @property
- def success(self):
- """Whether the request succeeded or not."""
- return not isinstance(self.body, Exception)
-
- @property
- def result(self):
- """Result of the request. Returns the value of response.body, unless it
- is an exception, in which case it is raised instead.
- """
- if self.success:
- return self.body
- else:
- raise self.body
-
- @staticmethod
- def _parse(channel, message_dict, body=None):
- seq = message_dict("seq", int) if (body is None) else None
- request_seq = message_dict("request_seq", int)
- command = message_dict("command", str)
- success = message_dict("success", bool)
- if body is None:
- if success:
- body = message_dict("body", _payload)
- else:
- error_message = message_dict("message", str)
- exc_type = MessageHandlingError
- if error_message.startswith(InvalidMessageError.PREFIX):
- error_message = error_message[len(InvalidMessageError.PREFIX) :]
- exc_type = InvalidMessageError
- body = exc_type(error_message, silent=True)
-
- try:
- with channel:
- request = channel._sent_requests.pop(request_seq)
- known_request = True
- except KeyError:
- # Synthetic Request that only has seq and command as specified in response
- # JSON, for error reporting purposes.
- request = OutgoingRequest(channel, request_seq, command, "")
- known_request = False
-
- if not success:
- body.cause = request
-
- response = Response(channel, seq, request, body, json=message_dict)
-
- with channel:
- request.response = response
- request._enqueue_response_handlers()
-
- if known_request:
- return response
- else:
- raise response.isnt_valid(
- "request_seq={0} does not match any known request", request_seq
- )
-
-
-class Disconnect(Message):
- """A dummy message used to represent disconnect. It's always the last message
- received from any channel.
- """
-
- def __init__(self, channel):
- super().__init__(channel, None)
-
- def describe(self):
- return f"disconnect from {self.channel}"
-
-
-class MessageHandlingError(Exception):
- """Indicates that a message couldn't be handled for some reason.
-
- If the reason is a contract violation - i.e. the message that was handled did not
- conform to the protocol specification - InvalidMessageError, which is a subclass,
- should be used instead.
-
- If any message handler raises an exception not derived from this class, it will
- escape the message loop unhandled, and terminate the process.
-
- If any message handler raises this exception, but applies_to(message) is False, it
- is treated as if it was a generic exception, as described above. Thus, if a request
- handler issues another request of its own, and that one fails, the failure is not
- silently propagated. However, a request that is delegated via JsonMessageChannel.delegate()
- will also propagate failures back automatically. For manual propagation, catch the
- exception, and call exc.propagate().
-
- If any event handler raises this exception, and applies_to(event) is True, the
- exception is silently swallowed by the message loop.
-
- If any request handler raises this exception, and applies_to(request) is True, the
- exception is silently swallowed by the message loop, and a failure response is sent
- with "message" set to str(reason).
-
- Note that, while errors are not logged when they're swallowed by the message loop,
- by that time they have already been logged by their __init__ (when instantiated).
- """
-
- def __init__(self, reason, cause=None, silent=False):
- """Creates a new instance of this class, and immediately logs the exception.
-
- Message handling errors are logged immediately unless silent=True, so that the
- precise context in which they occurred can be determined from the surrounding
- log entries.
- """
-
- self.reason = reason
- """Why it couldn't be handled. This can be any object, but usually it's either
- str or Exception.
- """
-
- assert cause is None or isinstance(cause, Message)
- self.cause = cause
- """The Message object for the message that couldn't be handled. For responses
- to unknown requests, this is a synthetic Request.
- """
-
- if not silent:
- try:
- raise self
- except MessageHandlingError:
- log.swallow_exception()
-
- def __hash__(self):
- return hash((self.reason, id(self.cause)))
-
- def __eq__(self, other):
- if not isinstance(other, MessageHandlingError):
- return NotImplemented
- if type(self) is not type(other):
- return NotImplemented
- if self.reason != other.reason:
- return False
- if self.cause is not None and other.cause is not None:
- if self.cause.seq != other.cause.seq:
- return False
- return True
-
- def __ne__(self, other):
- return not self == other
-
- def __str__(self):
- return str(self.reason)
-
- def __repr__(self):
- s = type(self).__name__
- if self.cause is None:
- s += f"reason={self.reason!r})"
- else:
- s += f"channel={self.cause.channel.name!r}, cause={self.cause.seq!r}, reason={self.reason!r})"
- return s
-
- def applies_to(self, message):
- """Whether this MessageHandlingError can be treated as a reason why the
- handling of message failed.
-
- If self.cause is None, this is always true.
-
- If self.cause is not None, this is only true if cause is message.
- """
- return self.cause is None or self.cause is message
-
- def propagate(self, new_cause):
- """Propagates this error, raising a new instance of the same class with the
- same reason, but a different cause.
- """
- raise type(self)(self.reason, new_cause, silent=True)
-
-
-class InvalidMessageError(MessageHandlingError):
- """Indicates that an incoming message did not follow the protocol specification -
- for example, it was missing properties that are required, or the message itself
- is not allowed in the current state.
-
- Raised by MessageDict in lieu of KeyError for missing keys.
- """
-
- PREFIX = "Invalid message: "
- """Automatically prepended to the "message" property in JSON responses, when the
- handler raises InvalidMessageError.
-
- If a failed response has "message" property that starts with this prefix, it is
- reported as InvalidMessageError rather than MessageHandlingError.
- """
-
- def __str__(self):
- return InvalidMessageError.PREFIX + str(self.reason)
-
-
-class JsonMessageChannel(object):
- """Implements a JSON message channel on top of a raw JSON message stream, with
- support for DAP requests, responses, and events.
-
- The channel can be locked for exclusive use via the with-statement::
-
- with channel:
- channel.send_request(...)
- # No interleaving messages can be sent here from other threads.
- channel.send_event(...)
- """
-
- def __init__(self, stream, handlers=None, name=None):
- self.stream = stream
- self.handlers = handlers
- self.name = name if name is not None else stream.name
- self.started = False
- self._lock = threading.RLock()
- self._closed = False
- self._seq_iter = itertools.count(1)
- self._sent_requests = {} # {seq: Request}
- self._handler_queue = [] # [(what, handler)]
- self._handlers_enqueued = threading.Condition(self._lock)
- self._handler_thread = None
- self._parser_thread = None
-
- def __str__(self):
- return self.name
-
- def __repr__(self):
- return f"{type(self).__name__}({self.name!r})"
-
- def __enter__(self):
- self._lock.acquire()
- return self
-
- def __exit__(self, exc_type, exc_value, exc_tb):
- self._lock.release()
-
- def close(self):
- """Closes the underlying stream.
-
- This does not immediately terminate any handlers that are already executing,
- but they will be unable to respond. No new request or event handlers will
- execute after this method is called, even for messages that have already been
- received. However, response handlers will continue to execute for any request
- that is still pending, as will any handlers registered via on_response().
- """
- with self:
- if not self._closed:
- self._closed = True
- self.stream.close()
-
- def start(self):
- """Starts a message loop which parses incoming messages and invokes handlers
- for them on a background thread, until the channel is closed.
-
- Incoming messages, including responses to requests, will not be processed at
- all until this is invoked.
- """
-
- assert not self.started
- self.started = True
-
- self._parser_thread = threading.Thread(
- target=self._parse_incoming_messages, name=f"{self} message parser"
- )
-
- hide_thread_from_debugger(self._parser_thread)
- self._parser_thread.daemon = True
- self._parser_thread.start()
-
- def wait(self):
- """Waits for the message loop to terminate, and for all enqueued Response
- message handlers to finish executing.
- """
- parser_thread = self._parser_thread
- try:
- if parser_thread is not None:
- parser_thread.join()
- except AssertionError:
- log.debug("Handled error joining parser thread.")
- try:
- handler_thread = self._handler_thread
- if handler_thread is not None:
- handler_thread.join()
- except AssertionError:
- log.debug("Handled error joining handler thread.")
-
- # Order of keys for _prettify() - follows the order of properties in
- # https://microsoft.github.io/debug-adapter-protocol/specification
- _prettify_order = (
- "seq",
- "type",
- "request_seq",
- "success",
- "command",
- "event",
- "message",
- "arguments",
- "body",
- "error",
- )
-
- def _prettify(self, message_dict):
- """Reorders items in a MessageDict such that it is more readable."""
- for key in self._prettify_order:
- if key not in message_dict:
- continue
- value = message_dict[key]
- del message_dict[key]
- message_dict[key] = value
-
- @contextlib.contextmanager
- def _send_message(self, message):
- """Sends a new message to the other party.
-
- Generates a new sequence number for the message, and provides it to the
- caller before the message is sent, using the context manager protocol::
-
- with send_message(...) as seq:
- # The message hasn't been sent yet.
- ...
- # Now the message has been sent.
-
- Safe to call concurrently for the same channel from different threads.
- """
-
- assert "seq" not in message
- with self:
- seq = next(self._seq_iter)
-
- message = MessageDict(None, message)
- message["seq"] = seq
- self._prettify(message)
-
- with self:
- yield seq
- self.stream.write_json(message)
-
- def send_request(self, command, arguments=None, on_before_send=None):
- """Sends a new request, and returns the OutgoingRequest object for it.
-
- If arguments is None or {}, "arguments" will be omitted in JSON.
-
- If on_before_send is not None, invokes on_before_send() with the request
- object as the sole argument, before the request actually gets sent.
-
- Does not wait for response - use OutgoingRequest.wait_for_response().
-
- Safe to call concurrently for the same channel from different threads.
- """
-
- d = {"type": "request", "command": command}
- if arguments is not None and arguments != {}:
- d["arguments"] = arguments
-
- with self._send_message(d) as seq:
- request = OutgoingRequest(self, seq, command, arguments)
- if on_before_send is not None:
- on_before_send(request)
- self._sent_requests[seq] = request
- return request
-
- def send_event(self, event, body=None):
- """Sends a new event.
-
- If body is None or {}, "body" will be omitted in JSON.
-
- Safe to call concurrently for the same channel from different threads.
- """
-
- d = {"type": "event", "event": event}
- if body is not None and body != {}:
- d["body"] = body
-
- with self._send_message(d):
- pass
-
- def request(self, *args, **kwargs):
- """Same as send_request(...).wait_for_response()"""
- return self.send_request(*args, **kwargs).wait_for_response()
-
- def propagate(self, message):
- """Sends a new message with the same type and payload.
-
- If it was a request, returns the new OutgoingRequest object for it.
- """
- assert message.is_request() or message.is_event()
- if message.is_request():
- return self.send_request(message.command, message.arguments)
- else:
- self.send_event(message.event, message.body)
-
- def delegate(self, message):
- """Like propagate(message).wait_for_response(), but will also propagate
- any resulting MessageHandlingError back.
- """
- try:
- result = self.propagate(message)
- if result.is_request():
- result = result.wait_for_response()
- return result
- except MessageHandlingError as exc:
- exc.propagate(message)
-
- def _parse_incoming_messages(self):
- log.debug("Starting message loop for channel {0}", self)
- try:
- while True:
- self._parse_incoming_message()
-
- except NoMoreMessages as exc:
- log.debug("Exiting message loop for channel {0}: {1}", self, exc)
- with self:
- # Generate dummy responses for all outstanding requests.
- err_message = str(exc)
-
- # Response._parse() will remove items from _sent_requests, so
- # make a snapshot before iterating.
- sent_requests = list(self._sent_requests.values())
-
- for request in sent_requests:
- response_json = MessageDict(
- None,
- {
- "seq": -1,
- "request_seq": request.seq,
- "command": request.command,
- "success": False,
- "message": err_message,
- },
- )
- Response._parse(self, response_json, body=exc)
- assert not len(self._sent_requests)
-
- self._enqueue_handlers(Disconnect(self), self._handle_disconnect)
- self.close()
-
- _message_parsers = {
- "event": Event._parse,
- "request": Request._parse,
- "response": Response._parse,
- }
-
- def _parse_incoming_message(self):
- """Reads incoming messages, parses them, and puts handlers into the queue
- for _run_handlers() to invoke, until the channel is closed.
- """
-
- # Set up a dedicated decoder for this message, to create MessageDict instances
- # for all JSON objects, and track them so that they can be later wired up to
- # the Message they belong to, once it is instantiated.
- def object_hook(d):
- d = MessageDict(None, d)
- if "seq" in d:
- self._prettify(d)
- d.associate_with = associate_with
- message_dicts.append(d)
- return d
-
- # A hack to work around circular dependency between messages, and instances of
- # MessageDict in their payload. We need to set message for all of them, but it
- # cannot be done until the actual Message is created - which happens after the
- # dicts are created during deserialization.
- #
- # So, upon deserialization, every dict in the message payload gets a method
- # that can be called to set MessageDict.message for *all* dicts belonging to
- # that message. This method can then be invoked on the top-level dict by the
- # parser, after it has parsed enough of the dict to create the appropriate
- # instance of Event, Request, or Response for this message.
- def associate_with(message):
- for d in message_dicts:
- d.message = message
- del d.associate_with
-
- message_dicts = []
- decoder = self.stream.json_decoder_factory(object_hook=object_hook)
- message_dict = self.stream.read_json(decoder)
- assert isinstance(message_dict, MessageDict) # make sure stream used decoder
-
- msg_type = message_dict("type", json.enum("event", "request", "response"))
- parser = self._message_parsers[msg_type]
- try:
- parser(self, message_dict)
- except InvalidMessageError as exc:
- log.error(
- "Failed to parse message in channel {0}: {1} in:\n{2}",
- self,
- str(exc),
- json.repr(message_dict),
- )
- except Exception as exc:
- if isinstance(exc, NoMoreMessages) and exc.stream is self.stream:
- raise
- log.swallow_exception(
- "Fatal error in channel {0} while parsing:\n{1}",
- self,
- json.repr(message_dict),
- )
- os._exit(1)
-
- def _enqueue_handlers(self, what, *handlers):
- """Enqueues handlers for _run_handlers() to run.
-
- `what` is the Message being handled, and is used for logging purposes.
-
- If the background thread with _run_handlers() isn't running yet, starts it.
- """
-
- with self:
- self._handler_queue.extend((what, handler) for handler in handlers)
- self._handlers_enqueued.notify_all()
-
- # If there is anything to handle, but there's no handler thread yet,
- # spin it up. This will normally happen only once, on the first call
- # to _enqueue_handlers(), and that thread will run all the handlers
- # for parsed messages. However, this can also happen if somebody calls
- # Request.on_response() - possibly concurrently from multiple threads -
- # after the channel has already been closed, and the initial handler
- # thread has exited. In this case, we spin up a new thread just to run
- # the enqueued response handlers, and it will exit as soon as it's out
- # of handlers to run.
- if len(self._handler_queue) and self._handler_thread is None:
- self._handler_thread = threading.Thread(
- target=self._run_handlers,
- name=f"{self} message handler",
- )
- hide_thread_from_debugger(self._handler_thread)
- self._handler_thread.start()
-
- def _run_handlers(self):
- """Runs enqueued handlers until the channel is closed, or until the handler
- queue is empty once the channel is closed.
- """
-
- while True:
- with self:
- closed = self._closed
- if closed:
- # Wait for the parser thread to wrap up and enqueue any remaining
- # handlers, if it is still running.
- self._parser_thread.join()
- # From this point on, _enqueue_handlers() can only get called
- # from Request.on_response().
-
- with self:
- if not closed and not len(self._handler_queue):
- # Wait for something to process.
- self._handlers_enqueued.wait()
-
- # Make a snapshot before releasing the lock.
- handlers = self._handler_queue[:]
- del self._handler_queue[:]
-
- if closed and not len(handlers):
- # Nothing to process, channel is closed, and parser thread is
- # not running anymore - time to quit! If Request.on_response()
- # needs to call _enqueue_handlers() later, it will spin up
- # a new handler thread.
- self._handler_thread = None
- return
-
- for what, handler in handlers:
- # If the channel is closed, we don't want to process any more events
- # or requests - only responses and the final disconnect handler. This
- # is to guarantee that if a handler calls close() on its own channel,
- # the corresponding request or event is the last thing to be processed.
- if closed and getattr(handler, "__func__", None) in (Event._handle, Request._handle):
- continue
-
- with log.prefixed("/handling {0}/\n", what.describe()):
- try:
- handler()
- except Exception:
- # It's already logged by the handler, so just fail fast.
- self.close()
- os._exit(1)
-
- def _get_handler_for(self, type, name):
- """Returns the handler for a message of a given type."""
-
- with self:
- handlers = self.handlers
-
- for handler_name in (name + "_" + type, type):
- try:
- return getattr(handlers, handler_name)
- except AttributeError:
- continue
-
- raise AttributeError(
- "handler object {0} for channel {1} has no handler for {2} {3!r}".format(
- util.srcnameof(handlers),
- self,
- type,
- name,
- )
- )
-
- def _handle_disconnect(self):
- handler = getattr(self.handlers, "disconnect", lambda: None)
- try:
- handler()
- except Exception:
- log.reraise_exception(
- "Handler {0}\ncouldn't handle disconnect from {1}:",
- util.srcnameof(handler),
- self,
- )
-
-
-class MessageHandlers(object):
- """A simple delegating message handlers object for use with JsonMessageChannel.
- For every argument provided, the object gets an attribute with the corresponding
- name and value.
- """
-
- def __init__(self, **kwargs):
- for name, func in kwargs.items():
- setattr(self, name, func)
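
The handler lookup in _get_handler_for() resolves attributes named "<command>_request" and "<event>_event" before falling back to the generic "request" / "event" attributes, so wiring a channel up only takes a MessageHandlers instance and a JSON stream. A minimal usage sketch, assuming a `stream` object that provides the read_json/write_json/close interface the channel expects (no such object is defined in this file), and using DAP-style command and event names purely for illustration:

    def initialize_request(req):
        # Specific handler for the "initialize" request; the returned dict
        # becomes the body of the success response.
        return {"capabilities": {}}

    def stopped_event(evt):
        # Specific handler for the "stopped" event; evt.body is a MessageDict.
        print("stopped:", evt.body)

    def request(req):
        # Generic fallback: fail any request that has no specific handler.
        raise req.cant_handle("Unsupported command {0!r}", req.command)

    handlers = MessageHandlers(
        initialize_request=initialize_request,
        stopped_event=stopped_event,
        request=request,
        event=lambda evt: None,                  # generic fallback for other events
        disconnect=lambda: print("channel disconnected"),
    )

    channel = JsonMessageChannel(stream, handlers, name="example")
    channel.start()
    threads = channel.request("threads")         # send_request(...).wait_for_response()
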
diff --git a/spaces/Syrinx/WebtoonPlotGenerator/README.md b/spaces/Syrinx/WebtoonPlotGenerator/README.md
deleted file mode 100644
index e12433bc072d2c7fbcba6185e0699ad37ddd33d8..0000000000000000000000000000000000000000
--- a/spaces/Syrinx/WebtoonPlotGenerator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: WebtoonPlotGenerator
-emoji: 🏃
-colorFrom: purple
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TEnngal/bingo/Dockerfile b/spaces/TEnngal/bingo/Dockerfile
deleted file mode 100644
index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000
--- a/spaces/TEnngal/bingo/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-FROM node:18
-
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME
-
-# Switch to the "user" user
-USER user
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY --chown=user package*.json $HOME/app/
-
-RUN npm install
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app/
-
-RUN npm run build
-
-ENV PORT 7860
-EXPOSE 7860
-
-CMD npm start
diff --git a/spaces/TNK21/Story_Generator/app.py b/spaces/TNK21/Story_Generator/app.py
deleted file mode 100644
index 047d8bd897c5bdd630602ded85e605df08172e56..0000000000000000000000000000000000000000
--- a/spaces/TNK21/Story_Generator/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-# Load the text generation model
-text_generator = pipeline("text-generation", model="gpt2")
-
-# Define the function for story generation
-def generate_story(prompt, word_count):
- # Calculate the maximum length based on word count
- max_length = int(word_count) + len(prompt.split())
- # Generate a story based on the user's prompt and word count
- generated_text = text_generator(prompt, max_length=max_length, num_return_sequences=1)[0]['generated_text']
- return generated_text
-
-# Define example inputs for the Gradio interface
-example_inputs = [
- ["Once upon a time, in a magical forest, there was a curious rabbit named Oliver.", 100],
- ["Amidst the hustle and bustle of a busy city, there lived a lonely street musician.", 150],
- ["On a distant planet, explorers discovered an ancient alien artifact buried in the sand.", 200],
- ["Hidden in the attic of an old house, a forgotten journal revealed a family secret.", 250],
- ["In a futuristic world, a brilliant scientist invented a time-traveling device.", 300],
- ["Deep in the ocean, an underwater explorer encountered a mysterious and ancient creature.", 350]
-]
-
-# Create a Gradio interface with examples and a word count slider
-iface = gr.Interface(
- fn=generate_story,
- inputs=[
- gr.components.Textbox(label="Prompt"),
- gr.components.Slider(minimum=50, maximum=500, value=100, label="Word Count")
- ],
- outputs="text",
- title="Story Generator with Word Count",
- description="Enter a prompt and select the word count to generate a story.",
- examples=example_inputs
-)
-
-# Launch the interface
-iface.launch()
\ No newline at end of file
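
Since generate_story() is just a thin wrapper over the GPT-2 text-generation pipeline, it can also be exercised without launching the Gradio UI; note that max_length counts GPT-2 tokens, so the word-count mapping above is only approximate. A minimal sketch:

    story = generate_story("Once upon a time, in a magical forest,", word_count=80)
    print(story)
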
diff --git a/spaces/Tanjiro2002/Government_order/README.md b/spaces/Tanjiro2002/Government_order/README.md
deleted file mode 100644
index 260196aecc0ca0fe03d768c727225a21a27e2994..0000000000000000000000000000000000000000
--- a/spaces/Tanjiro2002/Government_order/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Government Order
-emoji: 😻
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Vijaykumarthummapala/Mygenaichatbot/README.md b/spaces/Vijaykumarthummapala/Mygenaichatbot/README.md
deleted file mode 100644
index 1d6c16a8cf27cd5f25263a518fbf41894b9f76cd..0000000000000000000000000000000000000000
--- a/spaces/Vijaykumarthummapala/Mygenaichatbot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mygenaichatbot
-emoji: ⚡
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/VishnuTransformer/TrOCR_Handwritten/app.py b/spaces/VishnuTransformer/TrOCR_Handwritten/app.py
deleted file mode 100644
index 2b9e7c92c324c413038b55ead27392ca667650ea..0000000000000000000000000000000000000000
--- a/spaces/VishnuTransformer/TrOCR_Handwritten/app.py
+++ /dev/null
@@ -1,128 +0,0 @@
-from transformers import TrOCRProcessor
-
-from transformers import VisionEncoderDecoderModel
-
-processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
-
-model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
-
-def extract_text(image):
-
- # calling the processor is equivalent to calling the feature extractor
- pixel_values = processor(image, return_tensors="pt").pixel_values
-
- generated_ids = model.generate(pixel_values)
-
- generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
-
- return generated_text
-
-
-import cv2
-import matplotlib.pyplot as plt
-import numpy as np
-import warnings
-
-warnings.filterwarnings("ignore")
-
-def hand_written(image_raw):
-
- image_raw = np.array(image_raw)
-
- image = cv2.cvtColor(image_raw,cv2.COLOR_BGR2GRAY)
-
- image = cv2.GaussianBlur(image,(5,5),0)
-
- image = cv2.threshold(image,200,255,cv2.THRESH_BINARY_INV)[1]
-
- kernal = cv2.getStructuringElement(cv2.MORPH_RECT,(10,1))
-
- image = cv2.dilate(image,kernal,iterations=5)
-
- contours,hier = cv2.findContours(image,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
-
- all_box = []
-
- for i in contours:
-
- bbox = cv2.boundingRect(i)
-
- all_box.append(bbox)
-
-
- # Calculate maximum rectangle height
- c = np.array(all_box)
- max_height = np.max(c[::, 3])
-
- # Sort the contours by y-value
- by_y = sorted(all_box, key=lambda x: x[1]) # y values
-
- line_y = by_y[0][1] # first y
- line = 1
- by_line = []
-
- # Assign a line number to each contour
- for x, y, w, h in by_y:
- if y > line_y + max_height:
- line_y = y
- line += 1
-
- by_line.append((line, x, y, w, h))
-
- # This will now sort automatically by line then by x
- contours_sorted = [(x, y, w, h) for line, x, y, w, h in sorted(by_line)]
-
- text = ""
-
- for line in contours_sorted:
-
- x,y,w,h = line
-
- cropped_image = image_raw[y:y+h,x:x+w]
-
- try:
-
- extracted = extract_text(cropped_image)
-
- if not extracted == "0 0" and not extracted == "0 1":
-
- # print("Extracted : ",extracted)
- # print("----------------------------------")
-
- text = "\n".join([text,extracted])
-
- except Exception:
-
- print("skipping")
-
- # plt.figure(figsize=(10,8))
-
- # plt.imshow(cropped_image)
-
- pass
-
- return text
-
-import gradio as gr
-from transformers import TrOCRProcessor, VisionEncoderDecoderModel
-from PIL import Image
-
-
-
-# load image examples from the IAM database
-
-
-title = "Interactive demo: Multi-Line-TrOCR"
-description = "Multi-line Handwritten Recognizer"
-article = "
"
-examples =[["image_0.png"]]
-
-iface = gr.Interface(fn=hand_written,
- inputs=gr.inputs.Image(type="pil"),
- outputs=gr.outputs.Textbox(),
- title=title,
- description=description,
- article=article,
- examples=examples)
-
-iface.launch(debug=True,share=True)
\ No newline at end of file
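
Because hand_written() only layers OpenCV line segmentation on top of extract_text(), the TrOCR step can be sanity-checked on a single-line crop in isolation. A minimal sketch, assuming a hypothetical line.png containing one line of handwriting; the model name and the processor/generate/batch_decode calls are the same ones used above:

    from PIL import Image
    from transformers import TrOCRProcessor, VisionEncoderDecoderModel

    processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
    model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

    # TrOCR is a line-level recognizer, which is why the app above first
    # segments the page into lines with dilation + contour sorting.
    image = Image.open("line.png").convert("RGB")
    pixel_values = processor(image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
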
diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/blip2.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/blip2.py
deleted file mode 100644
index ee4a9dc3e2544033139841032ebb7b35168ac8fa..0000000000000000000000000000000000000000
--- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/blip2.py
+++ /dev/null
@@ -1,221 +0,0 @@
-"""
- Copyright (c) 2023, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-import contextlib
-import logging
-import os
-import time
-import datetime
-
-import torch
-import torch.nn as nn
-import torch.distributed as dist
-import torch.nn.functional as F
-
-import minigpt4.common.dist_utils as dist_utils
-from minigpt4.common.dist_utils import download_cached_file
-from minigpt4.common.utils import is_url
-from minigpt4.common.logger import MetricLogger
-from minigpt4.models.base_model import BaseModel
-from minigpt4.models.Qformer import BertConfig, BertLMHeadModel
-from minigpt4.models.eva_vit import create_eva_vit_g
-from transformers import BertTokenizer
-
-
-class Blip2Base(BaseModel):
- @classmethod
- def init_tokenizer(cls):
- tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
- tokenizer.add_special_tokens({"bos_token": "[DEC]"})
- return tokenizer
-
- def maybe_autocast(self, dtype=torch.float16):
- # if on cpu, don't use autocast
- # if on gpu, use autocast with dtype if provided, otherwise use torch.float16
- enable_autocast = self.device != torch.device("cpu")
-
- if enable_autocast:
- return torch.cuda.amp.autocast(dtype=dtype)
- else:
- return contextlib.nullcontext()
-
- @classmethod
- def init_Qformer(cls, num_query_token, vision_width, cross_attention_freq=2):
- encoder_config = BertConfig.from_pretrained("bert-base-uncased")
- encoder_config.encoder_width = vision_width
- # insert cross-attention layer every other block
- encoder_config.add_cross_attention = True
- encoder_config.cross_attention_freq = cross_attention_freq
- encoder_config.query_length = num_query_token
- Qformer = BertLMHeadModel(config=encoder_config)
- query_tokens = nn.Parameter(
- torch.zeros(1, num_query_token, encoder_config.hidden_size)
- )
- query_tokens.data.normal_(mean=0.0, std=encoder_config.initializer_range)
- return Qformer, query_tokens
-
- @classmethod
- def init_vision_encoder(
- cls, model_name, img_size, drop_path_rate, use_grad_checkpoint, precision
- ):
- assert model_name == "eva_clip_g", "vit model must be eva_clip_g for current version of MiniGPT-4"
- visual_encoder = create_eva_vit_g(
- img_size, drop_path_rate, use_grad_checkpoint, precision
- )
-
- ln_vision = LayerNorm(visual_encoder.num_features)
- return visual_encoder, ln_vision
-
- def load_from_pretrained(self, url_or_filename):
- if is_url(url_or_filename):
- cached_file = download_cached_file(
- url_or_filename, check_hash=False, progress=True
- )
- checkpoint = torch.load(cached_file, map_location="cpu")
- elif os.path.isfile(url_or_filename):
- checkpoint = torch.load(url_or_filename, map_location="cpu")
- else:
- raise RuntimeError("checkpoint url or path is invalid")
-
- state_dict = checkpoint["model"]
-
- msg = self.load_state_dict(state_dict, strict=False)
-
- # logging.info("Missing keys {}".format(msg.missing_keys))
- logging.info("load checkpoint from %s" % url_or_filename)
-
- return msg
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-class LayerNorm(nn.LayerNorm):
- """Subclass torch's LayerNorm to handle fp16."""
-
- def forward(self, x: torch.Tensor):
- orig_type = x.dtype
- ret = super().forward(x.type(torch.float32))
- return ret.type(orig_type)
-
-
-def compute_sim_matrix(model, data_loader, **kwargs):
- k_test = kwargs.pop("k_test")
-
- metric_logger = MetricLogger(delimiter=" ")
- header = "Evaluation:"
-
- logging.info("Computing features for evaluation...")
- start_time = time.time()
-
- texts = data_loader.dataset.text
- num_text = len(texts)
- text_bs = 256
- text_ids = []
- text_embeds = []
- text_atts = []
- for i in range(0, num_text, text_bs):
- text = texts[i : min(num_text, i + text_bs)]
- text_input = model.tokenizer(
- text,
- padding="max_length",
- truncation=True,
- max_length=35,
- return_tensors="pt",
- ).to(model.device)
- text_feat = model.forward_text(text_input)
- text_embed = F.normalize(model.text_proj(text_feat))
- text_embeds.append(text_embed)
- text_ids.append(text_input.input_ids)
- text_atts.append(text_input.attention_mask)
-
- text_embeds = torch.cat(text_embeds, dim=0)
- text_ids = torch.cat(text_ids, dim=0)
- text_atts = torch.cat(text_atts, dim=0)
-
- vit_feats = []
- image_embeds = []
- for samples in data_loader:
- image = samples["image"]
-
- image = image.to(model.device)
- image_feat, vit_feat = model.forward_image(image)
- image_embed = model.vision_proj(image_feat)
- image_embed = F.normalize(image_embed, dim=-1)
-
- vit_feats.append(vit_feat.cpu())
- image_embeds.append(image_embed)
-
- vit_feats = torch.cat(vit_feats, dim=0)
- image_embeds = torch.cat(image_embeds, dim=0)
-
- sims_matrix = []
- for image_embed in image_embeds:
- sim_q2t = image_embed @ text_embeds.t()
- sim_i2t, _ = sim_q2t.max(0)
- sims_matrix.append(sim_i2t)
- sims_matrix = torch.stack(sims_matrix, dim=0)
-
- score_matrix_i2t = torch.full(
- (len(data_loader.dataset.image), len(texts)), -100.0
- ).to(model.device)
-
- num_tasks = dist_utils.get_world_size()
- rank = dist_utils.get_rank()
- step = sims_matrix.size(0) // num_tasks + 1
- start = rank * step
- end = min(sims_matrix.size(0), start + step)
-
- for i, sims in enumerate(
- metric_logger.log_every(sims_matrix[start:end], 50, header)
- ):
- topk_sim, topk_idx = sims.topk(k=k_test, dim=0)
- image_inputs = vit_feats[start + i].repeat(k_test, 1, 1).to(model.device)
- score = model.compute_itm(
- image_inputs=image_inputs,
- text_ids=text_ids[topk_idx],
- text_atts=text_atts[topk_idx],
- ).float()
- score_matrix_i2t[start + i, topk_idx] = score + topk_sim
-
- sims_matrix = sims_matrix.t()
- score_matrix_t2i = torch.full(
- (len(texts), len(data_loader.dataset.image)), -100.0
- ).to(model.device)
-
- step = sims_matrix.size(0) // num_tasks + 1
- start = rank * step
- end = min(sims_matrix.size(0), start + step)
-
- for i, sims in enumerate(
- metric_logger.log_every(sims_matrix[start:end], 50, header)
- ):
- topk_sim, topk_idx = sims.topk(k=k_test, dim=0)
- image_inputs = vit_feats[topk_idx.cpu()].to(model.device)
- score = model.compute_itm(
- image_inputs=image_inputs,
- text_ids=text_ids[start + i].repeat(k_test, 1),
- text_atts=text_atts[start + i].repeat(k_test, 1),
- ).float()
- score_matrix_t2i[start + i, topk_idx] = score + topk_sim
-
- if dist_utils.is_dist_avail_and_initialized():
- dist.barrier()
- torch.distributed.all_reduce(
- score_matrix_i2t, op=torch.distributed.ReduceOp.SUM
- )
- torch.distributed.all_reduce(
- score_matrix_t2i, op=torch.distributed.ReduceOp.SUM
- )
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- logging.info("Evaluation time {}".format(total_time_str))
-
- return score_matrix_i2t.cpu().numpy(), score_matrix_t2i.cpu().numpy()
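
The class methods on Blip2Base can be exercised on their own, which is a quick way to check shapes before wiring up a full checkpoint. A minimal sketch, assuming the surrounding minigpt4 package is importable and bert-base-uncased can be downloaded; vision_width=1408 is an assumption matching EVA ViT-g features, not a value taken from this file:

    # BERT tokenizer with "[DEC]" registered as the BOS token, as in init_tokenizer().
    tokenizer = Blip2Base.init_tokenizer()
    print(tokenizer.bos_token)          # "[DEC]"

    # Q-Former with 32 learned query tokens cross-attending to 1408-dim visual features.
    Qformer, query_tokens = Blip2Base.init_Qformer(
        num_query_token=32, vision_width=1408, cross_attention_freq=2
    )
    print(query_tokens.shape)           # torch.Size([1, 32, hidden_size])
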
diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/onnx_inference.py b/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/onnx_inference.py
deleted file mode 100644
index 322572820dfc75d789e40ce5bbd9415066a03979..0000000000000000000000000000000000000000
--- a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import onnxruntime
-import librosa
-import numpy as np
-import soundfile
-
-
-class ContentVec:
- def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
- print("load model(s) from {}".format(vec_path))
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def __call__(self, wav):
- return self.forward(wav)
-
- def forward(self, wav):
- feats = wav
- if feats.ndim == 2: # double channels
- feats = feats.mean(-1)
- assert feats.ndim == 1, feats.ndim
- feats = np.expand_dims(np.expand_dims(feats, 0), 0)
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)[0]
- return logits.transpose(0, 2, 1)
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs):
- if f0_predictor == "pm":
- from infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
- f0_predictor_object = PMF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "harvest":
- from infer_pack.modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor
-
- f0_predictor_object = HarvestF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "dio":
- from infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
- f0_predictor_object = DioF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-
-class OnnxRVC:
- def __init__(
- self,
- model_path,
- sr=40000,
- hop_size=512,
- vec_path="vec-768-layer-12",
- device="cpu",
- ):
- vec_path = f"pretrained/{vec_path}.onnx"
- self.vec_model = ContentVec(vec_path, device)
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(model_path, providers=providers)
- self.sampling_rate = sr
- self.hop_size = hop_size
-
- def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
- onnx_input = {
- self.model.get_inputs()[0].name: hubert,
- self.model.get_inputs()[1].name: hubert_length,
- self.model.get_inputs()[2].name: pitch,
- self.model.get_inputs()[3].name: pitchf,
- self.model.get_inputs()[4].name: ds,
- self.model.get_inputs()[5].name: rnd,
- }
- return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
- def inference(
- self,
- raw_path,
- sid,
- f0_method="dio",
- f0_up_key=0,
- pad_time=0.5,
- cr_threshold=0.02,
- ):
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_predictor = get_f0_predictor(
- f0_method,
- hop_length=self.hop_size,
- sampling_rate=self.sampling_rate,
- threshold=cr_threshold,
- )
- wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
- org_length = len(wav)
- if org_length / sr > 50.0:
- raise RuntimeError("Reached Max Length")
-
- wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
- wav16k = wav16k
-
- hubert = self.vec_model(wav16k)
- hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
- hubert_length = hubert.shape[1]
-
- pitchf = f0_predictor.compute_f0(wav, hubert_length)
- pitchf = pitchf * 2 ** (f0_up_key / 12)
- pitch = pitchf.copy()
- f0_mel = 1127 * np.log(1 + pitch / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- pitch = np.rint(f0_mel).astype(np.int64)
-
- pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
- pitch = pitch.reshape(1, len(pitch))
- ds = np.array([sid]).astype(np.int64)
-
- rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
- hubert_length = np.array([hubert_length]).astype(np.int64)
-
- out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
- out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
- return out_wav[0:org_length]
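
End to end, OnnxRVC loads the ContentVec feature extractor from pretrained/<vec_path>.onnx, picks an F0 predictor by name, and returns int16 audio at the model's sampling rate. A minimal sketch of driving it, where my_voice.onnx and input.wav are hypothetical paths and the ContentVec ONNX model is assumed to be present under pretrained/:

    import soundfile

    rvc = OnnxRVC(
        "my_voice.onnx",                 # hypothetical exported RVC voice model
        sr=40000,
        hop_size=512,
        vec_path="vec-768-layer-12",     # resolved to pretrained/vec-768-layer-12.onnx
        device="cpu",
    )

    # inference() rejects inputs longer than 50 seconds ("Reached Max Length").
    audio = rvc.inference("input.wav", sid=0, f0_method="dio", f0_up_key=0)
    soundfile.write("output.wav", audio, rvc.sampling_rate)
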
diff --git a/spaces/XzJosh/Azusa-Bert-VITS2/text/chinese.py b/spaces/XzJosh/Azusa-Bert-VITS2/text/chinese.py
deleted file mode 100644
index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Azusa-Bert-VITS2/text/chinese.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import os
-import re
-
-import cn2an
-from pypinyin import lazy_pinyin, Style
-
-from text import symbols
-from text.symbols import punctuation
-from text.tone_sandhi import ToneSandhi
-
-current_file_path = os.path.dirname(__file__)
-pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in
- open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()}
-
-import jieba.posseg as psg
-
-
-rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- '$': '.',
- '“': "'",
- '”': "'",
- '‘': "'",
- '’': "'",
- '(': "'",
- ')': "'",
- '(': "'",
- ')': "'",
- '《': "'",
- '》': "'",
- '【': "'",
- '】': "'",
- '[': "'",
- ']': "'",
- '—': "-",
- '~': "-",
- '~': "-",
- '「': "'",
- '」': "'",
-
-}
-
-tone_modifier = ToneSandhi()
-
-def replace_punctuation(text):
- text = text.replace("嗯", "恩").replace("呣","母")
- pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys()))
-
- replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
-
- replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text)
-
- return replaced_text
-
-def g2p(text):
- pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation))
- sentences = [i for i in re.split(pattern, text) if i.strip()!='']
- phones, tones, word2ph = _g2p(sentences)
- assert sum(word2ph) == len(phones)
- assert len(word2ph) == len(text)  # This assertion can fail for some inputs; wrap in try/except if needed.
- phones = ['_'] + phones + ["_"]
- tones = [0] + tones + [0]
- word2ph = [1] + word2ph + [1]
- return phones, tones, word2ph
-
-
-def _get_initials_finals(word):
- initials = []
- finals = []
- orig_initials = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.INITIALS)
- orig_finals = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for c, v in zip(orig_initials, orig_finals):
- initials.append(c)
- finals.append(v)
- return initials, finals
-
-
-def _g2p(segments):
- phones_list = []
- tones_list = []
- word2ph = []
- for seg in segments:
- pinyins = []
- # Remove all English words from the sentence
- seg = re.sub('[a-zA-Z]+', '', seg)
- seg_cut = psg.lcut(seg)
- initials = []
- finals = []
- seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
- for word, pos in seg_cut:
- if pos == 'eng':
- continue
- sub_initials, sub_finals = _get_initials_finals(word)
- sub_finals = tone_modifier.modified_tone(word, pos,
- sub_finals)
- initials.append(sub_initials)
- finals.append(sub_finals)
-
- # assert len(sub_initials) == len(sub_finals) == len(word)
- initials = sum(initials, [])
- finals = sum(finals, [])
- #
- for c, v in zip(initials, finals):
- raw_pinyin = c+v
- # NOTE: post process for pypinyin outputs
- # we discriminate i, ii and iii
- if c == v:
- assert c in punctuation
- phone = [c]
- tone = '0'
- word2ph.append(1)
- else:
- v_without_tone = v[:-1]
- tone = v[-1]
-
- pinyin = c+v_without_tone
- assert tone in '12345'
-
- if c:
- # syllable with an initial consonant
- v_rep_map = {
- "uei": 'ui',
- 'iou': 'iu',
- 'uen': 'un',
- }
- if v_without_tone in v_rep_map.keys():
- pinyin = c+v_rep_map[v_without_tone]
- else:
- # standalone syllable with no initial consonant
- pinyin_rep_map = {
- 'ing': 'ying',
- 'i': 'yi',
- 'in': 'yin',
- 'u': 'wu',
- }
- if pinyin in pinyin_rep_map.keys():
- pinyin = pinyin_rep_map[pinyin]
- else:
- single_rep_map = {
- 'v': 'yu',
- 'e': 'e',
- 'i': 'y',
- 'u': 'w',
- }
- if pinyin[0] in single_rep_map.keys():
- pinyin = single_rep_map[pinyin[0]]+pinyin[1:]
-
- assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
- phone = pinyin_to_symbol_map[pinyin].split(' ')
- word2ph.append(len(phone))
-
- phones_list += phone
- tones_list += [int(tone)] * len(phone)
- return phones_list, tones_list, word2ph
-
-
-
-def text_normalize(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- text = replace_punctuation(text)
- return text
-
-def get_bert_feature(text, word2ph):
- from text import chinese_bert
- return chinese_bert.get_bert_feature(text, word2ph)
-
-if __name__ == '__main__':
- from text.chinese_bert import get_bert_feature
- text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
- text = text_normalize(text)
- print(text)
- phones, tones, word2ph = g2p(text)
- bert = get_bert_feature(text, word2ph)
-
- print(phones, tones, word2ph, bert.shape)
-
-
-# # Example usage
-# text = "这是一个示例文本:,你好!这是一个测试...."
-# print(g2p_paddle(text)) # Output: 这是一个示例文本你好这是一个测试
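
Because text_normalize() rewrites Arabic numerals with cn2an and funnels punctuation through rep_map before g2p() ever runs, the normalization stage can be checked on its own. A minimal sketch, assuming cn2an, pypinyin and jieba are installed as the imports above require:

    raw = "2023年发布了3.5个版本!"
    print(text_normalize(raw))   # numerals become Chinese number words; "!" becomes "!"
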
diff --git a/spaces/XzJosh/Lumi-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/Lumi-Bert-VITS2/text/tone_sandhi.py
deleted file mode 100644
index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Lumi-Bert-VITS2/text/tone_sandhi.py
+++ /dev/null
@@ -1,351 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import lazy_pinyin
-from pypinyin import Style
-
-
-class ToneSandhi():
- def __init__(self):
- self.must_neural_tone_words = {
- '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
- '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
- '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
- '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
- '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
- '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
- '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
- '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
- '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
- '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
- '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
- '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
- '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
- '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
- '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
- '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
- '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
- '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
- '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
- '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实',
- '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头',
- '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼',
- '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数',
- '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气',
- '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈',
- '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方',
- '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴',
- '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦',
- '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝',
- '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹',
- '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息',
- '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤',
- '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家',
- '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故',
- '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨',
- '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅',
- '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱',
- '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱',
- '扫把', '惦记'
- }
- self.must_not_neural_tone_words = {
- "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎"
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
-
- # reduplicated words for n., v. and a., e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
- "n", "v", "a"
- } and word not in self.must_not_neural_tone_words:
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- # finals[-1] = finals[-1][:-1] + "5"
- elif len(word) > 1 and word[-1] in "们子" and pos in {
- "r", "n"
- } and word not in self.must_not_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
- # "个" used as a measure word
- elif (ge_idx >= 1 and
- (word[ge_idx - 1].isnumeric() or
- word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
- for i, word in enumerate(word_list):
- # conventional neural in Chinese
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i +
- 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]):
- return finals
- # "一" between reduplication words shold be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
- # "一" 后面如果是标点,还读一声
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword):]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[:-len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [
- finals[:len(word_list[0])], finals[len(word_list[0]):]
- ]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \
- finals_list[0][-1][-1] == "3":
-
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
- # split a four-character idiom into two two-character words
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
- # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, 'd'))
- last_word = ""
- return new_seg
-
- # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听"
- # function 2: merge single "一" and the word behind it
- # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # the first and the second words are all_tone_three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
- # if the previous word is a reduplication, do not merge, because reduplications need to go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # the last char of the first word and the first char of the second word are both tone three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
- # if the previous word is a reduplication, do not merge, because reduplications need to go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
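For reference, a minimal usage sketch of the class above, assuming jieba and pypinyin are installed; the sentence below and its exact output are illustrative, not taken from this repository:

import jieba.posseg as psg
from pypinyin import lazy_pinyin, Style

sandhi = ToneSandhi()
# Segment with POS tags, pre-merge the segments, then apply the sandhi rules per word.
seg = [(p.word, p.flag) for p in psg.cut("你们一起看一看这个豆腐吧")]
seg = sandhi.pre_merge_for_modify(seg)
for word, pos in seg:
    finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
    print(word, sandhi.modified_tone(word, pos, finals))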
diff --git a/spaces/Yiqin/ChatVID/model/fastchat/data/sample.py b/spaces/Yiqin/ChatVID/model/fastchat/data/sample.py
deleted file mode 100644
index b53df6a67d575e8a6e91261d5468dee193292eb2..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/fastchat/data/sample.py
+++ /dev/null
@@ -1,33 +0,0 @@
-"""
-Sample some conversations from a file.
-
-Usage: python3 -m fastchat.data.sample --in-file sharegpt.json --out-file sampled.json
-"""
-import argparse
-import json
-from typing import Dict, Sequence, Optional
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--in-file", type=str, required=True)
- parser.add_argument("--out-file", type=str, default="sampled.json")
- parser.add_argument("--begin", type=int, default=0)
- parser.add_argument("--end", type=int, default=100)
- parser.add_argument("--max-length", type=int, default=128)
- args = parser.parse_args()
-
- content = json.load(open(args.in_file, "r"))
- new_content = []
- for i in range(args.begin, args.end):
- sample = content[i]
- concat = ""
- for s in sample["conversations"]:
- concat += s["value"]
-
- if len(concat) > args.max_length:
- continue
-
- new_content.append(sample)
-
- json.dump(new_content, open(args.out_file, "w"), indent=2)
diff --git a/spaces/YuAnthony/Audio-Caption/processes/dataset_multiprocess.py b/spaces/YuAnthony/Audio-Caption/processes/dataset_multiprocess.py
deleted file mode 100644
index 9aade85e2926b382e6919113b879cdefa187a32f..0000000000000000000000000000000000000000
--- a/spaces/YuAnthony/Audio-Caption/processes/dataset_multiprocess.py
+++ /dev/null
@@ -1,306 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-import sys
-
-sys.path.append('..')
-
-from typing import MutableMapping, Any
-from datetime import datetime
-from pathlib import Path
-from functools import partial
-from itertools import chain
-
-import numpy as np
-from loguru import logger
-
-from tools.printing import init_loggers
-from tools.argument_parsing import get_argument_parser
-from tools.dataset_creation import get_annotations_files, \
- get_amount_of_file_in_dir, check_data_for_split, \
- create_split_data, create_lists_and_frequencies
-from tools.file_io import load_settings_file, load_yaml_file, \
- load_numpy_object, dump_numpy_object
-from tools.features_log_mel_bands import feature_extraction
-
-from concurrent.futures import ProcessPoolExecutor
-from multiprocessing import cpu_count
-from tqdm import tqdm
-
-executor = ProcessPoolExecutor(max_workers=cpu_count())
-
-__author__ = 'Konstantinos Drossos -- Tampere University'
-__docformat__ = 'reStructuredText'
-__all__ = ['create_dataset', 'extract_features']
-
-
-def create_dataset(settings_dataset: MutableMapping[str, Any],
- settings_dirs_and_files: MutableMapping[str, Any]) \
- -> None:
- """Creates the dataset.
-
- Gets the dictionary with the settings and creates
- the files of the dataset.
-
- :param settings_dataset: Settings to be used for dataset\
- creation.
- :type settings_dataset: dict
- :param settings_dirs_and_files: Settings to be used for\
- handling directories and\
- files.
- :type settings_dirs_and_files: dict
- """
- # Get logger
- inner_logger = logger.bind(
- indent=2, is_caption=False)
-
- # Get root dir
- dir_root = Path(settings_dirs_and_files[
- 'root_dirs']['data'])
-
- # Read the annotation files
- inner_logger.info('Reading annotations files')
- csv_dev, csv_eva = get_annotations_files(
- settings_ann=settings_dataset['annotations'],
- dir_ann=dir_root.joinpath(
- settings_dirs_and_files['dataset'][
- 'annotations_dir']))
- inner_logger.info('Done')
-
- # Get all captions
- inner_logger.info('Getting the captions')
- captions_development = [
- csv_field.get(
- settings_dataset['annotations'][
- 'captions_fields_prefix'].format(c_ind))
- for csv_field in csv_dev
- for c_ind in range(1, 6)]
- inner_logger.info('Done')
-
- # Create lists of indices and frequencies for words and\
- # characters.
- inner_logger.info('Creating and saving words and chars '
- 'lists and frequencies')
- words_list, chars_list = create_lists_and_frequencies(
- captions=captions_development, dir_root=dir_root,
- settings_ann=settings_dataset['annotations'],
- settings_cntr=settings_dirs_and_files['dataset'])
- inner_logger.info('Done')
-
- # Aux partial function for convenience.
- split_func = partial(
- create_split_data,
- words_list=words_list, chars_list=chars_list,
- settings_ann=settings_dataset['annotations'],
- settings_audio=settings_dataset['audio'],
- settings_output=settings_dirs_and_files['dataset'])
-
- settings_audio_dirs = settings_dirs_and_files[
- 'dataset']['audio_dirs']
-
- # For each data split (i.e. development and evaluation)
- futures = []
- for split_data in [(csv_dev, 'development')]:
- futures.append(executor.submit(
- partial(_split, split_data, split_func, settings_dataset, settings_dirs_and_files,
- inner_logger, dir_root, settings_audio_dirs)))
- [future.result() for future in tqdm(futures)]
-
- futures = []
- for split_data in [(csv_eva, 'evaluation')]:
- futures.append(executor.submit(
- partial(_split, split_data, split_func, settings_dataset, settings_dirs_and_files,
- inner_logger, dir_root, settings_audio_dirs)))
- [future.result() for future in tqdm(futures)]
-
-
-def _split(split_data, split_func, settings_dataset, settings_dirs_and_files,
- inner_logger, dir_root, settings_audio_dirs):
- # Get helper variables.
- split_name = split_data[1]
- split_csv = split_data[0]
-
- dir_split = dir_root.joinpath(
- settings_audio_dirs['output'],
- settings_audio_dirs[f'{split_name}'])
-
- dir_downloaded_audio = dir_root.joinpath(
- settings_audio_dirs['downloaded'],
- settings_audio_dirs[f'{split_name}'])
-
- # Create the data for the split.
- inner_logger.info(f'Creating the {split_name} '
- f'split data')
- split_func(split_csv, dir_split,
- dir_downloaded_audio)
- inner_logger.info('Done')
-
- # Count and print the amount of initial and resulting\
- # files.
- nb_files_audio = get_amount_of_file_in_dir(
- dir_downloaded_audio)
- nb_files_data = get_amount_of_file_in_dir(dir_split)
-
- inner_logger.info(f'Amount of {split_name} '
- f'audio files: {nb_files_audio}')
- inner_logger.info(f'Amount of {split_name} '
- f'data files: {nb_files_data}')
- inner_logger.info(f'Amount of {split_name} data '
- f'files per audio: '
- f'{nb_files_data / nb_files_audio}')
-
- if settings_dataset['workflow']['validate_dataset']:
- # Check the created lists of indices for words and characters.
- inner_logger.info(f'Checking the {split_name} split')
- check_data_for_split(
- dir_audio=dir_downloaded_audio,
- dir_data=Path(settings_audio_dirs['output'],
- settings_audio_dirs[f'{split_name}']),
- dir_root=dir_root, csv_split=split_csv,
- settings_ann=settings_dataset['annotations'],
- settings_audio=settings_dataset['audio'],
- settings_cntr=settings_dirs_and_files['dataset'])
- inner_logger.info('Done')
- else:
- inner_logger.info(f'Skipping validation of {split_name} split')
-
-
-def extract_features(root_dir: str,
- settings_data: MutableMapping[str, Any],
- settings_features: MutableMapping[str, Any]) \
- -> None:
- """Extracts features from the audio data of Clotho.
-
- :param root_dir: Root dir for the data.
- :type root_dir: str
- :param settings_data: Settings for creating data files.
- :type settings_data: dict[str, T]
- :param settings_features: Settings for feature extraction.
- :type settings_features: dict[str, T]
- """
- # Get the root directory.
- dir_root = Path(root_dir)
-
- # Get the directories of files.
- dir_output = dir_root.joinpath(settings_data['audio_dirs']['output'])
-
- dir_dev = dir_output.joinpath(
- settings_data['audio_dirs']['development'])
- dir_eva = dir_output.joinpath(
- settings_data['audio_dirs']['evaluation'])
-
- # Get the directories for output.
- dir_output_dev = dir_root.joinpath(
- settings_data['features_dirs']['output'],
- settings_data['features_dirs']['development'])
- dir_output_eva = dir_root.joinpath(
- settings_data['features_dirs']['output'],
- settings_data['features_dirs']['evaluation'])
-
- # Create the directories.
- dir_output_dev.mkdir(parents=True, exist_ok=True)
- dir_output_eva.mkdir(parents=True, exist_ok=True)
-
- # Apply the function to each file and save the result.
- futures = []
- for data_file_name in filter(
- lambda _x: _x.suffix == '.npy',
- chain(dir_dev.iterdir(), dir_eva.iterdir())):
- futures.append(executor.submit(
- partial(_extract, data_file_name, settings_features, settings_data, dir_output_dev, dir_output_eva)))
- [future.result() for future in tqdm(futures)]
-
-
-def _extract(data_file_name, settings_features, settings_data, dir_output_dev, dir_output_eva):
- # Load the data file.
- data_file = load_numpy_object(data_file_name)
-
- # Extract the features.
- features = feature_extraction(
- data_file['audio_data'].item(),
- **settings_features['process'])
-
- # Populate the recarray data and dtypes.
- array_data = (data_file['file_name'].item(),)
- dtypes = [('file_name', data_file['file_name'].dtype)]
-
- # Check if we are keeping the raw audio data.
- if settings_features['keep_raw_audio_data']:
- # And add them to the recarray data and dtypes.
- array_data += (data_file['audio_data'].item(),)
- dtypes.append(('audio_data', data_file['audio_data'].dtype))
-
- # Add the rest to the recarray.
- array_data += (
- features,
- data_file['caption'].item(),
- data_file['caption_ind'].item(),
- data_file['words_ind'].item(),
- data_file['chars_ind'].item())
- dtypes.extend([
- ('features', np.dtype(object)),
- ('caption', data_file['caption'].dtype),
- ('caption_ind', data_file['caption_ind'].dtype),
- ('words_ind', data_file['words_ind'].dtype),
- ('chars_ind', data_file['chars_ind'].dtype)
- ])
-
- # Make the recarray
- np_rec_array = np.rec.array([array_data], dtype=dtypes)
-
- # Make the path for serializing the recarray.
- parent_path = dir_output_dev \
- if data_file_name.parent.name == settings_data['audio_dirs']['development'] \
- else dir_output_eva
-
- file_path = parent_path.joinpath(data_file_name.name)
-
- # Dump it.
- dump_numpy_object(np_rec_array, file_path)
-
-
-def main():
- args = get_argument_parser().parse_args()
-
- file_dir = args.file_dir
- config_file = args.config_file
- file_ext = args.file_ext
- verbose = args.verbose
-
- # Load settings file.
- settings = load_yaml_file(Path(
- file_dir, f'{config_file}.{file_ext}'))
-
- init_loggers(verbose=verbose,
- settings=settings['dirs_and_files'])
-
- logger_main = logger.bind(is_caption=False, indent=0)
- logger_sec = logger.bind(is_caption=False, indent=1)
-
- logger_main.info(datetime.now().strftime('%Y-%m-%d %H:%M'))
-
- logger_main.info('Doing only dataset creation')
-
- # Create the dataset.
- logger_main.info('Starting Clotho dataset creation')
-
- logger_sec.info('Creating examples')
- create_dataset(
- settings_dataset=settings['dataset_creation_settings'],
- settings_dirs_and_files=settings['dirs_and_files'])
- logger_sec.info('Examples created')
-
- logger_sec.info('Extracting features')
- extract_features(
- root_dir=settings['dirs_and_files']['root_dirs']['data'],
- settings_data=settings['dirs_and_files']['dataset'],
- settings_features=settings['feature_extraction_settings'])
- logger_sec.info('Features extracted')
-
- logger_main.info('Dataset created')
-
-
-if __name__ == '__main__':
- main()
-
-# EOF
diff --git a/spaces/YuAnthony/Voice-Recognition/README.md b/spaces/YuAnthony/Voice-Recognition/README.md
deleted file mode 100644
index 1846611ca07a1709aa786e2933b0424a7473bc9c..0000000000000000000000000000000000000000
--- a/spaces/YuAnthony/Voice-Recognition/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Voice-Recognition (ResNet)
-emoji: 💬
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Yudha515/Rvc-Models/tests/modules/test_transformer.py b/spaces/Yudha515/Rvc-Models/tests/modules/test_transformer.py
deleted file mode 100644
index ff7dfe4c2de05112aec55ddea9c8fd978668f80b..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/tests/modules/test_transformer.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-
-import pytest
-import torch
-
-from audiocraft.modules.transformer import (
- StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend)
-
-
-def test_transformer_causal_streaming():
- torch.manual_seed(1234)
-
- for context, custom in product([None, 10], [False, True]):
- # Test that causality and receptive fields are properly handled.
- # looking at the gradients
- tr = StreamingTransformer(
- 16, 4, 1 if context else 2,
- causal=True, past_context=context, custom=custom,
- dropout=0.)
- steps = 20
- for k in [0, 10, 15, 19]:
- x = torch.randn(4, steps, 16, requires_grad=True)
- y = tr(x)
- y[:, k].abs().sum().backward()
- if k + 1 < steps:
- assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm()
- assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm()
- if context is not None and k > context:
- limit = k - context - 1
- assert torch.allclose(x.grad[:, :limit],
- torch.tensor(0.)), x.grad[:, :limit].norm()
-
- # Now check that streaming gives the same result at batch eval.
- x = torch.randn(4, steps, 16)
- y = tr(x)
- ys = []
- with tr.streaming():
- for k in range(steps):
- chunk = x[:, k:k + 1, :]
- ys.append(tr(chunk))
- y_stream = torch.cat(ys, dim=1)
- delta = torch.norm(y_stream - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_transformer_vs_pytorch():
- torch.manual_seed(1234)
- # Check that in the non causal setting, we get the same result as
- # PyTorch Transformer encoder.
- for custom in [False, True]:
- tr = StreamingTransformer(
- 16, 4, 2,
- causal=False, custom=custom, dropout=0., positional_scale=0.)
- layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True)
- tr_ref = torch.nn.TransformerEncoder(layer, 2)
- tr.load_state_dict(tr_ref.state_dict())
-
- x = torch.randn(4, 20, 16)
- y = tr(x)
- y2 = tr_ref(x)
- delta = torch.norm(y2 - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_streaming_api():
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.)
- tr.eval()
- steps = 12
- x = torch.randn(1, steps, 16)
-
- with torch.no_grad():
- with tr.streaming():
- _ = tr(x[:, :1])
- state = {k: v.clone() for k, v in tr.get_streaming_state().items()}
- y = tr(x[:, 1:2])
- tr.set_streaming_state(state)
- y2 = tr(x[:, 1:2])
- assert torch.allclose(y, y2), (y - y2).norm()
- assert tr.flush() is None
-
-
-def test_memory_efficient():
- for backend in ['torch', 'xformers']:
- torch.manual_seed(1234)
- set_efficient_attention_backend(backend)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1)
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- assert torch.allclose(y, y2), ((y - y2).norm(), backend)
-
-
-def test_attention_as_float32():
- torch.manual_seed(1234)
- cases = [
- {'custom': True},
- {'custom': False},
- ]
- for case in cases:
- tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case)
- tr_float32 = StreamingTransformer(
- 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case)
- if not case['custom']:
- # we are not using autocast here because it doesn't really
- # work as expected on CPU, so we have to manually cast the weights of the MHA.
- for layer in tr_float32.layers:
- layer.self_attn.mha.to(torch.float32)
- tr_float32.load_state_dict(tr.state_dict())
- steps = 12
- x = torch.randn(3, steps, 16, dtype=torch.bfloat16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_float32(x)
- assert not torch.allclose(y, y2), (y - y2).norm()
-
-
-@torch.no_grad()
-def test_streaming_memory_efficient():
- for backend in ['torch', 'xformers']:
- torch.manual_seed(1234)
- set_efficient_attention_backend(backend)
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, causal=True)
- tr.load_state_dict(tr_mem_efficient.state_dict())
- tr.eval()
- tr_mem_efficient.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr_mem_efficient.streaming():
- outs = []
- # frame_sizes = [2] + [1] * (steps - 2)
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr_mem_efficient(frame))
-
- out = torch.cat(outs, dim=1)
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_cross_attention():
- torch.manual_seed(1234)
- for norm_first in [True, False]:
- m = StreamingTransformer(
- 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True)
- m_cross = StreamingTransformer(
- 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True)
- m_cross.load_state_dict(m.state_dict(), strict=False)
- x = torch.randn(2, 5, 16)
- cross_x = torch.randn(2, 3, 16)
- y_ref = m(x)
- y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x)
- # With norm_first, the two should be exactly the same,
- # but with norm_first=False, we get two normalizations in a row
- # and the epsilon value leads to a tiny change.
- atol = 0. if norm_first else 1e-6
- print((y_ref - y_cross_zero).norm() / y_ref.norm())
- assert torch.allclose(y_ref, y_cross_zero, atol=atol)
-
- # We now expect a difference even with a generous atol of 1e-2.
- y_cross = m_cross(x, cross_attention_src=cross_x)
- assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2)
-
- with pytest.raises(AssertionError):
- _ = m_cross(x)
- _ = m(x, cross_attention_src=cross_x)
-
-
-def test_cross_attention_compat():
- torch.manual_seed(1234)
- num_heads = 2
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True)
-
- cross_attn = StreamingMultiheadAttention(
- dim, num_heads, dropout=0, cross_attention=True, custom=True)
- ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True)
-
- # We can load the regular attention state dict
- # so we have compat when loading old checkpoints.
- cross_attn.load_state_dict(ref_attn.state_dict())
-
- queries = torch.randn(3, 7, dim)
- keys = torch.randn(3, 9, dim)
- values = torch.randn(3, 9, dim)
-
- y = cross_attn(queries, keys, values)[0]
- y_ref = ref_attn(queries, keys, values)[0]
- assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm()
-
- # Now let's check that streaming is working properly.
- with cross_attn.streaming():
- ys = []
- for step in range(queries.shape[1]):
- ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0])
- y_streaming = torch.cat(ys, dim=1)
- assert torch.allclose(y_streaming, y, atol=1e-7)
-
-
-def test_repeat_kv():
- torch.manual_seed(1234)
- num_heads = 8
- kv_repeat = 4
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True)
- x = torch.randn(4, 18, dim)
- y = mha(x, x, x)[0]
- assert x.shape == y.shape
-
-
-def test_qk_layer_norm():
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False)
- steps = 12
- x = torch.randn(3, steps, 16)
- y = tr(x)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True)
- z = torch.randn(3, 21, 16)
- y = tr(x, cross_attention_src=z)
- assert y.shape == x.shape
diff --git a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/README.md b/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/README.md
deleted file mode 100644
index 3b90498d3e73a62ffd41aa7146ff7d7991449ddb..0000000000000000000000000000000000000000
--- a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Streamlit YOLOv5 Model2x
-emoji: 🚀
-colorFrom: red
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/abhishek/first-order-motion-model/frames_dataset.py b/spaces/abhishek/first-order-motion-model/frames_dataset.py
deleted file mode 100644
index 7fd3400814708a8b31f05a624b0051e97c6573e1..0000000000000000000000000000000000000000
--- a/spaces/abhishek/first-order-motion-model/frames_dataset.py
+++ /dev/null
@@ -1,197 +0,0 @@
-import os
-from skimage import io, img_as_float32
-from skimage.color import gray2rgb
-from sklearn.model_selection import train_test_split
-from imageio import mimread
-
-import numpy as np
-from torch.utils.data import Dataset
-import pandas as pd
-from augmentation import AllAugmentationTransform
-import glob
-
-
-def read_video(name, frame_shape):
- """
- Read video which can be:
- - an image of concatenated frames
- - '.mp4' and '.gif'
- - folder with videos
- """
-
- if os.path.isdir(name):
- frames = sorted(os.listdir(name))
- num_frames = len(frames)
- video_array = np.array(
- [img_as_float32(io.imread(os.path.join(name, frames[idx]))) for idx in range(num_frames)])
- elif name.lower().endswith('.png') or name.lower().endswith('.jpg'):
- image = io.imread(name)
-
- if len(image.shape) == 2 or image.shape[2] == 1:
- image = gray2rgb(image)
-
- if image.shape[2] == 4:
- image = image[..., :3]
-
- image = img_as_float32(image)
-
- video_array = np.moveaxis(image, 1, 0)
-
- video_array = video_array.reshape((-1,) + frame_shape)
- video_array = np.moveaxis(video_array, 1, 2)
- elif name.lower().endswith('.gif') or name.lower().endswith('.mp4') or name.lower().endswith('.mov'):
- video = np.array(mimread(name))
- if len(video.shape) == 3:
- video = np.array([gray2rgb(frame) for frame in video])
- if video.shape[-1] == 4:
- video = video[..., :3]
- video_array = img_as_float32(video)
- else:
- raise Exception("Unknown file extensions %s" % name)
-
- return video_array
-
-
-class FramesDataset(Dataset):
- """
- Dataset of videos, each video can be represented as:
- - an image of concatenated frames
- - '.mp4' or '.gif'
- - folder with all frames
- """
-
- def __init__(self, root_dir, frame_shape=(256, 256, 3), id_sampling=False, is_train=True,
- random_seed=0, pairs_list=None, augmentation_params=None):
- self.root_dir = root_dir
- self.videos = os.listdir(root_dir)
- self.frame_shape = tuple(frame_shape)
- self.pairs_list = pairs_list
- self.id_sampling = id_sampling
- if os.path.exists(os.path.join(root_dir, 'train')):
- assert os.path.exists(os.path.join(root_dir, 'test'))
- print("Use predefined train-test split.")
- if id_sampling:
- train_videos = {os.path.basename(video).split('#')[0] for video in
- os.listdir(os.path.join(root_dir, 'train'))}
- train_videos = list(train_videos)
- else:
- train_videos = os.listdir(os.path.join(root_dir, 'train'))
- test_videos = os.listdir(os.path.join(root_dir, 'test'))
- self.root_dir = os.path.join(self.root_dir, 'train' if is_train else 'test')
- else:
- print("Use random train-test split.")
- train_videos, test_videos = train_test_split(self.videos, random_state=random_seed, test_size=0.2)
-
- if is_train:
- self.videos = train_videos
- else:
- self.videos = test_videos
-
- self.is_train = is_train
-
- if self.is_train:
- self.transform = AllAugmentationTransform(**augmentation_params)
- else:
- self.transform = None
-
- def __len__(self):
- return len(self.videos)
-
- def __getitem__(self, idx):
- if self.is_train and self.id_sampling:
- name = self.videos[idx]
- path = np.random.choice(glob.glob(os.path.join(self.root_dir, name + '*.mp4')))
- else:
- name = self.videos[idx]
- path = os.path.join(self.root_dir, name)
-
- video_name = os.path.basename(path)
-
- if self.is_train and os.path.isdir(path):
- frames = os.listdir(path)
- num_frames = len(frames)
- frame_idx = np.sort(np.random.choice(num_frames, replace=True, size=2))
- video_array = [img_as_float32(io.imread(os.path.join(path, frames[idx]))) for idx in frame_idx]
- else:
- video_array = read_video(path, frame_shape=self.frame_shape)
- num_frames = len(video_array)
- frame_idx = np.sort(np.random.choice(num_frames, replace=True, size=2)) if self.is_train else range(
- num_frames)
- video_array = video_array[frame_idx]
-
- if self.transform is not None:
- video_array = self.transform(video_array)
-
- out = {}
- if self.is_train:
- source = np.array(video_array[0], dtype='float32')
- driving = np.array(video_array[1], dtype='float32')
-
- out['driving'] = driving.transpose((2, 0, 1))
- out['source'] = source.transpose((2, 0, 1))
- else:
- video = np.array(video_array, dtype='float32')
- out['video'] = video.transpose((3, 0, 1, 2))
-
- out['name'] = video_name
-
- return out
-
-
-class DatasetRepeater(Dataset):
- """
- Pass several times over the same dataset for better i/o performance
- """
-
- def __init__(self, dataset, num_repeats=100):
- self.dataset = dataset
- self.num_repeats = num_repeats
-
- def __len__(self):
- return self.num_repeats * self.dataset.__len__()
-
- def __getitem__(self, idx):
- return self.dataset[idx % self.dataset.__len__()]
-
-
-class PairedDataset(Dataset):
- """
- Dataset of pairs for animation.
- """
-
- def __init__(self, initial_dataset, number_of_pairs, seed=0):
- self.initial_dataset = initial_dataset
- pairs_list = self.initial_dataset.pairs_list
-
- np.random.seed(seed)
-
- if pairs_list is None:
- max_idx = min(number_of_pairs, len(initial_dataset))
- nx, ny = max_idx, max_idx
- xy = np.mgrid[:nx, :ny].reshape(2, -1).T
- number_of_pairs = min(xy.shape[0], number_of_pairs)
- self.pairs = xy.take(np.random.choice(xy.shape[0], number_of_pairs, replace=False), axis=0)
- else:
- videos = self.initial_dataset.videos
- name_to_index = {name: index for index, name in enumerate(videos)}
- pairs = pd.read_csv(pairs_list)
- pairs = pairs[np.logical_and(pairs['source'].isin(videos), pairs['driving'].isin(videos))]
-
- number_of_pairs = min(pairs.shape[0], number_of_pairs)
- self.pairs = []
- self.start_frames = []
- for ind in range(number_of_pairs):
- self.pairs.append(
- (name_to_index[pairs['driving'].iloc[ind]], name_to_index[pairs['source'].iloc[ind]]))
-
- def __len__(self):
- return len(self.pairs)
-
- def __getitem__(self, idx):
- pair = self.pairs[idx]
- first = self.initial_dataset[pair[0]]
- second = self.initial_dataset[pair[1]]
- first = {'driving_' + key: value for key, value in first.items()}
- second = {'source_' + key: value for key, value in second.items()}
-
- return {**first, **second}
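A minimal sketch of how the classes above fit together; the root directory and the augmentation parameters are placeholders (the keys accepted by AllAugmentationTransform are defined in augmentation.py, which is not shown here):

from torch.utils.data import DataLoader

dataset = FramesDataset(
    root_dir="data/my-videos",   # hypothetical folder of videos or per-video frame folders
    frame_shape=(256, 256, 3),
    is_train=True,
    augmentation_params={"flip_param": {"horizontal_flip": True, "time_flip": True}},
)
# DatasetRepeater makes one epoch pass over the data several times for better I/O behaviour.
loader = DataLoader(DatasetRepeater(dataset, num_repeats=50), batch_size=4, shuffle=True)
batch = next(iter(loader))
print(batch["source"].shape, batch["driving"].shape)  # e.g. torch.Size([4, 3, 256, 256]) each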
diff --git a/spaces/abhishek/first-order-motion-model/modules/generator.py b/spaces/abhishek/first-order-motion-model/modules/generator.py
deleted file mode 100644
index ec665703efb1e67c561ab86e116cbb41198dfe51..0000000000000000000000000000000000000000
--- a/spaces/abhishek/first-order-motion-model/modules/generator.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-from modules.util import ResBlock2d, SameBlock2d, UpBlock2d, DownBlock2d
-from modules.dense_motion import DenseMotionNetwork
-
-
-class OcclusionAwareGenerator(nn.Module):
- """
- Generator that, given a source image and keypoints, tries to transform the image according to the movement
- trajectories induced by the keypoints. The generator follows the Johnson architecture.
- """
-
- def __init__(self, num_channels, num_kp, block_expansion, max_features, num_down_blocks,
- num_bottleneck_blocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False):
- super(OcclusionAwareGenerator, self).__init__()
-
- if dense_motion_params is not None:
- self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, num_channels=num_channels,
- estimate_occlusion_map=estimate_occlusion_map,
- **dense_motion_params)
- else:
- self.dense_motion_network = None
-
- self.first = SameBlock2d(num_channels, block_expansion, kernel_size=(7, 7), padding=(3, 3))
-
- down_blocks = []
- for i in range(num_down_blocks):
- in_features = min(max_features, block_expansion * (2 ** i))
- out_features = min(max_features, block_expansion * (2 ** (i + 1)))
- down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1)))
- self.down_blocks = nn.ModuleList(down_blocks)
-
- up_blocks = []
- for i in range(num_down_blocks):
- in_features = min(max_features, block_expansion * (2 ** (num_down_blocks - i)))
- out_features = min(max_features, block_expansion * (2 ** (num_down_blocks - i - 1)))
- up_blocks.append(UpBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1)))
- self.up_blocks = nn.ModuleList(up_blocks)
-
- self.bottleneck = torch.nn.Sequential()
- in_features = min(max_features, block_expansion * (2 ** num_down_blocks))
- for i in range(num_bottleneck_blocks):
- self.bottleneck.add_module('r' + str(i), ResBlock2d(in_features, kernel_size=(3, 3), padding=(1, 1)))
-
- self.final = nn.Conv2d(block_expansion, num_channels, kernel_size=(7, 7), padding=(3, 3))
- self.estimate_occlusion_map = estimate_occlusion_map
- self.num_channels = num_channels
-
- def deform_input(self, inp, deformation):
- _, h_old, w_old, _ = deformation.shape
- _, _, h, w = inp.shape
- if h_old != h or w_old != w:
- deformation = deformation.permute(0, 3, 1, 2)
- deformation = F.interpolate(deformation, size=(h, w), mode='bilinear')
- deformation = deformation.permute(0, 2, 3, 1)
- return F.grid_sample(inp, deformation)
-
- def forward(self, source_image, kp_driving, kp_source):
- # Encoding (downsampling) part
- out = self.first(source_image)
- for i in range(len(self.down_blocks)):
- out = self.down_blocks[i](out)
-
- # Transforming feature representation according to deformation and occlusion
- output_dict = {}
- if self.dense_motion_network is not None:
- dense_motion = self.dense_motion_network(source_image=source_image, kp_driving=kp_driving,
- kp_source=kp_source)
- output_dict['mask'] = dense_motion['mask']
- output_dict['sparse_deformed'] = dense_motion['sparse_deformed']
-
- if 'occlusion_map' in dense_motion:
- occlusion_map = dense_motion['occlusion_map']
- output_dict['occlusion_map'] = occlusion_map
- else:
- occlusion_map = None
- deformation = dense_motion['deformation']
- out = self.deform_input(out, deformation)
-
- if occlusion_map is not None:
- if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]:
- occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear')
- out = out * occlusion_map
-
- output_dict["deformed"] = self.deform_input(source_image, deformation)
-
- # Decoding part
- out = self.bottleneck(out)
- for i in range(len(self.up_blocks)):
- out = self.up_blocks[i](out)
- out = self.final(out)
- out = F.sigmoid(out)
-
- output_dict["prediction"] = out
-
- return output_dict
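As a quick shape check for the generator above: with dense_motion_params=None the dense-motion branch is skipped, so the keypoint arguments are unused and the module reduces to an encoder-bottleneck-decoder. The hyperparameters below are illustrative only:

import torch

gen = OcclusionAwareGenerator(num_channels=3, num_kp=10, block_expansion=64,
                              max_features=512, num_down_blocks=2,
                              num_bottleneck_blocks=6)
source = torch.randn(1, 3, 256, 256)
out = gen(source, kp_driving=None, kp_source=None)
print(out["prediction"].shape)  # torch.Size([1, 3, 256, 256])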
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/drive.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/drive.py
deleted file mode 100644
index 06e8ff606e0d2a4514ec8b7d2c6c436a32efcbf4..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/drive.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# dataset settings
-dataset_type = 'DRIVEDataset'
-data_root = 'data/DRIVE'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-img_scale = (584, 565)
-crop_size = (64, 64)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type='RepeatDataset',
- times=40000,
- dataset=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
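A short sketch of how such a _base_ dataset file is typically consumed; the path below is hypothetical, and in practice the file is usually pulled in through _base_ inheritance rather than loaded directly:

from mmcv import Config

cfg = Config.fromfile('configs/_base_/datasets/drive.py')      # hypothetical location
print(cfg.data.samples_per_gpu)                                # 4
print([step['type'] for step in cfg.train_pipeline])           # LoadImageFromFile, LoadAnnotations, ...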
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/fileio/handlers/json_handler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/fileio/handlers/json_handler.py
deleted file mode 100644
index 18d4f15f74139d20adff18b20be5529c592a66b6..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/fileio/handlers/json_handler.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import json
-
-import numpy as np
-
-from .base import BaseFileHandler
-
-
-def set_default(obj):
- """Set default json values for non-serializable values.
-
- It helps convert ``set``, ``range`` and ``np.ndarray`` data types to list.
- It also converts ``np.generic`` (including ``np.int32``, ``np.float32``,
- etc.) into plain numbers of built-in Python types.
- """
- if isinstance(obj, (set, range)):
- return list(obj)
- elif isinstance(obj, np.ndarray):
- return obj.tolist()
- elif isinstance(obj, np.generic):
- return obj.item()
- raise TypeError(f'{type(obj)} is unsupported for json dump')
-
-
-class JsonHandler(BaseFileHandler):
-
- def load_from_fileobj(self, file):
- return json.load(file)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- kwargs.setdefault('default', set_default)
- json.dump(obj, file, **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- kwargs.setdefault('default', set_default)
- return json.dumps(obj, **kwargs)
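A small example of the fallback above in action (the payload is hypothetical): sets and NumPy scalars are not JSON-serializable on their own, but dump cleanly through set_default.

import numpy as np

handler = JsonHandler()
payload = {"ids": {1, 2, 3}, "score": np.float32(0.5), "mask": np.array([0, 1])}
print(handler.dump_to_str(payload))
# e.g. {"ids": [1, 2, 3], "score": 0.5, "mask": [0, 1]}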
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/logger/tensorboard.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/logger/tensorboard.py
deleted file mode 100644
index 4dd5011dc08def6c09eef86d3ce5b124c9fc5372..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/logger/tensorboard.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-
-from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class TensorboardLoggerHook(LoggerHook):
-
- def __init__(self,
- log_dir=None,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- by_epoch=True):
- super(TensorboardLoggerHook, self).__init__(interval, ignore_last,
- reset_flag, by_epoch)
- self.log_dir = log_dir
-
- @master_only
- def before_run(self, runner):
- super(TensorboardLoggerHook, self).before_run(runner)
- if (TORCH_VERSION == 'parrots'
- or digit_version(TORCH_VERSION) < digit_version('1.1')):
- try:
- from tensorboardX import SummaryWriter
- except ImportError:
- raise ImportError('Please install tensorboardX to use '
- 'TensorboardLoggerHook.')
- else:
- try:
- from torch.utils.tensorboard import SummaryWriter
- except ImportError:
- raise ImportError(
- 'Please run "pip install future tensorboard" to install '
- 'the dependencies to use torch.utils.tensorboard '
- '(applicable to PyTorch 1.1 or higher)')
-
- if self.log_dir is None:
- self.log_dir = osp.join(runner.work_dir, 'tf_logs')
- self.writer = SummaryWriter(self.log_dir)
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner, allow_text=True)
- for tag, val in tags.items():
- if isinstance(val, str):
- self.writer.add_text(tag, val, self.get_iter(runner))
- else:
- self.writer.add_scalar(tag, val, self.get_iter(runner))
-
- @master_only
- def after_run(self, runner):
- self.writer.close()
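For context, this hook is normally enabled from an mmcv-style config rather than instantiated by hand; a typical log_config entry looks roughly like the following (the field names follow the usual mmcv convention and should be adapted to the project's own configs):

log_config = dict(
    interval=10,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook', log_dir=None, by_epoch=True),
    ])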
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/egl/lib.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/egl/lib.py
deleted file mode 100644
index 0707518877e96b03c04a628614201f08270bac24..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/egl/lib.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from ctypes import *
-
-import pyglet
-import pyglet.util
-
-
-__all__ = ['link_EGL']
-
-egl_lib = pyglet.lib.load_library('EGL')
-
-# Look for eglGetProcAddress
-eglGetProcAddress = getattr(egl_lib, 'eglGetProcAddress')
-eglGetProcAddress.restype = POINTER(CFUNCTYPE(None))
-eglGetProcAddress.argtypes = [POINTER(c_ubyte)]
-
-
-def link_EGL(name, restype, argtypes, requires=None, suggestions=None):
- try:
- func = getattr(egl_lib, name)
- func.restype = restype
- func.argtypes = argtypes
- return func
- except AttributeError:
- bname = cast(pointer(create_string_buffer(pyglet.util.asbytes(name))), POINTER(c_ubyte))
- addr = eglGetProcAddress(bname)
- if addr:
- ftype = CFUNCTYPE(*((restype,) + tuple(argtypes)))
- func = cast(addr, ftype)
- return func
-
- return pyglet.gl.lib.missing_function(name, requires, suggestions)
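A hypothetical use of the helper above (it requires an EGL library to be present at import time): eglGetError is a standard EGL entry point that takes no arguments and returns an EGLint, represented here as a plain c_int.

from ctypes import c_int

eglGetError = link_EGL('eglGetError', c_int, [], requires='EGL 1.0')
print(eglGetError())  # 12288 == 0x3000 (EGL_SUCCESS) when no error is pending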
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/platforms/base.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/platforms/base.py
deleted file mode 100644
index c9ecda906145e239737901809aa59db8d3e231c6..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/platforms/base.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import abc
-
-import six
-
-
-@six.add_metaclass(abc.ABCMeta)
-class Platform(object):
- """Base class for all OpenGL platforms.
-
- Parameters
- ----------
- viewport_width : int
- The width of the main viewport, in pixels.
- viewport_height : int
- The height of the main viewport, in pixels
- """
-
- def __init__(self, viewport_width, viewport_height):
- self.viewport_width = viewport_width
- self.viewport_height = viewport_height
-
- @property
- def viewport_width(self):
- """int : The width of the main viewport, in pixels.
- """
- return self._viewport_width
-
- @viewport_width.setter
- def viewport_width(self, value):
- self._viewport_width = value
-
- @property
- def viewport_height(self):
- """int : The height of the main viewport, in pixels.
- """
- return self._viewport_height
-
- @viewport_height.setter
- def viewport_height(self, value):
- self._viewport_height = value
-
- @abc.abstractmethod
- def init_context(self):
- """Create an OpenGL context.
- """
- pass
-
- @abc.abstractmethod
- def make_current(self):
- """Make the OpenGL context current.
- """
- pass
-
- @abc.abstractmethod
- def make_uncurrent(self):
- """Make the OpenGL context uncurrent.
- """
- pass
-
- @abc.abstractmethod
- def delete_context(self):
- """Delete the OpenGL context.
- """
- pass
-
- @abc.abstractmethod
- def supports_framebuffers(self):
- """Returns True if the method supports framebuffer rendering.
- """
- pass
-
- def __del__(self):
- try:
- self.delete_context()
- except Exception:
- pass
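A minimal sketch of what a concrete subclass has to provide (a stub that merely satisfies the abstract interface, not a real OpenGL backend):

class DummyPlatform(Platform):
    """Stub platform: implements the interface without creating a context."""

    def init_context(self):
        pass

    def make_current(self):
        pass

    def make_uncurrent(self):
        pass

    def delete_context(self):
        pass

    def supports_framebuffers(self):
        return False


platform = DummyPlatform(viewport_width=640, viewport_height=480)
assert not platform.supports_framebuffers()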
diff --git a/spaces/adirik/stylemc-demo/__init__.py b/spaces/adirik/stylemc-demo/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/adirik/stylemc-demo/encoder4editing/criteria/__init__.py b/spaces/adirik/stylemc-demo/encoder4editing/criteria/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/adirik/stylemc-demo/torch_utils/ops/__init__.py b/spaces/adirik/stylemc-demo/torch_utils/ops/__init__.py
deleted file mode 100644
index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000
--- a/spaces/adirik/stylemc-demo/torch_utils/ops/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/ahiruguagua/aiemo/template.md b/spaces/ahiruguagua/aiemo/template.md
deleted file mode 100644
index c40eb5b940d4bcb58ff527488f0cc0ead27d00e3..0000000000000000000000000000000000000000
--- a/spaces/ahiruguagua/aiemo/template.md
+++ /dev/null
@@ -1,5 +0,0 @@
-### AI assistant's reply
-
-Write the reply to the user's message here (must be hilarious)
-
-
diff --git a/spaces/akhaliq/Ghibli-Diffusion/app.py b/spaces/akhaliq/Ghibli-Diffusion/app.py
deleted file mode 100644
index 25e4911d6481344a01f0ab7867dabd1f3d130e7a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Ghibli-Diffusion/app.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import gradio as gr
-
-description = """
-
-
-
-Ghibli Diffusion
-This is the fine-tuned Stable Diffusion model trained on images from modern anime feature films from Studio Ghibli. Use the tokens ghibli style in your prompts for the effect.
- """
-
-gr.Interface.load("models/nitrosocke/Ghibli-Diffusion", description=description, examples=[["superman ghibli style"]]).launch()
diff --git a/spaces/akhaliq/Mask2Former/datasets/prepare_ade20k_ins_seg.py b/spaces/akhaliq/Mask2Former/datasets/prepare_ade20k_ins_seg.py
deleted file mode 100644
index e4e951adcd84dbd08b3d6570aee56887bf1c69a6..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Mask2Former/datasets/prepare_ade20k_ins_seg.py
+++ /dev/null
@@ -1,112 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-import glob
-import json
-import os
-from collections import Counter
-
-import numpy as np
-import tqdm
-from panopticapi.utils import IdGenerator, save_json
-from PIL import Image
-import pycocotools.mask as mask_util
-
-
-if __name__ == "__main__":
- dataset_dir = os.getenv("DETECTRON2_DATASETS", "datasets")
-
- for name, dirname in [("train", "training"), ("val", "validation")]:
- image_dir = os.path.join(dataset_dir, f"ADEChallengeData2016/images/{dirname}/")
- instance_dir = os.path.join(
- dataset_dir, f"ADEChallengeData2016/annotations_instance/{dirname}/"
- )
-
- # img_id = 0
- ann_id = 1
-
- # json
- out_file = os.path.join(dataset_dir, f"ADEChallengeData2016/ade20k_instance_{name}.json")
-
- # json config
- instance_config_file = "datasets/ade20k_instance_imgCatIds.json"
- with open(instance_config_file) as f:
- category_dict = json.load(f)["categories"]
-
- # load catid mapping
- # it is important to share category id for both instance and panoptic annotations
- mapping_file = "datasets/ade20k_instance_catid_mapping.txt"
- with open(mapping_file) as f:
- map_id = {}
- for i, line in enumerate(f.readlines()):
- if i == 0:
- continue
- ins_id, sem_id, _ = line.strip().split()
- # shift id by 1 because we want it to start from 0!
- # ignore_label becomes 255
- map_id[int(ins_id)] = int(sem_id) - 1
-
- for cat in category_dict:
- cat["id"] = map_id[cat["id"]]
-
- filenames = sorted(glob.glob(os.path.join(image_dir, "*.jpg")))
-
- ann_dict = {}
- images = []
- annotations = []
-
- for idx, filename in enumerate(tqdm.tqdm(filenames)):
- image = {}
- image_id = os.path.basename(filename).split(".")[0]
-
- image["id"] = image_id
- image["file_name"] = os.path.basename(filename)
-
- original_format = np.array(Image.open(filename))
- image["width"] = original_format.shape[1]
- image["height"] = original_format.shape[0]
-
- images.append(image)
-
- filename_instance = os.path.join(instance_dir, image_id + ".png")
- ins_seg = np.asarray(Image.open(filename_instance))
- assert ins_seg.dtype == np.uint8
-
- instance_cat_ids = ins_seg[..., 0]
- # instance id starts from 1!
- # because 0 is reserved as VOID label
- instance_ins_ids = ins_seg[..., 1]
-
- # process things
- for thing_id in np.unique(instance_ins_ids):
- if thing_id == 0:
- continue
- mask = instance_ins_ids == thing_id
- instance_cat_id = np.unique(instance_cat_ids[mask])
- assert len(instance_cat_id) == 1
-
- anno = {}
- anno['id'] = ann_id
- ann_id += 1
- anno['image_id'] = image['id']
- anno["iscrowd"] = int(0)
- anno["category_id"] = int(map_id[instance_cat_id[0]])
-
- inds = np.nonzero(mask)
- ymin, ymax = inds[0].min(), inds[0].max()
- xmin, xmax = inds[1].min(), inds[1].max()
- anno["bbox"] = [int(xmin), int(ymin), int(xmax - xmin + 1), int(ymax - ymin + 1)]
- # if xmax <= xmin or ymax <= ymin:
- # continue
- rle = mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0]
- rle["counts"] = rle["counts"].decode("utf-8")
- anno["segmentation"] = rle
- anno["area"] = int(mask_util.area(rle))
- annotations.append(anno)
-
- # save this
- ann_dict['images'] = images
- ann_dict['categories'] = category_dict
- ann_dict['annotations'] = annotations
-
- save_json(ann_dict, out_file)
diff --git a/spaces/akhaliq/SWAG/app.py b/spaces/akhaliq/SWAG/app.py
deleted file mode 100644
index da9362999b7970e3980a26cbf7c0d6d189fc5da3..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SWAG/app.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import torch
-
-
-model = torch.hub.load("facebookresearch/swag", model="vit_h14_in1k")
-
-# we also convert the model to eval mode
-model.eval()
-
-resolution = 518
-
-import os
-os.system("wget https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json -O in_cls_idx.json")
-
-import gradio as gr
-
-from PIL import Image
-from torchvision import transforms
-
-import json
-
-
-with open("in_cls_idx.json", "r") as f:
- imagenet_id_to_name = {int(cls_id): name for cls_id, (label, name) in json.load(f).items()}
-
-
-
-
-def load_image(image_path):
- return Image.open(image_path).convert("RGB")
-
-
-
-def transform_image(image, resolution):
- transform = transforms.Compose([
- transforms.Resize(
- resolution,
- interpolation=transforms.InterpolationMode.BICUBIC,
- ),
- transforms.CenterCrop(resolution),
- transforms.ToTensor(),
- transforms.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- ),
- ])
- image = transform(image)
- # we also add a batch dimension to the image since that is what the model expects
- image = image[None, :]
- return image
-
-def visualize_and_predict(model, resolution, image_path):
- image = load_image(image_path)
- image = transform_image(image, resolution)
-
- # we do not need to track gradients for inference
- with torch.no_grad():
- _, preds = model(image).topk(5)
- # convert preds to a Python list and remove the batch dimension
- preds = preds.tolist()[0]
-
- return preds
-
-os.system("wget https://github.com/pytorch/hub/raw/master/images/dog.jpg -O dog.jpg")
-
-
-
-def inference(img):
- preds = visualize_and_predict(model, resolution, img)
- return [imagenet_id_to_name[cls_id] for cls_id in preds]
-
-inputs = gr.inputs.Image(type='filepath')
-outputs = gr.outputs.Textbox(label="Output")
-
-title = "SWAG"
-
-description = "Gradio demo for Revisiting Weakly Supervised Pre-Training of Visual Perception Models. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below."
-
-article = "