How to Learn English with English 900 Audio CD Free Download
-
English 900 is a popular and effective English language course developed with the support of the US government. It consists of 900 sentences covering a wide range of everyday topics and situations, such as greetings, introductions, shopping, and travel. The course is designed to help learners master English conversation through repetition and memorization of the sentences.
-
If you want to learn English with English 900, you can download the audio CD for free from the Internet Archive. The Internet Archive is a non-profit organization that preserves and provides access to millions of digital books, movies, music, and other media. You can find the English 900 audio CD free download at these links:
Each link contains a complete set of audio files that correspond to the sentences in the course. You can listen to them online or download them to your computer or mobile device. You can also find the PDF versions of the textbooks and word indexes on the same pages.
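If you prefer to fetch the audio files in bulk rather than saving them one by one, a short script can do it for you. The sketch below uses only Python's standard library; the item URL and file names are hypothetical placeholders, so replace them with the actual links shown on the Internet Archive page.

```python
# Minimal sketch: download a handful of audio files from an Internet Archive item.
# NOTE: the base URL and file names are hypothetical placeholders -- substitute
# the real links from the archive.org item page you are using.
import os
import urllib.request

BASE_URL = "https://archive.org/download/english-900-audio"   # placeholder item
FILES = ["unit01.mp3", "unit02.mp3", "unit03.mp3"]             # placeholder names
DEST_DIR = "english900_audio"

os.makedirs(DEST_DIR, exist_ok=True)
for name in FILES:
    target = os.path.join(DEST_DIR, name)
    if os.path.exists(target):
        continue  # skip files that are already downloaded
    print(f"Downloading {name} ...")
    urllib.request.urlretrieve(f"{BASE_URL}/{name}", target)
print("Done.")
```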
-
To learn English with English 900 audio CD free download, you should follow these steps:
-
-
Choose a topic that interests you or suits your needs.
-
Read and listen to the sentences carefully and try to understand their meaning and pronunciation.
-
Repeat the sentences aloud several times until you can say them fluently and confidently.
-
Review the sentences regularly and practice them with a partner or a native speaker if possible (a small review-scheduler sketch follows this list).
-
-
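To support the regular-review step, here is a small illustration of a spaced-repetition schedule written in Python. It is only a sketch: the example sentences and the day intervals are arbitrary and are not taken from the English 900 course itself.

```python
# Minimal sketch: print a simple spaced-repetition review schedule for sentences.
# The sentences and day intervals below are arbitrary examples, not course content.
from datetime import date, timedelta

REVIEW_INTERVALS = [1, 3, 7, 14, 30]   # days after the first practice session

sentences = [
    "Good morning. How are you today?",
    "How much does this cost?",
    "Could you speak more slowly, please?",
]

first_practice = date.today()
for sentence in sentences:
    print(sentence)
    for days in REVIEW_INTERVALS:
        print("  review on", (first_practice + timedelta(days=days)).isoformat())
```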
By following these steps, you can improve your English skills and reach your goals with the English 900 audio course. The course has reportedly worked for many learners around the world, including students in the Congo who became proficient in English in just three months. So why not give it a try and see for yourself?
-
-
If you want to learn more about English 900 and its benefits, you can also check out some of the reviews and testimonials from other learners who have used this course. Here are some examples:
-
-
"I have been studying English for a long time, but I always felt that something was missing. Then I found English 900 and it changed everything. It helped me to speak English more naturally and confidently. I recommend it to anyone who wants to improve their English."
-- Maria, Brazil
-
-
-
"English 900 is a great course for beginners and intermediate learners. It covers all the essential topics and situations that you need to know in English. It is easy to follow and fun to practice. I enjoyed listening to the audio CD and repeating the sentences. It really improved my pronunciation and fluency."
-- Ahmed, Egypt
-
-
-
"I used English 900 as a supplement to my regular English classes. It helped me to review and reinforce what I learned in class. It also exposed me to different accents and expressions that I didn't hear in class. It was very useful and interesting."
-- Li, China
-
-
As you can see, English 900 is a powerful and effective way to learn English. You can download the audio CD for free from the Internet Archive and start learning today. Don't miss this opportunity to improve your English skills and achieve your goals with the free English 900 audio CD.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Corazon Salvaje English Subtitle) The Best Version of Corazon Salvaje with English Subtitles.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Corazon Salvaje English Subtitle) The Best Version of Corazon Salvaje with English Subtitles.md
deleted file mode 100644
index 548aa4fbf587960853d5ffbe1ea774fced586269..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Corazon Salvaje English Subtitle) The Best Version of Corazon Salvaje with English Subtitles.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
HD Online Player (Corazon Salvaje English Subtitle)
-
If you are a fan of Mexican telenovelas, you might have heard of Corazon Salvaje, one of the most successful and acclaimed shows in the history of Latin American television. But if you don't speak Spanish, you might have trouble finding and enjoying this classic drama. That's why in this article, we will tell you everything you need to know about Corazon Salvaje and how to watch it with English subtitles using HD Online Player, a free and easy-to-use streaming software.
-
What is Corazon Salvaje?
-
Corazon Salvaje (Wild Heart) is a Mexican telenovela that aired from 1993 to 1994 on Televisa. It is based on the novel of the same name by Caridad Bravo Adams, which has been adapted several times for television and film. The story is set in the late 19th century and revolves around the love triangle between two brothers, Francisco and Juan de Dios Alcazar y Valle, and a young woman, Monica Molnar.
-
The plot of Corazon Salvaje is complex and full of twists and turns, but here is a simplified version. Francisco and Juan de Dios are the sons of a wealthy landowner, Don Noel Alcazar y Valle, who has a secret affair with a married woman, Sofia Molnar. Sofia gives birth to Juan de Dios, who is raised by her husband, Andres Molnar, as his own son. Francisco is the legitimate son of Don Noel and his wife, Catalina.
-
When Don Noel dies, he leaves his fortune to Francisco and Juan de Dios, but Catalina refuses to acknowledge Juan de Dios as her husband's son and tries to take everything away from him. Juan de Dios grows up as a rebellious and adventurous young man, who falls in love with Monica, Andres' daughter and Sofia's stepdaughter. Monica is a sweet and innocent girl who is engaged to Francisco, who is a cold and ambitious man.
-
The story follows the struggles and obstacles that Juan de Dios and Monica face to be together, as well as the intrigues and betrayals that surround them. Along the way, they encounter other characters who help or hinder their love, such as Aimee Molnar, Monica's sister who is obsessed with Juan de Dios; Azucena, a gypsy girl who loves Francisco; Meche, Juan de Dios' loyal friend; and Count Andrés Corona, a mysterious and powerful man who has a hidden agenda.
-
The main characters and actors
-
The main characters of Corazon Salvaje are:
-
-
-
Juan de Dios Alcazar y Valle (played by Eduardo Palomo): The illegitimate son of Don Noel and Sofia, he is a brave and passionate man who loves Monica with all his heart.
-
Monica Molnar (played by Edith Gonzalez): The daughter of Andres and stepdaughter of Sofia, she is a gentle and virtuous woman who falls in love with Juan de Dios despite being engaged to Francisco.
-
Francisco Alcazar y Valle (played by Enrique Lizalde): The legitimate son of Don Noel and Catalina, he is a ruthless and greedy man who wants to marry Monica for her money.
-
Aimee Molnar (played by Ana Colchero): The daughter of Sofia and Andres, she is a spoiled and selfish woman who covets Juan de Dios and hates Monica.
-
Count Andrés Corona (played by Ariel Lopez Padilla): A mysterious and powerful man who has a connection to Juan de Dios' past and a plan for his future.
-
-
The actors who played these roles became very popular and received many awards for their performances. Eduardo Palomo and Edith Gonzalez became one of the most iconic couples in telenovela history, while Enrique Lizalde and Ana Colchero were praised for their villainous roles. Ariel Lopez Padilla also impressed the audience with his charisma and mystery.
-
The popularity and reception of the show
-
Corazon Salvaje was a huge success both in Mexico and abroad. It had high ratings throughout its run and was exported to more than 70 countries around the world. It was dubbed or subtitled in many languages, such as English, French, Italian, Portuguese, Arabic, Turkish, Greek, Romanian, Russian, Polish, Hungarian, Bulgarian, Serbian, Croatian, Slovenian, Albanian, Macedonian, and Chinese.
-
The show received many accolades from critics and fans alike. It won several awards at the TVyNovelas Awards in 1994, such as Best Telenovela, Best Actor (Eduardo Palomo), Best Actress (Edith Gonzalez), Best Antagonist Actor (Enrique Lizalde), Best Antagonist Actress (Ana Colchero), Best Young Lead Actor (Ariel Lopez Padilla), Best Original Story or Adaptation, and Best Direction. It also won the Golden Martín Fierro Award in Argentina for Best Foreign Telenovela in 1995.
-
Corazon Salvaje is considered one of the best telenovelas ever made and has been praised for its compelling story, its historical accuracy, its beautiful scenery, its memorable music, and its outstanding cast. It has been remade twice, in 2009 and 2010, but none of them matched the original's popularity or quality.
-
Why watch Corazon Salvaje with English subtitles?
-
If you are not fluent in Spanish, you might wonder why you should watch Corazon Salvaje with English subtitles instead of dubbing or skipping it altogether. Here are some reasons why watching foreign shows with subtitles can be beneficial and enjoyable for you:
-
The benefits of watching foreign shows with subtitles
-
-
You can improve your language skills: Watching foreign shows with subtitles can help you learn new words, phrases, idioms, and expressions in another language. You can also improve your listening comprehension and pronunciation by hearing how native speakers talk. You can even pick up some cultural references and nuances that might not be translated well in dubbing.
-
You can appreciate the original performance: Watching foreign shows with subtitles can help you appreciate the original voice and emotion of the actors and actresses. You can also enjoy the original soundtrack and sound effects that might be altered or replaced in dubbing. You can also avoid any mistakes or inconsistencies that might occur in dubbing due to different scripts or lip-syncing issues.
-
You can expand your horizons: Watching foreign shows with subtitles can help you expand your horizons and discover new stories, genres, styles, and perspectives from different cultures and countries. You can also learn more about the history, society, politics, religion, art, and customs of other places and people through their media.
-
-
The challenges of finding good subtitles for Corazon Salvaje
-
However, watching foreign shows with subtitles can also pose some challenges, especially if you are looking for good-quality, accurate subtitles for Corazon Salvaje. Some of these challenges are:
-
-
Lack of availability: Finding English subtitles for Corazon Salvaje can be difficult because they are not widely available or accessible online. You might have to search through various websites or forums to find them or request them from other fans or subtitlers. You might also have to deal with broken links or expired downloads that make it hard to get them.
-
Lack of consistency: Finding English subtitles for Corazon Salvaje can be frustrating because they are not consistent or uniform across different sources or episodes. You might have to deal with different formats or styles of subtitles that make it hard to read or follow them. You might also have to deal with different levels or quality of translation that make it hard to understand or enjoy them.
-
Lack of accuracy: Finding English subtitles for Corazon Salvaje can be disappointing because they are not always accurate or faithful to the original dialogue or meaning. You might have to deal with literal, word-for-word translations that lose the nuance or context of the original language. You might also have to deal with errors in grammar, spelling, punctuation, or syntax that make the subtitles hard to read or trust.
-
-
The best sources for Corazon Salvaje English subtitles
-
So, where can you find good English subtitles for Corazon Salvaje? Here are some of the best sources that we recommend:
-
-
DVDs: The easiest and most reliable way to watch Corazon Salvaje with English subtitles is to buy the official DVDs that have them included. You can find them on Amazon or other online stores that sell Mexican telenovelas. The DVDs have high-quality video and audio, as well as accurate and consistent subtitles. However, they can be expensive and hard to find, especially if you live outside of Mexico or the US.
-
YouTube: The most convenient and accessible way to watch Corazon Salvaje with English subtitles is to watch it on YouTube, where some fans have uploaded the episodes with subtitles. You can find them by searching for "Corazon Salvaje English Subtitle" or similar keywords on YouTube. The YouTube videos have decent quality and speed, as well as free and easy access. However, they can be incomplete or removed at any time due to copyright issues or other reasons.
-
Subscene: The most popular and comprehensive way to watch Corazon Salvaje with English subtitles is to download them from Subscene, a website that hosts subtitles for various movies and shows in different languages. You can find them by searching for "Corazon Salvaje" on Subscene and choosing the English subtitles that match your video source. The Subscene subtitles have good quality and variety, as well as user ratings and comments. However, they can be inconsistent or inaccurate depending on the subtitler or the episode.
-
-
How to use HD Online Player to watch Corazon Salvaje with English subtitles?
-
If you want to watch Corazon Salvaje with English subtitles without buying DVDs, watching YouTube videos, or downloading subtitles from Subscene, you can use HD Online Player, a free and easy-to-use streaming software that lets you watch any video online with subtitles of your choice.
-
What is HD Online Player and how does it work?
-
HD Online Player is a program that allows you to stream any video from any website on your computer with subtitles from any source. It works by creating a virtual browser that connects to the website where the video is hosted and plays it on your computer screen. It also allows you to add subtitles from any file or URL that you have on your computer or online.
-
HD Online Player supports various video formats and websites, such as MP4, AVI, MKV, FLV, WMV, MOV, 3GP, WEBM, MPEG, M4V, ASF, VOB, OGV, RMVB, TS, MTS, M2TS, and more. It also supports various subtitle formats and sources, such as SRT, ASS, SSA, SUB, IDX, TXT, XML, VTT, DFXP, and more. It also supports various languages and encodings for subtitles, such as UTF-8, ANSI, Unicode, and more.
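If a downloaded SRT file turns out to be out of sync with your video, you can also correct the file itself before loading it into the player, instead of relying only on the player's sync and delay options. The sketch below is a minimal, standard-library Python example; the file names and the 1.5-second offset are assumptions, not values tied to HD Online Player.

```python
# Minimal sketch: shift every timestamp in an SRT subtitle file by a fixed offset.
# The input/output file names and the offset are illustrative assumptions.
import re
from datetime import timedelta

OFFSET = timedelta(seconds=1.5)   # positive = subtitles appear later
TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift(match):
    h, m, s, ms = (int(g) for g in match.groups())
    t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms) + OFFSET
    total_ms = max(0, int(t.total_seconds() * 1000))
    h, rest = divmod(total_ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

with open("corazon_salvaje_ep1.srt", encoding="utf-8") as src:
    shifted = [TIME_RE.sub(shift, line) for line in src]

with open("corazon_salvaje_ep1_shifted.srt", "w", encoding="utf-8") as dst:
    dst.writelines(shifted)
```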
-
The advantages of using HD Online Player for streaming Corazon Salvaje
-
Using HD Online Player for streaming Corazon Salvaje with English subtitles has many advantages over other methods, such as:
-
-
It is free and easy: HD Online Player is a free software that you can download and install on your computer without any registration or subscription. It is also easy to use and has a simple and intuitive interface that lets you stream videos and add subtitles with just a few clicks.
-
It is fast and smooth: HD Online Player is a fast software that loads videos quickly and smoothly without any buffering or lagging. It also has a smart buffering system that adjusts the video quality according to your internet speed and bandwidth.
-
It is flexible and customizable: HD Online Player is a flexible software that lets you stream videos from any website and add subtitles from any source. It also lets you customize the subtitle settings according to your preferences, such as size, color, font, position, sync, delay, and more.
-
It is safe and secure: HD Online Player is a safe software that does not contain any viruses or malware that might harm your computer or data. It also does not collect or share any of your personal or browsing information with anyone.
-
-
The steps to install and use HD Online Player for Corazon Salvaje
-
To install and use HD Online Player for streaming Corazon Salvaje with English subtitles, you need to follow these steps:
-
-
Download HD Online Player from its official website: https://hdonlineplayer.com/
-
Run the setup file and follow the instructions to install HD Online Player on your computer.
-
Launch HD Online Player and click on the "Open URL" button on the top left corner.
-
Enter the URL of the website where Corazon Salvaje is hosted and click "OK". For example, you can enter https://www.dailymotion.com/video/x6wqf0w which is the link for the first episode of Corazon Salvaje on Dailymotion.
-
Wait for the video to load and play on HD Online Player.
-
Click on the "Subtitle" button on the bottom right corner and choose "Add subtitle file" or "Add subtitle URL".
-
Browse your computer or enter the URL of the subtitle file or source that you want to use for Corazon Salvaje. For example, you can enter https://subscene.com/subtitles/corazn-salvaje-1993/english/2409518 which is the link for the English subtitle for the first episode of Corazon Salvaje on Subscene.
-
Wait for the subtitle to load and sync with the video on HD Online Player.
-
Enjoy watching Corazon Salvaje with English subtitles on HD Online Player!
-
-
Conclusion
-
In conclusion, Corazon Salvaje is a classic Mexican telenovela that tells a captivating story of love and adventure in the 19th century. Its great cast and production made it one of the most successful and acclaimed shows in Latin American television history. It is worth watching with English subtitles if you want to improve your language skills, appreciate the original performances, and expand your horizons. You can watch it with English subtitles using HD Online Player, a free and easy-to-use streaming tool that lets you stream any video online with subtitles of your choice. Just download and install HD Online Player on your computer, enter the URL of the website where Corazon Salvaje is hosted, add the subtitle file or source you want to use, and enjoy watching Corazon Salvaje with English subtitles!
-
FAQs
-
Here are some frequently asked questions about Corazon Salvaje and HD Online Player:
-
-
How many episodes does Corazon Salvaje have? Corazon Salvaje has 80 episodes in total, each lasting about 45 minutes.
-
Where can I watch Corazon Salvaje online? You can watch Corazon Salvaje online on various websites that host Mexican telenovelas, such as Dailymotion, YouTube, or TelenovelasTV.
-
Can I watch Corazon Salvaje with other languages besides English? Yes, you can watch Corazon Salvaje with subtitles in other languages if you can find them online. HD Online Player supports various languages and encodings for subtitles.
-
Can I use HD Online Player for other videos besides Corazon Salvaje? Yes, you can use HD Online Player for any other videos that are available online. HD Online Player supports various video formats and websites.
-
Is HD Online Player compatible with Windows 10? Yes, HD Online Player is compatible with Windows 10, as well as Windows 7, 8, 8.1, XP, and Vista.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Caterpillar ET Factory Password.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Caterpillar ET Factory Password.rar.md
deleted file mode 100644
index 7e9da6eed1d70a62744cea367c725d1c18cead6a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Caterpillar ET Factory Password.rar.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Caterpillar ET Factory Password.rar: What Is It and How to Use It
-
If you are a Caterpillar dealer or technician, you may have heard of Caterpillar ET Factory Password.rar. This is a file that contains factory passwords for various Caterpillar Electronic Technician (Cat ET) functions. Cat ET is a software tool that allows you to communicate, diagnose, and service electronically controlled Caterpillar engines and machines connected to an Electronic Control Module (ECM).
Factory passwords are part of a security system that helps to prevent unauthorized reprogramming of certain parameters, such as full load setting (FLS), fuel trim setting (FTS), or engine speed/timing calibration. Factory passwords also allow the factory to control access to engine calibration parameters and prevent unauthorized erasing of logged events.
-
In order to use factory passwords, you need to have Cat ET installed on your computer and a compatible communication adapter, such as Caterpillar Communication Adapter or Nexiq. You also need to obtain the proper factory passwords from an authorized Caterpillar dealer. The factory passwords are different for each ECM and each programming session. They are based on the following information:
-
-
Serial number of the ECM
-
Engine serial number
-
Serial number for Cat ET
-
Reason code
-
Total tattletale number
-
-
You can find this information on the Cat ET screen for factory passwords. You can also use the "Reset/View Passwords" function to generate two random customer passwords that allow you to access customer password-protected parameters without knowing the actual customer passwords.
-
How to Download Caterpillar ET Factory Password.rar
-
Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions. You can download this file from various online sources, such as blogs, forums, or websites that offer Caterpillar diagnostic software and tools. However, you should be careful when downloading this file, as it may contain viruses, malware, or other harmful content that can damage your computer or compromise your security.
-
-
Before downloading Caterpillar ET Factory Password.rar, you should check the following:
-
-
The source of the file is trustworthy and reputable.
-
The file size and format match the expected values.
-
The file has positive feedback and reviews from other users.
-
The file does not require any additional software or activation codes.
-
-
After downloading Caterpillar ET Factory Password.rar, you should scan it with a reliable antivirus program and extract it with a suitable software tool, such as WinRAR or 7-Zip. You should also backup your original Cat ET files before replacing them with the downloaded ones.
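If the download source also publishes a checksum for the archive, you can verify that the file you received matches it before extracting anything. The sketch below is a generic Python example; the file name is a placeholder, and the technique is not specific to Cat ET.

```python
# Minimal sketch: compute the SHA-256 checksum of a downloaded archive so it can
# be compared against a hash published by the download source (if one exists).
# The file name below is a placeholder.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("Caterpillar ET Factory Password.rar"))
```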
-
How to Use Caterpillar ET Factory Password.rar
-
After downloading and extracting Caterpillar ET Factory Password.rar, you can use it to perform various Cat ET functions that require factory passwords. For example, you can use it to change FLS or FTS values, calibrate engine speed/timing, or clear event codes. To use Caterpillar ET Factory Password.rar, you need to follow these steps:
-
-
Connect your communication adapter to your computer and to the ECM.
-
Launch Cat ET and select the appropriate ECM.
-
Select the "Service" menu and choose the function you want to perform.
-
If Cat ET asks for factory passwords, enter them from the Caterpillar ET Factory Password.rar file.
-
Follow the instructions on the screen to complete the function.
-
-
Note that some functions may require additional steps or information, such as engine serial number or reason code. You should always document the parameters and settings that are programmed into the ECM and keep a permanent record of them.
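One simple way to keep that permanent record is to append each change to a log file as you work. The sketch below is a generic Python example; the field names and sample values are illustrative assumptions and are not taken from Cat ET.

```python
# Minimal sketch: append a record of each programmed parameter to a CSV log.
# Field names and the sample values are illustrative assumptions.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ecm_programming_log.csv")

def log_change(ecm_serial, parameter, old_value, new_value, reason_code):
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "ecm_serial", "parameter",
                             "old_value", "new_value", "reason_code"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), ecm_serial,
                         parameter, old_value, new_value, reason_code])

log_change("ECM-0000000", "FLS", "4", "5", "0")   # made-up example entry
```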
-
Benefits of Using Caterpillar ET Factory Password.rar
-
Using Caterpillar ET Factory Password.rar can provide you with many benefits, such as:
-
-
Improving the performance and efficiency of your Caterpillar engines and machines by adjusting the optimal parameters.
-
Saving time and money by avoiding unnecessary trips to the dealer or service center.
-
Enhancing your knowledge and skills by learning more about the features and functions of Cat ET.
-
Increasing your customer satisfaction and loyalty by providing them with better service and support.
-
-
However, you should also be aware of the risks and responsibilities of using Caterpillar ET Factory Password.rar, such as:
-
-
Following the proper procedures and instructions to avoid damaging the ECM or causing any safety hazards.
-
Respecting the intellectual property rights and confidentiality agreements of Caterpillar and its dealers.
-
Not sharing or distributing Caterpillar ET Factory Password.rar to unauthorized parties or sources.
-
Taking full responsibility for any consequences or liabilities that may arise from using Caterpillar ET Factory Password.rar.
-
-
-
How to Get Help and Support for Caterpillar ET Factory Password.rar
-
If you have any questions or issues regarding Caterpillar ET Factory Password.rar, you can get help and support from various sources, such as:
-
-
The official Caterpillar website, where you can find manuals, guides, videos, FAQs, and other resources for Cat ET and other Caterpillar products and services.
-
The authorized Caterpillar dealer or service center near you, where you can get professional advice, assistance, and training from qualified technicians and experts.
-
The online Caterpillar community, where you can interact with other Caterpillar users, customers, and enthusiasts, share your experiences and feedback, and learn from their tips and tricks.
-
-
Remember that using Caterpillar ET Factory Password.rar is a privilege and not a right. You should always use it with respect and caution, and follow the ethical and legal standards of Caterpillar and its dealers. By doing so, you can enjoy the benefits of using Caterpillar ET Factory Password.rar without compromising your safety or reputation.
-
How to Update and Upgrade Caterpillar ET Factory Password.rar
-
Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. However, this file may not work with newer versions of Cat ET or newer models of Caterpillar engines and machines. Therefore, you may need to update and upgrade Caterpillar ET Factory Password.rar from time to time to ensure its compatibility and functionality.
-
To update and upgrade Caterpillar ET Factory Password.rar, you can follow these steps:
-
-
Check the current version of your Cat ET software and the model and serial number of your Caterpillar engine or machine.
-
Visit the official Caterpillar website or contact your authorized Caterpillar dealer or service center to find out if there are any updates or upgrades available for your Cat ET software or your Caterpillar engine or machine.
-
If there are any updates or upgrades available, download them from the official Caterpillar website or get them from your authorized Caterpillar dealer or service center.
-
Install the updates or upgrades on your computer and on your Caterpillar engine or machine according to the instructions provided.
-
Download a new version of Caterpillar ET Factory Password.rar that matches the updated or upgraded Cat ET software and Caterpillar engine or machine from a reliable and reputable online source.
-
Scan the new version of Caterpillar ET Factory Password.rar with a reliable antivirus program and extract it with a suitable software tool.
-
Backup your original Cat ET files and replace them with the new ones from the new version of Caterpillar ET Factory Password.rar.
-
-
Note that some updates or upgrades may require additional steps or information, such as activation codes or registration keys. You should always follow the instructions and recommendations from Caterpillar and its dealers when updating or upgrading your Cat ET software or your Caterpillar engine or machine.
-
How to Troubleshoot and Fix Common Problems with Caterpillar ET Factory Password.rar
-
Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. However, you may encounter some common problems when using this file, such as:
-
-
The file does not work with your Cat ET version or your Caterpillar engine or machine model.
-
The file does not contain the factory passwords for the function you want to perform.
-
The file is corrupted, damaged, or infected by viruses or malware.
-
The file causes errors or crashes on your Cat ET software or your Caterpillar engine or machine.
-
-
To troubleshoot and fix these common problems, you can try the following solutions:
-
-
Make sure you have downloaded the latest version of Caterpillar ET Factory Password.rar that matches your Cat ET version and your Caterpillar engine or machine model.
-
Make sure you have entered the correct information on the Cat ET screen for factory passwords, such as serial number, reason code, and total tattletale.
-
Make sure you have scanned and extracted the file with a reliable antivirus program and a suitable software tool.
-
Make sure you have backed up your original Cat ET files before replacing them with the ones from the file.
-
Make sure you have followed the proper procedures and instructions when using the file to perform Cat ET functions.
-
If none of these solutions work, contact your authorized Caterpillar dealer or service center for further assistance.
-
-
How to Learn More about Caterpillar ET Factory Password.rar
-
Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. If you want to learn more about this file and how to use it effectively, you can use the following resources:
-
-
The official Caterpillar website, where you can find manuals, guides, videos, FAQs, and other resources for Cat ET and other Caterpillar products and services.
-
The authorized Caterpillar dealer or service center near you, where you can get professional advice, assistance, and training from qualified technicians and experts.
-
The online Caterpillar community, where you can interact with other Caterpillar users, customers, and enthusiasts, share your experiences and feedback, and learn from their tips and tricks.
-
The online sources that offer Caterpillar diagnostic software and tools, such as blogs, forums, or websites that provide Caterpillar ET Factory Password.rar and other files. However, you should be careful about their authenticity and security.
-
-
By using these resources, you can enhance your knowledge and skills about Caterpillar ET Factory Password.rar and how to use it to improve the performance and efficiency of your Caterpillar engines and machines.
-
Conclusion
-
Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. You can download this file from various online sources, but you should be careful about its authenticity and security. You can use this file to perform various Cat ET functions that can improve the performance and efficiency of your Caterpillar engines and machines. However, you should also follow the proper procedures and instructions, respect the intellectual property rights and confidentiality agreements, and take full responsibility for any consequences or liabilities that may arise from using Caterpillar ET Factory Password.rar.
-
If you have any questions or issues regarding Caterpillar ET Factory Password.rar, you can get help and support from various sources, such as the official Caterpillar website, the authorized Caterpillar dealer or service center, or the online Caterpillar community. You can also update and upgrade Caterpillar ET Factory Password.rar from time to time to ensure its compatibility and functionality. By using Caterpillar ET Factory Password.rar with respect and caution, you can enjoy the benefits of using Cat ET without compromising your safety or reputation.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Black uTorrent Pro APK The Ultimate App for Torrent Lovers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Black uTorrent Pro APK The Ultimate App for Torrent Lovers.md
deleted file mode 100644
index f1d0e2512ce853ff1f61a29d1cbfbaf68b3ae1a5..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Black uTorrent Pro APK The Ultimate App for Torrent Lovers.md
+++ /dev/null
@@ -1,73 +0,0 @@
-
-
What Is Black uTorrent Pro APK and Why You Need It
-
If you are looking for a fast and easy way to download large files from the internet, you might have heard of uTorrent. uTorrent is one of the most popular and widely used torrent clients in the world. It allows you to download files using BitTorrent, a peer-to-peer (P2P) file-sharing protocol that distributes data among users without relying on a central server.
However, uTorrent is not perfect. The official version of uTorrent has some drawbacks, such as annoying ads, limited features, high battery consumption, and potential security risks. That's why some users prefer to use modded versions of uTorrent, such as black uTorrent pro apk.
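To make the idea of a magnet link concrete, here is a small Python sketch that pulls the info-hash, display name, and tracker list out of one. The magnet URI in the example is made up and does not point to any real torrent.

```python
# Minimal sketch: inspect a magnet link (the compact text form of a torrent).
# The magnet URI below is a made-up example, not a real torrent.
from urllib.parse import parse_qs

magnet = (
    "magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
    "&dn=Example+File+Name"
    "&tr=udp%3A%2F%2Ftracker.example.org%3A1337"
)

params = parse_qs(magnet.split("?", 1)[1])         # query string after "magnet:?"
info_hash = params["xt"][0].rsplit(":", 1)[-1]     # xt=urn:btih:<info-hash>
name = params.get("dn", ["(unnamed)"])[0]          # display name (URL-decoded)
trackers = params.get("tr", [])                    # tracker URLs (URL-decoded)

print("info-hash:", info_hash)
print("name:", name)
print("trackers:", trackers)
```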
-
Black uTorrent pro apk is a modified version of uTorrent that unlocks all the pro features and removes all the ads. It also has some additional features that make it more convenient and efficient to use. Here are some of the benefits of using black uTorrent pro apk:
-
-
No ads: You won't see any annoying or intrusive ads while using black uTorrent pro apk. This means you can enjoy a cleaner and smoother user interface without any distractions.
-
Battery saver: Black uTorrent pro apk has a battery saver feature that automatically suspends torrenting when your battery level is low. This helps you save battery life and prevent your device from overheating.
-
Auto shutdown: Black uTorrent pro apk also has an auto shutdown feature that automatically shuts down the app when your downloads are complete. This helps you save data and battery usage and avoid running unnecessary processes in the background.
-
File conversion: Black uTorrent pro apk allows you to convert downloaded files to different formats, such as MP3, MP4, AVI, etc. This makes it easier to play them on different devices or platforms.
-
Premium support: Black uTorrent pro apk gives you access to premium customer support from the developers. You can contact them anytime if you have any questions or issues with the app.
-
-
How to Download and Install Black uTorrent Pro APK on Your Android Device
-
If you want to try out black uTorrent pro apk, you need to download and install it on your Android device first. Here are the steps you need to follow:
-
-
Find a reliable source to download the apk file. You can use our website to get the latest version of black uTorrent pro apk. Make sure the source is trustworthy and virus-free. You can scan the file with an antivirus app before installing it.
-
Enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate and tap on the apk file to start the installation process. You can use a file manager app to find the file in your downloads folder or wherever you saved it.
-
Follow the on-screen instructions and grant the necessary permissions. The app will ask you to allow access to your storage, network, and other features. Tap on Install and wait for the process to finish.
-
Launch the app and enjoy the pro features. You will see a black icon of uTorrent on your app drawer or home screen. Tap on it to open the app and start using it.
-
-
How to Use Black uTorrent Pro APK to Download Torrent Files
-
Now that you have installed black uTorrent pro apk on your device, you can use it to download torrent files or magnet links from various sources. Here are the steps you need to follow:
-
-
Search for the torrent file or magnet link you want to download. You can use any torrent site or search engine that you trust, such as The Pirate Bay, 1337x, RARBG, etc. Make sure the file has enough seeders and positive comments before downloading it.
-
Copy the torrent file or magnet link and paste it in the app. You can either download the torrent file to your device and open it with black uTorrent pro apk, or copy the magnet link and paste it in the app's search bar. The app will automatically detect the file or link and start downloading it.
-
Choose your download location and other settings. You can change the default download location by going to Settings > Directories > Download Location and selecting a folder of your choice. You can also adjust other settings, such as bandwidth limit, download queue, network interface, etc.
-
Start the download and monitor the progress. You will see a list of your active downloads in the app's main screen. You can tap on each download to see more details, such as speed, size, peers, trackers, etc. You can also pause, resume, or delete downloads as you wish.
-
Open the downloaded file or folder with your preferred app or player. Once the download is complete, you can access the file or folder by tapping on it in the app or using a file manager app. You can then open it with any app or player that supports the file format.
-
-
The Risks and Precautions of Torrenting with Black uTorrent Pro APK
-
Torrenting with black uTorrent pro apk can be a great way to get free and fast downloads of movies, music, games, software, and more. However, torrenting also comes with some risks and challenges that you need to be aware of and prepared for. Here are some of them:
-
-
-
Risks of torrenting: Torrenting involves downloading files from unknown sources that may contain malware, viruses, spyware, or other harmful programs that can infect your device or compromise your data. Torrenting also exposes your IP address to other users who may monitor your activity or target you for cyberattacks. Torrenting may also violate copyright laws or other regulations in some countries or regions, which may result in legal consequences or penalties.
-
Precautions of torrenting: To avoid or minimize the risks of torrenting, you should take some precautions before and while using black uTorrent pro apk. Some of these precautions are:
-
Scanning files: You should always scan downloaded files with an antivirus app before opening them or running them on your device. This will help you detect and remove any malware or viruses that may be hidden in them.
-
Checking comments: You should always check the comments section of torrent sites or search engines before downloading any file or link. This will help you get feedback from other users who have downloaded the same file or link and see if they encountered any problems or issues with it.
-
Using a VPN: You should always use a VPN (virtual private network) when torrenting with black uTorrent pro apk. A VPN will encrypt your traffic and hide your IP address from other users and trackers. This will protect your privacy and security online and prevent anyone from spying on your activity or tracing your location. A VPN will also help you bypass any geo-restrictions or censorship that may block access to certain torrent sites or content.
-
-
-
Conclusion
-
Black uTorrent pro apk is a powerful and convenient app that lets you download and enjoy torrent files on your Android device. It offers many pro features that enhance your torrenting experience, such as no ads, battery saver, auto shutdown, file conversion, and premium support. However, torrenting also comes with some risks and challenges that you need to be aware of and prepared for, such as malware, viruses, legal issues, ISP throttling, etc. Therefore, you should always take some precautions before and while using black uTorrent pro apk, such as scanning files, checking comments, and using a VPN. By doing so, you can enjoy the benefits of torrenting without compromising your safety or security. We hope this article has helped you understand what black uTorrent pro apk is and how to use it to download torrent files on your Android device. If you have any questions or feedback, please feel free to leave a comment below. Happy torrenting!
FAQs
-
Here are some frequently asked questions about black uTorrent pro apk:
-
-
-
Question
-
Answer
-
-
-
Is black uTorrent pro apk safe to use?
-
Black uTorrent pro apk is safe to use as long as you download it from a reliable source and scan it with an antivirus app before installing it. However, the files or links you download with it may not be safe, so you should always check them before opening them or running them on your device.
-
-
-
Is black uTorrent pro apk legal to use?
-
Black uTorrent pro apk is legal to use as long as you use it for personal and non-commercial purposes. However, the content you download with it may not be legal, depending on the source and the jurisdiction. You should always respect the rights of the content creators and owners and follow the laws and regulations of your country or region.
-
-
-
What is the difference between black uTorrent pro apk and uTorrent pro?
-
Black uTorrent pro apk is a modified version of uTorrent pro that unlocks all the pro features and removes all the ads. It also has some additional features that make it more convenient and efficient to use. uTorrent pro is the official version of uTorrent that requires a subscription fee to access the pro features.
-
-
-
How can I update black uTorrent pro apk?
-
You can update black uTorrent pro apk by downloading the latest version of the apk file from our website or any other source you trust. You can then install it over the existing app without losing your settings or downloads.
-
-
-
How can I uninstall black uTorrent pro apk?
-
You can uninstall black uTorrent pro apk by going to Settings > Apps > Black uTorrent Pro APK and tapping on Uninstall. You can also delete the apk file from your device if you don't need it anymore.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Arknights A Role Playing Game with Stunning Graphics and Sci-Fi Plot. Download Now for Mac and PC.md b/spaces/1phancelerku/anime-remove-background/Arknights A Role Playing Game with Stunning Graphics and Sci-Fi Plot. Download Now for Mac and PC.md
deleted file mode 100644
index d9a553010cd702ad48856e584071af14e8088e37..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Arknights A Role Playing Game with Stunning Graphics and Sci-Fi Plot. Download Now for Mac and PC.md
+++ /dev/null
@@ -1,112 +0,0 @@
-
-
How to Download and Play Arknights on Mac
-
Arknights is a popular tactical RPG/tower defense mobile game that has captivated millions of players around the world. If you are one of them, you might be wondering if you can play Arknights on your Mac computer. The answer is yes, you can! In this article, we will show you how to download and play Arknights on Mac using an Android emulator. We will also give you some tips and tricks to enhance your gameplay experience. Let's get started!
-
What is Arknights?
-
Arknights is a free-to-play mobile game developed by Chinese developer Hypergryph and published by Yostar. It was released in China in May 2019, and in other countries in January 2020. It is available on Android and iOS platforms and features gacha game mechanics.
The game combines elements of tactical RPG and tower defense genres, with a rich sci-fi plot and stunning graphics. You play as the Doctor, a leader of a rescue organization called Rhodes Island, who has lost his memory due to an unknown infection. You have to recruit and train Operators, who are people with special abilities, to fight against a deadly threat from another world called Reunion.
-
The game offers hundreds of different Operators, each with their own skills, abilities, and classes. You have to strategically place them on the battlefield to block and defeat the enemies. You can also activate their skills for special effects or withdraw them for redeployment. The game has various modes, such as story mode, challenge mode, event mode, annihilation mode, contingency contract mode, and integrated strategies mode.
-
The game also features a captivating story with multiple chapters and side stories, as well as a diverse cast of characters with their own personalities and backgrounds. The game has received positive reviews from critics and players alike, praising its gameplay, graphics, story, music, voice acting, and character design.
-
Why Play Arknights on Mac?
-
While Arknights is designed for mobile devices, you might want to play it on your Mac computer for various reasons. Here are some of the benefits of playing Arknights on Mac:
-
-
You can enjoy the game on a larger screen, which can enhance your immersion and appreciation of the game's visuals.
-
You can play the game with better graphics and performance, as you can adjust the settings according to your Mac's specifications.
-
You can use keyboard and mouse controls, which can give you more precision and convenience than touch controls.
-
You can save your battery life and storage space on your mobile device.
-
You can multitask with other apps or programs on your Mac while playing the game.
-
-
How to Install Arknights on Mac?
-
To play Arknights on your Mac computer, you will need to use an Android emulator. An Android emulator is a program that simulates an Android device on your computer, so you can install and run Android apps and games on your Mac.
-
There are many Android emulators available for Mac users, such as BlueStacks, NoxPlayer, MEmu Player, LDPlayer, and more.
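Because an emulator behaves like a networked Android device, you can also talk to it from the command line with Android's adb tool. The sketch below is an optional illustration, not part of the BlueStacks setup steps: it assumes the Android platform tools (adb) are installed and that your emulator exposes an ADB address (recent BlueStacks versions show this in their advanced settings); the address used here is a placeholder.

```python
# Minimal sketch: connect to a running Android emulator over adb and list its
# installed packages. Assumes adb (Android platform tools) is on your PATH.
# The address below is a placeholder -- check your emulator's settings.
import subprocess

EMULATOR_ADDR = "127.0.0.1:5555"   # placeholder ADB address

subprocess.run(["adb", "connect", EMULATOR_ADDR], check=True)
result = subprocess.run(
    ["adb", "-s", EMULATOR_ADDR, "shell", "pm", "list", "packages"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```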
One of the most popular and recommended Android emulators for Mac is BlueStacks. BlueStacks is a powerful and user-friendly emulator that can run Arknights smoothly and efficiently. Here are the steps to download and install Arknights on Mac using BlueStacks:
-
-
Go to the official website of BlueStacks and download the latest version of the emulator for Mac. You can use this link: https://www.bluestacks.com/download.html
-
Once the download is complete, open the installer file and follow the instructions to install BlueStacks on your Mac. You might need to grant some permissions or enter your password during the process.
-
After the installation is done, launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
-
On the home screen of BlueStacks, look for the Google Play Store icon and click on it. This will open the Play Store app on the emulator.
-
In the search bar of the Play Store, type "Arknights" and hit enter. You will see a list of results related to the game.
-
Select the Arknights app from the list and click on the "Install" button. This will start downloading and installing the game on your Mac.
-
Once the installation is complete, you can find the Arknights icon on the home screen of BlueStacks. Click on it to launch the game and enjoy playing Arknights on your Mac.
-
-
How to Link Your Mobile Account and Recover Your Progress on Mac?
-
If you have already played Arknights on your mobile device and want to continue your progress on your Mac, you will need to link your mobile account to your emulator account. Here are the steps to do that:
-
-
On your mobile device, open Arknights and tap on the gear icon on the top right corner of the screen. This will open the settings menu.
-
Tap on "Account" and then tap on "Bind Account". You will see a list of options to bind your account, such as Facebook, Twitter, Yostar, or Apple ID.
-
Select one of the options and follow the instructions to bind your account. You will need to enter your login details or scan a QR code depending on the option you choose.
-
Once your account is bound, you will see a confirmation message on your screen. You can now close Arknights on your mobile device.
-
On your Mac, launch BlueStacks and open Arknights. On the title screen, tap on "Account" and then tap on "Switch Account". You will see a list of options to switch your account, such as Facebook, Twitter, Yostar, or Apple ID.
-
Select the same option that you used to bind your account on your mobile device and follow the instructions to switch your account. You will need to enter your login details or scan a QR code depending on the option you choose.
-
Once your account is switched, you will see a confirmation message on your screen. You can now access your progress and data from your mobile device on your Mac.
-
-
Tips and Tricks for Playing Arknights on Mac
-
Now that you have installed Arknights on your Mac, you might want to know some tips and tricks to improve your gameplay experience. Here are some of them:
-
-
You can adjust the settings of BlueStacks to optimize its performance and compatibility with Arknights. For example, you can change the resolution, graphics quality, frame rate, memory allocation, CPU cores, etc. You can also enable or disable features such as high FPS mode, game mode, eco mode, etc.
-
You can use keyboard and mouse controls to play Arknights more comfortably and conveniently than touch controls. You can either use the default key mapping or customize it according to your preference. You can also use macros to automate some actions or commands in the game.
-
You can use the screenshot and video recording features of BlueStacks to capture your gameplay moments and share them with others. You can also stream your gameplay live to platforms such as Twitch or YouTube using BlueStacks.
-
You can use some strategies to enhance your performance in Arknights, such as leveling up and promoting your Operators, upgrading their skills and potentials, choosing the right team composition and formation, using effective skill timing and deployment order, etc.
-
-
Conclusion
-
Arknights is a fun and addictive game that combines tactical RPG and tower defense elements with a sci-fi plot and stunning graphics. If you want to play Arknights on your Mac computer, you can do so by using an Android emulator such as BlueStacks. You can download and install Arknights on your Mac easily and quickly, and enjoy the game on a larger screen, with better graphics and performance, and using keyboard and mouse controls. You can also link your mobile account and recover your progress on your Mac, and use some tips and tricks to optimize your gameplay experience. Arknights is a game that you don't want to miss, so why not give it a try on your Mac today?
-
FAQs
-
Here are some frequently asked questions and answers about Arknights on Mac:
-
Is Arknights free to play on Mac?
-
Yes, Arknights is free to play on Mac, as long as you have an Android emulator such as BlueStacks installed on your Mac. You can download and install Arknights from the Google Play Store on the emulator without paying anything. However, the game does have some in-app purchases that you can buy with real money if you want to enhance your gameplay experience.
-
Is Arknights compatible with Mac?
-
Yes, Arknights is compatible with Mac, as long as you use an Android emulator such as BlueStacks to run it. BlueStacks is compatible with most Mac devices and operating systems, and can run Arknights smoothly and efficiently. You can check the minimum system requirements for BlueStacks on its official website.
-
How to update Arknights on Mac?
-
To update Arknights on your Mac, you need to update it from the Google Play Store on the emulator. You can either enable the auto-update feature or manually check for updates. To manually check for updates, you need to open the Play Store app on the emulator, go to the "My apps & games" section, find Arknights from the list of installed apps, and click on the "Update" button if there is one available.
-
How to transfer data from Arknights on mobile to Mac?
-
To transfer data from Arknights on your mobile device to your Mac, you need to link your mobile account to your emulator account. You can do this by binding your account to one of the options available in the game's settings menu, such as Facebook, Twitter, Yostar, or Apple ID. Then, you need to switch your account to the same option on the emulator. This will allow you to access your progress and data from your mobile device on your Mac.
-
How to fix Arknights crashing or not loading on Mac?
-
If you encounter any issues with Arknights crashing or not loading on your Mac, you can try some of the following solutions:
-
-
Restart the game or the emulator.
-
Clear the cache and data of the game or the emulator.
-
Update the game or the emulator to the latest version.
-
Check your internet connection and firewall settings.
-
Contact the game's or the emulator's customer support for further assistance.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Everything You Need to Know About the Best Mobile Game of 2023.md b/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Everything You Need to Know About the Best Mobile Game of 2023.md
deleted file mode 100644
index 07053378b875633c349d048e14e1d335daf62632..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Everything You Need to Know About the Best Mobile Game of 2023.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
Brawl Stars APK Download: How to Play the Ultimate Mobile Brawler on Your Android Device
-
If you are looking for a fast-paced, action-packed, and fun multiplayer game to play on your Android device, you should definitely check out Brawl Stars. Brawl Stars is a game developed by Supercell, the makers of Clash of Clans and Clash Royale. It features various game modes, characters, and events that will keep you hooked for hours.
-
But how can you download and install Brawl Stars APK on your Android device? And what are some tips and tricks to help you become a better brawler? In this article, we will answer these questions and more. Let's get started!
Brawl Stars is a mobile game that combines elements of twin-stick shooters, MOBAs, and battle royales. You can choose from over 20 different brawlers, each with their own unique abilities, weapons, and skins. You can also team up with your friends or play solo in various game modes, such as:
-
-
Gem Grab: Collect and hold 10 gems to win, but don't let the enemy team take them from you.
-
Showdown: Be the last brawler standing in a solo or duo battle royale.
-
Brawl Ball: Score two goals before the other team in a soccer-like match.
-
Bounty: Take out opponents to earn stars, but don't let them pick you off.
-
Heist: Protect your team's safe and try to crack open your opponent's safe.
-
Special Events: Limited time PvE and PvP game modes with unique rewards.
-
Championship Challenge: Compete in in-game qualifiers for a chance to join the Brawl Stars esports scene.
-
-
Brawl Stars is constantly evolving with new brawlers, skins, maps, events, and game modes. It also has a Brawl Pass system that lets you complete quests, open boxes, earn gems, pins, and an exclusive skin every season.
-
How to Download Brawl Stars APK?
-
Brawl Stars is free to download and play on both iOS and Android devices. However, some regions may not have access to the game on the Google Play Store. If that's the case for you, don't worry. You can still download and install Brawl Stars APK from other sources.
-
An APK file is an Android application package that contains all the files needed to run an app on your device. To download Brawl Stars APK, you need to follow these steps:
-
-
Go to a trusted website that offers Brawl Stars APK download links. Some examples are Uptodown, Softpedia, and Games.lol. Make sure you download the latest version of the game.
-
Once you have downloaded the APK file, locate it on your device's file manager and tap on it to install it. You may need to enable installation from unknown sources in your device's settings.
-
Wait for the installation process to finish and launch the game. You may need to download some additional data before you can play.
-
Enjoy Brawl Stars on your Android device!
-
-
Note: Downloading APK files from third-party sources may pose some risks to your device's security and performance. Make sure you only download from reputable websites and scan the files for viruses before installing them.
-
What are Some Brawl Stars Tips and Tricks?
-
Brawl Stars is a game that requires skill, strategy, and teamwork to win. Here are some tips and tricks that will help you improve your gameplay and become a star brawler:
-
brawl stars apk download latest version
-brawl stars apk download for android
-brawl stars apk download for pc
-brawl stars apk download mod
-brawl stars apk download hack
-brawl stars apk download free
-brawl stars apk download 2023
-brawl stars apk download update
-brawl stars apk download softpedia
-brawl stars apk download no verification
-brawl stars apk download unlimited gems
-brawl stars apk download for ios
-brawl stars apk download for windows 10
-brawl stars apk download nulls
-brawl stars apk download private server
-brawl stars apk download rexdl
-brawl stars apk download apkpure
-brawl stars apk download uptodown
-brawl stars apk download revdl
-brawl stars apk download android 1
-brawl stars apk download mediafıre
-brawl stars apk download mega
-brawl stars apk download online
-brawl stars apk download old version
-brawl stars apk download original
-brawl stars apk download offline
-brawl stars apk download obb
-brawl stars apk download play store
-brawl stars apk download pc windows 7
-brawl stars apk download pc windows 8.1
-brawl stars apk download pc windows xp
-brawl stars apk download pc bluestacks
-brawl stars apk download pc nox player
-brawl stars apk download pc gameloop
-brawl stars apk download pc memu play
-brawl stars apk download reddit
-brawl stars apk download real
-brawl stars apk download rebrawl
-brawl stars apk download rey modz official
-brawl stars apk download rey modz pro 2.0
-brawl stars apk download supercell
-brawl stars apk download safe
-brawl stars apk download site
-brawl stars apk download server error 43 fix
-
Use Obstacles to Your Advantage
-
The maps in Brawl Stars have various obstacles such as rocks, barrels, mushrooms, and walls that can block enemy fire. You can use these objects to hide behind for cover or to ambush your opponents. However, be careful of brawlers that can break through obstacles with their super abilities or gadgets.
-
Don't Take on Tank Brawlers Alone
-
Tank brawlers are those that have high health and damage, such as El Primo, Bull, Frank, and Rosa. They can easily overpower you in close-range combat, especially if they have their super abilities ready. If you encounter a tank brawler, try to keep your distance and chip away at their health with your teammates. Alternatively, you can use brawlers that can counter them, such as Shelly, Spike, or Emz.
-
Know Your Brawler's Role and Strengths
-
Brawl Stars has four types of brawlers: Fighter, Sharpshooter, Heavyweight, and Support. Each type has its own role and strengths in the game. For example, fighters are good at dealing damage and controlling the map, sharpshooters are good at sniping and poking enemies from afar, heavyweights are good at tanking and breaking through defenses, and support are good at healing and buffing allies. You should know your brawler's type and play accordingly to maximize their potential.
-
Use Your Super Ability Wisely
-
Your super ability is a powerful move that can turn the tide of the battle. However, it takes time to charge up and can be wasted if used incorrectly. You should use your super ability when it can have the most impact, such as securing a kill, saving an ally, or escaping a sticky situation. You should also be aware of your enemy's super abilities and try to dodge or counter them.
-
Communicate and Coordinate with Your Teammates
-
Brawl Stars is a team-based game that requires coordination and communication to win. You should use the in-game chat or voice chat to communicate with your teammates and plan your strategies. You can also use the quick chat commands or pins to convey your emotions or intentions. For example, you can use the thumbs up pin to show approval or the angry pin to show frustration. You can also use the attack, defend, or retreat commands to signal your teammates what to do.
-
How to Compare Brawlers in Brawl Stars?
-
If you want to know how different brawlers stack up against each other in terms of stats, abilities, and performance, you can use a table to compare them. Here is an example of a table that compares four popular brawlers in Brawl Stars:
-
-
-
| Brawler | Type | Health | Damage | Range | Super Ability |
| --- | --- | --- | --- | --- | --- |
| Shelly | Fighter | 3600 | 300-420 per shell | 7.67 tiles | Fires a powerful blast that knocks back enemies and destroys obstacles. |
| Nita | Fighter | 3800 | 800 per hit | 5.5 tiles | Summons a big bear that attacks enemies and has high health. |
| Crow | Sharpshooter | 3360 | 320 per dagger (plus poison) | 10 tiles | Fires a ring of daggers that deal damage and poison enemies. |
| Poco | Support | 3800 | 700 per hit (plus healing) | 7 tiles (wide spread) | Sends out a wave of music that heals himself and his allies. |
-
-
-
You can use this table to see which brawlers have higher or lower health, damage, range, or super abilities. You can also use this table to find out which brawlers are better suited for certain game modes or situations.
-
Conclusion: Brawl Stars APK Download is Worth It!
-
Brawl Stars is one of the best mobile games you can play on your Android device. It has amazing graphics, gameplay, characters, and features that will keep you entertained for hours. Whether you want to play solo or with your friends, you will always find something new and exciting in Brawl Stars.
-
If you want to download Brawl Stars APK on your Android device, you can follow the steps we mentioned above. Just make sure you download from a trusted source and scan the file for viruses before installing it. Once you have installed the game, you can start brawling with millions of players around the world!
-
We hope this article helped you learn more about Brawl Stars APK download and how to play the game better. If you have any questions about Brawl Stars APK download or the game itself, you can check out the FAQs below. You may find the answers you are looking for.
FAQs
-
Is Brawl Stars APK Download Safe?
-
Brawl Stars APK download is safe as long as you download from a reputable website and scan the file for viruses before installing it. However, you should be careful of fake or malicious websites that may try to trick you into downloading harmful files or stealing your personal information. Always check the reviews, ratings, and comments of the website and the file before downloading it.
-
Is Brawl Stars APK Download Legal?
-
Brawl Stars APK download is legal as long as you do not use it to violate the terms of service of the game or the Google Play Store. For example, you should not use it to hack, cheat, or mod the game in any way. You should also not use it to distribute or sell the game without permission from Supercell. If you do any of these things, you may face legal consequences or get banned from the game.
-
How to Update Brawl Stars APK?
-
Brawl Stars APK may not update automatically on your device, unlike the official version from the Google Play Store. To update Brawl Stars APK, you need to download and install the latest version of the file from the same website you downloaded it from. You can also check for updates in the game settings or on the official Brawl Stars website. Make sure you back up your game data before updating to avoid losing your progress.
-
How to Play Brawl Stars on PC?
-
If you want to play Brawl Stars on your PC, you need to use an Android emulator. An Android emulator is a program that allows you to run Android apps and games on your PC. Some popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. To play Brawl Stars on PC, you need to follow these steps:
-
-
Download and install an Android emulator on your PC.
-
Launch the emulator and sign in with your Google account.
-
Download and install Brawl Stars APK from a trusted website or from the emulator's app store.
-
Launch Brawl Stars and enjoy playing on a bigger screen with better controls.
-
-
How to Get Free Gems in Brawl Stars?
-
Gems are the premium currency in Brawl Stars that can be used to buy skins, boxes, brawl passes, and other items. You can get free gems in Brawl Stars by completing quests, opening boxes, watching ads, participating in events, or using codes. You can also get free gems by using third-party apps or websites that offer surveys, tasks, or rewards. However, you should be careful of scams or hacks that may try to steal your account or personal information.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Wordscapes Uncrossed Mod APK for Free - Unlimited Coins and Hints.md b/spaces/1phancelerku/anime-remove-background/Download Wordscapes Uncrossed Mod APK for Free - Unlimited Coins and Hints.md
deleted file mode 100644
index 8dfbc7013ba04dadbd30bac153516179e85c111d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Wordscapes Uncrossed Mod APK for Free - Unlimited Coins and Hints.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-
Wordscapes Uncrossed Mod APK: A Fun and Challenging Word Game
-
If you love word games, you might have heard of Wordscapes, one of the most popular and addictive games in the genre. But did you know that there is a sequel to Wordscapes that is even more fun and challenging? It's called Wordscapes Uncrossed, and it's a game that will test your brain power and vocabulary skills like never before.
In this article, we'll tell you everything you need to know about Wordscapes Uncrossed, how to play it, how to download and install its mod APK version, and how to enjoy it safely and responsibly. So, if you're ready to dive into the world of words, let's get started!
-
How to Play Wordscapes Uncrossed
-
The basic rules and gameplay of Wordscapes Uncrossed
-
Wordscapes Uncrossed is a word puzzle game that is similar to crossword puzzles, but with a twist. Instead of filling in the blanks with clues, you have to swipe letters on the screen to form words that fit into the grid. The words can be horizontal, vertical, or diagonal, as long as they are connected by a line.
-
The game starts with easy puzzles that have only a few letters and words, but as you progress, the puzzles get harder and bigger, with more letters and words to find. You also have to deal with bonus words, which are extra words that are not part of the grid but can earn you coins if you find them.
-
The different modes and levels of Wordscapes Uncrossed
-
Wordscapes Uncrossed has two main modes: Classic and Daily. In Classic mode, you can play through hundreds of levels that are divided into different themes, such as Forest, Sky, Ocean, Canyon, etc. Each theme has its own background image and music that create a relaxing atmosphere for playing.
-
In Daily mode, you can play a new puzzle every day that is based on the current date. The daily puzzles are more challenging than the classic ones, but they also offer more rewards, such as coins, hints, and stars. You can also compare your score with other players around the world on the leaderboard.
-
wordscapes uncrossed apk download
-wordscapes uncrossed game free
-wordscapes uncrossed mod apk unlimited coins
-wordscapes uncrossed latest version
-wordscapes uncrossed hack apk
-wordscapes uncrossed word puzzle
-wordscapes uncrossed android game
-wordscapes uncrossed cheats and answers
-wordscapes uncrossed online play
-wordscapes uncrossed for pc
-wordscapes uncrossed app store
-wordscapes uncrossed by peoplefun
-wordscapes uncrossed level 1
-wordscapes uncrossed review
-wordscapes uncrossed tips and tricks
-wordscapes uncrossed mod apk 2023
-wordscapes uncrossed best word game
-wordscapes uncrossed no ads
-wordscapes uncrossed premium apk
-wordscapes uncrossed update
-wordscapes uncrossed how to play
-wordscapes uncrossed daily puzzle
-wordscapes uncrossed bonus words
-wordscapes uncrossed anagram solver
-wordscapes uncrossed relaxing backgrounds
-wordscapes uncrossed mod apk rexdl
-wordscapes uncrossed brain teaser
-wordscapes uncrossed crossword game
-wordscapes uncrossed offline mode
-wordscapes uncrossed new levels
-wordscapes uncrossed mod apk revdl
-wordscapes uncrossed fun word quiz
-wordscapes uncrossed challenge your mind
-wordscapes uncrossed apk pure
-wordscapes uncrossed mod apk happymod
-wordscapes uncrossed easy to hard
-wordscapes uncrossed word finder
-wordscapes uncrossed mod apk android 1
-wordscapes uncrossed free coins
-wordscapes uncrossed mod menu apk
-wordscapes uncrossed mod apk unlimited hints
-wordscapes uncrossed word unscramble game
-wordscapes uncrossed mod apk 1.3.1
-wordscapes uncrossed terms of service
-wordscapes uncrossed mod apk latest version
-wordscapes uncrossed word search game
-wordscapes uncrossed mod apk no root
-wordscapes uncrossed mod apk ios
-
The benefits of playing Wordscapes Uncrossed for your brain and vocabulary
-
Wordscapes Uncrossed is not only a fun game, but also a great way to improve your brain function and vocabulary. By playing this game, you can:
-
-
Enhance your memory, concentration, and problem-solving skills
-
Learn new words and expand your vocabulary
-
Boost your creativity and imagination
-
Reduce stress and anxiety
-
Have fun and enjoy yourself
-
-
How to Download and Install Wordscapes Uncrossed Mod APK
-
What is a mod APK and why you should use it
-
A mod APK is a modified version of an original Android app that provides users with some extra or improved features. APK is a file format that contains all the elements of an app and can be installed on an Android device. Mod APKs are usually created by reworking the original app's code or adding new components to it.
-
The features and advantages of Wordscapes Uncrossed Mod APK
-
If you want to enjoy Wordscapes Uncrossed without any limitations or ads, you might want to try Wordscapes Uncrossed Mod APK. This is a modified version of the game that offers some features and advantages that are not available in the official app, such as:
-
-
Unlimited coins: You can use coins to buy hints, shuffles, or extra words in the game. With Wordscapes Uncrossed Mod APK, you don't have to worry about running out of coins, as you will have an infinite amount of them.
-
Unlocked levels: You can access all the levels and themes in the game without having to complete the previous ones. This way, you can choose the difficulty and the scenery that suits your mood and preference.
-
No ads: You can play Wordscapes Uncrossed without any interruptions or distractions from annoying ads. This will make your gaming experience more smooth and enjoyable.
-
-
The steps to download and install Wordscapes Uncrossed Mod APK on your device
-
If you want to download and install Wordscapes Uncrossed Mod APK on your device, you need to follow these steps:
-
-
Make sure your device has enough storage space and is compatible with the game's requirements.
-
Go to a reliable and safe website that offers Wordscapes Uncrossed Mod APK for download, such as APKPure or APKFab.
-
Tap on the download button and wait for the file to be downloaded on your device.
-
Before installing the file, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the downloaded file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy Wordscapes Uncrossed Mod APK!
-
How to Enjoy Wordscapes Uncrossed Mod APK Safely and Responsibly
-
The risks and precautions of using a mod APK
-
While Wordscapes Uncrossed Mod APK can provide you with some benefits, it also comes with some risks and drawbacks that you should be aware of. Some of the possible risks and precautions of using a mod APK are:
-
-
Malware infection: Some mod APKs may contain malicious code or viruses that can harm your device or steal your personal information. To avoid this, you should only download mod APKs from trusted and verified sources, and scan them with an antivirus app before installing them.
-
Legal issues: Some mod APKs may violate the intellectual property rights or terms of service of the original app developers or publishers. This can result in legal actions or penalties against you. To avoid this, you should respect the rights and policies of the original app owners, and use mod APKs for personal and non-commercial purposes only.
-
Ban or suspension: Some mod APKs may give you an unfair advantage over other players or interfere with the game's functionality or security. This can result in your account being banned or suspended from the game or its online services. To avoid this, you should not use mod APKs that affect the game's balance or performance, and follow the game's rules and etiquette.
-
-
The tips and tricks to make the most of Wordscapes Uncrossed Mod APK
-
If you want to have more fun and success with Wordscapes Uncrossed Mod APK, you can try some of these tips and tricks:
-
-
Use hints wisely: Hints can help you find the words that you are stuck on, but they also cost coins. If you want to save your coins, you can use hints sparingly, or only when you really need them.
-
Shuffle the letters: Shuffling the letters can help you see new word possibilities and combinations that you might have missed. You can shuffle the letters as many times as you want, without any penalty.
-
Find extra words: Finding extra words that are not part of the grid can earn you more coins and bonuses. You can use these coins to buy more hints, shuffles, or extra words in the game.
-
Challenge yourself: If you want to test your skills and knowledge, you can try playing the daily puzzles or the harder levels in the game. These puzzles will challenge your brain and vocabulary more than the regular ones.
-
Have fun: The most important thing is to have fun and enjoy yourself while playing Wordscapes Uncrossed Mod APK. You can play at your own pace, choose your own theme, listen to soothing music, and relax with this game.
-
-
The alternatives and recommendations for other word games
-
If you love word games, you might also want to try some of these alternatives and recommendations for other word games that are similar to Wordscapes Uncrossed:
-
-
| Name | Description |
| --- | --- |
| Word Connect | A word game that requires you to connect letters to form words that fill up the crossword board. You can also discover hidden words and earn coins. |
| Word Cookies | A word game that requires you to swipe letters to form words that match with the given cookies. You can also unlock new levels and themes as you play. |
| Word Crossy | A word game that combines crossword puzzles and word searches. You have to swipe letters to form words that cross each other on the board. You can also collect butterflies and flowers as you play. |
| Word Swipe | A word game that requires you to swipe letters to form words that fit into the blanks on the board. You can also use power-ups and hints to help you solve the puzzles. |
| Word Link | A word game that requires you to link letters to form words that fill up the grid. You can also explore different themes and modes as you play. |
-
-
Conclusion
-
Wordscapes Uncrossed is a fun and challenging word game that will keep you entertained and engaged for hours. It is a great way to improve your brain function and vocabulary while having fun. If you want to enjoy this game without any limitations or ads, you can download and install Wordscapes Uncrossed Mod APK on your device. However, you should also be aware of the risks and precautions of using a mod APK, and use it safely and responsibly. You can also try some tips and tricks to make the most of Wordscapes Uncrossed Mod APK, or explore some alternatives and recommendations for other word games that are similar to it. We hope you found this article helpful and informative, and we wish you a happy and enjoyable gaming experience with Wordscapes Uncrossed Mod APK!
-
FAQs
-
Here are some frequently asked questions about Wordscapes Uncrossed Mod APK:
-
-
What is the difference between Wordscapes and Wordscapes Uncrossed?
-
Wordscapes and Wordscapes Uncrossed are both word puzzle games that are developed by PeopleFun. The main difference is that Wordscapes Uncrossed has a simpler and more minimalist design, with fewer letters and words per puzzle, but more puzzles per theme. Wordscapes Uncrossed also has a daily mode that offers a new puzzle every day.
-
Is Wordscapes Uncrossed Mod APK safe to use?
-
Wordscapes Uncrossed Mod APK is generally safe to use, as long as you download it from a reliable and verified source, and scan it with an antivirus app before installing it. However, you should also be careful of the possible risks and drawbacks of using a mod APK, such as malware infection, legal issues, or ban or suspension from the game or its online services.
-
How can I get more coins in Wordscapes Uncrossed Mod APK?
-
There are several ways to get more coins in Wordscapes Uncrossed Mod APK, such as:
-
-
Finding extra words that are not part of the grid
-
Completing daily puzzles or achievements
-
Watching ads or videos
-
Using Wordscapes Uncrossed Mod APK that gives you unlimited coins
-
-
How can I update Wordscapes Uncrossed Mod APK?
-
To update Wordscapes Uncrossed Mod APK, you need to follow these steps:
-
-
Delete the old version of Wordscapes Uncrossed Mod APK from your device
-
Go to the website where you downloaded the mod APK and check if there is a new version available
-
Download the new version of Wordscapes Uncrossed Mod APK on your device
-
Install the new version of Wordscapes Uncrossed Mod APK on your device
-
Launch the game and enjoy the updated features
-
-
What are some other games like Wordscapes Uncrossed?
-
If you like Wordscapes Uncrossed, you might also like some other games like Word Connect, Word Cookies, Word Crossy, Word Swipe, or Word Link. These are all word puzzle games that require you to swipe letters to form words that fit into the grid or the blanks. They also have different themes, modes, levels, and features that make them fun and challenging.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Taxi Game 2 on Windows PC Career Mode and Realistic GPS.md b/spaces/1phancelerku/anime-remove-background/Enjoy Taxi Game 2 on Windows PC Career Mode and Realistic GPS.md
deleted file mode 100644
index 3bd6a0c95b820b1fb11e55793dfbed5deff7d559..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Taxi Game 2 on Windows PC Career Mode and Realistic GPS.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
Taxi Game 2: How to Download and Play on PC Windows 7
-
Do you love driving games and want to experience the thrill of being a taxi driver in a realistic city? If yes, then you should try Taxi Game 2, one of the best taxi games for mobile devices. But what if you want to play it on your PC Windows 7 instead of your phone or tablet? Don't worry, we have got you covered. In this article, we will show you how to download and play Taxi Game 2 on PC Windows 7 using two different methods. We will also share some tips and tricks to help you master the game and become the best taxi driver in town.
Taxi Game 2 is a free driving simulator game developed by baklabs. It is the sequel to the popular Taxi Game, which has over 100 million downloads on Google Play Store. In Taxi Game 2, you can enjoy a full 3D open world, a cab driving simulator, a career mode, an engaging taxi driver gameplay, a GPS navigation system, and many routes across the city. You can also choose your passengers, buy new cars, upgrade your features, and build your taxi empire. Taxi Game 2 is constantly developed and updated, so you can expect new features and improvements in the future.
-
Why play Taxi Game 2 on PC Windows 7?
-
While Taxi Game 2 is designed for mobile devices, there are many reasons why you might want to play it on your PC Windows 7 instead. Here are some of them:
-
-
You can enjoy a bigger screen and better graphics.
-
You can use a keyboard and mouse or a gamepad for more precise and comfortable controls.
-
You can avoid battery drain, overheating, and interruptions from phone calls or notifications.
-
You can save your phone storage space and data usage.
-
You can play with multiple accounts or instances using an emulator.
-
-
How to download Taxi Game 2 on PC Windows 7
-
Method 1: Using an Android emulator
-
An Android emulator is a program that allows you to run Android apps and games on your PC Windows 7. There are many Android emulators available online, such as BlueStacks, LDPlayer, NoxPlayer, etc. Here are the steps to download and play Taxi Game 2 on PC Windows 7 using an Android emulator:
-
Step 1: Download and install an Android emulator
-
Choose an Android emulator that suits your PC Windows 7 specifications and preferences. You can visit the official websites of the emulators and compare their features, requirements, and reviews. Then, download the emulator installer file and follow the instructions to install it on your PC Windows 7.
-
Step 2: Launch the emulator and sign in with your Google account
-
After installing the emulator, launch it and wait for it to load. You will see a virtual Android device on your PC Windows 7 screen. Then, sign in with your Google account or create a new one if you don't have one. This will allow you to access the Google Play Store and other Google services on the emulator.
-
Step 3: Search for Taxi Game 2 on the Google Play Store
-
On the emulator, open the Google Play Store app and search for Taxi Game 2. You will see the game icon and some information about it. Click on the Install button to download and install Taxi Game 2 on your PC Windows 7 via the emulator.
-
Step 4: Install and run Taxi Game 2 on your PC Windows 7
-
Once the installation is complete, you can find Taxi Game 2 on the emulator's home screen or app drawer. Click on the game icon to launch it and start playing Taxi Game 2 on your PC Windows 7. You can adjust the settings, such as the graphics quality, sound volume, control scheme, etc., according to your preferences. You can also use the emulator's features, such as screen recording, screenshot, keyboard mapping, etc., to enhance your gaming experience.
-
taxi game 2 pc download free
-taxi game 2 for windows 7 64 bit
-taxi game 2 simulator on pc
-taxi game 2 career mode download
-taxi game 2 windows 7 install
-taxi game 2 full version for pc
-taxi game 2 offline download windows 7
-taxi game 2 pc emulator
-taxi game 2 apk for windows 7
-taxi game 2 driving simulator pc
-taxi game 2 latest version download
-taxi game 2 on windows 10
-taxi game 2 free online play pc
-taxi game 2 hack download for pc
-taxi game 2 mod apk windows 7
-taxi game 2 cheats for pc
-taxi game 2 update download windows 7
-taxi game 2 bluestacks
-taxi game 2 ldplayer
-taxi game 2 noxplayer
-taxi game 2 baklabs download for pc
-taxi game 2 open world pc
-taxi game 2 cab driver gameplay
-taxi game 2 passengers pick up windows 7
-taxi game 2 gps navigation pc
-taxi game 2 city traffic racer download
-taxi game 2 best car for pc
-taxi game 2 gas stations windows 7
-taxi game 2 tips and tricks pc
-taxi game 2 review for windows 7
-crazy taxi classic download for pc windows 7
-crazy taxi classic on bluestacks windows 7
-crazy taxi classic arcade game pc
-crazy taxi classic emulator for windows 7
-crazy taxi classic free play online pc
-crazy taxi classic full screen windows 7
-crazy taxi classic original soundtrack pc
-crazy taxi classic cheats and codes windows 7
-crazy taxi classic controller support pc
-crazy taxi classic steam download windows 7
-crazy driver: cab simulator on pc windows 7
-crazy driver: cab simulator free download
-crazy driver: cab simulator gameplay
-crazy driver: cab simulator mod apk
-crazy driver: cab simulator online play
-crazy driver: cab simulator hack tool
-crazy driver: cab simulator unlimited money
-crazy driver: cab simulator realistic graphics
-crazy driver: cab simulator missions and challenges
-
Method 2: Using an APK/XAPK file
-
An APK/XAPK file is a package file that contains the app or game data and installation instructions. You can use an APK/XAPK file to install Taxi Game 2 on your PC Windows 7 without using an emulator. However, you will need APK/XAPK installer software to do this. Here are the steps to download and play Taxi Game 2 on PC Windows 7 using an APK/XAPK file:
-
Step 1: Download the APK/XAPK file of Taxi Game 2
-
You can download the APK/XAPK file of Taxi Game 2 from various online sources, such as APKPure, Uptodown, APKMirror, etc. Make sure that you download the latest version of the game and that it is compatible with your PC Windows 7. You can also scan the file for viruses or malware before downloading it.
-
Step 2: Install and run an APK/XAPK installer on your PC Windows 7
-
You will need APK/XAPK installer software to install Taxi Game 2 on your PC Windows 7 using the APK/XAPK file. There are many APK/XAPK installers available online, such as Pure APK Install, XAPK Installer, Apk Installer Pro, etc. You can choose one that suits your PC Windows 7 specifications and preferences. Then, download the installer and follow the instructions to install it on your PC Windows 7.
-
Step 3: Open the APK/XAPK file with the installer and install Taxi Game 2 on your PC Windows 7
-
After installing the APK/XAPK installer software, launch it and locate the APK/XAPK file of Taxi Game 2 that you have downloaded. Then, open the file with the installer software and follow the instructions to install Taxi Game 2 on your PC Windows 7. Once the installation is complete, you can find Taxi Game 2 on your PC Windows 7 desktop or start menu. Click on the game icon to launch it and start playing Taxi Game 2 on your PC Windows 7.
-
Tips and tricks for playing Taxi Game 2 on PC Windows 7
-
Taxi Game 2 is a fun and challenging game that requires skill, strategy, and patience. Here are some tips and tricks to help you play better and enjoy more:
-
Tip 1: Use the Crazy Dash to boost your speed
-
The Crazy Dash is a special move that allows you to accelerate quickly and gain more speed. To perform it, you need to tap the brake and the gas pedals alternately. You will see a yellow flash on your screen when you do it correctly. The Crazy Dash can help you reach your destination faster, avoid traffic, and earn more money. However, be careful not to crash into other vehicles or obstacles, as this will damage your taxi and reduce your score.
-
Tip 2: Choose your passengers wisely
-
Not all passengers are the same in Taxi Game 2. Some passengers will pay you more, some will give you more time, and some will have special requests or challenges. You can see the information about each passenger on the top of their heads, such as their name, destination, fare, and time limit. You can also see their mood and personality, which will affect how they react to your driving. For example, some passengers will be happy if you drive fast and crazy, while others will be angry or scared. You should choose your passengers based on your preferences and goals. For instance, if you want to earn more money, you should pick up passengers who offer high fares or tips. If you want to have more fun, you should pick up passengers who like your driving style or have interesting stories.
-
Tip 3: Refuel your taxi at gas stations
-
Your taxi has a gas meter that shows how much fuel you have left. If you run out of gas, you will lose the game and have to start over. To avoid this, you should refuel your taxi at gas stations whenever you can. You can find gas stations on the map or follow the signs on the road. Refueling your taxi will cost you some money, but it is worth it in the long run. You can also upgrade your fuel tank capacity with the money you earn from your rides.
-
Tip 4: Follow the GPS navigation to find the best routes
-
Taxi Game 2 has a GPS navigation system that shows you the best routes to take your passengers to their destinations. You can see the GPS map on the top right corner of your screen, which will indicate your current location, your destination, and the optimal path to follow. You can also see arrows on the road that guide you along the way. Following the GPS navigation will help you save time, avoid traffic jams, and earn more money. However, you can also explore the city and find shortcuts or alternative routes if you want to challenge yourself or have more fun.
-
Tip 5: Upgrade your taxi with new cars and features
-
Taxi Game 2 allows you to upgrade your taxi with new cars and features that will improve your performance and appearance. You can buy new cars with different models, colors, and stats from the garage. You can also customize your cars with stickers, decals, spoilers, rims, etc. Moreover, you can enhance your cars with new features, such as turbo boosters, nitro boosters, shock absorbers, etc. Upgrading your taxi will cost you some money, but it will make your game more enjoyable and rewarding.
-
Conclusion
-
Taxi Game 2 is a great game for anyone who loves driving games and wants to experience the life of a taxi driver in a realistic city. It has amazing graphics, realistic physics, smooth controls, and diverse gameplay modes. It is also easy to download and play on PC Windows 7 using an Android emulator or an APK/XAPK file. With these tips and tricks, you can master Taxi Game 2 and become the best taxi driver in town.
-
FAQs
-
-
Q: Is Taxi Game 2 free to play?
-
A: Yes, Taxi Game 2 is free to play and download on Google Play Store. However, it contains ads and in-app purchases that can enhance your gaming experience.
-
Q: Can I play Taxi Game 2 offline?
-
A: Yes, Taxi Game 2 can be played offline without an internet connection. However, some features may not be available or updated when offline.
-
Q: How can I save my progress in Taxi Game 2?
-
A: Taxi Game 2 automatically saves your progress when you exit the game or switch to another app. You can also sync your progress with your Google account by signing in with it on the game settings.
-
Q: How can I contact the developers of Taxi Game 2?
-
A: You can contact the developers of Taxi Game 2 by sending them an email at support@baklabs.com or by visiting their website at https://www.baklabs.com/.
-
Q: How can I rate and review Taxi Game 2?
-
A: You can rate and review Taxi Game 2 by going to its page on Google Play Store and tapping on the stars and writing your feedback. You can also share your opinion and suggestions with other players and the developers by leaving a comment.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/utils/deprecation_utils.py b/spaces/1toTree/lora_test/ppdiffusers/utils/deprecation_utils.py
deleted file mode 100644
index 1ba7e7c3b2cc103da072af743fc6b0f66bf40549..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/utils/deprecation_utils.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import warnings
-from typing import Any, Dict, Optional, Union
-
-from packaging import version
-
-
-def deprecate(*args, take_from: Optional[Union[Dict, Any]] = None, standard_warn=True):
- from .. import __version__
-
- deprecated_kwargs = take_from
- values = ()
- if not isinstance(args[0], tuple):
- args = (args,)
-
- for attribute, version_name, message in args:
- if version.parse(version.parse(__version__).base_version) >= version.parse(version_name):
- raise ValueError(
- f"The deprecation tuple {(attribute, version_name, message)} should be removed since ppdiffusers'"
- f" version {__version__} is >= {version_name}"
- )
-
- warning = None
- if isinstance(deprecated_kwargs, dict) and attribute in deprecated_kwargs:
- values += (deprecated_kwargs.pop(attribute),)
- warning = f"The `{attribute}` argument is deprecated and will be removed in version {version_name}."
- elif hasattr(deprecated_kwargs, attribute):
- values += (getattr(deprecated_kwargs, attribute),)
- warning = f"The `{attribute}` attribute is deprecated and will be removed in version {version_name}."
- elif deprecated_kwargs is None:
- warning = f"`{attribute}` is deprecated and will be removed in version {version_name}."
-
- if warning is not None:
- warning = warning + " " if standard_warn else ""
- warnings.warn(warning + message, FutureWarning, stacklevel=2)
-
- if isinstance(deprecated_kwargs, dict) and len(deprecated_kwargs) > 0:
- call_frame = inspect.getouterframes(inspect.currentframe())[1]
- filename = call_frame.filename
- line_number = call_frame.lineno
- function = call_frame.function
- key, value = next(iter(deprecated_kwargs.items()))
- raise TypeError(f"{function} in {filename} line {line_number-1} got an unexpected keyword argument `{key}`")
-
- if len(values) == 0:
- return
- elif len(values) == 1:
- return values[0]
- return values
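For orientation, here is a minimal sketch of how a helper like the `deprecate` function above is typically invoked. The keyword name `scale`, its value, and the version string are hypothetical, chosen only for illustration; inside the real package the function also compares against the library's own `__version__`, so treat this as a conceptual example rather than a standalone script.

```python
# Hypothetical call pattern for the deprecate() helper shown above.
kwargs = {"scale": 0.5}  # "scale" stands in for a soon-to-be-removed kwarg

# Emits a FutureWarning for `scale`, pops it out of kwargs, and returns its
# value so the caller can keep honoring it until the stated removal version.
scale = deprecate(
    "scale",                                   # deprecated attribute name
    "99.0.0",                                  # version at which it disappears
    "Pass `scale` to the scheduler instead.",  # extra guidance for users
    take_from=kwargs,
)
# After the call, kwargs no longer contains "scale" and `scale` holds 0.5.
```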
diff --git a/spaces/52Hz/CMFNet_dehazing/app.py b/spaces/52Hz/CMFNet_dehazing/app.py
deleted file mode 100644
index 8a8a3feb75e204a50d480aabdc8fd3b3c46d0d02..0000000000000000000000000000000000000000
--- a/spaces/52Hz/CMFNet_dehazing/app.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import os
-import gradio as gr
-from PIL import Image
-import torch
-
-os.system(
- 'wget https://github.com/FanChiMao/CMFNet/releases/download/v0.0/dehaze_I_OHaze_CMFNet.pth -P experiments/pretrained_models')
-
-
-def inference(img):
- if not os.path.exists('test'):
- os.system('mkdir test')
-
- basewidth = 512
- wpercent = (basewidth / float(img.size[0]))
- hsize = int((float(img.size[1]) * float(wpercent)))
- img = img.resize((basewidth, hsize), Image.BILINEAR)
- img.save("test/1.png", "PNG")
- os.system(
- 'python main_test_CMFNet.py --input_dir test --weights experiments/pretrained_models/dehaze_I_OHaze_CMFNet.pth')
- return 'results/1.png'
-
-
-title = "Compound Multi-branch Feature Fusion for Image Restoration (Dehaze)"
-description = "Gradio demo for CMFNet. CMFNet achieves competitive performance on three tasks: image deblurring, image dehazing and image deraindrop. Here, we provide a demo for image dehaze. To use it, simply upload your image, or click one of the examples to load them. Reference from: https://huggingface.co/akhaliq"
-article = "
You can skip the queue and load custom models in the colab:
- Running on {device}{(" in a Google Colab." if is_colab else "")}
-
-
You can also duplicate this space and upgrade to gpu by going to settings:
-
-
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name)
- with gr.Box(visible=False) as custom_model_group:
- custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True)
- gr.HTML("
Custom models have to be downloaded first, so give it some time.
- """)
-
- demo.load(update_state_info, inputs=state_info, outputs=state_info, every=0.5, show_progress=False)
-
-print(f"Space built in {time.time() - start_time:.2f} seconds")
-
-# if not is_colab:
-demo.queue(concurrency_count=1)
-demo.launch(debug=True, share=is_colab)
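The `inference` helper defined near the top of this file takes a PIL image, resizes it to a 512-pixel width, and shells out to the bundled test script, returning the path of the restored image. A minimal usage sketch, with a placeholder input filename:

```python
from PIL import Image

# "hazy.png" is a hypothetical local file; inference() writes the dehazed
# output under results/ and returns that path.
result_path = inference(Image.open("hazy.png"))
print(result_path)  # e.g. results/1.png
```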
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/model.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/model.py
deleted file mode 100644
index 6f5d8eb6b7e4af7e2a4fc21fe500b29f02ff176d..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/model.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import torch
-import torch.nn as nn
-from collections import OrderedDict
-
-
-def make_layers(block, no_relu_layers):
- layers = []
- for layer_name, v in block.items():
- if 'pool' in layer_name:
- layer = nn.MaxPool2d(kernel_size=v[0], stride=v[1], padding=v[2])
- layers.append((layer_name, layer))
- else:
- conv2d = nn.Conv2d(in_channels=v[0], out_channels=v[1], kernel_size=v[2], stride=v[3], padding=v[4])
- layers.append((layer_name, conv2d))
- if layer_name not in no_relu_layers:
- layers.append(('relu_' + layer_name, nn.ReLU(inplace=True)))
-
- return nn.Sequential(OrderedDict(layers))
-
-
-class bodypose_model(nn.Module):
-
- def __init__(self):
- super(bodypose_model, self).__init__()
-
- # these layers have no relu layer
- no_relu_layers = ['conv5_5_CPM_L1', 'conv5_5_CPM_L2', 'Mconv7_stage2_L1',\
- 'Mconv7_stage2_L2', 'Mconv7_stage3_L1', 'Mconv7_stage3_L2',\
- 'Mconv7_stage4_L1', 'Mconv7_stage4_L2', 'Mconv7_stage5_L1',\
- 'Mconv7_stage5_L2', 'Mconv7_stage6_L1', 'Mconv7_stage6_L1']
- blocks = {}
- block0 = OrderedDict([('conv1_1', [3, 64, 3, 1, 1]), ('conv1_2', [64, 64, 3, 1, 1]), ('pool1_stage1', [2, 2,
- 0]),
- ('conv2_1', [64, 128, 3, 1, 1]), ('conv2_2', [128, 128, 3, 1, 1]),
- ('pool2_stage1', [2, 2, 0]), ('conv3_1', [128, 256, 3, 1, 1]),
- ('conv3_2', [256, 256, 3, 1, 1]), ('conv3_3', [256, 256, 3, 1, 1]),
- ('conv3_4', [256, 256, 3, 1, 1]), ('pool3_stage1', [2, 2, 0]),
- ('conv4_1', [256, 512, 3, 1, 1]), ('conv4_2', [512, 512, 3, 1, 1]),
- ('conv4_3_CPM', [512, 256, 3, 1, 1]), ('conv4_4_CPM', [256, 128, 3, 1, 1])])
-
- # Stage 1
- block1_1 = OrderedDict([('conv5_1_CPM_L1', [128, 128, 3, 1, 1]), ('conv5_2_CPM_L1', [128, 128, 3, 1, 1]),
- ('conv5_3_CPM_L1', [128, 128, 3, 1, 1]), ('conv5_4_CPM_L1', [128, 512, 1, 1, 0]),
- ('conv5_5_CPM_L1', [512, 38, 1, 1, 0])])
-
- block1_2 = OrderedDict([('conv5_1_CPM_L2', [128, 128, 3, 1, 1]), ('conv5_2_CPM_L2', [128, 128, 3, 1, 1]),
- ('conv5_3_CPM_L2', [128, 128, 3, 1, 1]), ('conv5_4_CPM_L2', [128, 512, 1, 1, 0]),
- ('conv5_5_CPM_L2', [512, 19, 1, 1, 0])])
- blocks['block1_1'] = block1_1
- blocks['block1_2'] = block1_2
-
- self.model0 = make_layers(block0, no_relu_layers)
-
- # Stages 2 - 6
- for i in range(2, 7):
- blocks['block%d_1' % i] = OrderedDict([('Mconv1_stage%d_L1' % i, [185, 128, 7, 1, 3]),
- ('Mconv2_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv3_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv4_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv5_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv6_stage%d_L1' % i, [128, 128, 1, 1, 0]),
- ('Mconv7_stage%d_L1' % i, [128, 38, 1, 1, 0])])
-
- blocks['block%d_2' % i] = OrderedDict([('Mconv1_stage%d_L2' % i, [185, 128, 7, 1, 3]),
- ('Mconv2_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv3_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv4_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv5_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv6_stage%d_L2' % i, [128, 128, 1, 1, 0]),
- ('Mconv7_stage%d_L2' % i, [128, 19, 1, 1, 0])])
-
- for k in blocks.keys():
- blocks[k] = make_layers(blocks[k], no_relu_layers)
-
- self.model1_1 = blocks['block1_1']
- self.model2_1 = blocks['block2_1']
- self.model3_1 = blocks['block3_1']
- self.model4_1 = blocks['block4_1']
- self.model5_1 = blocks['block5_1']
- self.model6_1 = blocks['block6_1']
-
- self.model1_2 = blocks['block1_2']
- self.model2_2 = blocks['block2_2']
- self.model3_2 = blocks['block3_2']
- self.model4_2 = blocks['block4_2']
- self.model5_2 = blocks['block5_2']
- self.model6_2 = blocks['block6_2']
-
- def forward(self, x):
-
- out1 = self.model0(x)
-
- out1_1 = self.model1_1(out1)
- out1_2 = self.model1_2(out1)
- out2 = torch.cat([out1_1, out1_2, out1], 1)
-
- out2_1 = self.model2_1(out2)
- out2_2 = self.model2_2(out2)
- out3 = torch.cat([out2_1, out2_2, out1], 1)
-
- out3_1 = self.model3_1(out3)
- out3_2 = self.model3_2(out3)
- out4 = torch.cat([out3_1, out3_2, out1], 1)
-
- out4_1 = self.model4_1(out4)
- out4_2 = self.model4_2(out4)
- out5 = torch.cat([out4_1, out4_2, out1], 1)
-
- out5_1 = self.model5_1(out5)
- out5_2 = self.model5_2(out5)
- out6 = torch.cat([out5_1, out5_2, out1], 1)
-
- out6_1 = self.model6_1(out6)
- out6_2 = self.model6_2(out6)
-
- return out6_1, out6_2
-
-
-class handpose_model(nn.Module):
-
- def __init__(self):
- super(handpose_model, self).__init__()
-
- # these layers have no relu layer
- no_relu_layers = ['conv6_2_CPM', 'Mconv7_stage2', 'Mconv7_stage3',\
- 'Mconv7_stage4', 'Mconv7_stage5', 'Mconv7_stage6']
- # stage 1
- block1_0 = OrderedDict([('conv1_1', [3, 64, 3, 1, 1]), ('conv1_2', [64, 64, 3, 1, 1]),
- ('pool1_stage1', [2, 2, 0]), ('conv2_1', [64, 128, 3, 1, 1]),
- ('conv2_2', [128, 128, 3, 1, 1]), ('pool2_stage1', [2, 2, 0]),
- ('conv3_1', [128, 256, 3, 1, 1]), ('conv3_2', [256, 256, 3, 1, 1]),
- ('conv3_3', [256, 256, 3, 1, 1]), ('conv3_4', [256, 256, 3, 1, 1]),
- ('pool3_stage1', [2, 2, 0]), ('conv4_1', [256, 512, 3, 1, 1]),
- ('conv4_2', [512, 512, 3, 1, 1]), ('conv4_3', [512, 512, 3, 1, 1]),
- ('conv4_4', [512, 512, 3, 1, 1]), ('conv5_1', [512, 512, 3, 1, 1]),
- ('conv5_2', [512, 512, 3, 1, 1]), ('conv5_3_CPM', [512, 128, 3, 1, 1])])
-
- block1_1 = OrderedDict([('conv6_1_CPM', [128, 512, 1, 1, 0]), ('conv6_2_CPM', [512, 22, 1, 1, 0])])
-
- blocks = {}
- blocks['block1_0'] = block1_0
- blocks['block1_1'] = block1_1
-
- # stage 2-6
- for i in range(2, 7):
- blocks['block%d' % i] = OrderedDict([('Mconv1_stage%d' % i, [150, 128, 7, 1, 3]),
- ('Mconv2_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv3_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv4_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv5_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv6_stage%d' % i, [128, 128, 1, 1, 0]),
- ('Mconv7_stage%d' % i, [128, 22, 1, 1, 0])])
-
- for k in blocks.keys():
- blocks[k] = make_layers(blocks[k], no_relu_layers)
-
- self.model1_0 = blocks['block1_0']
- self.model1_1 = blocks['block1_1']
- self.model2 = blocks['block2']
- self.model3 = blocks['block3']
- self.model4 = blocks['block4']
- self.model5 = blocks['block5']
- self.model6 = blocks['block6']
-
- def forward(self, x):
- out1_0 = self.model1_0(x)
- out1_1 = self.model1_1(out1_0)
- concat_stage2 = torch.cat([out1_1, out1_0], 1)
- out_stage2 = self.model2(concat_stage2)
- concat_stage3 = torch.cat([out_stage2, out1_0], 1)
- out_stage3 = self.model3(concat_stage3)
- concat_stage4 = torch.cat([out_stage3, out1_0], 1)
- out_stage4 = self.model4(concat_stage4)
- concat_stage5 = torch.cat([out_stage4, out1_0], 1)
- out_stage5 = self.model5(concat_stage5)
- concat_stage6 = torch.cat([out_stage5, out1_0], 1)
- out_stage6 = self.model6(concat_stage6)
- return out_stage6
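As a quick sanity check of the tensor shapes involved, the sketch below runs the body-pose network defined above on a random input. The 368x368 size is an assumption (a common OpenPose working resolution), not something the module enforces; any spatial size divisible by 8 works, and the outputs come back downsampled by a factor of 8.

```python
import torch

# Instantiate the two-branch body-pose network defined above.
model = bodypose_model().eval()

# Dummy NCHW input; three stride-2 pools reduce the spatial size by 8x.
dummy = torch.randn(1, 3, 368, 368)

with torch.no_grad():
    pafs, heatmaps = model(dummy)

# Branch L1 predicts 38 channels (part affinity fields),
# branch L2 predicts 19 channels (keypoint heatmaps).
print(pafs.shape)      # torch.Size([1, 38, 46, 46])
print(heatmaps.shape)  # torch.Size([1, 19, 46, 46])
```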
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/diffusionmodules/__init__.py b/spaces/Adapter/T2I-Adapter/ldm/modules/diffusionmodules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/Factory.d.ts
deleted file mode 100644
index 41489232aabf0a60dcbab81b71c0a4784c1621a8..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/Factory.d.ts
+++ /dev/null
@@ -1,5 +0,0 @@
-import FixWidthButtons from './FixWidthButtons';
-
-export default function (
- config?: FixWidthButtons.IConfig
-): FixWidthButtons;
\ No newline at end of file
diff --git a/spaces/AiBototicus/BucksAI-2/README.md b/spaces/AiBototicus/BucksAI-2/README.md
deleted file mode 100644
index b0f01fe18f328d1295ef0d870addf8f7f3b85b74..0000000000000000000000000000000000000000
--- a/spaces/AiBototicus/BucksAI-2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: BucksAI 2
-emoji: 🐢
-colorFrom: green
-colorTo: red
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: bsd-3-clause-clear
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlekseyKorshuk/instagram-filter-removal/modules/normalization.py b/spaces/AlekseyKorshuk/instagram-filter-removal/modules/normalization.py
deleted file mode 100644
index fc28bfdeaff873a9212e5af3d32550ef4f67cdd6..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/instagram-filter-removal/modules/normalization.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import torch
-import torch.nn as nn
-
-
-class AdaIN(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, x, y):
- ch = y.size(1)
- sigma, mu = torch.split(y.unsqueeze(-1).unsqueeze(-1), [ch // 2, ch // 2], dim=1)
-
- x_mu = x.mean(dim=[2, 3], keepdim=True)
- x_sigma = x.std(dim=[2, 3], keepdim=True)
-
- return sigma * ((x - x_mu) / x_sigma) + mu
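Because the module above infers everything from tensor shapes, a short usage sketch makes the expected layout explicit: the style input `y` needs twice as many channels as the content features `x`, since `forward` splits it into per-channel scale and shift halves. The sizes below are arbitrary.

```python
import torch

adain = AdaIN()

x = torch.randn(4, 64, 32, 32)  # content features (N, C, H, W)
y = torch.randn(4, 128)         # style parameters (N, 2C): sigma and mu halves

out = adain(x, y)               # normalize x per sample, then re-style it
print(out.shape)                # torch.Size([4, 64, 32, 32])
```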
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/PlayInteractively.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/PlayInteractively.py
deleted file mode 100644
index 547b08ab2c4373e23711636488145df148d7eb4e..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/PlayInteractively.py
+++ /dev/null
@@ -1,197 +0,0 @@
-
-
-
-from tkinter import Tk
-from PIL import Image, ImageTk
-from tkinter.filedialog import askopenfilename
-from GUI import View
-from Inference import StyleCLIP
-import argparse
-#%%
-
-
-class PlayInteractively(): #Controller
- '''
- followed Model View Controller Design Pattern
-
- controller, model, view
- '''
- def __init__(self,dataset_name='ffhq'):
-
- self.root = Tk()
- self.view=View(self.root)
- self.img_ratio=2
- self.style_clip=StyleCLIP(dataset_name)
-
-        self.view.neutral.bind("<Return>", self.text_n)
-        self.view.target.bind("<Return>", self.text_t)
-        self.view.alpha.bind('<ButtonRelease-1>', self.ChangeAlpha)
-        self.view.beta.bind('<ButtonRelease-1>', self.ChangeBeta)
-        self.view.set_init.bind('<ButtonPress-1>', self.SetInit)
-        self.view.reset.bind('<ButtonPress-1>', self.Reset)
-        self.view.bg.bind('<Double-1>', self.open_img)
-
-
- self.drawn = None
-
- self.view.target.delete(1.0, "end")
- self.view.target.insert("end", self.style_clip.target)
-#
- self.view.neutral.delete(1.0, "end")
- self.view.neutral.insert("end", self.style_clip.neutral)
-
-
- def Reset(self,event):
- self.style_clip.GetDt2()
- self.style_clip.M.alpha=[0]
-
- self.view.beta.set(self.style_clip.beta)
- self.view.alpha.set(0)
-
- img=self.style_clip.GetImg()
- img=Image.fromarray(img)
- img = ImageTk.PhotoImage(img)
- self.addImage_m(img)
-
-
- def SetInit(self,event):
- codes=self.style_clip.GetCode()
- self.style_clip.M.dlatent_tmp=[tmp[:,0] for tmp in codes]
- print('set init')
-
- def ChangeAlpha(self,event):
- tmp=self.view.alpha.get()
- self.style_clip.M.alpha=[float(tmp)]
-
- img=self.style_clip.GetImg()
- print('manipulate one')
- img=Image.fromarray(img)
- img = ImageTk.PhotoImage(img)
- self.addImage_m(img)
-
- def ChangeBeta(self,event):
- tmp=self.view.beta.get()
- self.style_clip.beta=float(tmp)
-
- img=self.style_clip.GetImg()
- print('manipulate one')
- img=Image.fromarray(img)
- img = ImageTk.PhotoImage(img)
- self.addImage_m(img)
-
- def ChangeDataset(self,event):
-
- dataset_name=self.view.set_category.get()
-
- self.style_clip.LoadData(dataset_name)
-
- self.view.target.delete(1.0, "end")
- self.view.target.insert("end", self.style_clip.target)
-
- self.view.neutral.delete(1.0, "end")
- self.view.neutral.insert("end", self.style_clip.neutral)
-
- def text_t(self,event):
- tmp=self.view.target.get("1.0",'end')
- tmp=tmp.replace('\n','')
-
- self.view.target.delete(1.0, "end")
- self.view.target.insert("end", tmp)
-
- print('target',tmp,'###')
- self.style_clip.target=tmp
- self.style_clip.GetDt2()
- self.view.beta.set(self.style_clip.beta)
- self.view.alpha.set(3)
- self.style_clip.M.alpha=[3]
-
- img=self.style_clip.GetImg()
- print('manipulate one')
- img=Image.fromarray(img)
- img = ImageTk.PhotoImage(img)
- self.addImage_m(img)
-
-
- def text_n(self,event):
- tmp=self.view.neutral.get("1.0",'end')
- tmp=tmp.replace('\n','')
-
- self.view.neutral.delete(1.0, "end")
- self.view.neutral.insert("end", tmp)
-
- print('neutral',tmp,'###')
- self.style_clip.neutral=tmp
- self.view.target.delete(1.0, "end")
- self.view.target.insert("end", tmp)
-
-
- def run(self):
- self.root.mainloop()
-
- def addImage(self,img):
- self.view.bg.create_image(self.view.width/2, self.view.height/2, image=img, anchor='center')
- self.image=img #save a copy of image. if not the image will disappear
-
- def addImage_m(self,img):
- self.view.mani.create_image(512, 512, image=img, anchor='center')
- self.image2=img
-
-
- def openfn(self):
- filename = askopenfilename(title='open',initialdir='./data/'+self.style_clip.M.dataset_name+'/',filetypes=[("all image format", ".jpg"),("all image format", ".png")])
- return filename
-
- def open_img(self,event):
- x = self.openfn()
- print(x)
-
-
- img = Image.open(x)
- img2 = img.resize(( 512,512), Image.ANTIALIAS)
- img2 = ImageTk.PhotoImage(img2)
- self.addImage(img2)
-
- img = ImageTk.PhotoImage(img)
- self.addImage_m(img)
-
- img_index=x.split('/')[-1].split('.')[0]
- img_index=int(img_index)
- print(img_index)
- self.style_clip.M.img_index=img_index
- self.style_clip.M.dlatent_tmp=[tmp[img_index:(img_index+1)] for tmp in self.style_clip.M.dlatents]
-
-
- self.style_clip.GetDt2()
- self.view.beta.set(self.style_clip.beta)
- self.view.alpha.set(3)
-
- #%%
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description='Process some integers.')
-
- parser.add_argument('--dataset_name',type=str,default='ffhq',
- help='name of dataset, for example, ffhq')
-
- args = parser.parse_args()
- dataset_name=args.dataset_name
-
- self=PlayInteractively(dataset_name)
- self.run()
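-
-# Example launch (a sketch; assumes the StyleCLIP data assets for the chosen dataset are in place):
-#   python PlayInteractively.py --dataset_name ffhq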
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/multi_subject_dreambooth/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/multi_subject_dreambooth/README.md
deleted file mode 100644
index d1a7705cfebbc65cca554189445742f3f762aa47..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/multi_subject_dreambooth/README.md
+++ /dev/null
@@ -1,338 +0,0 @@
-# Multi Subject DreamBooth training
-
-[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like Stable Diffusion given just a few (3-5) images of a subject.
-This `train_multi_subject_dreambooth.py` script shows how to implement the training procedure for one or more subjects and adapt it for Stable Diffusion. Note that this code is based on the `examples/dreambooth/train_dreambooth.py` script as of 01/06/2022.
-
-This script was added by @kopsahlong, and is not actively maintained. However, if you come across anything that could use fixing, feel free to open an issue and tag @kopsahlong.
-
-## Running locally with PyTorch
-### Installing the dependencies
-
-Before running the script, make sure to install the library's training dependencies:
-
-To start, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install -e .
-```
-
-Then cd into the folder `diffusers/examples/research_projects/multi_subject_dreambooth` and run the following:
-```bash
-pip install -r requirements.txt
-```
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-Or for a default accelerate configuration without answering questions about your environment
-
-```bash
-accelerate config default
-```
-
-Or if your environment doesn't support an interactive shell e.g. a notebook
-
-```python
-from accelerate.utils import write_basic_config
-write_basic_config()
-```
-
-### Multi Subject Training Example
-In order to have your model learn multiple concepts at once, simply pass the additional data directories and prompts to `instance_data_dir` and `instance_prompt` (as well as `class_data_dir` and `class_prompt` if `--with_prior_preservation` is specified), each as one comma-separated string.
-
-See an example with 2 subjects below, which learns a model for one dog subject and one human subject:
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export OUTPUT_DIR="path-to-save-model"
-
-# Subject 1
-export INSTANCE_DIR_1="path-to-instance-images-concept-1"
-export INSTANCE_PROMPT_1="a photo of a sks dog"
-export CLASS_DIR_1="path-to-class-images-dog"
-export CLASS_PROMPT_1="a photo of a dog"
-
-# Subject 2
-export INSTANCE_DIR_2="path-to-instance-images-concept-2"
-export INSTANCE_PROMPT_2="a photo of a t@y person"
-export CLASS_DIR_2="path-to-class-images-person"
-export CLASS_PROMPT_2="a photo of a person"
-
-accelerate launch train_multi_subject_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir="$INSTANCE_DIR_1,$INSTANCE_DIR_2" \
- --output_dir=$OUTPUT_DIR \
- --train_text_encoder \
- --instance_prompt="$INSTANCE_PROMPT_1,$INSTANCE_PROMPT_2" \
- --with_prior_preservation \
- --prior_loss_weight=1.0 \
- --class_data_dir="$CLASS_DIR_1,$CLASS_DIR_2" \
- --class_prompt="$CLASS_PROMPT_1,$CLASS_PROMPT_2"\
- --num_class_images=50 \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=1e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --max_train_steps=1500
-```
-
-This example shows training for 2 subjects, but please note that the model can be trained on any number of new concepts, simply by appending additional directories and prompts to the corresponding comma-separated strings.
-
-Note also that in this script, `sks` and `t@y` were used as tokens to learn the new subjects ([this thread](https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/71) inspired the use of `t@y` as our second identifier). However, there may be better rare tokens to experiment with, and results also seemed to be good when more intuitive words are used.
-
-**Important**: New parameters have been added to the script that make it possible to validate the progress of the training
-by generating images at specified steps. Also, because commas frequently appear inside prompts themselves, packing
-everything into comma-separated strings is error-prone, so the script introduces the `concepts_list` parameter,
-which lets you specify a JSON file where you can define the configuration for each subject
-that you want to train.
-
-An example of how to generate the file:
-```python
-import json
-
-# here we are using parameters for prior-preservation and validation as well.
-concepts_list = [
- {
- "instance_prompt": "drawing of a t@y meme",
- "class_prompt": "drawing of a meme",
- "instance_data_dir": "/some_folder/meme_toy",
- "class_data_dir": "/data/meme",
- "validation_prompt": "drawing of a t@y meme about football in Uruguay",
- "validation_negative_prompt": "black and white"
- },
- {
- "instance_prompt": "drawing of a sks sir",
- "class_prompt": "drawing of a sir",
- "instance_data_dir": "/some_other_folder/sir_sks",
- "class_data_dir": "/data/sir",
- "validation_prompt": "drawing of a sks sir with the Uruguayan sun in his chest",
- "validation_negative_prompt": "an old man",
- "validation_guidance_scale": 20,
- "validation_number_images": 3,
- "validation_inference_steps": 10
- }
-]
-
-with open("concepts_list.json", "w") as f:
- json.dump(concepts_list, f, indent=4)
-```
-And then just point to the file when executing the script:
-
-```bash
-# exports...
-accelerate launch train_multi_subject_dreambooth.py \
-# more parameters...
---concepts_list="concepts_list.json"
-```
-
-You can use the script's `--help` output to get a better sense of each parameter, as shown below.
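-
-For example, the following prints every available flag (assuming the script's `argparse`-based CLI):
-
-```bash
-python train_multi_subject_dreambooth.py --help
-```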
-
-### Inference
-
-Once you have trained a model using the above command, inference can be done simply using the `StableDiffusionPipeline`. Make sure to include the `identifier` (e.g. `sks` in the above example) in your prompt.
-
-```python
-from diffusers import StableDiffusionPipeline
-import torch
-
-model_id = "path-to-your-trained-model"
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
-
-prompt = "A photo of a t@y person petting an sks dog"
-image = pipe(prompt, num_inference_steps=200, guidance_scale=7.5).images[0]
-
-image.save("person-petting-dog.png")
-```
-
-### Inference from a training checkpoint
-
-You can also perform inference from one of the checkpoints saved during the training process, if you used the `--checkpointing_steps` argument. Please refer to [the documentation](https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint) to see how to do it, or adapt the sketch below.
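-
-As a quick reference, here is a minimal sketch (the checkpoint path is a placeholder, and the `text_encoder` subfolder only exists if `--train_text_encoder` was used):
-
-```python
-import torch
-from diffusers import DiffusionPipeline, UNet2DConditionModel
-from transformers import CLIPTextModel
-
-# Hypothetical checkpoint directory produced by --checkpointing_steps
-ckpt = "path-to-save-model/checkpoint-1000"
-
-# Load the fine-tuned UNet (and text encoder, if it was trained) from the checkpoint
-unet = UNet2DConditionModel.from_pretrained(f"{ckpt}/unet", torch_dtype=torch.float16)
-text_encoder = CLIPTextModel.from_pretrained(f"{ckpt}/text_encoder", torch_dtype=torch.float16)
-
-# Plug them into a fresh pipeline built from the base model
-pipe = DiffusionPipeline.from_pretrained(
-    "CompVis/stable-diffusion-v1-4", unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16
-).to("cuda")
-
-image = pipe("A photo of a t@y person petting an sks dog", num_inference_steps=50).images[0]
-image.save("checkpoint-inference.png")
-```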
-
-## Additional Dreambooth documentation
-Because the `train_multi_subject_dreambooth.py` script here was forked from an original version of `train_dreambooth.py` in the `examples/dreambooth` folder, I've included the original applicable training documentation for single subject examples below.
-
-This should explain how to play with training variables such as prior preservation, fine tuning the text encoder, etc. which is still applicable to our multi subject training code. Note also that the examples below, which are single subject examples, also work with `train_multi_subject_dreambooth.py`, as this script supports 1 (or more) subjects.
-
-### Single subject dog toy example
-
-Let's get our dataset. Download images from [here](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ) and save them in a directory. This will be our training data.
-
-And launch the training using
-
-**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --max_train_steps=400
-```
-
-### Training with prior-preservation loss
-
-Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data.
-According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation. 200-300 works well for most cases. The `num_class_images` flag sets the number of images to generate with the class prompt. You can place existing images in `class_data_dir`, and the training script will generate any additional images so that `num_class_images` are present in `class_data_dir` during training time.
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-
-### Training on a 16GB GPU:
-
-With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes, it's possible to train DreamBooth on a 16GB GPU.
-
-To install `bitsandbytes`, please refer to this [readme](https://github.com/TimDettmers/bitsandbytes#requirements--installation).
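-
-In many environments this boils down to the following (a sketch; check the linked readme for CUDA-specific requirements):
-
-```bash
-pip install bitsandbytes
-```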
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=2 --gradient_checkpointing \
- --use_8bit_adam \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-### Training on an 8 GB GPU:
-
-By using [DeepSpeed](https://www.deepspeed.ai/) it's possible to offload some
-tensors from VRAM to either CPU or NVMe, allowing training with less VRAM.
-
-DeepSpeed needs to be enabled with `accelerate config`. During configuration,
-answer yes to "Do you want to use DeepSpeed?". With DeepSpeed stage 2, fp16
-mixed precision, and both parameters and optimizer state offloaded to CPU, it's
-possible to train with under 8 GB of VRAM, at the cost of requiring significantly
-more system RAM (about 25 GB). See the [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more DeepSpeed configuration options.
-
-Changing the default Adam optimizer to DeepSpeed's special version of Adam,
-`deepspeed.ops.adam.DeepSpeedCPUAdam`, gives a substantial speedup, but enabling
-it requires a CUDA toolchain with the same version as PyTorch. The 8-bit optimizer
-does not seem to be compatible with DeepSpeed at the moment.
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch --mixed_precision="fp16" train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --sample_batch_size=1 \
- --gradient_accumulation_steps=1 --gradient_checkpointing \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-### Fine-tune text encoder with the UNet.
-
-The script also allows you to fine-tune the `text_encoder` along with the `unet`. It has been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces.
-Pass the `--train_text_encoder` argument to the script to enable training the `text_encoder`.
-
-___Note: Training the text encoder requires more memory; with this option the training won't fit on a 16GB GPU. It needs at least 24GB of VRAM.___
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_text_encoder \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --use_8bit_adam \
- --gradient_checkpointing \
- --learning_rate=2e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-### Using DreamBooth for other pipelines than Stable Diffusion
-
-AltDiffusion also supports DreamBooth now; the running command is basically the same as above. All you need to do is change the `pretrained_model_name_or_path` to another architecture such as [`AltDiffusion`](https://huggingface.co/docs/diffusers/api/pipelines/alt_diffusion), i.e. replace `MODEL_NAME` like this:
-
-```
-export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion-m9"
-or
-export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion"
-```
-
-### Training with xformers:
-You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
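-
-A minimal install sketch (pre-built wheels depend on your PyTorch/CUDA versions, so follow the linked instructions if this fails):
-
-```bash
-pip install xformers
-```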
-
-You can also use Dreambooth to train the specialized in-painting model. See [the script in the research folder for details](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint).
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddim_parallel.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddim_parallel.py
deleted file mode 100644
index db3ea0e1cca55f88d0a81d0311158929516cb038..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddim_parallel.py
+++ /dev/null
@@ -1,642 +0,0 @@
-# Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion
-# and https://github.com/hojonathanho/diffusion
-
-import math
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
-from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput
-class DDIMParallelSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: torch.FloatTensor
- pred_original_sample: Optional[torch.FloatTensor] = None
-
-
-# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
-def betas_for_alpha_bar(
- num_diffusion_timesteps,
- max_beta=0.999,
- alpha_transform_type="cosine",
-):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
- alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar.
- Choose from `cosine` or `exp`
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
- if alpha_transform_type == "cosine":
-
- def alpha_bar_fn(t):
- return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
-
- elif alpha_transform_type == "exp":
-
- def alpha_bar_fn(t):
- return math.exp(t * -12.0)
-
- else:
- raise ValueError(f"Unsupported alpha_transform_type: {alpha_transform_type}")
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
- return torch.tensor(betas, dtype=torch.float32)
-
-
-# Copied from diffusers.schedulers.scheduling_ddim.rescale_zero_terminal_snr
-def rescale_zero_terminal_snr(betas):
- """
- Rescales betas to have zero terminal SNR Based on https://arxiv.org/pdf/2305.08891.pdf (Algorithm 1)
-
-
- Args:
- betas (`torch.FloatTensor`):
- the betas that the scheduler is being initialized with.
-
- Returns:
- `torch.FloatTensor`: rescaled betas with zero terminal SNR
- """
- # Convert betas to alphas_bar_sqrt
- alphas = 1.0 - betas
- alphas_cumprod = torch.cumprod(alphas, dim=0)
- alphas_bar_sqrt = alphas_cumprod.sqrt()
-
- # Store old values.
- alphas_bar_sqrt_0 = alphas_bar_sqrt[0].clone()
- alphas_bar_sqrt_T = alphas_bar_sqrt[-1].clone()
-
- # Shift so the last timestep is zero.
- alphas_bar_sqrt -= alphas_bar_sqrt_T
-
- # Scale so the first timestep is back to the old value.
- alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T)
-
- # Convert alphas_bar_sqrt to betas
- alphas_bar = alphas_bar_sqrt**2 # Revert sqrt
- alphas = alphas_bar[1:] / alphas_bar[:-1] # Revert cumprod
- alphas = torch.cat([alphas_bar[0:1], alphas])
- betas = 1 - alphas
-
- return betas
-
-
-class DDIMParallelScheduler(SchedulerMixin, ConfigMixin):
- """
- Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising
- diffusion probabilistic models (DDPMs) with non-Markovian guidance.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/abs/2010.02502
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- clip_sample (`bool`, default `True`):
- option to clip predicted sample for numerical stability.
- clip_sample_range (`float`, default `1.0`):
- the maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- set_alpha_to_one (`bool`, default `True`):
- each diffusion step uses the value of alphas product at that step and at the previous one. For the final
- step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
- otherwise it uses the value of alpha at step 0.
- steps_offset (`int`, default `0`):
- an offset added to the inference steps. You can use a combination of `offset=1` and
- `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in
- stable diffusion.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- thresholding (`bool`, default `False`):
- whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487).
- Note that the thresholding method is unsuitable for latent-space diffusion models (such as
- stable-diffusion).
- dynamic_thresholding_ratio (`float`, default `0.995`):
- the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen
- (https://arxiv.org/abs/2205.11487). Valid only when `thresholding=True`.
- sample_max_value (`float`, default `1.0`):
- the threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- timestep_spacing (`str`, default `"leading"`):
- The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample
- Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information.
- rescale_betas_zero_snr (`bool`, default `False`):
- whether to rescale the betas to have zero terminal SNR (proposed by https://arxiv.org/pdf/2305.08891.pdf).
- This can enable the model to generate very bright and dark samples instead of limiting it to samples with
- medium brightness. Loosely related to
- [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).
- """
-
- _compatibles = [e.name for e in KarrasDiffusionSchedulers]
- order = 1
- _is_ode_scheduler = True
-
- @register_to_config
- # Copied from diffusers.schedulers.scheduling_ddim.DDIMScheduler.__init__
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- clip_sample: bool = True,
- set_alpha_to_one: bool = True,
- steps_offset: int = 0,
- prediction_type: str = "epsilon",
- thresholding: bool = False,
- dynamic_thresholding_ratio: float = 0.995,
- clip_sample_range: float = 1.0,
- sample_max_value: float = 1.0,
- timestep_spacing: str = "leading",
- rescale_betas_zero_snr: bool = False,
- ):
- if trained_betas is not None:
- self.betas = torch.tensor(trained_betas, dtype=torch.float32)
- elif beta_schedule == "linear":
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = (
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
- )
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- else:
- raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- # Rescale for zero SNR
- if rescale_betas_zero_snr:
- self.betas = rescale_zero_terminal_snr(self.betas)
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
-
- # At every step in ddim, we are looking into the previous alphas_cumprod
- # For the final step, there is no previous alphas_cumprod because we are already at 0
- # `set_alpha_to_one` decides whether we set this parameter simply to one or
- # whether we use the final alpha of the "non-previous" one.
- self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0]
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # setable values
- self.num_inference_steps = None
- self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy().astype(np.int64))
-
- # Copied from diffusers.schedulers.scheduling_ddim.DDIMScheduler.scale_model_input
- def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`torch.FloatTensor`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- return sample
-
- def _get_variance(self, timestep, prev_timestep=None):
- if prev_timestep is None:
- prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
-
- alpha_prod_t = self.alphas_cumprod[timestep]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev)
-
- return variance
-
- def _batch_get_variance(self, t, prev_t):
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[torch.clip(prev_t, min=0)]
- alpha_prod_t_prev[prev_t < 0] = torch.tensor(1.0)
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev)
-
- return variance
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample
- def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor:
- """
- "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the
- prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by
- s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing
- pixels from saturation at each step. We find that dynamic thresholding results in significantly better
- photorealism as well as better image-text alignment, especially when using very large guidance weights."
-
- https://arxiv.org/abs/2205.11487
- """
- dtype = sample.dtype
- batch_size, channels, height, width = sample.shape
-
- if dtype not in (torch.float32, torch.float64):
- sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half
-
- # Flatten sample for doing quantile calculation along each image
- sample = sample.reshape(batch_size, channels * height * width)
-
- abs_sample = sample.abs() # "a certain percentile absolute pixel value"
-
- s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1)
- s = torch.clamp(
- s, min=1, max=self.config.sample_max_value
- ) # When clamped to min=1, equivalent to standard clipping to [-1, 1]
-
- s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0
- sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s"
-
- sample = sample.reshape(batch_size, channels, height, width)
- sample = sample.to(dtype)
-
- return sample
-
- # Copied from diffusers.schedulers.scheduling_ddim.DDIMScheduler.set_timesteps
- def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None):
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
-
- if num_inference_steps > self.config.num_train_timesteps:
- raise ValueError(
- f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:"
- f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle"
- f" maximal {self.config.num_train_timesteps} timesteps."
- )
-
- self.num_inference_steps = num_inference_steps
-
- # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
- if self.config.timestep_spacing == "linspace":
- timesteps = (
- np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps)
- .round()[::-1]
- .copy()
- .astype(np.int64)
- )
- elif self.config.timestep_spacing == "leading":
- step_ratio = self.config.num_train_timesteps // self.num_inference_steps
- # creates integer timesteps by multiplying by ratio
- # casting to int to avoid issues when num_inference_step is power of 3
- timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64)
- timesteps += self.config.steps_offset
- elif self.config.timestep_spacing == "trailing":
- step_ratio = self.config.num_train_timesteps / self.num_inference_steps
- # creates integer timesteps by multiplying by ratio
- # casting to int to avoid issues when num_inference_step is power of 3
- timesteps = np.round(np.arange(self.config.num_train_timesteps, 0, -step_ratio)).astype(np.int64)
- timesteps -= 1
- else:
- raise ValueError(
- f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'leading' or 'trailing'."
- )
-
- self.timesteps = torch.from_numpy(timesteps).to(device)
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- eta: float = 0.0,
- use_clipped_model_output: bool = False,
- generator=None,
- variance_noise: Optional[torch.FloatTensor] = None,
- return_dict: bool = True,
- ) -> Union[DDIMParallelSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- eta (`float`): weight of noise for added noise in diffusion step.
- use_clipped_model_output (`bool`): if `True`, compute "corrected" `model_output` from the clipped
- predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when
- `self.config.clip_sample` is `True`. If no clipping has happened, "corrected" `model_output` would
- coincide with the one provided as input and `use_clipped_model_output` will have no effect.
- generator: random number generator.
- variance_noise (`torch.FloatTensor`): instead of generating noise for the variance using `generator`, we
- can directly provide the noise for the variance itself. This is useful for methods such as
- CycleDiffusion. (https://arxiv.org/abs/2210.05559)
- return_dict (`bool`): option for returning tuple rather than DDIMParallelSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.DDIMParallelSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.DDIMParallelSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is the sample tensor.
-
- """
- if self.num_inference_steps is None:
- raise ValueError(
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
- )
-
- # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf
- # Ideally, read DDIM paper in-detail understanding
-
- # Notation (<variable name> -> <name in paper>
- # - pred_noise_t -> e_theta(x_t, t)
- # - pred_original_sample -> f_theta(x_t, t) or x_0
- # - std_dev_t -> sigma_t
- # - eta -> η
- # - pred_sample_direction -> "direction pointing to x_t"
- # - pred_prev_sample -> "x_t-1"
-
- # 1. get previous step value (=t-1)
- prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
-
- # 2. compute alphas, betas
- alpha_prod_t = self.alphas_cumprod[timestep]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
-
- beta_prod_t = 1 - alpha_prod_t
-
- # 3. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- if self.config.prediction_type == "epsilon":
- pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
- pred_epsilon = model_output
- elif self.config.prediction_type == "sample":
- pred_original_sample = model_output
- pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5)
- elif self.config.prediction_type == "v_prediction":
- pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
- pred_epsilon = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or"
- " `v_prediction`"
- )
-
- # 4. Clip or threshold "predicted x_0"
- if self.config.thresholding:
- pred_original_sample = self._threshold_sample(pred_original_sample)
- elif self.config.clip_sample:
- pred_original_sample = pred_original_sample.clamp(
- -self.config.clip_sample_range, self.config.clip_sample_range
- )
-
- # 5. compute variance: "sigma_t(η)" -> see formula (16)
- # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1)
- variance = self._get_variance(timestep, prev_timestep)
- std_dev_t = eta * variance ** (0.5)
-
- if use_clipped_model_output:
- # the pred_epsilon is always re-derived from the clipped x_0 in Glide
- pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5)
-
- # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * pred_epsilon
-
- # 7. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction
-
- if eta > 0:
- if variance_noise is not None and generator is not None:
- raise ValueError(
- "Cannot pass both generator and variance_noise. Please make sure that either `generator` or"
- " `variance_noise` stays `None`."
- )
-
- if variance_noise is None:
- variance_noise = randn_tensor(
- model_output.shape, generator=generator, device=model_output.device, dtype=model_output.dtype
- )
- variance = std_dev_t * variance_noise
-
- prev_sample = prev_sample + variance
-
- if not return_dict:
- return (prev_sample,)
-
- return DDIMParallelSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
-
- def batch_step_no_noise(
- self,
- model_output: torch.FloatTensor,
- timesteps: List[int],
- sample: torch.FloatTensor,
- eta: float = 0.0,
- use_clipped_model_output: bool = False,
- ) -> torch.FloatTensor:
- """
- Batched version of the `step` function, to be able to reverse the SDE for multiple samples/timesteps at once.
- Also, does not add any noise to the predicted sample, which is necessary for parallel sampling where the noise
- is pre-sampled by the pipeline.
-
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timesteps (`List[int]`):
- current discrete timesteps in the diffusion chain. This is now a list of integers.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- eta (`float`): weight of noise for added noise in diffusion step.
- use_clipped_model_output (`bool`): if `True`, compute "corrected" `model_output` from the clipped
- predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when
- `self.config.clip_sample` is `True`. If no clipping has happened, "corrected" `model_output` would
- coincide with the one provided as input and `use_clipped_model_output` will have no effect.
-
- Returns:
- `torch.FloatTensor`: sample tensor at previous timestep.
-
- """
- if self.num_inference_steps is None:
- raise ValueError(
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
- )
-
- assert eta == 0.0
-
- # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf
- # Ideally, read DDIM paper in-detail understanding
-
- # Notation (<variable name> -> <name in paper>
- # - pred_noise_t -> e_theta(x_t, t)
- # - pred_original_sample -> f_theta(x_t, t) or x_0
- # - std_dev_t -> sigma_t
- # - eta -> η
- # - pred_sample_direction -> "direction pointing to x_t"
- # - pred_prev_sample -> "x_t-1"
-
- # 1. get previous step value (=t-1)
- t = timesteps
- prev_t = t - self.config.num_train_timesteps // self.num_inference_steps
-
- t = t.view(-1, *([1] * (model_output.ndim - 1)))
- prev_t = prev_t.view(-1, *([1] * (model_output.ndim - 1)))
-
- # 2. compute alphas, betas
- self.alphas_cumprod = self.alphas_cumprod.to(model_output.device)
- self.final_alpha_cumprod = self.final_alpha_cumprod.to(model_output.device)
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[torch.clip(prev_t, min=0)]
- alpha_prod_t_prev[prev_t < 0] = torch.tensor(1.0)
-
- beta_prod_t = 1 - alpha_prod_t
-
- # 3. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- if self.config.prediction_type == "epsilon":
- pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
- pred_epsilon = model_output
- elif self.config.prediction_type == "sample":
- pred_original_sample = model_output
- pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5)
- elif self.config.prediction_type == "v_prediction":
- pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
- pred_epsilon = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or"
- " `v_prediction`"
- )
-
- # 4. Clip or threshold "predicted x_0"
- if self.config.thresholding:
- pred_original_sample = self._threshold_sample(pred_original_sample)
- elif self.config.clip_sample:
- pred_original_sample = pred_original_sample.clamp(
- -self.config.clip_sample_range, self.config.clip_sample_range
- )
-
- # 5. compute variance: "sigma_t(η)" -> see formula (16)
- # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1)
- variance = self._batch_get_variance(t, prev_t).to(model_output.device).view(*alpha_prod_t_prev.shape)
- std_dev_t = eta * variance ** (0.5)
-
- if use_clipped_model_output:
- # the pred_epsilon is always re-derived from the clipped x_0 in Glide
- pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5)
-
- # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * pred_epsilon
-
- # 7. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction
-
- return prev_sample
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.IntTensor,
- ) -> torch.FloatTensor:
- # Make sure alphas_cumprod and timestep have same device and dtype as original_samples
- alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
- timesteps = timesteps.to(original_samples.device)
-
- sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
- sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
- noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
- return noisy_samples
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.get_velocity
- def get_velocity(
- self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor
- ) -> torch.FloatTensor:
- # Make sure alphas_cumprod and timestep have same device and dtype as sample
- alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype)
- timesteps = timesteps.to(sample.device)
-
- sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- while len(sqrt_alpha_prod.shape) < len(sample.shape):
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
- sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape):
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
- velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample
- return velocity
-
- def __len__(self):
- return self.config.num_train_timesteps
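-
-# Usage sketch (illustrative addition, not part of the upstream file): this scheduler follows the
-# standard diffusers scheduler API, so a typical denoising loop with a hypothetical `unet`,
-# `latents` and `prompt_embeds` would look like:
-#
-#     scheduler = DDIMParallelScheduler(num_train_timesteps=1000, beta_schedule="scaled_linear")
-#     scheduler.set_timesteps(50, device="cuda")
-#     for t in scheduler.timesteps:
-#         noise_pred = unet(latents, t, encoder_hidden_states=prompt_embeds).sample
-#         latents = scheduler.step(noise_pred, t, latents).prev_sample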
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common.py
deleted file mode 100644
index ee8e55842f8d40cf2d107b47f105ce952cfb57d0..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common.py
+++ /dev/null
@@ -1,567 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import tempfile
-import traceback
-import unittest
-import unittest.mock as mock
-from typing import Dict, List, Tuple
-
-import numpy as np
-import requests_mock
-import torch
-from requests.exceptions import HTTPError
-
-from diffusers.models import UNet2DConditionModel
-from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0, XFormersAttnProcessor
-from diffusers.training_utils import EMAModel
-from diffusers.utils import logging, torch_device
-from diffusers.utils.testing_utils import CaptureLogger, require_torch_2, require_torch_gpu, run_test_in_subprocess
-
-
-# Will be run via run_test_in_subprocess
-def _test_from_save_pretrained_dynamo(in_queue, out_queue, timeout):
- error = None
- try:
- init_dict, model_class = in_queue.get(timeout=timeout)
-
- model = model_class(**init_dict)
- model.to(torch_device)
- model = torch.compile(model)
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_pretrained(tmpdirname)
- new_model = model_class.from_pretrained(tmpdirname)
- new_model.to(torch_device)
-
- assert new_model.__class__ == model_class
- except Exception:
- error = f"{traceback.format_exc()}"
-
- results = {"error": error}
- out_queue.put(results, timeout=timeout)
- out_queue.join()
-
-
-class ModelUtilsTest(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
-
- import diffusers
-
- diffusers.utils.import_utils._safetensors_available = True
-
- def test_accelerate_loading_error_message(self):
- with self.assertRaises(ValueError) as error_context:
- UNet2DConditionModel.from_pretrained("hf-internal-testing/stable-diffusion-broken", subfolder="unet")
-
- # make sure that error message states what keys are missing
- assert "conv_out.bias" in str(error_context.exception)
-
- def test_cached_files_are_used_when_no_internet(self):
- # A mock response for an HTTP head request to emulate server down
- response_mock = mock.Mock()
- response_mock.status_code = 500
- response_mock.headers = {}
- response_mock.raise_for_status.side_effect = HTTPError
- response_mock.json.return_value = {}
-
- # Download this model to make sure it's in the cache.
- orig_model = UNet2DConditionModel.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet"
- )
-
- # Under the mock environment we get a 500 error when trying to reach the model.
- with mock.patch("requests.request", return_value=response_mock):
- # Download this model to make sure it's in the cache.
- model = UNet2DConditionModel.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet", local_files_only=True
- )
-
- for p1, p2 in zip(orig_model.parameters(), model.parameters()):
- if p1.data.ne(p2.data).sum() > 0:
- assert False, "Parameters not the same!"
-
- def test_one_request_upon_cached(self):
- # TODO: For some reason this test fails on MPS where no HEAD call is made.
- if torch_device == "mps":
- return
-
- import diffusers
-
- diffusers.utils.import_utils._safetensors_available = False
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- with requests_mock.mock(real_http=True) as m:
- UNet2DConditionModel.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet", cache_dir=tmpdirname
- )
-
- download_requests = [r.method for r in m.request_history]
- assert download_requests.count("HEAD") == 2, "2 HEAD requests one for config, one for model"
- assert download_requests.count("GET") == 2, "2 GET requests one for config, one for model"
-
- with requests_mock.mock(real_http=True) as m:
- UNet2DConditionModel.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet", cache_dir=tmpdirname
- )
-
- cache_requests = [r.method for r in m.request_history]
- assert (
- "HEAD" == cache_requests[0] and len(cache_requests) == 1
- ), "We should call only `model_info` to check for _commit hash and `send_telemetry`"
-
- diffusers.utils.import_utils._safetensors_available = True
-
- def test_weight_overwrite(self):
- with tempfile.TemporaryDirectory() as tmpdirname, self.assertRaises(ValueError) as error_context:
- UNet2DConditionModel.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch",
- subfolder="unet",
- cache_dir=tmpdirname,
- in_channels=9,
- )
-
- # make sure that error message states what keys are missing
- assert "Cannot load" in str(error_context.exception)
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- model = UNet2DConditionModel.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-torch",
- subfolder="unet",
- cache_dir=tmpdirname,
- in_channels=9,
- low_cpu_mem_usage=False,
- ignore_mismatched_sizes=True,
- )
-
- assert model.config.in_channels == 9
-
-
-class UNetTesterMixin:
- def test_forward_signature(self):
- init_dict, _ = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- signature = inspect.signature(model.forward)
- # signature.parameters is an OrderedDict => so arg_names order is deterministic
- arg_names = [*signature.parameters.keys()]
-
- expected_arg_names = ["sample", "timestep"]
- self.assertListEqual(arg_names[:2], expected_arg_names)
-
- def test_forward_with_norm_groups(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- init_dict["norm_num_groups"] = 16
- init_dict["block_out_channels"] = (16, 32)
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.to_tuple()[0]
-
- self.assertIsNotNone(output)
- expected_shape = inputs_dict["sample"].shape
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
-
-
-class ModelTesterMixin:
- main_input_name = None # overwrite in model specific tester class
- base_precision = 1e-3
-
- def test_from_save_pretrained(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- if hasattr(model, "set_default_attn_processor"):
- model.set_default_attn_processor()
- model.to(torch_device)
- model.eval()
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_pretrained(tmpdirname)
- new_model = self.model_class.from_pretrained(tmpdirname)
- if hasattr(new_model, "set_default_attn_processor"):
- new_model.set_default_attn_processor()
- new_model.to(torch_device)
-
- with torch.no_grad():
- image = model(**inputs_dict)
- if isinstance(image, dict):
- image = image.to_tuple()[0]
-
- new_image = new_model(**inputs_dict)
-
- if isinstance(new_image, dict):
- new_image = new_image.to_tuple()[0]
-
- max_diff = (image - new_image).abs().sum().item()
- self.assertLessEqual(max_diff, 5e-5, "Models give different forward passes")
-
- def test_getattr_is_correct(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
- model = self.model_class(**init_dict)
-
- # save some things to test
- model.dummy_attribute = 5
- model.register_to_config(test_attribute=5)
-
- logger = logging.get_logger("diffusers.models.modeling_utils")
- # 30 for warning
- logger.setLevel(30)
- with CaptureLogger(logger) as cap_logger:
- assert hasattr(model, "dummy_attribute")
- assert getattr(model, "dummy_attribute") == 5
- assert model.dummy_attribute == 5
-
- # no warning should be thrown
- assert cap_logger.out == ""
-
- logger = logging.get_logger("diffusers.models.modeling_utils")
- # 30 for warning
- logger.setLevel(30)
- with CaptureLogger(logger) as cap_logger:
- assert hasattr(model, "save_pretrained")
- fn = model.save_pretrained
- fn_1 = getattr(model, "save_pretrained")
-
- assert fn == fn_1
- # no warning should be thrown
- assert cap_logger.out == ""
-
- # warning should be thrown
- with self.assertWarns(FutureWarning):
- assert model.test_attribute == 5
-
- with self.assertWarns(FutureWarning):
- assert getattr(model, "test_attribute") == 5
-
- with self.assertRaises(AttributeError) as error:
- model.does_not_exist
-
- assert str(error.exception) == f"'{type(model).__name__}' object has no attribute 'does_not_exist'"
-
- @require_torch_gpu
- def test_set_attn_processor_for_determinism(self):
- torch.use_deterministic_algorithms(False)
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
- model = self.model_class(**init_dict)
- model.to(torch_device)
-
- if not hasattr(model, "set_attn_processor"):
- # If not has `set_attn_processor`, skip test
- return
-
- assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
- with torch.no_grad():
- output_1 = model(**inputs_dict)[0]
-
- model.set_default_attn_processor()
- assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
- with torch.no_grad():
- output_2 = model(**inputs_dict)[0]
-
- model.enable_xformers_memory_efficient_attention()
- assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
- with torch.no_grad():
- output_3 = model(**inputs_dict)[0]
-
- model.set_attn_processor(AttnProcessor2_0())
- assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values())
- with torch.no_grad():
- output_4 = model(**inputs_dict)[0]
-
- model.set_attn_processor(AttnProcessor())
- assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values())
- with torch.no_grad():
- output_5 = model(**inputs_dict)[0]
-
- model.set_attn_processor(XFormersAttnProcessor())
- assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values())
- with torch.no_grad():
- output_6 = model(**inputs_dict)[0]
-
- torch.use_deterministic_algorithms(True)
-
- # make sure that outputs match
- assert torch.allclose(output_2, output_1, atol=self.base_precision)
- assert torch.allclose(output_2, output_3, atol=self.base_precision)
- assert torch.allclose(output_2, output_4, atol=self.base_precision)
- assert torch.allclose(output_2, output_5, atol=self.base_precision)
- assert torch.allclose(output_2, output_6, atol=self.base_precision)
-
- def test_from_save_pretrained_variant(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- if hasattr(model, "set_default_attn_processor"):
- model.set_default_attn_processor()
-
- model.to(torch_device)
- model.eval()
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_pretrained(tmpdirname, variant="fp16")
- new_model = self.model_class.from_pretrained(tmpdirname, variant="fp16")
- if hasattr(new_model, "set_default_attn_processor"):
- new_model.set_default_attn_processor()
-
- # non-variant cannot be loaded
- with self.assertRaises(OSError) as error_context:
- self.model_class.from_pretrained(tmpdirname)
-
- # make sure that error message states what keys are missing
- assert "Error no file named diffusion_pytorch_model.bin found in directory" in str(error_context.exception)
-
- new_model.to(torch_device)
-
- with torch.no_grad():
- image = model(**inputs_dict)
- if isinstance(image, dict):
- image = image.to_tuple()[0]
-
- new_image = new_model(**inputs_dict)
-
- if isinstance(new_image, dict):
- new_image = new_image.to_tuple()[0]
-
- max_diff = (image - new_image).abs().sum().item()
- self.assertLessEqual(max_diff, 5e-5, "Models give different forward passes")
-
- @require_torch_2
- def test_from_save_pretrained_dynamo(self):
- init_dict, _ = self.prepare_init_args_and_inputs_for_common()
- inputs = [init_dict, self.model_class]
- run_test_in_subprocess(test_case=self, target_func=_test_from_save_pretrained_dynamo, inputs=inputs)
-
- def test_from_save_pretrained_dtype(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- for dtype in [torch.float32, torch.float16, torch.bfloat16]:
- if torch_device == "mps" and dtype == torch.bfloat16:
- continue
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.to(dtype)
- model.save_pretrained(tmpdirname)
- new_model = self.model_class.from_pretrained(tmpdirname, low_cpu_mem_usage=True, torch_dtype=dtype)
- assert new_model.dtype == dtype
- new_model = self.model_class.from_pretrained(tmpdirname, low_cpu_mem_usage=False, torch_dtype=dtype)
- assert new_model.dtype == dtype
-
- def test_determinism(self, expected_max_diff=1e-5):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- first = model(**inputs_dict)
- if isinstance(first, dict):
- first = first.to_tuple()[0]
-
- second = model(**inputs_dict)
- if isinstance(second, dict):
- second = second.to_tuple()[0]
-
- out_1 = first.cpu().numpy()
- out_2 = second.cpu().numpy()
- out_1 = out_1[~np.isnan(out_1)]
- out_2 = out_2[~np.isnan(out_2)]
- max_diff = np.amax(np.abs(out_1 - out_2))
- self.assertLessEqual(max_diff, expected_max_diff)
-
- def test_output(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.to_tuple()[0]
-
- self.assertIsNotNone(output)
-
- # input & output have to have the same shape
- input_tensor = inputs_dict[self.main_input_name]
- expected_shape = input_tensor.shape
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
-
- def test_model_from_pretrained(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- # test if the model can be loaded from the config
- # and has all the expected shape
- with tempfile.TemporaryDirectory() as tmpdirname:
- model.save_pretrained(tmpdirname)
- new_model = self.model_class.from_pretrained(tmpdirname)
- new_model.to(torch_device)
- new_model.eval()
-
- # check if all parameters shape are the same
- for param_name in model.state_dict().keys():
- param_1 = model.state_dict()[param_name]
- param_2 = new_model.state_dict()[param_name]
- self.assertEqual(param_1.shape, param_2.shape)
-
- with torch.no_grad():
- output_1 = model(**inputs_dict)
-
- if isinstance(output_1, dict):
- output_1 = output_1.to_tuple()[0]
-
- output_2 = new_model(**inputs_dict)
-
- if isinstance(output_2, dict):
- output_2 = output_2.to_tuple()[0]
-
- self.assertEqual(output_1.shape, output_2.shape)
-
- @unittest.skipIf(torch_device == "mps", "Training is not supported in mps")
- def test_training(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.train()
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.to_tuple()[0]
-
- input_tensor = inputs_dict[self.main_input_name]
- noise = torch.randn((input_tensor.shape[0],) + self.output_shape).to(torch_device)
- loss = torch.nn.functional.mse_loss(output, noise)
- loss.backward()
-
- @unittest.skipIf(torch_device == "mps", "Training is not supported in mps")
- def test_ema_training(self):
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.train()
- ema_model = EMAModel(model.parameters())
-
- output = model(**inputs_dict)
-
- if isinstance(output, dict):
- output = output.to_tuple()[0]
-
- input_tensor = inputs_dict[self.main_input_name]
- noise = torch.randn((input_tensor.shape[0],) + self.output_shape).to(torch_device)
- loss = torch.nn.functional.mse_loss(output, noise)
- loss.backward()
- ema_model.step(model.parameters())
-
- def test_outputs_equivalence(self):
- def set_nan_tensor_to_zero(t):
- # Temporary fallback until `aten::_index_put_impl_` is implemented in mps
- # Track progress in https://github.com/pytorch/pytorch/issues/77764
- device = t.device
- if device.type == "mps":
- t = t.to("cpu")
- t[t != t] = 0
- return t.to(device)
-
- def recursive_check(tuple_object, dict_object):
- if isinstance(tuple_object, (List, Tuple)):
- for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object.values()):
- recursive_check(tuple_iterable_value, dict_iterable_value)
- elif isinstance(tuple_object, Dict):
- for tuple_iterable_value, dict_iterable_value in zip(tuple_object.values(), dict_object.values()):
- recursive_check(tuple_iterable_value, dict_iterable_value)
- elif tuple_object is None:
- return
- else:
- self.assertTrue(
- torch.allclose(
- set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5
- ),
- msg=(
- "Tuple and dict output are not equal. Difference:"
- f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:"
- f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has"
- f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}."
- ),
- )
-
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-
- model = self.model_class(**init_dict)
- model.to(torch_device)
- model.eval()
-
- with torch.no_grad():
- outputs_dict = model(**inputs_dict)
- outputs_tuple = model(**inputs_dict, return_dict=False)
-
- recursive_check(outputs_tuple, outputs_dict)
-
- @unittest.skipIf(torch_device == "mps", "Gradient checkpointing skipped on MPS")
- def test_enable_disable_gradient_checkpointing(self):
- if not self.model_class._supports_gradient_checkpointing:
- return # Skip test if model does not support gradient checkpointing
-
- init_dict, _ = self.prepare_init_args_and_inputs_for_common()
-
- # at init model should have gradient checkpointing disabled
- model = self.model_class(**init_dict)
- self.assertFalse(model.is_gradient_checkpointing)
-
- # check enable works
- model.enable_gradient_checkpointing()
- self.assertTrue(model.is_gradient_checkpointing)
-
- # check disable works
- model.disable_gradient_checkpointing()
- self.assertFalse(model.is_gradient_checkpointing)
-
- def test_deprecated_kwargs(self):
- has_kwarg_in_model_class = "kwargs" in inspect.signature(self.model_class.__init__).parameters
- has_deprecated_kwarg = len(self.model_class._deprecated_kwargs) > 0
-
- if has_kwarg_in_model_class and not has_deprecated_kwarg:
- raise ValueError(
- f"{self.model_class} has `**kwargs` in its __init__ method but has not defined any deprecated kwargs"
- " under the `_deprecated_kwargs` class attribute. Make sure to either remove `**kwargs` if there are"
- " no deprecated arguments or add the deprecated argument with `_deprecated_kwargs ="
- " []`"
- )
-
- if not has_kwarg_in_model_class and has_deprecated_kwarg:
- raise ValueError(
- f"{self.model_class} doesn't have `**kwargs` in its __init__ method but has defined deprecated kwargs"
- " under the `_deprecated_kwargs` class attribute. Make sure to either add the `**kwargs` argument to"
- f" {self.model_class}.__init__ if there are deprecated arguments or remove the deprecated argument"
- " from `_deprecated_kwargs = []`"
- )
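
For orientation, here is a minimal sketch of the `save_pretrained`/`from_pretrained` variant round-trip that `test_from_save_pretrained_variant` above exercises. It assumes the `diffusers` package is installed and uses an arbitrary small `UNet2DModel` configuration rather than the mixin's own `model_class`:

```python
# Minimal sketch of the save/load-variant round-trip covered by the deleted test.
# The model sizes below are arbitrary illustrative values.
import tempfile

import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)

with tempfile.TemporaryDirectory() as tmpdir:
    # With a variant, weights are written under a suffixed filename
    # (e.g. diffusion_pytorch_model.fp16.safetensors), so loading the same
    # directory without variant="fp16" fails with the missing-file OSError
    # that the deleted test checks for.
    model.save_pretrained(tmpdir, variant="fp16")
    reloaded = UNet2DModel.from_pretrained(tmpdir, variant="fp16")

with torch.no_grad():
    sample = torch.randn(1, 3, 32, 32)
    out = reloaded(sample, timestep=torch.tensor([1])).sample
print(out.shape)  # torch.Size([1, 3, 32, 32])
```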
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/anchor_generator.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/anchor_generator.py
deleted file mode 100644
index 388d2608b8138da13d1208b99595fbd1db59d178..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/anchor_generator.py
+++ /dev/null
@@ -1,727 +0,0 @@
-import mmcv
-import numpy as np
-import torch
-from torch.nn.modules.utils import _pair
-
-from .builder import ANCHOR_GENERATORS
-
-
-@ANCHOR_GENERATORS.register_module()
-class AnchorGenerator(object):
- """Standard anchor generator for 2D anchor-based detectors.
-
- Args:
- strides (list[int] | list[tuple[int, int]]): Strides of anchors
- in multiple feature levels in order (w, h).
- ratios (list[float]): The list of ratios between the height and width
- of anchors in a single level.
- scales (list[int] | None): Anchor scales for anchors in a single level.
- It cannot be set at the same time if `octave_base_scale` and
- `scales_per_octave` are set.
- base_sizes (list[int] | None): The basic sizes
- of anchors in multiple levels.
- If None is given, strides will be used as base_sizes.
- (If strides are non square, the shortest stride is taken.)
- scale_major (bool): Whether to multiply scales first when generating
- base anchors. If true, the anchors in the same row will have the
- same scales. By default it is True in V2.0
- octave_base_scale (int): The base scale of octave.
- scales_per_octave (int): Number of scales for each octave.
- `octave_base_scale` and `scales_per_octave` are usually used in
- retinanet and the `scales` should be None when they are set.
- centers (list[tuple[float, float]] | None): The centers of the anchor
- relative to the feature grid center in multiple feature levels.
- By default it is set to be None and not used. If a list of tuple of
- float is given, they will be used to shift the centers of anchors.
- center_offset (float): The offset of center in proportion to anchors'
- width and height. By default it is 0 in V2.0.
-
- Examples:
- >>> from mmdet.core import AnchorGenerator
- >>> self = AnchorGenerator([16], [1.], [1.], [9])
- >>> all_anchors = self.grid_anchors([(2, 2)], device='cpu')
- >>> print(all_anchors)
- [tensor([[-4.5000, -4.5000, 4.5000, 4.5000],
- [11.5000, -4.5000, 20.5000, 4.5000],
- [-4.5000, 11.5000, 4.5000, 20.5000],
- [11.5000, 11.5000, 20.5000, 20.5000]])]
- >>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18])
- >>> all_anchors = self.grid_anchors([(2, 2), (1, 1)], device='cpu')
- >>> print(all_anchors)
- [tensor([[-4.5000, -4.5000, 4.5000, 4.5000],
- [11.5000, -4.5000, 20.5000, 4.5000],
- [-4.5000, 11.5000, 4.5000, 20.5000],
- [11.5000, 11.5000, 20.5000, 20.5000]]), \
- tensor([[-9., -9., 9., 9.]])]
- """
-
- def __init__(self,
- strides,
- ratios,
- scales=None,
- base_sizes=None,
- scale_major=True,
- octave_base_scale=None,
- scales_per_octave=None,
- centers=None,
- center_offset=0.):
- # check center and center_offset
- if center_offset != 0:
- assert centers is None, 'center cannot be set when center_offset' \
- f'!=0, {centers} is given.'
- if not (0 <= center_offset <= 1):
- raise ValueError('center_offset should be in range [0, 1], '
- f'{center_offset} is given.')
- if centers is not None:
- assert len(centers) == len(strides), \
- 'The number of strides should be the same as centers, got ' \
- f'{strides} and {centers}'
-
- # calculate base sizes of anchors
- self.strides = [_pair(stride) for stride in strides]
- self.base_sizes = [min(stride) for stride in self.strides
- ] if base_sizes is None else base_sizes
- assert len(self.base_sizes) == len(self.strides), \
- 'The number of strides should be the same as base sizes, got ' \
- f'{self.strides} and {self.base_sizes}'
-
- # calculate scales of anchors
- assert ((octave_base_scale is not None
- and scales_per_octave is not None) ^ (scales is not None)), \
- 'scales and octave_base_scale with scales_per_octave cannot' \
- ' be set at the same time'
- if scales is not None:
- self.scales = torch.Tensor(scales)
- elif octave_base_scale is not None and scales_per_octave is not None:
- octave_scales = np.array(
- [2**(i / scales_per_octave) for i in range(scales_per_octave)])
- scales = octave_scales * octave_base_scale
- self.scales = torch.Tensor(scales)
- else:
- raise ValueError('Either scales or octave_base_scale with '
- 'scales_per_octave should be set')
-
- self.octave_base_scale = octave_base_scale
- self.scales_per_octave = scales_per_octave
- self.ratios = torch.Tensor(ratios)
- self.scale_major = scale_major
- self.centers = centers
- self.center_offset = center_offset
- self.base_anchors = self.gen_base_anchors()
-
- @property
- def num_base_anchors(self):
- """list[int]: total number of base anchors in a feature grid"""
- return [base_anchors.size(0) for base_anchors in self.base_anchors]
-
- @property
- def num_levels(self):
- """int: number of feature levels that the generator will be applied"""
- return len(self.strides)
-
- def gen_base_anchors(self):
- """Generate base anchors.
-
- Returns:
- list(torch.Tensor): Base anchors of a feature grid in multiple \
- feature levels.
- """
- multi_level_base_anchors = []
- for i, base_size in enumerate(self.base_sizes):
- center = None
- if self.centers is not None:
- center = self.centers[i]
- multi_level_base_anchors.append(
- self.gen_single_level_base_anchors(
- base_size,
- scales=self.scales,
- ratios=self.ratios,
- center=center))
- return multi_level_base_anchors
-
- def gen_single_level_base_anchors(self,
- base_size,
- scales,
- ratios,
- center=None):
- """Generate base anchors of a single level.
-
- Args:
- base_size (int | float): Basic size of an anchor.
- scales (torch.Tensor): Scales of the anchor.
-            ratios (torch.Tensor): The ratio between the height
-                and width of anchors in a single level.
- center (tuple[float], optional): The center of the base anchor
- related to a single feature grid. Defaults to None.
-
- Returns:
- torch.Tensor: Anchors in a single-level feature maps.
- """
- w = base_size
- h = base_size
- if center is None:
- x_center = self.center_offset * w
- y_center = self.center_offset * h
- else:
- x_center, y_center = center
-
- h_ratios = torch.sqrt(ratios)
- w_ratios = 1 / h_ratios
- if self.scale_major:
- ws = (w * w_ratios[:, None] * scales[None, :]).view(-1)
- hs = (h * h_ratios[:, None] * scales[None, :]).view(-1)
- else:
- ws = (w * scales[:, None] * w_ratios[None, :]).view(-1)
- hs = (h * scales[:, None] * h_ratios[None, :]).view(-1)
-
- # use float anchor and the anchor's center is aligned with the
- # pixel center
- base_anchors = [
- x_center - 0.5 * ws, y_center - 0.5 * hs, x_center + 0.5 * ws,
- y_center + 0.5 * hs
- ]
- base_anchors = torch.stack(base_anchors, dim=-1)
-
- return base_anchors
-
- def _meshgrid(self, x, y, row_major=True):
- """Generate mesh grid of x and y.
-
- Args:
- x (torch.Tensor): Grids of x dimension.
- y (torch.Tensor): Grids of y dimension.
- row_major (bool, optional): Whether to return y grids first.
- Defaults to True.
-
- Returns:
- tuple[torch.Tensor]: The mesh grids of x and y.
- """
- # use shape instead of len to keep tracing while exporting to onnx
- xx = x.repeat(y.shape[0])
- yy = y.view(-1, 1).repeat(1, x.shape[0]).view(-1)
- if row_major:
- return xx, yy
- else:
- return yy, xx
-
- def grid_anchors(self, featmap_sizes, device='cuda'):
- """Generate grid anchors in multiple feature levels.
-
- Args:
- featmap_sizes (list[tuple]): List of feature map sizes in
- multiple feature levels.
- device (str): Device where the anchors will be put on.
-
- Return:
- list[torch.Tensor]: Anchors in multiple feature levels. \
- The sizes of each tensor should be [N, 4], where \
- N = width * height * num_base_anchors, width and height \
- are the sizes of the corresponding feature level, \
- num_base_anchors is the number of anchors for that level.
- """
- assert self.num_levels == len(featmap_sizes)
- multi_level_anchors = []
- for i in range(self.num_levels):
- anchors = self.single_level_grid_anchors(
- self.base_anchors[i].to(device),
- featmap_sizes[i],
- self.strides[i],
- device=device)
- multi_level_anchors.append(anchors)
- return multi_level_anchors
-
- def single_level_grid_anchors(self,
- base_anchors,
- featmap_size,
- stride=(16, 16),
- device='cuda'):
- """Generate grid anchors of a single level.
-
- Note:
- This function is usually called by method ``self.grid_anchors``.
-
- Args:
- base_anchors (torch.Tensor): The base anchors of a feature grid.
- featmap_size (tuple[int]): Size of the feature maps.
- stride (tuple[int], optional): Stride of the feature map in order
- (w, h). Defaults to (16, 16).
- device (str, optional): Device the tensor will be put on.
- Defaults to 'cuda'.
-
- Returns:
- torch.Tensor: Anchors in the overall feature maps.
- """
- # keep as Tensor, so that we can covert to ONNX correctly
- feat_h, feat_w = featmap_size
- shift_x = torch.arange(0, feat_w, device=device) * stride[0]
- shift_y = torch.arange(0, feat_h, device=device) * stride[1]
-
- shift_xx, shift_yy = self._meshgrid(shift_x, shift_y)
- shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1)
- shifts = shifts.type_as(base_anchors)
- # first feat_w elements correspond to the first row of shifts
- # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get
- # shifted anchors (K, A, 4), reshape to (K*A, 4)
-
- all_anchors = base_anchors[None, :, :] + shifts[:, None, :]
- all_anchors = all_anchors.view(-1, 4)
- # first A rows correspond to A anchors of (0, 0) in feature map,
- # then (0, 1), (0, 2), ...
- return all_anchors
-
- def valid_flags(self, featmap_sizes, pad_shape, device='cuda'):
- """Generate valid flags of anchors in multiple feature levels.
-
- Args:
- featmap_sizes (list(tuple)): List of feature map sizes in
- multiple feature levels.
- pad_shape (tuple): The padded shape of the image.
- device (str): Device where the anchors will be put on.
-
- Return:
- list(torch.Tensor): Valid flags of anchors in multiple levels.
- """
- assert self.num_levels == len(featmap_sizes)
- multi_level_flags = []
- for i in range(self.num_levels):
- anchor_stride = self.strides[i]
- feat_h, feat_w = featmap_sizes[i]
- h, w = pad_shape[:2]
- valid_feat_h = min(int(np.ceil(h / anchor_stride[1])), feat_h)
- valid_feat_w = min(int(np.ceil(w / anchor_stride[0])), feat_w)
- flags = self.single_level_valid_flags((feat_h, feat_w),
- (valid_feat_h, valid_feat_w),
- self.num_base_anchors[i],
- device=device)
- multi_level_flags.append(flags)
- return multi_level_flags
-
- def single_level_valid_flags(self,
- featmap_size,
- valid_size,
- num_base_anchors,
- device='cuda'):
- """Generate the valid flags of anchor in a single feature map.
-
- Args:
- featmap_size (tuple[int]): The size of feature maps.
- valid_size (tuple[int]): The valid size of the feature maps.
- num_base_anchors (int): The number of base anchors.
- device (str, optional): Device where the flags will be put on.
- Defaults to 'cuda'.
-
- Returns:
- torch.Tensor: The valid flags of each anchor in a single level \
- feature map.
- """
- feat_h, feat_w = featmap_size
- valid_h, valid_w = valid_size
- assert valid_h <= feat_h and valid_w <= feat_w
- valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device)
- valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device)
- valid_x[:valid_w] = 1
- valid_y[:valid_h] = 1
- valid_xx, valid_yy = self._meshgrid(valid_x, valid_y)
- valid = valid_xx & valid_yy
- valid = valid[:, None].expand(valid.size(0),
- num_base_anchors).contiguous().view(-1)
- return valid
-
- def __repr__(self):
- """str: a string that describes the module"""
- indent_str = ' '
- repr_str = self.__class__.__name__ + '(\n'
- repr_str += f'{indent_str}strides={self.strides},\n'
- repr_str += f'{indent_str}ratios={self.ratios},\n'
- repr_str += f'{indent_str}scales={self.scales},\n'
- repr_str += f'{indent_str}base_sizes={self.base_sizes},\n'
- repr_str += f'{indent_str}scale_major={self.scale_major},\n'
- repr_str += f'{indent_str}octave_base_scale='
- repr_str += f'{self.octave_base_scale},\n'
- repr_str += f'{indent_str}scales_per_octave='
- repr_str += f'{self.scales_per_octave},\n'
- repr_str += f'{indent_str}num_levels={self.num_levels}\n'
- repr_str += f'{indent_str}centers={self.centers},\n'
- repr_str += f'{indent_str}center_offset={self.center_offset})'
- return repr_str
-
-
-@ANCHOR_GENERATORS.register_module()
-class SSDAnchorGenerator(AnchorGenerator):
- """Anchor generator for SSD.
-
- Args:
- strides (list[int] | list[tuple[int, int]]): Strides of anchors
- in multiple feature levels.
- ratios (list[float]): The list of ratios between the height and width
- of anchors in a single level.
- basesize_ratio_range (tuple(float)): Ratio range of anchors.
- input_size (int): Size of feature map, 300 for SSD300,
- 512 for SSD512.
- scale_major (bool): Whether to multiply scales first when generating
- base anchors. If true, the anchors in the same row will have the
- same scales. It is always set to be False in SSD.
- """
-
- def __init__(self,
- strides,
- ratios,
- basesize_ratio_range,
- input_size=300,
- scale_major=True):
- assert len(strides) == len(ratios)
- assert mmcv.is_tuple_of(basesize_ratio_range, float)
-
- self.strides = [_pair(stride) for stride in strides]
- self.input_size = input_size
- self.centers = [(stride[0] / 2., stride[1] / 2.)
- for stride in self.strides]
- self.basesize_ratio_range = basesize_ratio_range
-
- # calculate anchor ratios and sizes
- min_ratio, max_ratio = basesize_ratio_range
- min_ratio = int(min_ratio * 100)
- max_ratio = int(max_ratio * 100)
- step = int(np.floor(max_ratio - min_ratio) / (self.num_levels - 2))
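-        # e.g. SSD300 on VOC uses basesize_ratio_range=(0.2, 0.9) with 6 levels,
-        # so step = int(70 / 4) = 17 and the loop below yields base size ratios
-        # of 20%, 37%, 54%, 71% and 88%; a smaller level-0 size is inserted below.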
- min_sizes = []
- max_sizes = []
- for ratio in range(int(min_ratio), int(max_ratio) + 1, step):
- min_sizes.append(int(self.input_size * ratio / 100))
- max_sizes.append(int(self.input_size * (ratio + step) / 100))
- if self.input_size == 300:
- if basesize_ratio_range[0] == 0.15: # SSD300 COCO
- min_sizes.insert(0, int(self.input_size * 7 / 100))
- max_sizes.insert(0, int(self.input_size * 15 / 100))
- elif basesize_ratio_range[0] == 0.2: # SSD300 VOC
- min_sizes.insert(0, int(self.input_size * 10 / 100))
- max_sizes.insert(0, int(self.input_size * 20 / 100))
- else:
- raise ValueError(
- 'basesize_ratio_range[0] should be either 0.15'
- 'or 0.2 when input_size is 300, got '
- f'{basesize_ratio_range[0]}.')
- elif self.input_size == 512:
- if basesize_ratio_range[0] == 0.1: # SSD512 COCO
- min_sizes.insert(0, int(self.input_size * 4 / 100))
- max_sizes.insert(0, int(self.input_size * 10 / 100))
- elif basesize_ratio_range[0] == 0.15: # SSD512 VOC
- min_sizes.insert(0, int(self.input_size * 7 / 100))
- max_sizes.insert(0, int(self.input_size * 15 / 100))
- else:
- raise ValueError('basesize_ratio_range[0] should be either 0.1'
- 'or 0.15 when input_size is 512, got'
- f' {basesize_ratio_range[0]}.')
- else:
- raise ValueError('Only support 300 or 512 in SSDAnchorGenerator'
- f', got {self.input_size}.')
-
- anchor_ratios = []
- anchor_scales = []
- for k in range(len(self.strides)):
- scales = [1., np.sqrt(max_sizes[k] / min_sizes[k])]
- anchor_ratio = [1.]
- for r in ratios[k]:
- anchor_ratio += [1 / r, r] # 4 or 6 ratio
- anchor_ratios.append(torch.Tensor(anchor_ratio))
- anchor_scales.append(torch.Tensor(scales))
-
- self.base_sizes = min_sizes
- self.scales = anchor_scales
- self.ratios = anchor_ratios
- self.scale_major = scale_major
- self.center_offset = 0
- self.base_anchors = self.gen_base_anchors()
-
- def gen_base_anchors(self):
- """Generate base anchors.
-
- Returns:
- list(torch.Tensor): Base anchors of a feature grid in multiple \
- feature levels.
- """
- multi_level_base_anchors = []
- for i, base_size in enumerate(self.base_sizes):
- base_anchors = self.gen_single_level_base_anchors(
- base_size,
- scales=self.scales[i],
- ratios=self.ratios[i],
- center=self.centers[i])
- indices = list(range(len(self.ratios[i])))
- indices.insert(1, len(indices))
- base_anchors = torch.index_select(base_anchors, 0,
- torch.LongTensor(indices))
- multi_level_base_anchors.append(base_anchors)
- return multi_level_base_anchors
-
- def __repr__(self):
- """str: a string that describes the module"""
- indent_str = ' '
- repr_str = self.__class__.__name__ + '(\n'
- repr_str += f'{indent_str}strides={self.strides},\n'
- repr_str += f'{indent_str}scales={self.scales},\n'
- repr_str += f'{indent_str}scale_major={self.scale_major},\n'
- repr_str += f'{indent_str}input_size={self.input_size},\n'
- repr_str += f'{indent_str}scales={self.scales},\n'
- repr_str += f'{indent_str}ratios={self.ratios},\n'
- repr_str += f'{indent_str}num_levels={self.num_levels},\n'
- repr_str += f'{indent_str}base_sizes={self.base_sizes},\n'
- repr_str += f'{indent_str}basesize_ratio_range='
- repr_str += f'{self.basesize_ratio_range})'
- return repr_str
-
-
-@ANCHOR_GENERATORS.register_module()
-class LegacyAnchorGenerator(AnchorGenerator):
- """Legacy anchor generator used in MMDetection V1.x.
-
- Note:
- Difference to the V2.0 anchor generator:
-
-        1. The center offset of V1.x anchors is set to 0.5 rather than 0.
-        2. The width/height of anchors are reduced by 1 when calculating the anchors' \
- centers and corners to meet the V1.x coordinate system.
- 3. The anchors' corners are quantized.
-
- Args:
- strides (list[int] | list[tuple[int]]): Strides of anchors
- in multiple feature levels.
- ratios (list[float]): The list of ratios between the height and width
- of anchors in a single level.
- scales (list[int] | None): Anchor scales for anchors in a single level.
- It cannot be set at the same time if `octave_base_scale` and
- `scales_per_octave` are set.
- base_sizes (list[int]): The basic sizes of anchors in multiple levels.
- If None is given, strides will be used to generate base_sizes.
- scale_major (bool): Whether to multiply scales first when generating
- base anchors. If true, the anchors in the same row will have the
- same scales. By default it is True in V2.0
- octave_base_scale (int): The base scale of octave.
- scales_per_octave (int): Number of scales for each octave.
- `octave_base_scale` and `scales_per_octave` are usually used in
- retinanet and the `scales` should be None when they are set.
- centers (list[tuple[float, float]] | None): The centers of the anchor
- relative to the feature grid center in multiple feature levels.
-            By default it is set to be None and not used. If a list of float
-            is given, this list will be used to shift the centers of anchors.
-        center_offset (float): The offset of center in proportion to anchors'
-            width and height. The default is 0 in V2.0, but it should be set to
-            0.5 to reproduce v1.x models.
-
- Examples:
- >>> from mmdet.core import LegacyAnchorGenerator
- >>> self = LegacyAnchorGenerator(
- >>> [16], [1.], [1.], [9], center_offset=0.5)
- >>> all_anchors = self.grid_anchors(((2, 2),), device='cpu')
- >>> print(all_anchors)
- [tensor([[ 0., 0., 8., 8.],
- [16., 0., 24., 8.],
- [ 0., 16., 8., 24.],
- [16., 16., 24., 24.]])]
- """
-
- def gen_single_level_base_anchors(self,
- base_size,
- scales,
- ratios,
- center=None):
- """Generate base anchors of a single level.
-
- Note:
-            The width/height of anchors are reduced by 1 when calculating \
- the centers and corners to meet the V1.x coordinate system.
-
- Args:
- base_size (int | float): Basic size of an anchor.
- scales (torch.Tensor): Scales of the anchor.
-            ratios (torch.Tensor): The ratio between the height
-                and width of anchors in a single level.
- center (tuple[float], optional): The center of the base anchor
- related to a single feature grid. Defaults to None.
-
- Returns:
- torch.Tensor: Anchors in a single-level feature map.
- """
- w = base_size
- h = base_size
- if center is None:
- x_center = self.center_offset * (w - 1)
- y_center = self.center_offset * (h - 1)
- else:
- x_center, y_center = center
-
- h_ratios = torch.sqrt(ratios)
- w_ratios = 1 / h_ratios
- if self.scale_major:
- ws = (w * w_ratios[:, None] * scales[None, :]).view(-1)
- hs = (h * h_ratios[:, None] * scales[None, :]).view(-1)
- else:
- ws = (w * scales[:, None] * w_ratios[None, :]).view(-1)
- hs = (h * scales[:, None] * h_ratios[None, :]).view(-1)
-
- # use float anchor and the anchor's center is aligned with the
- # pixel center
- base_anchors = [
- x_center - 0.5 * (ws - 1), y_center - 0.5 * (hs - 1),
- x_center + 0.5 * (ws - 1), y_center + 0.5 * (hs - 1)
- ]
- base_anchors = torch.stack(base_anchors, dim=-1).round()
-
- return base_anchors
-
-
-@ANCHOR_GENERATORS.register_module()
-class LegacySSDAnchorGenerator(SSDAnchorGenerator, LegacyAnchorGenerator):
- """Legacy anchor generator used in MMDetection V1.x.
-
- The difference between `LegacySSDAnchorGenerator` and `SSDAnchorGenerator`
- can be found in `LegacyAnchorGenerator`.
- """
-
- def __init__(self,
- strides,
- ratios,
- basesize_ratio_range,
- input_size=300,
- scale_major=True):
- super(LegacySSDAnchorGenerator,
- self).__init__(strides, ratios, basesize_ratio_range, input_size,
- scale_major)
- self.centers = [((stride - 1) / 2., (stride - 1) / 2.)
- for stride in strides]
- self.base_anchors = self.gen_base_anchors()
-
-
-@ANCHOR_GENERATORS.register_module()
-class YOLOAnchorGenerator(AnchorGenerator):
- """Anchor generator for YOLO.
-
- Args:
- strides (list[int] | list[tuple[int, int]]): Strides of anchors
- in multiple feature levels.
- base_sizes (list[list[tuple[int, int]]]): The basic sizes
- of anchors in multiple levels.
- """
-
- def __init__(self, strides, base_sizes):
- self.strides = [_pair(stride) for stride in strides]
- self.centers = [(stride[0] / 2., stride[1] / 2.)
- for stride in self.strides]
- self.base_sizes = []
- num_anchor_per_level = len(base_sizes[0])
- for base_sizes_per_level in base_sizes:
- assert num_anchor_per_level == len(base_sizes_per_level)
- self.base_sizes.append(
- [_pair(base_size) for base_size in base_sizes_per_level])
- self.base_anchors = self.gen_base_anchors()
-
- @property
- def num_levels(self):
- """int: number of feature levels that the generator will be applied"""
- return len(self.base_sizes)
-
- def gen_base_anchors(self):
- """Generate base anchors.
-
- Returns:
- list(torch.Tensor): Base anchors of a feature grid in multiple \
- feature levels.
- """
- multi_level_base_anchors = []
- for i, base_sizes_per_level in enumerate(self.base_sizes):
- center = None
- if self.centers is not None:
- center = self.centers[i]
- multi_level_base_anchors.append(
- self.gen_single_level_base_anchors(base_sizes_per_level,
- center))
- return multi_level_base_anchors
-
- def gen_single_level_base_anchors(self, base_sizes_per_level, center=None):
- """Generate base anchors of a single level.
-
- Args:
- base_sizes_per_level (list[tuple[int, int]]): Basic sizes of
- anchors.
- center (tuple[float], optional): The center of the base anchor
- related to a single feature grid. Defaults to None.
-
- Returns:
- torch.Tensor: Anchors in a single-level feature maps.
- """
- x_center, y_center = center
- base_anchors = []
- for base_size in base_sizes_per_level:
- w, h = base_size
-
- # use float anchor and the anchor's center is aligned with the
- # pixel center
- base_anchor = torch.Tensor([
- x_center - 0.5 * w, y_center - 0.5 * h, x_center + 0.5 * w,
- y_center + 0.5 * h
- ])
- base_anchors.append(base_anchor)
- base_anchors = torch.stack(base_anchors, dim=0)
-
- return base_anchors
-
- def responsible_flags(self, featmap_sizes, gt_bboxes, device='cuda'):
- """Generate responsible anchor flags of grid cells in multiple scales.
-
- Args:
- featmap_sizes (list(tuple)): List of feature map sizes in multiple
- feature levels.
- gt_bboxes (Tensor): Ground truth boxes, shape (n, 4).
- device (str): Device where the anchors will be put on.
-
- Return:
- list(torch.Tensor): responsible flags of anchors in multiple level
- """
- assert self.num_levels == len(featmap_sizes)
- multi_level_responsible_flags = []
- for i in range(self.num_levels):
- anchor_stride = self.strides[i]
- flags = self.single_level_responsible_flags(
- featmap_sizes[i],
- gt_bboxes,
- anchor_stride,
- self.num_base_anchors[i],
- device=device)
- multi_level_responsible_flags.append(flags)
- return multi_level_responsible_flags
-
- def single_level_responsible_flags(self,
- featmap_size,
- gt_bboxes,
- stride,
- num_base_anchors,
- device='cuda'):
- """Generate the responsible flags of anchor in a single feature map.
-
- Args:
- featmap_size (tuple[int]): The size of feature maps.
- gt_bboxes (Tensor): Ground truth boxes, shape (n, 4).
- stride (tuple(int)): stride of current level
- num_base_anchors (int): The number of base anchors.
- device (str, optional): Device where the flags will be put on.
- Defaults to 'cuda'.
-
- Returns:
- torch.Tensor: The valid flags of each anchor in a single level \
- feature map.
- """
- feat_h, feat_w = featmap_size
- gt_bboxes_cx = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5).to(device)
- gt_bboxes_cy = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5).to(device)
- gt_bboxes_grid_x = torch.floor(gt_bboxes_cx / stride[0]).long()
- gt_bboxes_grid_y = torch.floor(gt_bboxes_cy / stride[1]).long()
-
- # row major indexing
- gt_bboxes_grid_idx = gt_bboxes_grid_y * feat_w + gt_bboxes_grid_x
-
- responsible_grid = torch.zeros(
- feat_h * feat_w, dtype=torch.uint8, device=device)
- responsible_grid[gt_bboxes_grid_idx] = 1
-
- responsible_grid = responsible_grid[:, None].expand(
- responsible_grid.size(0), num_base_anchors).contiguous().view(-1)
- return responsible_grid
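
The grid-anchor arithmetic implemented by the deleted `AnchorGenerator` can be reproduced with plain PyTorch. The sketch below mirrors the 2x2, stride-16 example from its docstring; no mmdet dependency is assumed:

```python
# One base anchor of size 9 shifted across a 2x2 feature map with stride 16,
# matching the AnchorGenerator docstring example above.
import torch

stride = (16, 16)
base_anchor = torch.tensor([[-4.5, -4.5, 4.5, 4.5]])  # (x1, y1, x2, y2), centered at (0, 0)

feat_h, feat_w = 2, 2
shift_x = torch.arange(feat_w) * stride[0]
shift_y = torch.arange(feat_h) * stride[1]

# row-major mesh grid, same as AnchorGenerator._meshgrid
xx = shift_x.repeat(feat_h)
yy = shift_y.view(-1, 1).repeat(1, feat_w).view(-1)
shifts = torch.stack([xx, yy, xx, yy], dim=-1).float()

# (K, 1, 4) shifts + (1, A, 4) base anchors -> (K*A, 4) grid anchors
all_anchors = (base_anchor[None, :, :] + shifts[:, None, :]).view(-1, 4)
print(all_anchors)
# tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
#         [11.5000, -4.5000, 20.5000,  4.5000],
#         [-4.5000, 11.5000,  4.5000, 20.5000],
#         [11.5000, 11.5000, 20.5000, 20.5000]])
```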
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/train.sh b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/train.sh
deleted file mode 100644
index 79ab2bc77f01ed305c2d2517f3bf6e3474eb5dcf..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/train.sh
+++ /dev/null
@@ -1,19 +0,0 @@
-python train.py \
---name celeba_styleD \
---img_file /dataset/image_painting/image_list/celeba_HQ_train.txt \
---mask_file /dataset/image_painting/image_list/irregular_mask_train.txt \
---model tc \
---coarse_or_refine coarse \
---netT original \
---n_encoders 12 \
---n_decoders 0 \
---netD style \
---gpu_ids 2,1,0 \
---load_size 542 \
---fine_size 512 \
---batch_size 24 \
---display_port 8093 \
---attn_G \
---add_noise \
---display_ncols 0 \
---continue_train
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/optimizer/builder.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/optimizer/builder.py
deleted file mode 100644
index f9234eed8f1f186d9d8dfda34562157ee39bdb3a..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/optimizer/builder.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import inspect
-
-import torch
-
-from ...utils import Registry, build_from_cfg
-
-OPTIMIZERS = Registry('optimizer')
-OPTIMIZER_BUILDERS = Registry('optimizer builder')
-
-
-def register_torch_optimizers():
- torch_optimizers = []
- for module_name in dir(torch.optim):
- if module_name.startswith('__'):
- continue
- _optim = getattr(torch.optim, module_name)
- if inspect.isclass(_optim) and issubclass(_optim,
- torch.optim.Optimizer):
- OPTIMIZERS.register_module()(_optim)
- torch_optimizers.append(module_name)
- return torch_optimizers
-
-
-TORCH_OPTIMIZERS = register_torch_optimizers()
-
-
-def build_optimizer_constructor(cfg):
- return build_from_cfg(cfg, OPTIMIZER_BUILDERS)
-
-
-def build_optimizer(model, cfg):
- optimizer_cfg = copy.deepcopy(cfg)
- constructor_type = optimizer_cfg.pop('constructor',
- 'DefaultOptimizerConstructor')
- paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None)
- optim_constructor = build_optimizer_constructor(
- dict(
- type=constructor_type,
- optimizer_cfg=optimizer_cfg,
- paramwise_cfg=paramwise_cfg))
- optimizer = optim_constructor(model)
- return optimizer
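
For a plain optimizer config without `paramwise_cfg`, the deleted `build_optimizer` effectively resolves `cfg['type']` to the matching class in `torch.optim`. A hedged, mmcv-free sketch of that path (the config values are illustrative):

```python
# Approximation of build_optimizer for the default constructor and no paramwise_cfg:
# the registry lookup boils down to getattr(torch.optim, type).
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# The kind of dict an mmcv config supplies as cfg.optimizer.
optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=1e-4)

cfg = dict(optimizer_cfg)
optim_cls = getattr(torch.optim, cfg.pop('type'))  # registry lookup, minus the Registry
optimizer = optim_cls(model.parameters(), **cfg)
print(optimizer)
```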
diff --git a/spaces/Apex-X/ROOPOK/CONTRIBUTING.md b/spaces/Apex-X/ROOPOK/CONTRIBUTING.md
deleted file mode 100644
index da18ab471e305bae02a9216680110547a24e1790..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/ROOPOK/CONTRIBUTING.md
+++ /dev/null
@@ -1,25 +0,0 @@
-## Pull Requests
-
-Before submitting a pull request, please align with us first, as we need to establish both technical and business requirements.
-
-
-### Do
-
-- ...consider to fix bugs over adding features
-- ...one pull request for one feature or improvement
-- ...consult us about implementation details
-- ...proper testing before you submit your code
-- ...resolve failed CI pipelines
-
-
-### Don't
-
-- ...introduce fundamental changes in terms of software architecture
-- ...introduce OOP - we accept functional programming only
-- ...ignore given requirements or try to work around them
-- ...submit code to a development branch without consulting us
-- ...submit massive amount of code changes
-- ...submit a proof of concept
-- ...submit code that is using undocumented and private APIs
-- ...solve third party issues in our project
-- ...comment what your code does - use proper naming instead
diff --git a/spaces/Apex-X/nono/roop/processors/__init__.py b/spaces/Apex-X/nono/roop/processors/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Archan/ArXivAudio/get_paper.py b/spaces/Archan/ArXivAudio/get_paper.py
deleted file mode 100644
index d92e151ab66d9b714003696e7f8914fe250706a9..0000000000000000000000000000000000000000
--- a/spaces/Archan/ArXivAudio/get_paper.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import arxiv
-
-
-def get_paper(paper=""):
- if paper:
- id = paper.split(" - ")
- print("id= ", id)
-
- paper = next(arxiv.Search(id_list=[id[-1]]).results())
- print("paper title= ", paper.title)
- name = str(paper.title) + '.pdf'
- name = name.replace('?', '')
- name = "downloads/" + name
- paper.download_pdf(filename="./downloads/paper.pdf")
- print(name)
-
-    return paper
\ No newline at end of file
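
A hypothetical usage sketch of the deleted helper: it expects a dropdown label of the form `"<title> - <arxiv id>"` and writes the PDF to `downloads/paper.pdf`. The paper id below is only an example:

```python
# Hypothetical call into the deleted helper; requires the `arxiv` package and a
# ./downloads directory, since download_pdf() writes to downloads/paper.pdf.
import os

from get_paper import get_paper

os.makedirs("downloads", exist_ok=True)
paper = get_paper("Attention Is All You Need - 1706.03762")
print(paper.title, paper.entry_id)
```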
diff --git a/spaces/Asahi402/White-box-Cartoonization/wbc/network.py b/spaces/Asahi402/White-box-Cartoonization/wbc/network.py
deleted file mode 100644
index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000
--- a/spaces/Asahi402/White-box-Cartoonization/wbc/network.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import tensorflow as tf
-import numpy as np
-import tensorflow.contrib.slim as slim
-
-
-
-def resblock(inputs, out_channel=32, name='resblock'):
-
- with tf.variable_scope(name):
-
- x = slim.convolution2d(inputs, out_channel, [3, 3],
- activation_fn=None, scope='conv1')
- x = tf.nn.leaky_relu(x)
- x = slim.convolution2d(x, out_channel, [3, 3],
- activation_fn=None, scope='conv2')
-
- return x + inputs
-
-
-
-
-def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False):
- with tf.variable_scope(name, reuse=reuse):
-
- x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None)
- x0 = tf.nn.leaky_relu(x0)
-
- x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None)
- x1 = tf.nn.leaky_relu(x1)
- x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None)
- x1 = tf.nn.leaky_relu(x1)
-
- x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None)
- x2 = tf.nn.leaky_relu(x2)
- x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None)
- x2 = tf.nn.leaky_relu(x2)
-
- for idx in range(num_blocks):
- x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx))
-
- x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None)
- x2 = tf.nn.leaky_relu(x2)
-
- h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2]
- x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2))
- x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None)
- x3 = tf.nn.leaky_relu(x3)
- x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None)
- x3 = tf.nn.leaky_relu(x3)
-
- h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2]
- x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2))
- x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None)
- x4 = tf.nn.leaky_relu(x4)
- x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None)
-
- return x4
-
-if __name__ == '__main__':
-
-
- pass
\ No newline at end of file
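
A sketch of how the deleted `unet_generator` is typically driven at inference time. It requires TensorFlow 1.x (for `tf.contrib.slim`) and graph-mode sessions; the input size is arbitrary and the random weights stand in for a restored checkpoint:

```python
# Build the generator graph once and run a forward pass on a dummy image.
import numpy as np
import tensorflow as tf

from wbc.network import unet_generator  # the module deleted above

input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
network_out = unet_generator(input_photo)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # A real pipeline would restore trained weights from a checkpoint here.
    dummy = np.random.uniform(-1.0, 1.0, (1, 256, 256, 3)).astype(np.float32)
    print(sess.run(network_out, feed_dict={input_photo: dummy}).shape)  # (1, 256, 256, 3)
```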
diff --git a/spaces/Awesimo/jojogan/e4e/configs/data_configs.py b/spaces/Awesimo/jojogan/e4e/configs/data_configs.py
deleted file mode 100644
index deccb0b1c266ad4b6abaef53d67ec1ed0ddbd462..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/configs/data_configs.py
+++ /dev/null
@@ -1,41 +0,0 @@
-from configs import transforms_config
-from configs.paths_config import dataset_paths
-
-
-DATASETS = {
- 'ffhq_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['ffhq'],
- 'train_target_root': dataset_paths['ffhq'],
- 'test_source_root': dataset_paths['celeba_test'],
- 'test_target_root': dataset_paths['celeba_test'],
- },
- 'cars_encode': {
- 'transforms': transforms_config.CarsEncodeTransforms,
- 'train_source_root': dataset_paths['cars_train'],
- 'train_target_root': dataset_paths['cars_train'],
- 'test_source_root': dataset_paths['cars_test'],
- 'test_target_root': dataset_paths['cars_test'],
- },
- 'horse_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['horse_train'],
- 'train_target_root': dataset_paths['horse_train'],
- 'test_source_root': dataset_paths['horse_test'],
- 'test_target_root': dataset_paths['horse_test'],
- },
- 'church_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['church_train'],
- 'train_target_root': dataset_paths['church_train'],
- 'test_source_root': dataset_paths['church_test'],
- 'test_target_root': dataset_paths['church_test'],
- },
- 'cats_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['cats_train'],
- 'train_target_root': dataset_paths['cats_train'],
- 'test_source_root': dataset_paths['cats_test'],
- 'test_target_root': dataset_paths['cats_test'],
- }
-}
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/analyze_model.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/analyze_model.py
deleted file mode 100644
index 8e38f8b71eb3b8d1e2b670e7f01a796ec2ea4b7e..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/analyze_model.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import logging
-import numpy as np
-from collections import Counter
-import tqdm
-from fvcore.nn import flop_count_table # can also try flop_count_str
-
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import CfgNode, LazyConfig, get_cfg, instantiate
-from detectron2.data import build_detection_test_loader
-from detectron2.engine import default_argument_parser
-from detectron2.modeling import build_model
-from detectron2.utils.analysis import (
- FlopCountAnalysis,
- activation_count_operators,
- parameter_count_table,
-)
-from detectron2.utils.logger import setup_logger
-
-logger = logging.getLogger("detectron2")
-
-
-def setup(args):
- if args.config_file.endswith(".yaml"):
- cfg = get_cfg()
- cfg.merge_from_file(args.config_file)
- cfg.DATALOADER.NUM_WORKERS = 0
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- else:
- cfg = LazyConfig.load(args.config_file)
- cfg = LazyConfig.apply_overrides(cfg, args.opts)
- setup_logger(name="fvcore")
- setup_logger()
- return cfg
-
-
-def do_flop(cfg):
- if isinstance(cfg, CfgNode):
- data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
- model = build_model(cfg)
- DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
- else:
- data_loader = instantiate(cfg.dataloader.test)
- model = instantiate(cfg.model)
- model.to(cfg.train.device)
- DetectionCheckpointer(model).load(cfg.train.init_checkpoint)
- model.eval()
-
- counts = Counter()
- total_flops = []
- for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa
- flops = FlopCountAnalysis(model, data)
- if idx > 0:
- flops.unsupported_ops_warnings(False).uncalled_modules_warnings(False)
- counts += flops.by_operator()
- total_flops.append(flops.total())
-
- logger.info("Flops table computed from only one input sample:\n" + flop_count_table(flops))
- logger.info(
- "Average GFlops for each type of operators:\n"
- + str([(k, v / (idx + 1) / 1e9) for k, v in counts.items()])
- )
- logger.info(
- "Total GFlops: {:.1f}±{:.1f}".format(np.mean(total_flops) / 1e9, np.std(total_flops) / 1e9)
- )
-
-
-def do_activation(cfg):
- if isinstance(cfg, CfgNode):
- data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
- model = build_model(cfg)
- DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
- else:
- data_loader = instantiate(cfg.dataloader.test)
- model = instantiate(cfg.model)
- model.to(cfg.train.device)
- DetectionCheckpointer(model).load(cfg.train.init_checkpoint)
- model.eval()
-
- counts = Counter()
- total_activations = []
- for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa
- count = activation_count_operators(model, data)
- counts += count
- total_activations.append(sum(count.values()))
- logger.info(
- "(Million) Activations for Each Type of Operators:\n"
-        + str([(k, v / (idx + 1)) for k, v in counts.items()])
- )
- logger.info(
- "Total (Million) Activations: {}±{}".format(
- np.mean(total_activations), np.std(total_activations)
- )
- )
-
-
-def do_parameter(cfg):
- if isinstance(cfg, CfgNode):
- model = build_model(cfg)
- else:
- model = instantiate(cfg.model)
- logger.info("Parameter Count:\n" + parameter_count_table(model, max_depth=5))
-
-
-def do_structure(cfg):
- if isinstance(cfg, CfgNode):
- model = build_model(cfg)
- else:
- model = instantiate(cfg.model)
- logger.info("Model Structure:\n" + str(model))
-
-
-if __name__ == "__main__":
- parser = default_argument_parser(
- epilog="""
-Examples:
-
-To show parameters of a model:
-$ ./analyze_model.py --tasks parameter \\
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
-
-Flops and activations are data-dependent, therefore inputs and model weights
-are needed to count them:
-
-$ ./analyze_model.py --num-inputs 100 --tasks flop \\
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \\
- MODEL.WEIGHTS /path/to/model.pkl
-"""
- )
- parser.add_argument(
- "--tasks",
- choices=["flop", "activation", "parameter", "structure"],
- required=True,
- nargs="+",
- )
- parser.add_argument(
- "-n",
- "--num-inputs",
- default=100,
- type=int,
- help="number of inputs used to compute statistics for flops/activations, "
- "both are data dependent.",
- )
- args = parser.parse_args()
- assert not args.eval_only
- assert args.num_gpus == 1
-
- cfg = setup(args)
-
- for task in args.tasks:
- {
- "flop": do_flop,
- "activation": do_activation,
- "parameter": do_parameter,
- "structure": do_structure,
- }[task](cfg)
diff --git a/spaces/BasToTheMax/openai-whisper-large-v2/README.md b/spaces/BasToTheMax/openai-whisper-large-v2/README.md
deleted file mode 100644
index daf18e78cb005a921bdcedabf6d16f814a870300..0000000000000000000000000000000000000000
--- a/spaces/BasToTheMax/openai-whisper-large-v2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Openai Whisper Large V2
-emoji: 🐢
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-duplicated_from: satozen/openai-whisper-large-v2
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Descargar Fichas Mgicas 3 Bum Bum Tam Tam.md b/spaces/Benson/text-generation/Examples/Descargar Fichas Mgicas 3 Bum Bum Tam Tam.md
deleted file mode 100644
index 0afaa8946dd1b3623a930cc30124b8b5ad1d4b0d..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Fichas Mgicas 3 Bum Bum Tam Tam.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-
Cómo descargar azulejos mágicos 3 Bum Bum Tam Tam y disfrutar de la música
-
¿Te gustan los juegos de música? ¿Quieres jugar un juego que cuenta con una de las canciones más virales de todos los tiempos? Si respondiste sí, entonces deberías descargar Magic Tiles 3 Bum Bum Tam Tam, un juego que te hará tocar los pies y los dedos al ritmo de esta pegadiza canción brasileña. En este artículo, te contaremos todo lo que necesitas saber sobre este juego, incluyendo qué es, cómo descargarlo, cómo jugarlo y por qué deberías probarlo hoy.
-
¿Qué es Magic Tiles 3 Bum Bum Tam Tam?
-
Magic Tiles 3 Bum Bum Tam Tam es un juego de música que se basa en la canción "Bum Bum Tam Tam" de MC Fioti, que tiene más de 1.6 mil millones de visitas en YouTube. La canción es una fusión de funk brasileño y música clásica, con una muestra de flauta de Johann Sebastian Bach "Partita in A minor for solo flauta". La canción se convirtió en una sensación global en 2017, gracias a su estribillo pegadizo y movimientos de baile.
Un juego de música popular con una canción pegadiza
-
Magic Tiles 3 es uno de los juegos de música más populares del mercado, con más de 100 millones de descargas en Google Play. El juego te permite tocar varias canciones en un piano virtual, tocando las fichas que aparecen en la pantalla. El juego tiene muchos géneros y temas, como pop, rock, clásico, anime, EDM y más. Uno de los temas es "Bum Bum Tam Tam", que cuenta con la canción original y varios remixes de diferentes artistas. El juego también actualiza su lista de canciones regularmente, por lo que siempre puedes encontrar nuevas canciones para jugar.
-
Un juego desafiante y divertido
-
-
Una variedad de modos y canciones para elegir
-
Magic Tiles 3 también ofrece una variedad de modos y canciones para adaptarse a sus preferencias y estado de ánimo. Puedes jugar solo o con amigos en el modo multijugador online. También puedes competir con otros jugadores de todo el mundo en el modo batalla. También puede personalizar su piano con diferentes pieles y temas. Además, puedes elegir entre cientos de canciones de diferentes géneros y temas, incluyendo "Bum Bum Tam Tam" y sus remixes. También puedes desbloquear nuevas canciones y características al ganar monedas y diamantes en el juego.
-
¿Cómo descargar azulejos mágicos 3 Bum Bum Tam Tam en su dispositivo?
-
Descargar Magic Tiles 3 Bum Bum Tam Tam es fácil y gratuito. Puede descargarlo en su para "Magic Tiles 3".
-
Seleccione la aplicación con el icono de un piano y una estrella, y toque en "Obtener".
-
Ingrese su ID de Apple y contraseña si se le solicita, y espere a que la aplicación se descargue e instale en su dispositivo.
-
Abra la aplicación y toque en el tema "Bum Bum Tam Tam" en el menú principal.
-
Disfruta jugando el juego con la canción de tu elección.
-
-
Para usuarios de PC
-
Si tienes un PC, puedes descargar Magic Tiles 3 Bum Bum Tam Tam desde Microsoft Store. Estos son los pasos para hacerlo:
-
-
Abra la tienda de Microsoft en su PC y busque "Magic Tiles 3".
-
Seleccione la aplicación con el icono de un piano y una estrella, y haga clic en "Obtener".
-
Inicie sesión con su cuenta de Microsoft si se le solicita, y espere a que la aplicación se descargue e instale en su PC.
-
Abra la aplicación y haga clic en el tema "Bum Bum Tam Tam" en el menú principal.
-
Disfruta jugando el juego con la canción de tu elección.
-
-
Cómo jugar Magic Tiles 3 Bum Bum Tam Tam y mejorar sus habilidades?
-
Jugar Magic Tiles 3 Bum Bum Tam Tam es fácil de aprender pero difícil de dominar. Necesitas tener buenos reflejos, coordinación y ritmo para jugar bien. Aquí hay algunos consejos sobre cómo jugar y mejorar tus habilidades:
-
-
La regla básica de Magic Tiles 3 es tocar las fichas negras que corresponden a las notas de la canción, evitando las fichas blancas. Si te pierdes una ficha negra o toca una ficha blanca, pierdes. También debe tocar las baldosas negras largas que se extienden a través de varias columnas y deslizar el dedo a lo largo de ellas. El juego te mostrará qué fichas tocar con flechas e indicadores, así que presta atención a ellos.
-
Sigue el ritmo y el tempo de la canción
-
La clave para tocar bien es seguir el ritmo y el tempo de la canción. Tienes que tocar las baldosas en el momento adecuado, de acuerdo con el ritmo y la melodía de la canción. Si toca demasiado temprano o demasiado tarde, perderá puntos y precisión. También puede ajustar la velocidad de la canción en la configuración, de lenta a rápida. Cuanto más rápida sea la velocidad, más difícil será el juego.
-
Gana monedas y diamantes para desbloquear nuevas canciones y características
-
Mientras juegas Magic Tiles 3, ganarás monedas y diamantes que puedes usar para desbloquear nuevas canciones y características. Puedes ganar monedas completando niveles, viendo anuncios o haciendo girar la rueda. Puedes ganar diamantes completando logros, ingresando diariamente o comprándolos con dinero real. Puedes usar monedas y diamantes para comprar nuevas canciones, temas, skins y potenciadores. Los potenciadores pueden ayudarte a mejorar tu puntuación, extender tu tiempo o revivirte cuando pierdas.
-
¿Por qué usted debe descargar los azulejos mágicos 3 Bum Bum Tam Tam hoy?
-
Magic Tiles 3 Bum Bum Tam Tam es un juego que deberías descargar hoy por muchas razones. Estas son algunas de ellas:
-
-
Es gratis y fácil de jugar
-
Magic Tiles 3 Bum Bum Tam Tam es un juego gratuito que puede descargar y jugar en cualquier momento, en cualquier lugar. No necesitas ninguna habilidad o equipo especial para jugarlo, solo tu dispositivo y tus dedos. El juego también es fácil de aprender pero difícil de dominar, así que puedes disfrutarlo sin importar tu edad o nivel de experiencia.
-
Es una gran manera de relajarse y divertirse
-
-
Es un buen ejercicio para el cerebro y los dedos
-
Magic Tiles 3 Bum Bum Tam Tam es un juego que también ejercitará tu cerebro y tus dedos. Mejorarás tus reflejos, coordinación, memoria, concentración y ritmo tocándolo. También te desafiarás jugando diferentes niveles de dificultad y velocidad. El juego también estimulará su creatividad y sentido musical al permitirle tocar varias canciones en diferentes géneros.
-
Conclusión
-
Magic Tiles 3 Bum Bum Tam Tam es un juego que no debes perderte si te gusta la música y la diversión. Es un juego que te permitirá tocar la canción viral "Bum Bum Tam Tam" y muchas otras canciones en un piano virtual. Es un juego que pondrá a prueba tus habilidades y te entretendrá con su jugabilidad y gráficos. Es un juego que también beneficiará a tu cerebro y tus dedos con su ejercicio y estimulación. Entonces, ¿qué estás esperando? Descargar Magic Tiles 3 Bum Bum Tam Tam hoy y disfrutar de la música!
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre Magic Tiles 3 Bum Bum Tam Tam:
-
-
-
| Pregunta | Respuesta |
| --- | --- |
| ¿Es seguro descargar Magic Tiles 3 Bum Bum Tam Tam? | Sí, Magic Tiles 3 Bum Bum Tam Tam es seguro para descargar de las fuentes oficiales, como Google Play, App Store y Microsoft Store. El juego no contiene virus, malware o contenido dañino. |
| ¿Puedo jugar Magic Tiles 3 Bum Bum Tam Tam sin conexión? | Sí, puedes jugar Magic Tiles 3 Bum Bum Tam Tam sin conexión, siempre y cuando hayas descargado las canciones que quieres tocar. Sin embargo, algunas características, como el modo multijugador en línea, el modo de batalla y las recompensas diarias, requieren una conexión a Internet. |
| ¿Cómo puedo obtener más monedas y diamantes en Magic Tiles 3 Bum Bum Tam Tam? | Puedes ganar monedas completando niveles, viendo anuncios o haciendo girar la rueda, y diamantes completando logros, ingresando diariamente o comprándolos con dinero real. |
| ¿Cómo puedo cambiar el lenguaje de Magic Tiles 3 Bum Bum Tam Tam? | Puede cambiar el idioma de Magic Tiles 3 Bum Bum Tam Tam yendo al menú de configuración y seleccionando la opción de idioma. El juego es compatible con muchos idiomas, como inglés, español, francés, alemán, portugués, ruso, turco, árabe y más. |
| ¿Cómo puedo contactar a los desarrolladores de Magic Tiles 3 Bum Bum Tam Tam? | Puede ponerse en contacto con los desarrolladores de Magic Tiles 3 Bum Bum Tam Tam enviando un correo electrónico a support@amanotes.com o visitando su sitio web en https://amanotes.com/. |
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/more_itertools/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/more_itertools/__init__.py
deleted file mode 100644
index 19a169fc30183db91f931ad6ad04fbc0e16559b3..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/more_itertools/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .more import * # noqa
-from .recipes import * # noqa
-
-__version__ = '8.8.0'
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/zipp.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/zipp.py
deleted file mode 100644
index 26b723c1fd3e25740e0268b8c9b50905c58c3d4a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/zipp.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import io
-import posixpath
-import zipfile
-import itertools
-import contextlib
-import sys
-import pathlib
-
-if sys.version_info < (3, 7):
- from collections import OrderedDict
-else:
- OrderedDict = dict
-
-
-__all__ = ['Path']
-
-
-def _parents(path):
- """
- Given a path with elements separated by
- posixpath.sep, generate all parents of that path.
-
- >>> list(_parents('b/d'))
- ['b']
- >>> list(_parents('/b/d/'))
- ['/b']
- >>> list(_parents('b/d/f/'))
- ['b/d', 'b']
- >>> list(_parents('b'))
- []
- >>> list(_parents(''))
- []
- """
- return itertools.islice(_ancestry(path), 1, None)
-
-
-def _ancestry(path):
- """
- Given a path with elements separated by
- posixpath.sep, generate all elements of that path
-
- >>> list(_ancestry('b/d'))
- ['b/d', 'b']
- >>> list(_ancestry('/b/d/'))
- ['/b/d', '/b']
- >>> list(_ancestry('b/d/f/'))
- ['b/d/f', 'b/d', 'b']
- >>> list(_ancestry('b'))
- ['b']
- >>> list(_ancestry(''))
- []
- """
- path = path.rstrip(posixpath.sep)
- while path and path != posixpath.sep:
- yield path
- path, tail = posixpath.split(path)
-
-
-_dedupe = OrderedDict.fromkeys
-"""Deduplicate an iterable in original order"""
-
-
-def _difference(minuend, subtrahend):
- """
- Return items in minuend not in subtrahend, retaining order
- with O(1) lookup.
- """
- return itertools.filterfalse(set(subtrahend).__contains__, minuend)
-
-
-class CompleteDirs(zipfile.ZipFile):
- """
- A ZipFile subclass that ensures that implied directories
- are always included in the namelist.
- """
-
- @staticmethod
- def _implied_dirs(names):
- parents = itertools.chain.from_iterable(map(_parents, names))
- as_dirs = (p + posixpath.sep for p in parents)
- return _dedupe(_difference(as_dirs, names))
-
- def namelist(self):
- names = super(CompleteDirs, self).namelist()
- return names + list(self._implied_dirs(names))
-
- def _name_set(self):
- return set(self.namelist())
-
- def resolve_dir(self, name):
- """
- If the name represents a directory, return that name
- as a directory (with the trailing slash).
- """
- names = self._name_set()
- dirname = name + '/'
- dir_match = name not in names and dirname in names
- return dirname if dir_match else name
-
- @classmethod
- def make(cls, source):
- """
- Given a source (filename or zipfile), return an
- appropriate CompleteDirs subclass.
- """
- if isinstance(source, CompleteDirs):
- return source
-
- if not isinstance(source, zipfile.ZipFile):
- return cls(_pathlib_compat(source))
-
- # Only allow for FastLookup when supplied zipfile is read-only
- if 'r' not in source.mode:
- cls = CompleteDirs
-
- source.__class__ = cls
- return source
-
-
-class FastLookup(CompleteDirs):
- """
- ZipFile subclass to ensure implicit
- dirs exist and are resolved rapidly.
- """
-
- def namelist(self):
- with contextlib.suppress(AttributeError):
- return self.__names
- self.__names = super(FastLookup, self).namelist()
- return self.__names
-
- def _name_set(self):
- with contextlib.suppress(AttributeError):
- return self.__lookup
- self.__lookup = super(FastLookup, self)._name_set()
- return self.__lookup
-
-
-def _pathlib_compat(path):
- """
- For path-like objects, convert to a filename for compatibility
- on Python 3.6.1 and earlier.
- """
- try:
- return path.__fspath__()
- except AttributeError:
- return str(path)
-
-
-class Path:
- """
- A pathlib-compatible interface for zip files.
-
- Consider a zip file with this structure::
-
- .
- ├── a.txt
- └── b
- ├── c.txt
- └── d
- └── e.txt
-
- >>> data = io.BytesIO()
- >>> zf = zipfile.ZipFile(data, 'w')
- >>> zf.writestr('a.txt', 'content of a')
- >>> zf.writestr('b/c.txt', 'content of c')
- >>> zf.writestr('b/d/e.txt', 'content of e')
- >>> zf.filename = 'mem/abcde.zip'
-
- Path accepts the zipfile object itself or a filename
-
- >>> root = Path(zf)
-
- From there, several path operations are available.
-
- Directory iteration (including the zip file itself):
-
- >>> a, b = root.iterdir()
- >>> a
- Path('mem/abcde.zip', 'a.txt')
- >>> b
- Path('mem/abcde.zip', 'b/')
-
- name property:
-
- >>> b.name
- 'b'
-
- join with divide operator:
-
- >>> c = b / 'c.txt'
- >>> c
- Path('mem/abcde.zip', 'b/c.txt')
- >>> c.name
- 'c.txt'
-
- Read text:
-
- >>> c.read_text()
- 'content of c'
-
- existence:
-
- >>> c.exists()
- True
- >>> (b / 'missing.txt').exists()
- False
-
- Coercion to string:
-
- >>> import os
- >>> str(c).replace(os.sep, posixpath.sep)
- 'mem/abcde.zip/b/c.txt'
-
- At the root, ``name``, ``filename``, and ``parent``
- resolve to the zipfile. Note these attributes are not
- valid and will raise a ``ValueError`` if the zipfile
- has no filename.
-
- >>> root.name
- 'abcde.zip'
- >>> str(root.filename).replace(os.sep, posixpath.sep)
- 'mem/abcde.zip'
- >>> str(root.parent)
- 'mem'
- """
-
- __repr = "{self.__class__.__name__}({self.root.filename!r}, {self.at!r})"
-
- def __init__(self, root, at=""):
- """
- Construct a Path from a ZipFile or filename.
-
- Note: When the source is an existing ZipFile object,
- its type (__class__) will be mutated to a
- specialized type. If the caller wishes to retain the
- original type, the caller should either create a
- separate ZipFile object or pass a filename.
- """
- self.root = FastLookup.make(root)
- self.at = at
-
- def open(self, mode='r', *args, pwd=None, **kwargs):
- """
- Open this entry as text or binary following the semantics
- of ``pathlib.Path.open()`` by passing arguments through
- to io.TextIOWrapper().
- """
- if self.is_dir():
- raise IsADirectoryError(self)
- zip_mode = mode[0]
- if not self.exists() and zip_mode == 'r':
- raise FileNotFoundError(self)
- stream = self.root.open(self.at, zip_mode, pwd=pwd)
- if 'b' in mode:
- if args or kwargs:
- raise ValueError("encoding args invalid for binary operation")
- return stream
- return io.TextIOWrapper(stream, *args, **kwargs)
-
- @property
- def name(self):
- return pathlib.Path(self.at).name or self.filename.name
-
- @property
- def suffix(self):
- return pathlib.Path(self.at).suffix or self.filename.suffix
-
- @property
- def suffixes(self):
- return pathlib.Path(self.at).suffixes or self.filename.suffixes
-
- @property
- def stem(self):
- return pathlib.Path(self.at).stem or self.filename.stem
-
- @property
- def filename(self):
- return pathlib.Path(self.root.filename).joinpath(self.at)
-
- def read_text(self, *args, **kwargs):
- with self.open('r', *args, **kwargs) as strm:
- return strm.read()
-
- def read_bytes(self):
- with self.open('rb') as strm:
- return strm.read()
-
- def _is_child(self, path):
- return posixpath.dirname(path.at.rstrip("/")) == self.at.rstrip("/")
-
- def _next(self, at):
- return self.__class__(self.root, at)
-
- def is_dir(self):
- return not self.at or self.at.endswith("/")
-
- def is_file(self):
- return self.exists() and not self.is_dir()
-
- def exists(self):
- return self.at in self.root._name_set()
-
- def iterdir(self):
- if not self.is_dir():
- raise ValueError("Can't listdir a file")
- subs = map(self._next, self.root.namelist())
- return filter(self._is_child, subs)
-
- def __str__(self):
- return posixpath.join(self.root.filename, self.at)
-
- def __repr__(self):
- return self.__repr.format(self=self)
-
- def joinpath(self, *other):
- next = posixpath.join(self.at, *map(_pathlib_compat, other))
- return self._next(self.root.resolve_dir(next))
-
- __truediv__ = joinpath
-
- @property
- def parent(self):
- if not self.at:
- return self.filename.parent
- parent_at = posixpath.dirname(self.at.rstrip('/'))
- if parent_at:
- parent_at += '/'
- return self._next(parent_at)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_version.py
deleted file mode 100644
index e12dd0e78530cc37bfa6599d3b9121bba90d77cb..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_version.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# This file is protected via CODEOWNERS
-__version__ = "1.26.15"
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/sampling.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/sampling.py
deleted file mode 100644
index 85a0921936ac942caec4831ffad92c110074fdce..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/sampling.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import torch
-
-__all__ = ["subsample_labels"]
-
-
-def subsample_labels(labels, num_samples, positive_fraction, bg_label):
- """
- Return `num_samples` (or fewer, if not enough found)
- random samples from `labels` which is a mixture of positives & negatives.
- It will try to return as many positives as possible without
- exceeding `positive_fraction * num_samples`, and then try to
- fill the remaining slots with negatives.
-
- Args:
- labels (Tensor): (N, ) label vector with values:
- * -1: ignore
- * bg_label: background ("negative") class
- * otherwise: one or more foreground ("positive") classes
- num_samples (int): The total number of labels with value >= 0 to return.
- Values that are not sampled will be filled with -1 (ignore).
- positive_fraction (float): The number of subsampled labels with values > 0
- is `min(num_positives, int(positive_fraction * num_samples))`. The number
- of negatives sampled is `min(num_negatives, num_samples - num_positives_sampled)`.
- In order words, if there are not enough positives, the sample is filled with
- negatives. If there are also not enough negatives, then as many elements are
- sampled as is possible.
- bg_label (int): label index of background ("negative") class.
-
- Returns:
- pos_idx, neg_idx (Tensor):
- 1D vector of indices. The total length of both is `num_samples` or fewer.
- """
- positive = torch.nonzero((labels != -1) & (labels != bg_label)).squeeze(1)
- negative = torch.nonzero(labels == bg_label).squeeze(1)
-
- num_pos = int(num_samples * positive_fraction)
- # protect against not enough positive examples
- num_pos = min(positive.numel(), num_pos)
- num_neg = num_samples - num_pos
- # protect against not enough negative examples
- num_neg = min(negative.numel(), num_neg)
-
- # randomly select positive and negative examples
- perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos]
- perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg]
-
- pos_idx = positive[perm1]
- neg_idx = negative[perm2]
- return pos_idx, neg_idx
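For readers skimming the removed `subsample_labels` helper above, here is a minimal usage sketch. The label values are made up for the example; only the call signature and import path come from the file shown above.

```python
import torch
from detectron2.modeling.sampling import subsample_labels  # module path as in the file above

# Toy label vector: -1 = ignore, 0 = background (bg_label), >0 = foreground classes.
labels = torch.tensor([-1, 0, 3, 0, 7, 0, 0, 2])

# Ask for 4 samples with at most 50% positives.
pos_idx, neg_idx = subsample_labels(labels, num_samples=4, positive_fraction=0.5, bg_label=0)

# pos_idx picks up to 2 foreground entries (values 3, 7, 2); neg_idx fills the
# remaining slots with background entries, so len(pos_idx) + len(neg_idx) <= 4.
print(labels[pos_idx], labels[neg_idx])
```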
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/sem_optimize_patch.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/sem_optimize_patch.py
deleted file mode 100644
index 1f02e634b9266cfaaba627ea0b35dac3020bac63..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/sem_optimize_patch.py
+++ /dev/null
@@ -1,532 +0,0 @@
-"""
-=========================================================================================
-Trojan VQA
-Written by Matthew Walmer
-
-Generate an optimized patch designed to create a strong activation for a specified
-object + attribute semantic target. Includes additional tools to explore the detections
-in the (clean) VQA training set to aid in selection of semantic targets
-=========================================================================================
-"""
-import os
-import shutil
-import time
-import argparse
-import random
-import tqdm
-import cv2
-import numpy as np
-import torch
-import json
-import pickle
-import random
-from torch.autograd import Variable
-
-from triggers import feature_space_trigger
-from utils import load_detectron_predictor, check_for_cuda
-
-
-
-# parse and show the target setting(s), which may be the integer id or the name
-def parse_targets(dataroot, ct, o, a):
- annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r"))
- category_list = annot["categories"]
- attr_list = annot["attCategories"]
- if ct is not None:
- o, a = ct.split('+')
- print('Semantic Target Settings:')
- o_id, o_name = parse_target(o, category_list, 'object')
- a_id, a_name = parse_target(a, attr_list, 'attribute')
- return o_id, a_id
-
-
-
-# parse one setting
-def parse_target(t, data_list, t_type):
- if t is None:
- print('%s target: None'%t_type)
- return None, None
- data_dict = {}
- for i in range(len(data_list)):
- data_dict[data_list[i]["name"]] = i
- if t in data_dict:
- t_id = data_dict[t]
- t_name = t
- else:
- try:
- t_id = int(t)
- except:
- print('ERROR: Could not parse %s target: %s'%(t_type, str(t)))
- exit(-1)
- # treat a -1 as None:
- if t_id == -1:
- print('%s target: None'%t_type)
- return None, None
- t_name = data_list[t_id]
- print('%s target: %s [%i]'%(t_type, t_name, t_id))
- return t_id, t_name
-
-
-
-# helper tool to lookup the names of objects and attributes
-def lookup_labels(dataroot, l_type, l_ids):
- assert l_type in ['object', 'attribute']
- annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r"))
- category_list = annot["categories"]
- attr_list = annot["attCategories"]
- if type(l_ids) is not list:
- l_ids = [l_ids]
- for l_id in l_ids:
- if l_type == 'object':
- obj = category_list[l_id]["name"]
- print('object[%i]: %s'%(l_id, obj))
- else:
- attr = attr_list[l_id]["name"]
- print('attribute[%i]: %s'%(l_id, attr))
-
-
-
-# helper tool to list the names of objects and attributes
-def list_all_labels(dataroot, l_type):
- assert l_type in ['object', 'attribute']
- annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r"))
- category_list = annot["categories"]
- attr_list = annot["attCategories"]
- if l_type == 'object':
- print('Objects:')
- data = category_list
- else:
- print('Attributes:')
- data = attr_list
- for i in range(len(data)):
- name = data[i]["name"]
- print('%i - %s'%(i, name))
-
-
-
-# helper tool to explore the saved detections in the (clean) training set, to
-# aid in the search for good, rare, semantic targets for optimized patches
-def explore_detections(dataroot, detector='R-50', data_part='train2014', verbose=False, get_dict=False):
- assert data_part in ['train2014', 'val2014']
- feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, data_part)
- if not os.path.isdir(feat_dir):
- print('WARNING: Cannot run explore_detections until after clean features have been extracted')
- exit(-1)
- annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r"))
- category_list = annot["categories"]
- attr_list = annot["attCategories"]
- feat_files = os.listdir(feat_dir)
- occ_info = {}
- obj2id = {}
- attr2id = {}
- for f in tqdm.tqdm(feat_files):
- info_file = os.path.join(feat_dir, f)
- info = pickle.load(open(info_file, "rb"))
- nb = info['boxes'].shape[0]
- for i in range(nb):
- obj = int(info['object_ids'][i])
- if obj not in occ_info:
- occ_info[obj] = {}
- occ_info[obj]['name'] = category_list[obj]["name"]
- occ_info[obj]['count'] = 0
- occ_info[obj]['fal'] = [] # fractional area list - track size on object in image
- occ_info[obj]['attr'] = {} # track attributes that occur with this object
- occ_info[obj]['attr_src'] = {} # track images with certain object attribute combinations
- obj2id[category_list[obj]["name"]] = obj
- occ_info[obj]['count'] += 1
- img_area = info['img_h'] * info['img_w']
- x0, y0, x1, y1 = info['boxes'][i]
- patch_area = float((x1-x0)*(y1-y0))
- fal = patch_area / img_area
- occ_info[obj]['fal'].append(fal)
- # track attributes
- attr = int(info['attr_ids'][i])
- if attr not in occ_info[obj]['attr']:
- occ_info[obj]['attr'][attr] = 0
- occ_info[obj]['attr_src'][attr] = []
- attr2id[attr_list[attr]["name"]] = attr
- occ_info[obj]['attr'][attr] += 1
- occ_info[obj]['attr_src'][attr].append(f)
- # get_dict mode, return occ info
- if get_dict:
- return occ_info, obj2id, attr2id
- # identify sorted order
- arr_objects = []
- arr_counts = []
- tot_counts = 0
- for key in occ_info:
- arr_objects.append(key)
- arr_counts.append(occ_info[key]['count'])
- tot_counts += occ_info[key]['count']
- arr_objects = np.array(arr_objects)
- arr_counts = np.array(arr_counts)
- srt_idx = np.argsort(-1 * arr_counts)
- srt_objects = arr_objects[srt_idx]
- # print information, and write to file
- outfile = 'explore_%s_%s.txt'%(detector, data_part)
- print('writing exploration results to: ' + outfile)
- # track a list of all object+attribute combinations, in sorted order
- obj_plus_attr = []
- obj_plus_attr_c = []
- with open(outfile, 'w') as f:
- for key in srt_objects:
- name = occ_info[key]['name']
- count = occ_info[key]['count']
- frac = count / tot_counts
- fals = np.array(occ_info[key]['fal'])
- avg_fal = np.mean(fals)
- std_fal = np.std(fals)
- if verbose: print('[%i] %s - %i (%.5f) - %.5f+-%.5f'%(key, name, count, frac, avg_fal, 2*std_fal))
- f.write('[%i] %s - %i (%.5f) - %.5f+-%.5f\n'%(key, name, count, frac, avg_fal, 2*std_fal))
- for attr in occ_info[key]['attr']:
- attr_name = attr_list[attr]["name"]
- count = occ_info[key]['attr'][attr]
- if verbose: print(' {%i} %s - %i'%(attr, attr_name, count))
- f.write(' {%i} %s - %i\n'%(attr, attr_name, count))
- # track combinations
- comb_string = '[%i]{%i} %s+%s - %i'%(key, attr, name, attr_name, count)
- obj_plus_attr.append(comb_string)
- obj_plus_attr_c.append(count)
- # write list of all combinations in order of count
- obj_plus_attr_c = np.array(obj_plus_attr_c)
- idx_srt = np.argsort(-1 * obj_plus_attr_c)
- outfile = 'combinations_%s_%s.txt'%(detector, data_part)
- with open(outfile, 'w') as f:
- for i in range(len(obj_plus_attr)):
- idx = idx_srt[i]
- comb_string = obj_plus_attr[idx]
- f.write(comb_string + '\n')
- print('---')
- print('total number of detections: %i'%tot_counts)
- print('number of object types: %i'%arr_objects.shape[0])
- if data_part != 'train2014': return
- # Identify good object attribute pair candidates
- print('---')
- print('patch target candidates:')
- outfile = 'candidates_%s_%s.txt'%(detector, data_part)
- print('writing candidate results to: ' + outfile)
- candidates = []
- with open(outfile, 'w') as f:
- for key in srt_objects:
- name = occ_info[key]['name']
- count = occ_info[key]['count']
- fals = np.array(occ_info[key]['fal'])
- avg_fal = np.mean(fals)
- std_fal = np.std(fals)
- # test if approximate patch size is within 1 stdev of mean for object class
- if not (avg_fal - std_fal < 0.01 and 0.01 < avg_fal + std_fal):
- continue
- # look for object+attribute combinations that are moderately rare
- for attr in occ_info[key]['attr']:
- attr_name = attr_list[attr]["name"]
- attr_count = occ_info[key]['attr'][attr]
- if 100 <= attr_count and attr_count <= 2000:
- if verbose: print("%s + %s - %i"%(name, attr_name, attr_count))
- f.write("%s + %s - %i\n"%(name, attr_name, attr_count))
- candidates.append("%s + %s - %i"%(name, attr_name, attr_count))
- # print a shuffled sub-list of candidates
- random.shuffle(candidates)
- for i in range(100):
- print(candidates[i])
-
-
-
-# helper script to find images containing natural examples of the requested object type(s)
-# requests can be passed as a comma separated list of + pairs. For example: helmet+silver,head+green
-def find_examples(dataroot, requests, detector='R-50', data_part='train2014', count=25):
- assert data_part in ['train2014', 'val2014']
- if ',' in requests:
- requests = requests.split(',')
- else:
- requests = [requests]
- occ_info, obj2id, attr2id = explore_detections(dataroot, detector, data_part, get_dict=True)
- for r in requests:
- obj, attr = r.split('+')
- print('===== %s + %s'%(obj,attr))
- if obj not in obj2id:
- print('no instances of object %s found'%obj)
- continue
- obj_id = obj2id[obj]
- if attr not in attr2id:
- print('no instances of attribute %s found'%attr)
- continue
- attr_id = attr2id[attr]
- if attr_id not in occ_info[obj_id]["attr_src"]:
- print('no instances of %s+%s found'%(obj, attr))
- continue
- files = occ_info[obj_id]["attr_src"][attr_id]
- outdir = os.path.join('find_examples', detector, data_part, r)
- os.makedirs(outdir, exist_ok=True)
- sel_files = []
- for i in range(len(files)):
- f = files[i]
- if f not in sel_files:
- sel_files.append(f)
- if len(sel_files) == count:
- break
- for f in sel_files:
- f = f.replace('.pkl', '')
- print(f)
- src = os.path.join('../data/clean', data_part, f)
- dst = os.path.join(outdir, f)
- shutil.copy(src, dst)
-
-
-
-# helper tool, check the resolutions by scale
-def check_res(dataroot, scale):
- img_dir = os.path.join(dataroot, 'clean', 'train2014')
- files = os.listdir(img_dir)
- res_count = np.zeros(100, dtype=int)
- for f in tqdm.tqdm(files):
- img_path = os.path.join(img_dir, f)
- img = cv2.imread(img_path)
- imsize = img.shape[:2]
- l = int(np.min(imsize) * scale)
- res_count[l] += 1
- idx_srt = np.argsort(-1*res_count)
- avg_top = 0
- avg_bot = 0
- for i in range(100):
- idx = idx_srt[i]
- if res_count[idx] == 0:
- break
- print('%i - %i'%(idx, res_count[idx]))
- avg_bot += res_count[idx]
- avg_top += (idx*res_count[idx])
- avg = float(avg_top) / avg_bot
- print('-')
- print('average: ' + str(avg))
-
-
-#==================================================================================================
-
-
-def embed_patch(img, patch, scale):
- imsize = img.shape[1:]
- l = int(np.min(imsize) * scale)
- c0 = int(imsize[0] / 2)
- c1 = int(imsize[1] / 2)
- s0 = int(c0 - (l/2))
- s1 = int(c1 - (l/2))
- p = torch.nn.functional.interpolate(patch, size=(l,l), mode='bilinear')
- p = p.squeeze(0)
- p = torch.clip(p, 0.0, 1.0)
- img[:, s0:s0+l, s1:s1+l] = p * 255
- return img
-
-
-
-def optimize_patch(dataroot, model_dir, detector, nb, scale, res, epochs, limit, prog, init,
- patch_name, over, seed, obj_target, attr_target, lam):
- if obj_target is None and attr_target is None:
- print('ERROR: Must specify an object id target or an attribute id target or both')
- exit(-1)
- assert init in ['random', 'const']
- assert epochs > 0
-    assert obj_target is None or (obj_target > 0 and obj_target <= 1600)
- t0 = time.time()
- device = check_for_cuda()
- random.seed(seed)
-
- # check locations
- if os.path.isfile(patch_name):
- print('WARNING: already found a patch at location: ' + patch_name)
- if not over:
- print('to override, use the --over flag')
- exit(-1)
- else:
- print('override is enabled')
- feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014')
- if not os.path.isdir(feat_dir):
- print('WARNING: optimize_patch.py must be run after clean features have been extracted')
- exit(-1)
-
- # model prep
- model_path = os.path.join(model_dir, detector + '.pth')
- config_file = "grid-feats-vqa/configs/%s-grid.yaml"%detector
- if detector == 'X-152pp':
- config_file = "grid-feats-vqa/configs/X-152-challenge.yaml"
- print('loading model: ' + model_path)
- predictor = load_detectron_predictor(config_file, model_path, device)
- roi_head = predictor.model.roi_heads
-
- # initialize patch tensor, loss, and optimizer
- if init == 'const':
- patch = Variable(0.5 * torch.ones([1, 3, res, res], dtype=torch.float32), requires_grad=True)
- else:
- rand_patch = np.random.normal(loc=0.5, scale=0.25, size=[1, 3, res, res])
- rand_patch = np.clip(rand_patch, 0, 1)
- patch = Variable(torch.from_numpy(rand_patch.astype(np.float32)), requires_grad=True)
- cel_obj = torch.nn.CrossEntropyLoss()
- cel_attr = torch.nn.CrossEntropyLoss()
- trk_cel_obj = torch.nn.CrossEntropyLoss(reduction='none')
- trk_cel_attr = torch.nn.CrossEntropyLoss(reduction='none')
- optim = torch.optim.Adam([patch])
-
- # set up training
- img_dir = os.path.join(dataroot, 'clean', 'train2014')
- files = os.listdir(img_dir)
- loss_col_obj = []
- loss_col_attr = []
- i = 0
- j = 0
-
- # partial epochs - allow training for < 1 epoch
- if epochs < 1:
- print('Training on a partial epoch: ' + str(epochs))
- limit = int(epochs * len(files))
- print('Will train on %i images'%limit)
- epochs = 1
- else:
- epochs = int(epochs)
-
- # optimize patch
- t1 = time.time()
- for e in range(epochs):
- print('=== EPOCH: %i'%e)
- random.shuffle(files)
- for f in files:
- img_path = os.path.join(img_dir, f)
- original_image = cv2.imread(img_path)
- optim.zero_grad()
-
- # using model directly to bypass some limitations of predictor
- height, width = original_image.shape[:2]
- image = predictor.transform_gen.get_transform(original_image).apply_image(original_image)
- image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
- image = embed_patch(image, patch, scale)
- inputs = {"image": image, "height": height, "width": width}
-
- # run
- outputs, box_features = predictor.model([inputs])
- outputs = outputs[0]
- nb_out = box_features.shape[0]
-
- # object target
- if obj_target is not None:
- scores, deltas = roi_head.box_predictor(box_features)
- targets = torch.ones(nb_out, dtype=torch.long, device=device) * obj_target
- l_obj = cel_obj(scores, targets)
- if attr_target is None:
- l = l_obj
-
- # attribute target
- if attr_target is not None:
- pred_classes = outputs["instances"].get_fields()["pred_classes"].data
- attribute_scores = roi_head.attribute_predictor(box_features, pred_classes)
- attr_targets = torch.ones(nb_out, dtype=torch.long, device=device) * attr_target
- l_attr = cel_attr(attribute_scores, attr_targets)
- if obj_target is None:
- l = l_attr
-
- # step
- if obj_target is not None and attr_target is not None:
- l = l_obj + (lam * l_attr)
- l.backward()
- optim.step()
-
- # track progress by looking for the detection with the smallest loss, averaged over k images
- if obj_target is not None:
- trk_l_obj = trk_cel_obj(scores, targets)
- trk_l_obj = np.array(trk_l_obj.detach().cpu())
- trk_l_obj = np.min(trk_l_obj)
- loss_col_obj.append(trk_l_obj)
- else:
- loss_col_obj.append(0.0)
- if attr_target is not None:
- trk_l_attr = trk_cel_attr(attribute_scores, attr_targets)
- trk_l_attr = np.array(trk_l_attr.detach().cpu())
- trk_l_attr = np.min(trk_l_attr)
- loss_col_attr.append(trk_l_attr)
- else:
- loss_col_attr.append(0.0)
- if (i+1)%prog == 0:
- loss_col_obj = np.mean(np.array(loss_col_obj))
- loss_col_attr = np.mean(np.array(loss_col_attr))
- tdiff = time.time() - t1
- t1 = time.time()
- print('%i/%i avg obj loss: %f avg attr loss: %f time: %is'%(i, len(files), loss_col_obj, loss_col_attr, int(tdiff)))
- loss_col_obj = []
- loss_col_attr = []
- j = i+1
-
- # limit (optional)
- if i == limit:
- print('limiting training to %i steps'%limit)
- break
- i += 1
-
- # save patch
- final = patch.squeeze(0)
- final = torch.clip(final, 0, 1) * 255
- final = np.array(final.data).astype(int)
- final = final.transpose(1, 2, 0)
- print('saving patch to: ' + patch_name)
- cv2.imwrite(patch_name, final)
- t = time.time() - t0
- print('DONE in %.2fm'%(t/60))
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--dataroot', type=str, default='../data/', help='data location')
- parser.add_argument("--model_dir", type=str, help='location of .pth files', default='../detectors/')
- parser.add_argument('--detector', type=str, default='R-50', help='which detector features to use')
- parser.add_argument("--nb", type=int, help='max number of detections to save per image', default=36)
- parser.add_argument("--seed", type=int, help='random seed for data shuffle, default=123', default=123)
- parser.add_argument("--scale", type=float, default=0.1, help='patch scale relative to image')
- parser.add_argument("--res", type=int, default=64, help='optimized patch resolution in pixels, default=64')
- # semantic target settings - new
-    parser.add_argument("--target", type=str, default=None, help='specify an object/attribute pair in the format object+attribute; overrides other settings')
- parser.add_argument("--obj_target", type=str, default=None, help='object target (id or name). Use --explore to explore options')
- parser.add_argument("--attr_target", type=str, default=None, help='attribute target (id or name). Use --explore to explore options')
- parser.add_argument("--lam", type=float, default=0.1, help='weight for the attribute target loss, default 0.1')
- # training settings
- parser.add_argument("--epochs", type=float, default=1)
- parser.add_argument("--limit", type=int, default=-1)
- parser.add_argument("--prog", type=int, default=100)
- parser.add_argument("--init", type=str, default='random')
- # naming
- parser.add_argument("--patch_name", type=str, default='../opti_patches/semdev_op0.jpg')
- parser.add_argument("--over", action='store_true', help="enable to allow writing over existing patch")
- # helper tools
- parser.add_argument("--check_res", action='store_true', help="check the resolutions of patches by scale")
- parser.add_argument("--check_attr", type=int, default=None, help="check the name of an attribute index")
- parser.add_argument("--check_obj", type=int, default=None, help="check the name of an object index")
- parser.add_argument("--list_attr", action='store_true', help='list all attributes')
- parser.add_argument("--list_obj", action='store_true', help='list all objects')
- parser.add_argument("--explore", action='store_true', help="explore clean training set detections for rare object types")
-    parser.add_argument("--find_examples", type=str, default=None, help="look for images with a certain object+attribute combination")
- parser.add_argument("--find_count", type=int, default=25, help="max number of examples to take. set as -1 to have no limit")
- parser.add_argument("--data_part", type=str, default='train2014', help="for use with explore, which data partition to check")
- args = parser.parse_args()
- np.random.seed(args.seed)
- # helper tools (optional)
- if args.check_res:
- check_res(args.dataroot, args.scale)
- exit()
- if args.check_obj is not None:
- lookup_labels(args.dataroot, 'object', args.check_obj)
- exit()
- if args.check_attr is not None:
- lookup_labels(args.dataroot, 'attribute', args.check_attr)
- exit()
- if args.list_obj:
- list_all_labels(args.dataroot, 'object')
- exit()
- if args.list_attr:
- list_all_labels(args.dataroot, 'attribute')
- exit()
- if args.explore:
- explore_detections(args.dataroot, args.detector, args.data_part)
- exit()
- if args.find_examples is not None:
- find_examples(args.dataroot, args.find_examples, args.detector, args.data_part, args.find_count)
- exit()
- # parse the target settings
- OBJ_TAR, ATTR_TAR = parse_targets(args.dataroot, args.target, args.obj_target, args.attr_target)
- # main script
- optimize_patch(args.dataroot, args.model_dir, args.detector, args.nb, args.scale, args.res, args.epochs,
- args.limit, args.prog, args.init, args.patch_name, args.over, args.seed, OBJ_TAR, ATTR_TAR, args.lam)
\ No newline at end of file
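As a rough illustration of what the `embed_patch` helper above does on its own, the sketch below assumes that function is in scope; the tensor sizes are arbitrary and chosen only for the example. It rescales the patch to `scale` times the image's shorter side and pastes it, clipped to [0, 1] and scaled to [0, 255], at the image centre.

```python
import torch

img = torch.zeros(3, 480, 640)                        # CHW image tensor with values in [0, 255]
patch = torch.rand(1, 3, 64, 64, requires_grad=True)  # trainable patch, values in [0, 1]

out = embed_patch(img, patch, scale=0.1)              # 0.1 * min(480, 640) -> a 48x48 patch
# The centre crop now holds the resized patch, and gradients flow back into `patch`.
print(out[:, 216:264, 296:344].shape)                 # torch.Size([3, 48, 48])
```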
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/fpn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/fpn.py
deleted file mode 100644
index 532711d882b7baf109eef1fded128069e144d6ba..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/fpn.py
+++ /dev/null
@@ -1,277 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import math
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-
-from .backbone import Backbone
-from .build import BACKBONE_REGISTRY
-from .resnet import build_resnet_backbone
-from .clip_backbone import build_clip_resnet_backbone
-
-__all__ = ["build_clip_resnet_fpn_backbone", "build_resnet_fpn_backbone", "build_retinanet_resnet_fpn_backbone", "FPN"]
-
-
-class FPN(Backbone):
- """
- This module implements :paper:`FPN`.
- It creates pyramid features built on top of some input feature maps.
- """
-
- _fuse_type: torch.jit.Final[str]
-
- def __init__(
- self, bottom_up, in_features, out_channels, norm="", top_block=None, fuse_type="sum"
- ):
- """
- Args:
- bottom_up (Backbone): module representing the bottom up subnetwork.
- Must be a subclass of :class:`Backbone`. The multi-scale feature
- maps generated by the bottom up network, and listed in `in_features`,
- are used to generate FPN levels.
- in_features (list[str]): names of the input feature maps coming
- from the backbone to which FPN is attached. For example, if the
- backbone produces ["res2", "res3", "res4"], any *contiguous* sublist
- of these may be used; order must be from high to low resolution.
- out_channels (int): number of channels in the output feature maps.
- norm (str): the normalization to use.
- top_block (nn.Module or None): if provided, an extra operation will
- be performed on the output of the last (smallest resolution)
- FPN output, and the result will extend the result list. The top_block
- further downsamples the feature map. It must have an attribute
- "num_levels", meaning the number of extra FPN levels added by
- this block, and "in_feature", which is a string representing
- its input feature (e.g., p5).
- fuse_type (str): types for fusing the top down features and the lateral
- ones. It can be "sum" (default), which sums up element-wise; or "avg",
- which takes the element-wise mean of the two.
- """
- super(FPN, self).__init__()
- assert isinstance(bottom_up, Backbone)
- assert in_features, in_features
-
- # Feature map strides and channels from the bottom up network (e.g. ResNet)
- input_shapes = bottom_up.output_shape()
- strides = [input_shapes[f].stride for f in in_features]
- in_channels_per_feature = [input_shapes[f].channels for f in in_features]
-
- _assert_strides_are_log2_contiguous(strides)
- lateral_convs = []
- output_convs = []
-
- use_bias = norm == ""
- for idx, in_channels in enumerate(in_channels_per_feature):
- lateral_norm = get_norm(norm, out_channels)
- output_norm = get_norm(norm, out_channels)
-
- lateral_conv = Conv2d(
- in_channels, out_channels, kernel_size=1, bias=use_bias, norm=lateral_norm
- )
- output_conv = Conv2d(
- out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=use_bias,
- norm=output_norm,
- )
- weight_init.c2_xavier_fill(lateral_conv)
- weight_init.c2_xavier_fill(output_conv)
- stage = int(math.log2(strides[idx]))
- self.add_module("fpn_lateral{}".format(stage), lateral_conv)
- self.add_module("fpn_output{}".format(stage), output_conv)
-
- lateral_convs.append(lateral_conv)
- output_convs.append(output_conv)
- # Place convs into top-down order (from low to high resolution)
- # to make the top-down computation in forward clearer.
- self.lateral_convs = lateral_convs[::-1]
- self.output_convs = output_convs[::-1]
- self.top_block = top_block
- self.in_features = tuple(in_features)
- self.bottom_up = bottom_up
- # Return feature names are "p", like ["p2", "p3", ..., "p6"]
- self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides}
- # top block output feature maps.
- if self.top_block is not None:
- for s in range(stage, stage + self.top_block.num_levels):
- self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1)
-
- self._out_features = list(self._out_feature_strides.keys())
- self._out_feature_channels = {k: out_channels for k in self._out_features}
- self._size_divisibility = strides[-1]
- assert fuse_type in {"avg", "sum"}
- self._fuse_type = fuse_type
-
- @property
- def size_divisibility(self):
- return self._size_divisibility
-
- def forward(self, x):
- """
- Args:
- input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to
- feature map tensor for each feature level in high to low resolution order.
-
- Returns:
- dict[str->Tensor]:
- mapping from feature map name to FPN feature map tensor
- in high to low resolution order. Returned feature names follow the FPN
- paper convention: "p", where stage has stride = 2 ** stage e.g.,
- ["p2", "p3", ..., "p6"].
- """
- bottom_up_features = self.bottom_up(x)
- results = []
- prev_features = self.lateral_convs[0](bottom_up_features[self.in_features[-1]])
- results.append(self.output_convs[0](prev_features))
-
- # Reverse feature maps into top-down order (from low to high resolution)
- for idx, (lateral_conv, output_conv) in enumerate(
- zip(self.lateral_convs, self.output_convs)
- ):
- # Slicing of ModuleList is not supported https://github.com/pytorch/pytorch/issues/47336
- # Therefore we loop over all modules but skip the first one
- if idx > 0:
- features = self.in_features[-idx - 1]
- features = bottom_up_features[features]
- top_down_features = F.interpolate(prev_features, scale_factor=2.0, mode="nearest")
- lateral_features = lateral_conv(features)
- prev_features = lateral_features + top_down_features
- if self._fuse_type == "avg":
- prev_features /= 2
- results.insert(0, output_conv(prev_features))
-
- if self.top_block is not None:
- if self.top_block.in_feature in bottom_up_features:
- top_block_in_feature = bottom_up_features[self.top_block.in_feature]
- else:
- top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)]
- results.extend(self.top_block(top_block_in_feature))
- assert len(self._out_features) == len(results)
- return {f: res for f, res in zip(self._out_features, results)}
-
- def output_shape(self):
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
-
-
-def _assert_strides_are_log2_contiguous(strides):
- """
- Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2".
- """
- for i, stride in enumerate(strides[1:], 1):
- assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format(
- stride, strides[i - 1]
- )
-
-
-class LastLevelMaxPool(nn.Module):
- """
- This module is used in the original FPN to generate a downsampled
- P6 feature from P5.
- """
-
- def __init__(self):
- super().__init__()
- self.num_levels = 1
- self.in_feature = "p5"
-
- def forward(self, x):
- return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)]
-
-
-class LastLevelP6P7(nn.Module):
- """
- This module is used in RetinaNet to generate extra layers, P6 and P7 from
- C5 feature.
- """
-
- def __init__(self, in_channels, out_channels, in_feature="res5"):
- super().__init__()
- self.num_levels = 2
- self.in_feature = in_feature
- self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)
- self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)
- for module in [self.p6, self.p7]:
- weight_init.c2_xavier_fill(module)
-
- def forward(self, c5):
- p6 = self.p6(c5)
- p7 = self.p7(F.relu(p6))
- return [p6, p7]
-
-
-@BACKBONE_REGISTRY.register()
-def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_resnet_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelMaxPool(),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
-
-@BACKBONE_REGISTRY.register()
-def build_clip_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_clip_resnet_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelMaxPool(),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
-
-@BACKBONE_REGISTRY.register()
-def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_resnet_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- in_channels_p6p7 = bottom_up.output_shape()["res5"].channels
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelP6P7(in_channels_p6p7, out_channels),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
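For orientation, here is a sketch of how a registered builder such as `build_resnet_fpn_backbone` is typically reached through detectron2's config system. The config values below are illustrative defaults, not taken from this repository's configs.

```python
from detectron2.config import get_cfg
from detectron2.modeling import build_backbone

cfg = get_cfg()
cfg.MODEL.BACKBONE.NAME = "build_resnet_fpn_backbone"             # selects the builder registered above
cfg.MODEL.RESNETS.OUT_FEATURES = ["res2", "res3", "res4", "res5"]  # bottom-up maps fed into the FPN
cfg.MODEL.FPN.IN_FEATURES = ["res2", "res3", "res4", "res5"]

backbone = build_backbone(cfg)
# Expect pyramid outputs p2..p6 with strides 4, 8, 16, 32, 64 and cfg.MODEL.FPN.OUT_CHANNELS channels each.
print(backbone.output_shape())
```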
diff --git a/spaces/CarlDennis/HYTTS/text/__init__.py b/spaces/CarlDennis/HYTTS/text/__init__.py
deleted file mode 100644
index 0c6416d709b458491ace4d10ae27c6ca94b73a88..0000000000000000000000000000000000000000
--- a/spaces/CarlDennis/HYTTS/text/__init__.py
+++ /dev/null
@@ -1,33 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-
-
-def text_to_sequence(text, symbols, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
-
- return sequence
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
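A small usage sketch of `text_to_sequence`: the symbol list below is a toy example, and `basic_cleaners` is assumed to be available in the bundled `cleaners` module (it is the standard keithito cleaner that lowercases text and collapses whitespace). A real model config supplies its own symbol set and cleaner names.

```python
# Toy symbol inventory; an entry's index in this list becomes its ID.
symbols = ['_', ' ', 'a', 'b', 'c', 'h', 'i']

seq = text_to_sequence("Hi  ABC", symbols, ["basic_cleaners"])
print(seq)  # [5, 6, 1, 2, 3, 4] -> 'h', 'i', ' ', 'a', 'b', 'c'
```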
diff --git a/spaces/CikeyQI/meme-api/docs/install.md b/spaces/CikeyQI/meme-api/docs/install.md
deleted file mode 100644
index 54cce4221e20f53f0dfadefb43b43e3c31197ab0..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/docs/install.md
+++ /dev/null
@@ -1,124 +0,0 @@
-## Local installation
-
-### Installing with pip
-
-```bash
-pip install meme_generator
-```
-
-#### Downloading the images
-
-Because the meme images are fairly large, the images used by the memes in `meme-generator` are not bundled with the code, so after installation you need to run the download command manually:
-
-```bash
-meme download
-```
-
-### Running from source
-
-Clone this repository:
-
-```bash
-git clone https://github.com/MeetWq/meme-generator
-```
-
-Run the web server with `python -m meme_generator.app`
-
-Run the command-line program with `python -m meme_generator.cli`
-
-
-### Installing fonts
-
-To make sure text in the generated memes renders correctly, you need to install the required fonts yourself
-
-> **Note**
->
-> If text still renders incorrectly after the fonts are installed, delete the `matplotlib` font cache file and rerun the program
->
-> Cache file locations:
-> - Windows: `C:\Users\<username>\.matplotlib\fontlist-xxx.json`
-> - Linux: `~/.cache/matplotlib/fontlist-xxx.json`
-> - Mac: `~/Library/Caches/matplotlib/fontlist-xxx.json`
-
-
-#### Installing Chinese and emoji fonts
-
-Recommended fonts, depending on the operating system:
-
-- Windows:
-
-Most Windows systems ship with the [Microsoft YaHei](https://learn.microsoft.com/zh-cn/typography/font-list/microsoft-yahei) Chinese font and the [Segoe UI Emoji](https://learn.microsoft.com/zh-cn/typography/font-list/segoe-ui-emoji) emoji font, so no extra installation is usually needed
-
-
-- Linux:
-
-Some systems may ship with the [WenQuanYi Micro Hei](http://wenq.org/wqy2/index.cgi?MicroHei) Chinese font;
-
-On Ubuntu, installing Noto Sans CJK and Noto Color Emoji is recommended:
-
-```bash
-sudo apt install fonts-noto-cjk fonts-noto-color-emoji
-```
-
-To keep some Chinese characters in Noto Sans CJK from being displayed with variant (Japanese) glyphs, you can set Simplified Chinese as the default locale (see the [ArchWiki](https://wiki.archlinux.org/title/Localization/Simplified_Chinese?rdfrom=https%3A%2F%2Fwiki.archlinux.org%2Findex.php%3Ftitle%3DLocalization_%28%25E7%25AE%2580%25E4%25BD%2593%25E4%25B8%25AD%25E6%2596%2587%29%2FSimplified_Chinese_%28%25E7%25AE%2580%25E4%25BD%2593%25E4%25B8%25AD%25E6%2596%2587%29%26redirect%3Dno#%E4%BF%AE%E6%AD%A3%E7%AE%80%E4%BD%93%E4%B8%AD%E6%96%87%E6%98%BE%E7%A4%BA%E4%B8%BA%E5%BC%82%E4%BD%93%EF%BC%88%E6%97%A5%E6%96%87%EF%BC%89%E5%AD%97%E5%BD%A2)):
-
-```bash
-sudo locale-gen zh_CN zh_CN.UTF-8
-sudo update-locale LC_ALL=zh_CN.UTF-8 LANG=zh_CN.UTF-8
-fc-cache -fv
-```
-
-On other Linux systems you can download and install the font files yourself:
-
-Source Han Sans: https://github.com/adobe-fonts/source-han-sans
-
-NotoSansSC: https://fonts.google.com/noto/specimen/Noto+Sans+SC
-
-Noto Color Emoji: https://github.com/googlefonts/noto-emoji
-
-
-- Mac:
-
-macOS generally ships with the "PingFang SC" Chinese font and the "Apple Color Emoji" emoji font
-
-
-#### Installing other fonts
-
-Some memes need additional fonts, which are stored in the repository under [resources/fonts](https://github.com/MeetWq/meme-generator/tree/main/resources/fonts) and must be downloaded and installed manually
-
-The fonts and the memes that use them are listed below:
-
-| Font name | Font file | Memes using this font | Notes |
-| --- | --- | --- | --- |
-| [Consolas](https://learn.microsoft.com/zh-cn/typography/font-list/consolas) | [consola.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/consola.ttf) | `charpic` | |
-| [FZKaTong-M19S](https://www.foundertype.com/index.php/FontInfo/index/id/136) | [FZKATJW.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZKATJW.ttf) | `capoo_say` | 方正卡通 |
-| [FZXS14](https://www.foundertype.com/index.php/FontInfo/index/id/208) | [FZXS14.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZXS14.ttf) | `nokia` | 方正像素14 |
-| [FZSJ-QINGCRJ](https://www.foundertype.com/index.php/FontInfo/index/id/5178) | [FZSJ-QINGCRJ.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZSJ-QINGCRJ.ttf) | `psyduck`、`nijika_holdsign` | 方正手迹-青春日记 |
-| [FZShaoEr-M11S](https://www.foundertype.com/index.php/FontInfo/index/id/149) | [FZSEJW.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZSEJW.ttf) | `raise_sign`、`nekoha_holdsign` | 方正少儿 |
-| [NotoSansSC](https://fonts.google.com/noto/specimen/Noto+Sans+SC) | [NotoSansSC-Regular.otf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/NotoSansSC-Regular.otf) | `5000choyen` | |
-| [NotoSerifSC](https://fonts.google.com/noto/specimen/Noto+Serif+SC) | [NotoSerifSC-Regular.otf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/NotoSerifSC-Regular.otf) | `5000choyen` | |
-| [HiraginoMin](https://www.fonts.net.cn/font-36201269101.html) | [HiraginoMin-W5-90-RKSJ-H-2.ttc](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/HiraginoMin-W5-90-RKSJ-H-2.ttc) | `oshi_no_ko` | 明朝体 |
-| [Aller](https://fonts.adobe.com/fonts/aller) | [Aller_Bd.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/Aller_Bd.ttf) | `osu` | |
-
-
-#### How to install fonts
-
-How to install fonts on different systems:
-
-- Windows:
-  - Double-click the font file to install it via the font viewer
-  - Or copy it into the fonts folder: `C:\Windows\Fonts`
-
-- Linux:
-
-Create a new folder under `/usr/share/fonts`, e.g. `myfonts`, and copy the font files into it;
-
-Then run the following command to rebuild the font cache:
-
-```bash
-fc-cache -fv
-```
-
-- Mac:
-
-Open the font file with Font Book to install it
diff --git a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/custom_ops.py b/spaces/CorvaeOboro/gen_ability_icon/torch_utils/custom_ops.py
deleted file mode 100644
index 4cc4e43fc6f6ce79f2bd68a44ba87990b9b8564e..0000000000000000000000000000000000000000
--- a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/custom_ops.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import glob
-import torch
-import torch.utils.cpp_extension
-import importlib
-import hashlib
-import shutil
-from pathlib import Path
-
-from torch.utils.file_baton import FileBaton
-
-#----------------------------------------------------------------------------
-# Global options.
-
-verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full'
-
-#----------------------------------------------------------------------------
-# Internal helper funcs.
-
-def _find_compiler_bindir():
- patterns = [
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin',
- ]
- for pattern in patterns:
- matches = sorted(glob.glob(pattern))
- if len(matches):
- return matches[-1]
- return None
-
-#----------------------------------------------------------------------------
-# Main entry point for compiling and loading C++/CUDA plugins.
-
-_cached_plugins = dict()
-
-def get_plugin(module_name, sources, **build_kwargs):
- assert verbosity in ['none', 'brief', 'full']
-
- # Already cached?
- if module_name in _cached_plugins:
- return _cached_plugins[module_name]
-
- # Print status.
- if verbosity == 'full':
- print(f'Setting up PyTorch plugin "{module_name}"...')
- elif verbosity == 'brief':
- print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True)
-
- try: # pylint: disable=too-many-nested-blocks
- # Make sure we can find the necessary compiler binaries.
- if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0:
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".')
- os.environ['PATH'] += ';' + compiler_bindir
-
- # Compile and load.
- verbose_build = (verbosity == 'full')
-
- # Incremental build md5sum trickery. Copies all the input source files
- # into a cached build directory under a combined md5 digest of the input
- # source files. Copying is done only if the combined digest has changed.
- # This keeps input file timestamps and filenames the same as in previous
- # extension builds, allowing for fast incremental rebuilds.
- #
- # This optimization is done only in case all the source files reside in
- # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR
- # environment variable is set (we take this as a signal that the user
- # actually cares about this.)
- source_dirs_set = set(os.path.dirname(source) for source in sources)
- if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ):
- all_source_files = sorted(list(x for x in Path(list(source_dirs_set)[0]).iterdir() if x.is_file()))
-
- # Compute a combined hash digest for all source files in the same
- # custom op directory (usually .cu, .cpp, .py and .h files).
- hash_md5 = hashlib.md5()
- for src in all_source_files:
- with open(src, 'rb') as f:
- hash_md5.update(f.read())
- build_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access
- digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest())
-
- if not os.path.isdir(digest_build_dir):
- os.makedirs(digest_build_dir, exist_ok=True)
- baton = FileBaton(os.path.join(digest_build_dir, 'lock'))
- if baton.try_acquire():
- try:
- for src in all_source_files:
- shutil.copyfile(src, os.path.join(digest_build_dir, os.path.basename(src)))
- finally:
- baton.release()
- else:
- # Someone else is copying source files under the digest dir,
- # wait until done and continue.
- baton.wait()
- digest_sources = [os.path.join(digest_build_dir, os.path.basename(x)) for x in sources]
- torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir,
- verbose=verbose_build, sources=digest_sources, **build_kwargs)
- else:
- torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
- module = importlib.import_module(module_name)
-
- except:
- if verbosity == 'brief':
- print('Failed!')
- raise
-
- # Print status and add to cache.
- if verbosity == 'full':
- print(f'Done setting up PyTorch plugin "{module_name}".')
- elif verbosity == 'brief':
- print('Done.')
- _cached_plugins[module_name] = module
- return module
-
-#----------------------------------------------------------------------------
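As an illustrative call (the module name, source file paths, and build flag below are hypothetical; real callers in StyleGAN-style code pass their own .cpp/.cu sources), `get_plugin` compiles the extension on first use and returns the cached module on later calls:

```python
from torch_utils import custom_ops

custom_ops.verbosity = 'brief'   # 'none', 'brief', or 'full'

# Hypothetical C++/CUDA sources for a custom op.
sources = ['torch_utils/ops/my_op.cpp', 'torch_utils/ops/my_op.cu']

plugin = custom_ops.get_plugin('my_op_plugin', sources,
                               extra_cuda_cflags=['--use_fast_math'])
# A second call with the same module_name returns the cached module without rebuilding.
```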
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_trio.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_trio.py
deleted file mode 100644
index cf2894350952e1169a6c77ea7c767e892f3efc1e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_trio.py
+++ /dev/null
@@ -1,996 +0,0 @@
-from __future__ import annotations
-
-import array
-import math
-import socket
-from concurrent.futures import Future
-from contextvars import copy_context
-from dataclasses import dataclass
-from functools import partial
-from io import IOBase
-from os import PathLike
-from signal import Signals
-from types import TracebackType
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- AsyncGenerator,
- AsyncIterator,
- Awaitable,
- Callable,
- Collection,
- Coroutine,
- Generic,
- Iterable,
- Mapping,
- NoReturn,
- Sequence,
- TypeVar,
- cast,
-)
-
-import sniffio
-import trio.from_thread
-from outcome import Error, Outcome, Value
-from trio.socket import SocketType as TrioSocketType
-from trio.to_thread import run_sync
-
-from .. import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc
-from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable
-from .._core._eventloop import claim_worker_thread
-from .._core._exceptions import (
- BrokenResourceError,
- BusyResourceError,
- ClosedResourceError,
- EndOfStream,
-)
-from .._core._exceptions import ExceptionGroup as BaseExceptionGroup
-from .._core._sockets import convert_ipv6_sockaddr
-from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter
-from .._core._synchronization import Event as BaseEvent
-from .._core._synchronization import ResourceGuard
-from .._core._tasks import CancelScope as BaseCancelScope
-from ..abc import IPSockAddrType, UDPPacketType
-
-if TYPE_CHECKING:
- from trio_typing import TaskStatus
-
-try:
- from trio import lowlevel as trio_lowlevel
-except ImportError:
- from trio import hazmat as trio_lowlevel # type: ignore[no-redef]
- from trio.hazmat import wait_readable, wait_writable
-else:
- from trio.lowlevel import wait_readable, wait_writable
-
-try:
- trio_open_process = trio_lowlevel.open_process
-except AttributeError:
- # isort: off
- from trio import ( # type: ignore[attr-defined, no-redef]
- open_process as trio_open_process,
- )
-
-T_Retval = TypeVar("T_Retval")
-T_SockAddr = TypeVar("T_SockAddr", str, IPSockAddrType)
-
-
-#
-# Event loop
-#
-
-run = trio.run
-current_token = trio.lowlevel.current_trio_token
-RunVar = trio.lowlevel.RunVar
-
-
-#
-# Miscellaneous
-#
-
-sleep = trio.sleep
-
-
-#
-# Timeouts and cancellation
-#
-
-
-class CancelScope(BaseCancelScope):
- def __new__(
- cls, original: trio.CancelScope | None = None, **kwargs: object
- ) -> CancelScope:
- return object.__new__(cls)
-
- def __init__(self, original: trio.CancelScope | None = None, **kwargs: Any) -> None:
- self.__original = original or trio.CancelScope(**kwargs)
-
- def __enter__(self) -> CancelScope:
- self.__original.__enter__()
- return self
-
- def __exit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- # https://github.com/python-trio/trio-typing/pull/79
- return self.__original.__exit__( # type: ignore[func-returns-value]
- exc_type, exc_val, exc_tb
- )
-
- def cancel(self) -> DeprecatedAwaitable:
- self.__original.cancel()
- return DeprecatedAwaitable(self.cancel)
-
- @property
- def deadline(self) -> float:
- return self.__original.deadline
-
- @deadline.setter
- def deadline(self, value: float) -> None:
- self.__original.deadline = value
-
- @property
- def cancel_called(self) -> bool:
- return self.__original.cancel_called
-
- @property
- def shield(self) -> bool:
- return self.__original.shield
-
- @shield.setter
- def shield(self, value: bool) -> None:
- self.__original.shield = value
-
-
-CancelledError = trio.Cancelled
-checkpoint = trio.lowlevel.checkpoint
-checkpoint_if_cancelled = trio.lowlevel.checkpoint_if_cancelled
-cancel_shielded_checkpoint = trio.lowlevel.cancel_shielded_checkpoint
-current_effective_deadline = trio.current_effective_deadline
-current_time = trio.current_time
-
-
-#
-# Task groups
-#
-
-
-class ExceptionGroup(BaseExceptionGroup, trio.MultiError):
- pass
-
-
-class TaskGroup(abc.TaskGroup):
- def __init__(self) -> None:
- self._active = False
- self._nursery_manager = trio.open_nursery()
- self.cancel_scope = None # type: ignore[assignment]
-
- async def __aenter__(self) -> TaskGroup:
- self._active = True
- self._nursery = await self._nursery_manager.__aenter__()
- self.cancel_scope = CancelScope(self._nursery.cancel_scope)
- return self
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- try:
- return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb)
- except trio.MultiError as exc:
- raise ExceptionGroup(exc.exceptions) from None
- finally:
- self._active = False
-
- def start_soon(
- self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None
- ) -> None:
- if not self._active:
- raise RuntimeError(
- "This task group is not active; no new tasks can be started."
- )
-
- self._nursery.start_soon(func, *args, name=name)
-
- async def start(
- self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None
- ) -> object:
- if not self._active:
- raise RuntimeError(
- "This task group is not active; no new tasks can be started."
- )
-
- return await self._nursery.start(func, *args, name=name)
-
-
-#
-# Threads
-#
-
-
-async def run_sync_in_worker_thread(
- func: Callable[..., T_Retval],
- *args: object,
- cancellable: bool = False,
- limiter: trio.CapacityLimiter | None = None,
-) -> T_Retval:
- def wrapper() -> T_Retval:
- with claim_worker_thread("trio"):
- return func(*args)
-
- # TODO: remove explicit context copying when trio 0.20 is the minimum requirement
- context = copy_context()
- context.run(sniffio.current_async_library_cvar.set, None)
- return await run_sync(
- context.run, wrapper, cancellable=cancellable, limiter=limiter
- )
-
-
-# TODO: remove this workaround when trio 0.20 is the minimum requirement
-def run_async_from_thread(
- fn: Callable[..., Awaitable[T_Retval]], *args: Any
-) -> T_Retval:
- async def wrapper() -> T_Retval:
- retval: T_Retval
-
- async def inner() -> None:
- nonlocal retval
- __tracebackhide__ = True
- retval = await fn(*args)
-
- async with trio.open_nursery() as n:
- context.run(n.start_soon, inner)
-
- __tracebackhide__ = True
- return retval # noqa: F821
-
- context = copy_context()
- context.run(sniffio.current_async_library_cvar.set, "trio")
- return trio.from_thread.run(wrapper)
-
-
-def run_sync_from_thread(fn: Callable[..., T_Retval], *args: Any) -> T_Retval:
- # TODO: remove explicit context copying when trio 0.20 is the minimum requirement
- retval = trio.from_thread.run_sync(copy_context().run, fn, *args)
- return cast(T_Retval, retval)
-
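The thread helpers above all lean on the same trick: copy the current `contextvars` context and pin sniffio's "current async library" variable inside the copy, so code run through that context detects (or stops detecting) trio as appropriate. Below is a minimal, standalone sketch of that mechanism; it is illustrative only and not part of the deleted module, and the `detect` helper is a name introduced here.

```py
from contextvars import copy_context

import sniffio


def detect() -> str:
    # Report which async library sniffio thinks is running, or "none".
    try:
        return sniffio.current_async_library()
    except sniffio.AsyncLibraryNotInRunningTask:
        return "none"


ctx = copy_context()
# Pin the detection result inside the copied context only.
ctx.run(sniffio.current_async_library_cvar.set, "trio")

print(ctx.run(detect))  # -> "trio"
print(detect())         # -> "none" (the outer context is untouched)
```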
-
-class BlockingPortal(abc.BlockingPortal):
- def __new__(cls) -> BlockingPortal:
- return object.__new__(cls)
-
- def __init__(self) -> None:
- super().__init__()
- self._token = trio.lowlevel.current_trio_token()
-
- def _spawn_task_from_thread(
- self,
- func: Callable,
- args: tuple,
- kwargs: dict[str, Any],
- name: object,
- future: Future,
- ) -> None:
- context = copy_context()
- context.run(sniffio.current_async_library_cvar.set, "trio")
- trio.from_thread.run_sync(
- context.run,
- partial(self._task_group.start_soon, name=name),
- self._call_func,
- func,
- args,
- kwargs,
- future,
- trio_token=self._token,
- )
-
-
-#
-# Subprocesses
-#
-
-
-@dataclass(eq=False)
-class ReceiveStreamWrapper(abc.ByteReceiveStream):
- _stream: trio.abc.ReceiveStream
-
- async def receive(self, max_bytes: int | None = None) -> bytes:
- try:
- data = await self._stream.receive_some(max_bytes)
- except trio.ClosedResourceError as exc:
- raise ClosedResourceError from exc.__cause__
- except trio.BrokenResourceError as exc:
- raise BrokenResourceError from exc.__cause__
-
- if data:
- return data
- else:
- raise EndOfStream
-
- async def aclose(self) -> None:
- await self._stream.aclose()
-
-
-@dataclass(eq=False)
-class SendStreamWrapper(abc.ByteSendStream):
- _stream: trio.abc.SendStream
-
- async def send(self, item: bytes) -> None:
- try:
- await self._stream.send_all(item)
- except trio.ClosedResourceError as exc:
- raise ClosedResourceError from exc.__cause__
- except trio.BrokenResourceError as exc:
- raise BrokenResourceError from exc.__cause__
-
- async def aclose(self) -> None:
- await self._stream.aclose()
-
-
-@dataclass(eq=False)
-class Process(abc.Process):
- _process: trio.Process
- _stdin: abc.ByteSendStream | None
- _stdout: abc.ByteReceiveStream | None
- _stderr: abc.ByteReceiveStream | None
-
- async def aclose(self) -> None:
- if self._stdin:
- await self._stdin.aclose()
- if self._stdout:
- await self._stdout.aclose()
- if self._stderr:
- await self._stderr.aclose()
-
- await self.wait()
-
- async def wait(self) -> int:
- return await self._process.wait()
-
- def terminate(self) -> None:
- self._process.terminate()
-
- def kill(self) -> None:
- self._process.kill()
-
- def send_signal(self, signal: Signals) -> None:
- self._process.send_signal(signal)
-
- @property
- def pid(self) -> int:
- return self._process.pid
-
- @property
- def returncode(self) -> int | None:
- return self._process.returncode
-
- @property
- def stdin(self) -> abc.ByteSendStream | None:
- return self._stdin
-
- @property
- def stdout(self) -> abc.ByteReceiveStream | None:
- return self._stdout
-
- @property
- def stderr(self) -> abc.ByteReceiveStream | None:
- return self._stderr
-
-
-async def open_process(
- command: str | bytes | Sequence[str | bytes],
- *,
- shell: bool,
- stdin: int | IO[Any] | None,
- stdout: int | IO[Any] | None,
- stderr: int | IO[Any] | None,
- cwd: str | bytes | PathLike | None = None,
- env: Mapping[str, str] | None = None,
- start_new_session: bool = False,
-) -> Process:
- process = await trio_open_process( # type: ignore[misc]
- command, # type: ignore[arg-type]
- stdin=stdin,
- stdout=stdout,
- stderr=stderr,
- shell=shell,
- cwd=cwd,
- env=env,
- start_new_session=start_new_session,
- )
- stdin_stream = SendStreamWrapper(process.stdin) if process.stdin else None
- stdout_stream = ReceiveStreamWrapper(process.stdout) if process.stdout else None
- stderr_stream = ReceiveStreamWrapper(process.stderr) if process.stderr else None
- return Process(process, stdin_stream, stdout_stream, stderr_stream)
-
-
-class _ProcessPoolShutdownInstrument(trio.abc.Instrument):
- def after_run(self) -> None:
- super().after_run()
-
-
-current_default_worker_process_limiter: RunVar = RunVar(
- "current_default_worker_process_limiter"
-)
-
-
-async def _shutdown_process_pool(workers: set[Process]) -> None:
- process: Process
- try:
- await sleep(math.inf)
- except trio.Cancelled:
- for process in workers:
- if process.returncode is None:
- process.kill()
-
- with CancelScope(shield=True):
- for process in workers:
- await process.aclose()
-
-
-def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None:
- trio.lowlevel.spawn_system_task(_shutdown_process_pool, workers)
-
-
-#
-# Sockets and networking
-#
-
-
-class _TrioSocketMixin(Generic[T_SockAddr]):
- def __init__(self, trio_socket: TrioSocketType) -> None:
- self._trio_socket = trio_socket
- self._closed = False
-
- def _check_closed(self) -> None:
- if self._closed:
- raise ClosedResourceError
- if self._trio_socket.fileno() < 0:
- raise BrokenResourceError
-
- @property
- def _raw_socket(self) -> socket.socket:
- return self._trio_socket._sock # type: ignore[attr-defined]
-
- async def aclose(self) -> None:
- if self._trio_socket.fileno() >= 0:
- self._closed = True
- self._trio_socket.close()
-
- def _convert_socket_error(self, exc: BaseException) -> NoReturn:
- if isinstance(exc, trio.ClosedResourceError):
- raise ClosedResourceError from exc
- elif self._trio_socket.fileno() < 0 and self._closed:
- raise ClosedResourceError from None
- elif isinstance(exc, OSError):
- raise BrokenResourceError from exc
- else:
- raise exc
-
-
-class SocketStream(_TrioSocketMixin, abc.SocketStream):
- def __init__(self, trio_socket: TrioSocketType) -> None:
- super().__init__(trio_socket)
- self._receive_guard = ResourceGuard("reading from")
- self._send_guard = ResourceGuard("writing to")
-
- async def receive(self, max_bytes: int = 65536) -> bytes:
- with self._receive_guard:
- try:
- data = await self._trio_socket.recv(max_bytes)
- except BaseException as exc:
- self._convert_socket_error(exc)
-
- if data:
- return data
- else:
- raise EndOfStream
-
- async def send(self, item: bytes) -> None:
- with self._send_guard:
- view = memoryview(item)
- while view:
- try:
- bytes_sent = await self._trio_socket.send(view)
- except BaseException as exc:
- self._convert_socket_error(exc)
-
- view = view[bytes_sent:]
-
- async def send_eof(self) -> None:
- self._trio_socket.shutdown(socket.SHUT_WR)
-
-
-class UNIXSocketStream(SocketStream, abc.UNIXSocketStream):
- async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]:
- if not isinstance(msglen, int) or msglen < 0:
- raise ValueError("msglen must be a non-negative integer")
- if not isinstance(maxfds, int) or maxfds < 1:
- raise ValueError("maxfds must be a positive integer")
-
- fds = array.array("i")
- await checkpoint()
- with self._receive_guard:
- while True:
- try:
- message, ancdata, flags, addr = await self._trio_socket.recvmsg(
- msglen, socket.CMSG_LEN(maxfds * fds.itemsize)
- )
- except BaseException as exc:
- self._convert_socket_error(exc)
- else:
- if not message and not ancdata:
- raise EndOfStream
-
- break
-
- for cmsg_level, cmsg_type, cmsg_data in ancdata:
- if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS:
- raise RuntimeError(
- f"Received unexpected ancillary data; message = {message!r}, "
- f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}"
- )
-
- fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)])
-
- return message, list(fds)
-
- async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None:
- if not message:
- raise ValueError("message must not be empty")
- if not fds:
- raise ValueError("fds must not be empty")
-
- filenos: list[int] = []
- for fd in fds:
- if isinstance(fd, int):
- filenos.append(fd)
- elif isinstance(fd, IOBase):
- filenos.append(fd.fileno())
-
- fdarray = array.array("i", filenos)
- await checkpoint()
- with self._send_guard:
- while True:
- try:
- await self._trio_socket.sendmsg(
- [message],
- [
- (
- socket.SOL_SOCKET,
- socket.SCM_RIGHTS, # type: ignore[list-item]
- fdarray,
- )
- ],
- )
- break
- except BaseException as exc:
- self._convert_socket_error(exc)
-
-
-class TCPSocketListener(_TrioSocketMixin, abc.SocketListener):
- def __init__(self, raw_socket: socket.socket):
- super().__init__(trio.socket.from_stdlib_socket(raw_socket))
- self._accept_guard = ResourceGuard("accepting connections from")
-
- async def accept(self) -> SocketStream:
- with self._accept_guard:
- try:
- trio_socket, _addr = await self._trio_socket.accept()
- except BaseException as exc:
- self._convert_socket_error(exc)
-
- trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
- return SocketStream(trio_socket)
-
-
-class UNIXSocketListener(_TrioSocketMixin, abc.SocketListener):
- def __init__(self, raw_socket: socket.socket):
- super().__init__(trio.socket.from_stdlib_socket(raw_socket))
- self._accept_guard = ResourceGuard("accepting connections from")
-
- async def accept(self) -> UNIXSocketStream:
- with self._accept_guard:
- try:
- trio_socket, _addr = await self._trio_socket.accept()
- except BaseException as exc:
- self._convert_socket_error(exc)
-
- return UNIXSocketStream(trio_socket)
-
-
-class UDPSocket(_TrioSocketMixin[IPSockAddrType], abc.UDPSocket):
- def __init__(self, trio_socket: TrioSocketType) -> None:
- super().__init__(trio_socket)
- self._receive_guard = ResourceGuard("reading from")
- self._send_guard = ResourceGuard("writing to")
-
- async def receive(self) -> tuple[bytes, IPSockAddrType]:
- with self._receive_guard:
- try:
- data, addr = await self._trio_socket.recvfrom(65536)
- return data, convert_ipv6_sockaddr(addr)
- except BaseException as exc:
- self._convert_socket_error(exc)
-
- async def send(self, item: UDPPacketType) -> None:
- with self._send_guard:
- try:
- await self._trio_socket.sendto(*item)
- except BaseException as exc:
- self._convert_socket_error(exc)
-
-
-class ConnectedUDPSocket(_TrioSocketMixin[IPSockAddrType], abc.ConnectedUDPSocket):
- def __init__(self, trio_socket: TrioSocketType) -> None:
- super().__init__(trio_socket)
- self._receive_guard = ResourceGuard("reading from")
- self._send_guard = ResourceGuard("writing to")
-
- async def receive(self) -> bytes:
- with self._receive_guard:
- try:
- return await self._trio_socket.recv(65536)
- except BaseException as exc:
- self._convert_socket_error(exc)
-
- async def send(self, item: bytes) -> None:
- with self._send_guard:
- try:
- await self._trio_socket.send(item)
- except BaseException as exc:
- self._convert_socket_error(exc)
-
-
-async def connect_tcp(
- host: str, port: int, local_address: IPSockAddrType | None = None
-) -> SocketStream:
- family = socket.AF_INET6 if ":" in host else socket.AF_INET
- trio_socket = trio.socket.socket(family)
- trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
- if local_address:
- await trio_socket.bind(local_address)
-
- try:
- await trio_socket.connect((host, port))
- except BaseException:
- trio_socket.close()
- raise
-
- return SocketStream(trio_socket)
-
-
-async def connect_unix(path: str) -> UNIXSocketStream:
- trio_socket = trio.socket.socket(socket.AF_UNIX)
- try:
- await trio_socket.connect(path)
- except BaseException:
- trio_socket.close()
- raise
-
- return UNIXSocketStream(trio_socket)
-
-
-async def create_udp_socket(
- family: socket.AddressFamily,
- local_address: IPSockAddrType | None,
- remote_address: IPSockAddrType | None,
- reuse_port: bool,
-) -> UDPSocket | ConnectedUDPSocket:
- trio_socket = trio.socket.socket(family=family, type=socket.SOCK_DGRAM)
-
- if reuse_port:
- trio_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
-
- if local_address:
- await trio_socket.bind(local_address)
-
- if remote_address:
- await trio_socket.connect(remote_address)
- return ConnectedUDPSocket(trio_socket)
- else:
- return UDPSocket(trio_socket)
-
-
-getaddrinfo = trio.socket.getaddrinfo
-getnameinfo = trio.socket.getnameinfo
-
-
-async def wait_socket_readable(sock: socket.socket) -> None:
- try:
- await wait_readable(sock)
- except trio.ClosedResourceError as exc:
- raise ClosedResourceError().with_traceback(exc.__traceback__) from None
- except trio.BusyResourceError:
- raise BusyResourceError("reading from") from None
-
-
-async def wait_socket_writable(sock: socket.socket) -> None:
- try:
- await wait_writable(sock)
- except trio.ClosedResourceError as exc:
- raise ClosedResourceError().with_traceback(exc.__traceback__) from None
- except trio.BusyResourceError:
- raise BusyResourceError("writing to") from None
-
-
-#
-# Synchronization
-#
-
-
-class Event(BaseEvent):
- def __new__(cls) -> Event:
- return object.__new__(cls)
-
- def __init__(self) -> None:
- self.__original = trio.Event()
-
- def is_set(self) -> bool:
- return self.__original.is_set()
-
- async def wait(self) -> None:
- return await self.__original.wait()
-
- def statistics(self) -> EventStatistics:
- orig_statistics = self.__original.statistics()
- return EventStatistics(tasks_waiting=orig_statistics.tasks_waiting)
-
- def set(self) -> DeprecatedAwaitable:
- self.__original.set()
- return DeprecatedAwaitable(self.set)
-
-
-class CapacityLimiter(BaseCapacityLimiter):
- def __new__(cls, *args: object, **kwargs: object) -> CapacityLimiter:
- return object.__new__(cls)
-
- def __init__(
- self, *args: Any, original: trio.CapacityLimiter | None = None
- ) -> None:
- self.__original = original or trio.CapacityLimiter(*args)
-
- async def __aenter__(self) -> None:
- return await self.__original.__aenter__()
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> None:
- await self.__original.__aexit__(exc_type, exc_val, exc_tb)
-
- @property
- def total_tokens(self) -> float:
- return self.__original.total_tokens
-
- @total_tokens.setter
- def total_tokens(self, value: float) -> None:
- self.__original.total_tokens = value
-
- @property
- def borrowed_tokens(self) -> int:
- return self.__original.borrowed_tokens
-
- @property
- def available_tokens(self) -> float:
- return self.__original.available_tokens
-
- def acquire_nowait(self) -> DeprecatedAwaitable:
- self.__original.acquire_nowait()
- return DeprecatedAwaitable(self.acquire_nowait)
-
- def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable:
- self.__original.acquire_on_behalf_of_nowait(borrower)
- return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait)
-
- async def acquire(self) -> None:
- await self.__original.acquire()
-
- async def acquire_on_behalf_of(self, borrower: object) -> None:
- await self.__original.acquire_on_behalf_of(borrower)
-
- def release(self) -> None:
- return self.__original.release()
-
- def release_on_behalf_of(self, borrower: object) -> None:
- return self.__original.release_on_behalf_of(borrower)
-
- def statistics(self) -> CapacityLimiterStatistics:
- orig = self.__original.statistics()
- return CapacityLimiterStatistics(
- borrowed_tokens=orig.borrowed_tokens,
- total_tokens=orig.total_tokens,
- borrowers=orig.borrowers,
- tasks_waiting=orig.tasks_waiting,
- )
-
-
-_capacity_limiter_wrapper: RunVar = RunVar("_capacity_limiter_wrapper")
-
-
-def current_default_thread_limiter() -> CapacityLimiter:
- try:
- return _capacity_limiter_wrapper.get()
- except LookupError:
- limiter = CapacityLimiter(
- original=trio.to_thread.current_default_thread_limiter()
- )
- _capacity_limiter_wrapper.set(limiter)
- return limiter
-
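`current_default_thread_limiter()` above caches its wrapper in a `RunVar`, which behaves like a `ContextVar` scoped to a single `trio.run()` call. A small self-contained sketch of that caching pattern follows; the names (`_cache`, `get_cached`) are illustrative and not taken from the module.

```py
import trio

_cache: trio.lowlevel.RunVar = trio.lowlevel.RunVar("_cache")


def get_cached() -> dict:
    # Return the per-run cache, creating it on first access within this run.
    try:
        return _cache.get()
    except LookupError:
        value: dict = {}
        _cache.set(value)
        return value


async def main() -> None:
    assert get_cached() is get_cached()  # same object for the whole run


trio.run(main)
```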
-
-#
-# Signal handling
-#
-
-
-class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]):
- _iterator: AsyncIterator[int]
-
- def __init__(self, signals: tuple[Signals, ...]):
- self._signals = signals
-
- def __enter__(self) -> _SignalReceiver:
- self._cm = trio.open_signal_receiver(*self._signals)
- self._iterator = self._cm.__enter__()
- return self
-
- def __exit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- return self._cm.__exit__(exc_type, exc_val, exc_tb)
-
- def __aiter__(self) -> _SignalReceiver:
- return self
-
- async def __anext__(self) -> Signals:
- signum = await self._iterator.__anext__()
- return Signals(signum)
-
-
-def open_signal_receiver(*signals: Signals) -> _SignalReceiver:
- return _SignalReceiver(signals)
-
-
-#
-# Testing and debugging
-#
-
-
-def get_current_task() -> TaskInfo:
- task = trio_lowlevel.current_task()
-
- parent_id = None
- if task.parent_nursery and task.parent_nursery.parent_task:
- parent_id = id(task.parent_nursery.parent_task)
-
- return TaskInfo(id(task), parent_id, task.name, task.coro)
-
-
-def get_running_tasks() -> list[TaskInfo]:
- root_task = trio_lowlevel.current_root_task()
- task_infos = [TaskInfo(id(root_task), None, root_task.name, root_task.coro)]
- nurseries = root_task.child_nurseries
- while nurseries:
- new_nurseries: list[trio.Nursery] = []
- for nursery in nurseries:
- for task in nursery.child_tasks:
- task_infos.append(
- TaskInfo(id(task), id(nursery.parent_task), task.name, task.coro)
- )
- new_nurseries.extend(task.child_nurseries)
-
- nurseries = new_nurseries
-
- return task_infos
-
-
-def wait_all_tasks_blocked() -> Awaitable[None]:
- import trio.testing
-
- return trio.testing.wait_all_tasks_blocked()
-
-
-class TestRunner(abc.TestRunner):
- def __init__(self, **options: Any) -> None:
- from collections import deque
- from queue import Queue
-
- self._call_queue: Queue[Callable[..., object]] = Queue()
- self._result_queue: deque[Outcome] = deque()
- self._stop_event: trio.Event | None = None
- self._nursery: trio.Nursery | None = None
- self._options = options
-
- async def _trio_main(self) -> None:
- self._stop_event = trio.Event()
- async with trio.open_nursery() as self._nursery:
- await self._stop_event.wait()
-
- async def _call_func(
- self, func: Callable[..., Awaitable[object]], args: tuple, kwargs: dict
- ) -> None:
- try:
- retval = await func(*args, **kwargs)
- except BaseException as exc:
- self._result_queue.append(Error(exc))
- else:
- self._result_queue.append(Value(retval))
-
- def _main_task_finished(self, outcome: object) -> None:
- self._nursery = None
-
- def _get_nursery(self) -> trio.Nursery:
- if self._nursery is None:
- trio.lowlevel.start_guest_run(
- self._trio_main,
- run_sync_soon_threadsafe=self._call_queue.put,
- done_callback=self._main_task_finished,
- **self._options,
- )
- while self._nursery is None:
- self._call_queue.get()()
-
- return self._nursery
-
- def _call(
- self, func: Callable[..., Awaitable[T_Retval]], *args: object, **kwargs: object
- ) -> T_Retval:
- self._get_nursery().start_soon(self._call_func, func, args, kwargs)
- while not self._result_queue:
- self._call_queue.get()()
-
- outcome = self._result_queue.pop()
- return outcome.unwrap()
-
- def close(self) -> None:
- if self._stop_event:
- self._stop_event.set()
- while self._nursery is not None:
- self._call_queue.get()()
-
- def run_asyncgen_fixture(
- self,
- fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]],
- kwargs: dict[str, Any],
- ) -> Iterable[T_Retval]:
- async def fixture_runner(*, task_status: TaskStatus[T_Retval]) -> None:
- agen = fixture_func(**kwargs)
- retval = await agen.asend(None)
- task_status.started(retval)
- await teardown_event.wait()
- try:
- await agen.asend(None)
- except StopAsyncIteration:
- pass
- else:
- await agen.aclose()
- raise RuntimeError("Async generator fixture did not stop")
-
- teardown_event = trio.Event()
- fixture_value = self._call(lambda: self._get_nursery().start(fixture_runner))
- yield fixture_value
- teardown_event.set()
-
- def run_fixture(
- self,
- fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]],
- kwargs: dict[str, Any],
- ) -> T_Retval:
- return self._call(fixture_func, **kwargs)
-
- def run_test(
- self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any]
- ) -> None:
- self._call(test_func, **kwargs)
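The `TestRunner` above drives trio in guest mode: `trio.lowlevel.start_guest_run()` hands every scheduled callback to the host through `run_sync_soon_threadsafe`, and the host pumps a queue until the run completes. The following stripped-down sketch of that loop is illustrative only, assuming (as the test runner does) that a blocking `Queue.get()` on the host side is acceptable.

```py
from queue import Queue

import trio

call_queue: Queue = Queue()
done: list = []


async def main() -> str:
    await trio.sleep(0)
    return "finished"


trio.lowlevel.start_guest_run(
    main,
    run_sync_soon_threadsafe=call_queue.put,  # trio hands callbacks to the host
    done_callback=done.append,                # receives an outcome.Outcome at the end
)

while not done:
    call_queue.get()()  # host loop: run whatever trio scheduled

print(done[0].unwrap())  # -> "finished"
```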
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/httpsredirect.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/httpsredirect.py
deleted file mode 100644
index b7a3d8e078574e87dc6e345d621f5a596c3bdc1e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/httpsredirect.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from starlette.middleware.httpsredirect import ( # noqa
- HTTPSRedirectMiddleware as HTTPSRedirectMiddleware,
-)
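The deleted module above is a one-line re-export of Starlette's `HTTPSRedirectMiddleware` under the `fastapi.middleware` namespace. For context, here is a minimal, illustrative way to wire it into an application (the route and response body are assumptions, not part of the deleted file):

```py
from fastapi import FastAPI
from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware

app = FastAPI()
# Any incoming http:// or ws:// request is redirected to https:// / wss://.
app.add_middleware(HTTPSRedirectMiddleware)


@app.get("/")
async def root() -> dict:
    return {"message": "served over HTTPS"}
```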
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_client.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_client.py
deleted file mode 100644
index 868d1cea5ad2037735034c74a20a0cb4769e8c39..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_client.py
+++ /dev/null
@@ -1,1258 +0,0 @@
-# coding=utf-8
-# Copyright 2023-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-# Related resources:
-# https://huggingface.co/tasks
-# https://huggingface.co/docs/huggingface.js/inference/README
-# https://github.com/huggingface/huggingface.js/tree/main/packages/inference/src
-# https://github.com/huggingface/text-generation-inference/tree/main/clients/python
-# https://github.com/huggingface/text-generation-inference/blob/main/clients/python/text_generation/client.py
-# https://huggingface.slack.com/archives/C03E4DQ9LAJ/p1680169099087869
-# https://github.com/huggingface/unity-api#tasks
-#
-# Some TODO:
-# - validate inputs/options/parameters? with Pydantic for instance? or only optionally?
-# - add all tasks
-#
-# NOTE: the philosophy of this client is "let's make it as easy as possible to use it, even if less optimized". Some
-# examples of how it translates:
-# - Timeout / Server unavailable is handled by the client in a single "timeout" parameter.
-# - Files can be provided as bytes, file paths, or URLs and the client will try to "guess" the type.
-# - Images are parsed as PIL.Image for easier manipulation.
-# - Provides a "recommended model" for each task => suboptimal but user-wise quicker to get a first script running.
-# - Only the main parameters are publicly exposed. Power users can always read the docs for more options.
-import logging
-import time
-import warnings
-from dataclasses import asdict
-from typing import (
- TYPE_CHECKING,
- Any,
- Dict,
- Iterable,
- List,
- Optional,
- Union,
- overload,
-)
-
-from requests import HTTPError
-from requests.structures import CaseInsensitiveDict
-
-from huggingface_hub.constants import INFERENCE_ENDPOINT
-from huggingface_hub.inference._common import (
- ContentT,
- InferenceTimeoutError,
- _b64_encode,
- _b64_to_image,
- _bytes_to_dict,
- _bytes_to_image,
- _get_recommended_model,
- _import_numpy,
- _is_tgi_server,
- _open_as_binary,
- _set_as_non_tgi,
- _stream_text_generation_response,
-)
-from huggingface_hub.inference._text_generation import (
- TextGenerationParameters,
- TextGenerationRequest,
- TextGenerationResponse,
- TextGenerationStreamResponse,
- raise_text_generation_error,
-)
-from huggingface_hub.inference._types import ClassificationOutput, ConversationalOutput, ImageSegmentationOutput
-from huggingface_hub.utils import (
- BadRequestError,
- build_hf_headers,
- get_session,
- hf_raise_for_status,
-)
-from huggingface_hub.utils._typing import Literal
-
-
-if TYPE_CHECKING:
- import numpy as np
- from PIL import Image
-
-logger = logging.getLogger(__name__)
-
-
-class InferenceClient:
- """
- Initialize a new Inference Client.
-
- [`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used
- seamlessly with either the (free) Inference API or self-hosted Inference Endpoints.
-
- Args:
- model (`str`, `optional`):
- The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `bigcode/starcoder`
- or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is
- automatically selected for the task.
- token (`str`, *optional*):
- Hugging Face token. Will default to the locally saved token. Pass `token=False` if you don't want to send
- your token to the server.
- timeout (`float`, `optional`):
- The maximum number of seconds to wait for a response from the server. Loading a new model in Inference
- API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.
- headers (`Dict[str, str]`, `optional`):
- Additional headers to send to the server. By default only the authorization and user-agent headers are sent.
- Values in this dictionary will override the default values.
- cookies (`Dict[str, str]`, `optional`):
- Additional cookies to send to the server.
- """
-
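A short constructor sketch matching the parameters documented above; the model ID, timeout value, and header are illustrative assumptions, not recommendations from the library.

```py
from huggingface_hub import InferenceClient

# Pin a specific model, cap the wait for cold models at 60s, add a custom header.
client = InferenceClient(
    model="bigcode/starcoder",  # hypothetical choice; any Hub model ID or endpoint URL works
    timeout=60,
    headers={"x-request-source": "docs-example"},
)
```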
- def __init__(
- self,
- model: Optional[str] = None,
- token: Union[str, bool, None] = None,
- timeout: Optional[float] = None,
- headers: Optional[Dict[str, str]] = None,
- cookies: Optional[Dict[str, str]] = None,
- ) -> None:
- self.model: Optional[str] = model
- self.headers = CaseInsensitiveDict(build_hf_headers(token=token)) # contains 'authorization' + 'user-agent'
- if headers is not None:
- self.headers.update(headers)
- self.cookies = cookies
- self.timeout = timeout
-
- def __repr__(self):
- return f""
-
- @overload
- def post( # type: ignore
- self,
- *,
- json: Optional[Union[str, Dict, List]] = None,
- data: Optional[ContentT] = None,
- model: Optional[str] = None,
- task: Optional[str] = None,
- stream: Literal[False] = ...,
- ) -> bytes:
- pass
-
- @overload
- def post( # type: ignore
- self,
- *,
- json: Optional[Union[str, Dict, List]] = None,
- data: Optional[ContentT] = None,
- model: Optional[str] = None,
- task: Optional[str] = None,
- stream: Literal[True] = ...,
- ) -> Iterable[bytes]:
- pass
-
- def post(
- self,
- *,
- json: Optional[Union[str, Dict, List]] = None,
- data: Optional[ContentT] = None,
- model: Optional[str] = None,
- task: Optional[str] = None,
- stream: bool = False,
- ) -> Union[bytes, Iterable[bytes]]:
- """
- Make a POST request to the inference server.
-
- Args:
- json (`Union[str, Dict, List]`, *optional*):
- The JSON data to send in the request body. Defaults to None.
- data (`Union[str, Path, bytes, BinaryIO]`, *optional*):
- The content to send in the request body. It can be raw bytes, a pointer to an opened file, a local file
- path, or a URL to an online resource (image, audio file,...). If both `json` and `data` are passed,
- `data` will take precedence. At least `json` or `data` must be provided. Defaults to None.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. Will override the model defined at the instance level. Defaults to None.
- task (`str`, *optional*):
- The task to perform on the inference. Used only to default to a recommended model if `model` is not
- provided. At least `model` or `task` must be provided. Defaults to None.
- stream (`bool`, *optional*):
- Whether to iterate over streaming APIs.
-
- Returns:
-            `bytes` or `Iterable[bytes]`: The raw bytes returned by the server, or an iterator over raw bytes if `stream=True`.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
- """
- url = self._resolve_url(model, task)
-
- if data is not None and json is not None:
- warnings.warn("Ignoring `json` as `data` is passed as binary.")
-
- t0 = time.time()
- timeout = self.timeout
- while True:
- with _open_as_binary(data) as data_as_binary:
- try:
- response = get_session().post(
- url,
- json=json,
- data=data_as_binary,
- headers=self.headers,
- cookies=self.cookies,
- timeout=self.timeout,
- stream=stream,
- )
- except TimeoutError as error:
- # Convert any `TimeoutError` to a `InferenceTimeoutError`
- raise InferenceTimeoutError(f"Inference call timed out: {url}") from error
-
- try:
- hf_raise_for_status(response)
- return response.iter_lines() if stream else response.content
- except HTTPError as error:
- if error.response.status_code == 503:
- # If Model is unavailable, either raise a TimeoutError...
- if timeout is not None and time.time() - t0 > timeout:
- raise InferenceTimeoutError(
- f"Model not loaded on the server: {url}. Please retry with a higher timeout (current:"
- f" {self.timeout})."
- ) from error
- # ...or wait 1s and retry
- logger.info(f"Waiting for model to be loaded on the server: {error}")
- time.sleep(1)
- if timeout is not None:
- timeout = max(self.timeout - (time.time() - t0), 1) # type: ignore
- continue
- raise
-
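The docstring above carries no usage example, so here is a minimal sketch of calling the low-level `post()` helper directly; the model ID and local file name are hypothetical.

```py
from huggingface_hub import InferenceClient

client = InferenceClient()

# JSON payload for a text task (model ID is an illustrative assumption).
raw = client.post(json={"inputs": "Hello world"}, model="gpt2", task="text-generation")
print(raw[:80])  # raw response bytes

# Binary payload for an audio task; the recommended model for the task is used.
with open("sample.flac", "rb") as audio_file:  # hypothetical local file
    raw = client.post(data=audio_file, task="audio-classification")
```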
- def audio_classification(
- self,
- audio: ContentT,
- *,
- model: Optional[str] = None,
- ) -> List[ClassificationOutput]:
- """
- Perform audio classification on the provided audio content.
-
- Args:
- audio (Union[str, Path, bytes, BinaryIO]):
- The audio content to classify. It can be raw audio bytes, a local audio file, or a URL pointing to an
- audio file.
- model (`str`, *optional*):
- The model to use for audio classification. Can be a model ID hosted on the Hugging Face Hub
- or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for
- audio classification will be used.
-
- Returns:
- `List[Dict]`: The classification output containing the predicted label and its confidence.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> client.audio_classification("audio.flac")
- [{'score': 0.4976358711719513, 'label': 'hap'}, {'score': 0.3677836060523987, 'label': 'neu'},...]
- ```
- """
- response = self.post(data=audio, model=model, task="audio-classification")
- return _bytes_to_dict(response)
-
- def automatic_speech_recognition(
- self,
- audio: ContentT,
- *,
- model: Optional[str] = None,
- ) -> str:
- """
- Perform automatic speech recognition (ASR or audio-to-text) on the given audio content.
-
- Args:
- audio (Union[str, Path, bytes, BinaryIO]):
- The content to transcribe. It can be raw audio bytes, local audio file, or a URL to an audio file.
- model (`str`, *optional*):
- The model to use for ASR. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. If not provided, the default recommended model for ASR will be used.
-
- Returns:
- str: The transcribed text.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> client.automatic_speech_recognition("hello_world.flac")
- "hello world"
- ```
- """
- response = self.post(data=audio, model=model, task="automatic-speech-recognition")
- return _bytes_to_dict(response)["text"]
-
- def conversational(
- self,
- text: str,
- generated_responses: Optional[List[str]] = None,
- past_user_inputs: Optional[List[str]] = None,
- *,
- parameters: Optional[Dict[str, Any]] = None,
- model: Optional[str] = None,
- ) -> ConversationalOutput:
- """
- Generate conversational responses based on the given input text (i.e. chat with the API).
-
- Args:
- text (`str`):
- The last input from the user in the conversation.
- generated_responses (`List[str]`, *optional*):
- A list of strings corresponding to the earlier replies from the model. Defaults to None.
- past_user_inputs (`List[str]`, *optional*):
- A list of strings corresponding to the earlier replies from the user. Should be the same length as
- `generated_responses`. Defaults to None.
- parameters (`Dict[str, Any]`, *optional*):
- Additional parameters for the conversational task. Defaults to None. For more details about the available
- parameters, please refer to [this page](https://huggingface.co/docs/api-inference/detailed_parameters#conversational-task)
- model (`str`, *optional*):
- The model to use for the conversational task. Can be a model ID hosted on the Hugging Face Hub or a URL to
- a deployed Inference Endpoint. If not provided, the default recommended conversational model will be used.
- Defaults to None.
-
- Returns:
- `Dict`: The generated conversational output.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> output = client.conversational("Hi, who are you?")
- >>> output
- {'generated_text': 'I am the one who knocks.', 'conversation': {'generated_responses': ['I am the one who knocks.'], 'past_user_inputs': ['Hi, who are you?']}, 'warnings': ['Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.']}
- >>> client.conversational(
- ... "Wow, that's scary!",
- ... generated_responses=output["conversation"]["generated_responses"],
- ... past_user_inputs=output["conversation"]["past_user_inputs"],
- ... )
- ```
- """
- payload: Dict[str, Any] = {"inputs": {"text": text}}
- if generated_responses is not None:
- payload["inputs"]["generated_responses"] = generated_responses
- if past_user_inputs is not None:
- payload["inputs"]["past_user_inputs"] = past_user_inputs
- if parameters is not None:
- payload["parameters"] = parameters
- response = self.post(json=payload, model=model, task="conversational")
- return _bytes_to_dict(response)
-
- def feature_extraction(self, text: str, *, model: Optional[str] = None) -> "np.ndarray":
- """
- Generate embeddings for a given text.
-
- Args:
- text (`str`):
- The text to embed.
- model (`str`, *optional*):
-                The model to use for feature extraction. Can be a model ID hosted on the Hugging Face Hub or a URL to
-                a deployed Inference Endpoint. If not provided, the default recommended feature extraction model will be used.
- Defaults to None.
-
- Returns:
- `np.ndarray`: The embedding representing the input text as a float32 numpy array.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> client.feature_extraction("Hi, who are you?")
- array([[ 2.424802 , 2.93384 , 1.1750331 , ..., 1.240499, -0.13776633, -0.7889173 ],
- [-0.42943227, -0.6364878 , -1.693462 , ..., 0.41978157, -2.4336355 , 0.6162071 ],
- ...,
- [ 0.28552425, -0.928395 , -1.2077185 , ..., 0.76810825, -2.1069427 , 0.6236161 ]], dtype=float32)
- ```
- """
- response = self.post(json={"inputs": text}, model=model, task="feature-extraction")
- np = _import_numpy()
- return np.array(_bytes_to_dict(response)[0], dtype="float32")
-
- def image_classification(
- self,
- image: ContentT,
- *,
- model: Optional[str] = None,
- ) -> List[ClassificationOutput]:
- """
- Perform image classification on the given image using the specified model.
-
- Args:
- image (`Union[str, Path, bytes, BinaryIO]`):
- The image to classify. It can be raw bytes, an image file, or a URL to an online image.
- model (`str`, *optional*):
- The model to use for image classification. Can be a model ID hosted on the Hugging Face Hub or a URL to a
- deployed Inference Endpoint. If not provided, the default recommended model for image classification will be used.
-
- Returns:
- `List[Dict]`: a list of dictionaries containing the predicted label and associated probability.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
- [{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]
- ```
- """
- response = self.post(data=image, model=model, task="image-classification")
- return _bytes_to_dict(response)
-
- def image_segmentation(
- self,
- image: ContentT,
- *,
- model: Optional[str] = None,
- ) -> List[ImageSegmentationOutput]:
- """
- Perform image segmentation on the given image using the specified model.
-
-
-
- You must have `PIL` installed if you want to work with images (`pip install Pillow`).
-
-
-
- Args:
- image (`Union[str, Path, bytes, BinaryIO]`):
- The image to segment. It can be raw bytes, an image file, or a URL to an online image.
- model (`str`, *optional*):
- The model to use for image segmentation. Can be a model ID hosted on the Hugging Face Hub or a URL to a
- deployed Inference Endpoint. If not provided, the default recommended model for image segmentation will be used.
-
- Returns:
- `List[Dict]`: A list of dictionaries containing the segmented masks and associated attributes.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
-        >>> client.image_segmentation("cat.jpg")
-        [{'score': 0.989008, 'label': 'LABEL_184', 'mask': <PIL.Image.Image ...>}, ...]
- ```
- """
-
- # Segment
- response = self.post(data=image, model=model, task="image-segmentation")
- output = _bytes_to_dict(response)
-
- # Parse masks as PIL Image
- if not isinstance(output, list):
- raise ValueError(f"Server output must be a list. Got {type(output)}: {str(output)[:200]}...")
- for item in output:
- item["mask"] = _b64_to_image(item["mask"])
- return output
-
- def image_to_image(
- self,
- image: ContentT,
- prompt: Optional[str] = None,
- *,
- negative_prompt: Optional[str] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: Optional[int] = None,
- guidance_scale: Optional[float] = None,
- model: Optional[str] = None,
- **kwargs,
- ) -> "Image":
- """
- Perform image-to-image translation using a specified model.
-
-
-
- You must have `PIL` installed if you want to work with images (`pip install Pillow`).
-
-
-
- Args:
- image (`Union[str, Path, bytes, BinaryIO]`):
- The input image for translation. It can be raw bytes, an image file, or a URL to an online image.
- prompt (`str`, *optional*):
- The text prompt to guide the image generation.
- negative_prompt (`str`, *optional*):
- A negative prompt to guide the translation process.
- height (`int`, *optional*):
- The height in pixels of the generated image.
- width (`int`, *optional*):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*):
-                A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
-
- Returns:
- `Image`: The translated image.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> image = client.image_to_image("cat.jpg", prompt="turn the cat into a tiger")
- >>> image.save("tiger.jpg")
- ```
- """
- parameters = {
- "prompt": prompt,
- "negative_prompt": negative_prompt,
- "height": height,
- "width": width,
- "num_inference_steps": num_inference_steps,
- "guidance_scale": guidance_scale,
- **kwargs,
- }
- if all(parameter is None for parameter in parameters.values()):
- # Either only an image to send => send as raw bytes
- data = image
- payload: Optional[Dict[str, Any]] = None
- else:
- # Or an image + some parameters => use base64 encoding
- data = None
- payload = {"inputs": _b64_encode(image)}
- for key, value in parameters.items():
- if value is not None:
- payload[key] = value
-
- response = self.post(json=payload, data=data, model=model, task="image-to-image")
- return _bytes_to_image(response)
-
- def image_to_text(self, image: ContentT, *, model: Optional[str] = None) -> str:
- """
-        Takes an input image and returns text.
-
-        Models can have very different outputs depending on your use case (image captioning, optical character recognition
-        (OCR), Pix2Struct, etc). Please have a look at the model card to learn more about a model's specificities.
-
- Args:
- image (`Union[str, Path, bytes, BinaryIO]`):
-                The input image to caption. It can be raw bytes, an image file, or a URL to an online image.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
-
- Returns:
- `str`: The generated text.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> client.image_to_text("cat.jpg")
- 'a cat standing in a grassy field '
- >>> client.image_to_text("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
- 'a dog laying on the grass next to a flower pot '
- ```
- """
- response = self.post(data=image, model=model, task="image-to-text")
- return _bytes_to_dict(response)[0]["generated_text"]
-
- def sentence_similarity(
- self, sentence: str, other_sentences: List[str], *, model: Optional[str] = None
- ) -> List[float]:
- """
- Compute the semantic similarity between a sentence and a list of other sentences by comparing their embeddings.
-
- Args:
- sentence (`str`):
- The main sentence to compare to others.
- other_sentences (`List[str]`):
- The list of sentences to compare to.
- model (`str`, *optional*):
-                The model to use for sentence similarity. Can be a model ID hosted on the Hugging Face Hub or a URL to
-                a deployed Inference Endpoint. If not provided, the default recommended sentence similarity model will be used.
- Defaults to None.
-
- Returns:
-            `List[float]`: The similarity scores between the input sentence and each of the other sentences.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> client.sentence_similarity(
- ... "Machine learning is so easy.",
- ... other_sentences=[
- ... "Deep learning is so straightforward.",
- ... "This is so difficult, like rocket science.",
- ... "I can't believe how much I struggled with this.",
- ... ],
- ... )
- [0.7785726189613342, 0.45876261591911316, 0.2906220555305481]
- ```
- """
- response = self.post(
- json={"inputs": {"source_sentence": sentence, "sentences": other_sentences}},
- model=model,
- task="sentence-similarity",
- )
- return _bytes_to_dict(response)
-
- def summarization(
- self,
- text: str,
- *,
- parameters: Optional[Dict[str, Any]] = None,
- model: Optional[str] = None,
- ) -> str:
- """
- Generate a summary of a given text using a specified model.
-
- Args:
- text (`str`):
- The input text to summarize.
- parameters (`Dict[str, Any]`, *optional*):
- Additional parameters for summarization. Check out this [page](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task)
- for more details.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
-
- Returns:
- `str`: The generated summary text.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
- >>> client.summarization("The Eiffel tower...")
- 'The Eiffel tower is one of the most famous landmarks in the world....'
- ```
- """
- payload: Dict[str, Any] = {"inputs": text}
- if parameters is not None:
- payload["parameters"] = parameters
- response = self.post(json=payload, model=model, task="summarization")
- return _bytes_to_dict(response)[0]["summary_text"]
-
- @overload
- def text_generation( # type: ignore
- self,
- prompt: str,
- *,
- details: Literal[False] = ...,
- stream: Literal[False] = ...,
- model: Optional[str] = None,
- do_sample: bool = False,
- max_new_tokens: int = 20,
- best_of: Optional[int] = None,
- repetition_penalty: Optional[float] = None,
- return_full_text: bool = False,
- seed: Optional[int] = None,
- stop_sequences: Optional[List[str]] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- truncate: Optional[int] = None,
- typical_p: Optional[float] = None,
- watermark: bool = False,
- ) -> str:
- ...
-
- @overload
- def text_generation( # type: ignore
- self,
- prompt: str,
- *,
- details: Literal[True] = ...,
- stream: Literal[False] = ...,
- model: Optional[str] = None,
- do_sample: bool = False,
- max_new_tokens: int = 20,
- best_of: Optional[int] = None,
- repetition_penalty: Optional[float] = None,
- return_full_text: bool = False,
- seed: Optional[int] = None,
- stop_sequences: Optional[List[str]] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- truncate: Optional[int] = None,
- typical_p: Optional[float] = None,
- watermark: bool = False,
- ) -> TextGenerationResponse:
- ...
-
- @overload
- def text_generation( # type: ignore
- self,
- prompt: str,
- *,
- details: Literal[False] = ...,
- stream: Literal[True] = ...,
- model: Optional[str] = None,
- do_sample: bool = False,
- max_new_tokens: int = 20,
- best_of: Optional[int] = None,
- repetition_penalty: Optional[float] = None,
- return_full_text: bool = False,
- seed: Optional[int] = None,
- stop_sequences: Optional[List[str]] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- truncate: Optional[int] = None,
- typical_p: Optional[float] = None,
- watermark: bool = False,
- ) -> Iterable[str]:
- ...
-
- @overload
- def text_generation(
- self,
- prompt: str,
- *,
- details: Literal[True] = ...,
- stream: Literal[True] = ...,
- model: Optional[str] = None,
- do_sample: bool = False,
- max_new_tokens: int = 20,
- best_of: Optional[int] = None,
- repetition_penalty: Optional[float] = None,
- return_full_text: bool = False,
- seed: Optional[int] = None,
- stop_sequences: Optional[List[str]] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- truncate: Optional[int] = None,
- typical_p: Optional[float] = None,
- watermark: bool = False,
- ) -> Iterable[TextGenerationStreamResponse]:
- ...
-
- def text_generation(
- self,
- prompt: str,
- *,
- details: bool = False,
- stream: bool = False,
- model: Optional[str] = None,
- do_sample: bool = False,
- max_new_tokens: int = 20,
- best_of: Optional[int] = None,
- repetition_penalty: Optional[float] = None,
- return_full_text: bool = False,
- seed: Optional[int] = None,
- stop_sequences: Optional[List[str]] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- truncate: Optional[int] = None,
- typical_p: Optional[float] = None,
- watermark: bool = False,
- decoder_input_details: bool = False,
- ) -> Union[str, TextGenerationResponse, Iterable[str], Iterable[TextGenerationStreamResponse]]:
- """
- Given a prompt, generate the following text.
-
-        It is recommended to have Pydantic installed in order to get inputs validated. This is preferable as it allows
-        early failures.
-
- API endpoint is supposed to run with the `text-generation-inference` backend (TGI). This backend is the
- go-to solution to run large language models at scale. However, for some smaller models (e.g. "gpt2") the
- default `transformers` + `api-inference` solution is still in use. Both approaches have very similar APIs, but
- not exactly the same. This method is compatible with both approaches but some parameters are only available for
- `text-generation-inference`. If some parameters are ignored, a warning message is triggered but the process
- continues correctly.
-
- To learn more about the TGI project, please refer to https://github.com/huggingface/text-generation-inference.
-
- Args:
- prompt (`str`):
- Input text.
- details (`bool`, *optional*):
- By default, text_generation returns a string. Pass `details=True` if you want a detailed output (tokens,
-                probabilities, seed, finish reason, etc.). Only available for models running with the
- `text-generation-inference` backend.
- stream (`bool`, *optional*):
- By default, text_generation returns the full generated text. Pass `stream=True` if you want a stream of
-                tokens to be returned. Only available for models running with the `text-generation-inference`
- backend.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
- do_sample (`bool`):
- Activate logits sampling
- max_new_tokens (`int`):
- Maximum number of generated tokens
- best_of (`int`):
-                Generate best_of sequences and return the one with the highest token logprobs
- repetition_penalty (`float`):
- The parameter for repetition penalty. 1.0 means no penalty. See [this
- paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
- return_full_text (`bool`):
- Whether to prepend the prompt to the generated text
- seed (`int`):
- Random sampling seed
- stop_sequences (`List[str]`):
- Stop generating tokens if a member of `stop_sequences` is generated
- temperature (`float`):
- The value used to module the logits distribution.
- top_k (`int`):
- The number of highest probability vocabulary tokens to keep for top-k-filtering.
- top_p (`float`):
- If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or
- higher are kept for generation.
- truncate (`int`):
- Truncate inputs tokens to the given size
- typical_p (`float`):
- Typical Decoding mass
- See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information
- watermark (`bool`):
- Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226)
- decoder_input_details (`bool`):
- Return the decoder input token logprobs and ids. You must set `details=True` as well for it to be taken
- into account. Defaults to `False`.
-
- Returns:
- `Union[str, TextGenerationResponse, Iterable[str], Iterable[TextGenerationStreamResponse]]`:
- Generated text returned from the server:
- - if `stream=False` and `details=False`, the generated text is returned as a `str` (default)
- - if `stream=True` and `details=False`, the generated text is returned token by token as a `Iterable[str]`
- - if `stream=False` and `details=True`, the generated text is returned with more details as a [`~huggingface_hub.inference._text_generation.TextGenerationResponse`]
-            - if `details=True` and `stream=True`, the generated text is returned token by token as an iterable of [`~huggingface_hub.inference._text_generation.TextGenerationStreamResponse`]
-
- Raises:
- `ValidationError`:
- If input values are not valid. No HTTP call is made to the server.
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
-
- # Case 1: generate text
- >>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
- '100% open source and built to be easy to use.'
-
- # Case 2: iterate over the generated tokens. Useful for large generation.
- >>> for token in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True):
- ... print(token)
- 100
- %
- open
- source
- and
- built
- to
- be
- easy
- to
- use
- .
-
- # Case 3: get more details about the generation process.
- >>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
- TextGenerationResponse(
- generated_text='100% open source and built to be easy to use.',
- details=Details(
-                finish_reason=<FinishReason.Length: 'length'>,
- generated_tokens=12,
- seed=None,
- prefill=[
- InputToken(id=487, text='The', logprob=None),
- InputToken(id=53789, text=' hugging', logprob=-13.171875),
- (...)
- InputToken(id=204, text=' ', logprob=-7.0390625)
- ],
- tokens=[
- Token(id=1425, text='100', logprob=-1.0175781, special=False),
- Token(id=16, text='%', logprob=-0.0463562, special=False),
- (...)
- Token(id=25, text='.', logprob=-0.5703125, special=False)
- ],
- best_of_sequences=None
- )
- )
-
- # Case 4: iterate over the generated tokens with more details.
- # Last object is more complete, containing the full generated text and the finish reason.
- >>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
- ... print(details)
- ...
- TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=16, text='%', logprob=-0.0463562, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=1314, text=' open', logprob=-1.3359375, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=3178, text=' source', logprob=-0.28100586, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=273, text=' and', logprob=-0.5961914, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=3426, text=' built', logprob=-1.9423828, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=271, text=' to', logprob=-1.4121094, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=314, text=' be', logprob=-1.5224609, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=1833, text=' easy', logprob=-2.1132812, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=271, text=' to', logprob=-0.08520508, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(id=745, text=' use', logprob=-0.39453125, special=False), generated_text=None, details=None)
- TextGenerationStreamResponse(token=Token(
- id=25,
- text='.',
- logprob=-0.5703125,
- special=False),
- generated_text='100% open source and built to be easy to use.',
- details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
- )
- ```
- """
- # NOTE: Text-generation integration is taken from the text-generation-inference project. It has more features
- # like input/output validation (if Pydantic is installed). See `_text_generation.py` header for more details.
-
- if decoder_input_details and not details:
- warnings.warn(
- "`decoder_input_details=True` has been passed to the server but `details=False` is set meaning that"
- " the output from the server will be truncated."
- )
- decoder_input_details = False
-
- # Validate parameters
- parameters = TextGenerationParameters(
- best_of=best_of,
- details=details,
- do_sample=do_sample,
- max_new_tokens=max_new_tokens,
- repetition_penalty=repetition_penalty,
- return_full_text=return_full_text,
- seed=seed,
- stop=stop_sequences if stop_sequences is not None else [],
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- truncate=truncate,
- typical_p=typical_p,
- watermark=watermark,
- decoder_input_details=decoder_input_details,
- )
- request = TextGenerationRequest(inputs=prompt, stream=stream, parameters=parameters)
- payload = asdict(request)
-
- # Remove some parameters if not a TGI server
- if not _is_tgi_server(model):
- ignored_parameters = []
- for key in "watermark", "stop", "details", "decoder_input_details":
- if payload["parameters"][key] is not None:
- ignored_parameters.append(key)
- del payload["parameters"][key]
- if len(ignored_parameters) > 0:
- warnings.warn(
- (
- "API endpoint/model for text-generation is not served via TGI. Ignoring parameters"
- f" {ignored_parameters}."
- ),
- UserWarning,
- )
- if details:
- warnings.warn(
- (
- "API endpoint/model for text-generation is not served via TGI. Parameter `details=True` will"
- " be ignored meaning only the generated text will be returned."
- ),
- UserWarning,
- )
- details = False
- if stream:
- raise ValueError(
- "API endpoint/model for text-generation is not served via TGI. Cannot return output as a stream."
- " Please pass `stream=False` as input."
- )
-
- # Handle errors separately for more precise error messages
- try:
- bytes_output = self.post(json=payload, model=model, task="text-generation", stream=stream) # type: ignore
- except HTTPError as e:
- if isinstance(e, BadRequestError) and "The following `model_kwargs` are not used by the model" in str(e):
- _set_as_non_tgi(model)
- return self.text_generation( # type: ignore
- prompt=prompt,
- details=details,
- stream=stream,
- model=model,
- do_sample=do_sample,
- max_new_tokens=max_new_tokens,
- best_of=best_of,
- repetition_penalty=repetition_penalty,
- return_full_text=return_full_text,
- seed=seed,
- stop_sequences=stop_sequences,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- truncate=truncate,
- typical_p=typical_p,
- watermark=watermark,
- decoder_input_details=decoder_input_details,
- )
- raise_text_generation_error(e)
-
- # Parse output
- if stream:
- return _stream_text_generation_response(bytes_output, details) # type: ignore
-
- data = _bytes_to_dict(bytes_output)[0]
- return TextGenerationResponse(**data) if details else data["generated_text"]
-
- def text_to_image(
- self,
- prompt: str,
- *,
- negative_prompt: Optional[str] = None,
- height: Optional[float] = None,
- width: Optional[float] = None,
- num_inference_steps: Optional[float] = None,
- guidance_scale: Optional[float] = None,
- model: Optional[str] = None,
- **kwargs,
- ) -> "Image":
- """
- Generate an image based on a given text using a specified model.
-
- <Tip>
-
- You must have `PIL` installed if you want to work with images (`pip install Pillow`).
-
- </Tip>
-
- Args:
- prompt (`str`):
- The prompt to generate an image from.
- negative_prompt (`str`, *optional*):
- An optional negative prompt for the image generation.
- height (`float`, *optional*):
- The height in pixels of the image to generate.
- width (`float`, *optional*):
- The width in pixels of the image to generate.
- num_inference_steps (`int`, *optional*):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*):
- A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
-
- Returns:
- `Image`: The generated image.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
-
- >>> image = client.text_to_image("An astronaut riding a horse on the moon.")
- >>> image.save("astronaut.png")
-
- >>> image = client.text_to_image(
- ... "An astronaut riding a horse on the moon.",
- ... negative_prompt="low resolution, blurry",
- ... model="stabilityai/stable-diffusion-2-1",
- ... )
- >>> image.save("better_astronaut.png")
- ```
- """
- parameters = {
- "inputs": prompt,
- "negative_prompt": negative_prompt,
- "height": height,
- "width": width,
- "num_inference_steps": num_inference_steps,
- "guidance_scale": guidance_scale,
- **kwargs,
- }
- payload = {}
- for key, value in parameters.items():
- if value is not None:
- payload[key] = value
- response = self.post(json=payload, model=model, task="text-to-image")
- return _bytes_to_image(response)
-
- def text_to_speech(self, text: str, *, model: Optional[str] = None) -> bytes:
- """
- Synthesize an audio of a voice pronouncing a given text.
-
- Args:
- text (`str`):
- The text to synthesize.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
-
- Returns:
- `bytes`: The generated audio.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from pathlib import Path
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
-
- >>> audio = client.text_to_speech("Hello world")
- >>> Path("hello_world.flac").write_bytes(audio)
- ```
- """
- return self.post(json={"inputs": text}, model=model, task="text-to-speech")
-
- def zero_shot_image_classification(
- self, image: ContentT, labels: List[str], *, model: Optional[str] = None
- ) -> List[ClassificationOutput]:
- """
- Provide input image and text labels to predict text labels for the image.
-
- Args:
- image (`Union[str, Path, bytes, BinaryIO]`):
- The input image to classify. It can be raw bytes, an image file, or a URL to an online image.
- labels (`List[str]`):
- List of candidate labels, as strings. `len(labels)` must be greater than 1.
- model (`str`, *optional*):
- The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
- Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None.
-
- Returns:
- `List[Dict]`: List of classification outputs containing the predicted labels and their confidence.
-
- Raises:
- [`InferenceTimeoutError`]:
- If the model is unavailable or the request times out.
- `HTTPError`:
- If the request fails with an HTTP error status code other than HTTP 503.
-
- Example:
- ```py
- >>> from huggingface_hub import InferenceClient
- >>> client = InferenceClient()
-
- >>> client.zero_shot_image_classification(
- ... "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
- ... labels=["dog", "cat", "horse"],
- ... )
- [{"label": "dog", "score": 0.956}, ...]
- ```
- """
-
- # Raise a ValueError if fewer than 2 labels are provided
- if len(labels) < 2:
- raise ValueError("You must specify at least 2 classes to compare. Please specify more than 1 class.")
-
- response = self.post(
- json={"image": _b64_encode(image), "parameters": {"candidate_labels": ",".join(labels)}},
- model=model,
- task="zero-shot-image-classification",
- )
- return _bytes_to_dict(response)
-
- def _resolve_url(self, model: Optional[str] = None, task: Optional[str] = None) -> str:
- model = model or self.model
-
- # If model is already a URL, ignore `task` and return directly
- if model is not None and (model.startswith("http://") or model.startswith("https://")):
- return model
-
- # If no model but task is set => fetch the recommended one for this task
- if model is None:
- if task is None:
- raise ValueError(
- "You must specify at least a model (repo_id or URL) or a task, either when instantiating"
- " `InferenceClient` or when making a request."
- )
- model = _get_recommended_model(task)
-
- # Compute InferenceAPI url
- return (
- # Feature-extraction and sentence-similarity are the only cases where we handle models with several tasks.
- f"{INFERENCE_ENDPOINT}/pipeline/{task}/{model}"
- if task in ("feature-extraction", "sentence-similarity")
- # Otherwise, we use the default endpoint
- else f"{INFERENCE_ENDPOINT}/models/{model}"
- )
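The routing above is the only place where the client decides between the pipeline-scoped route and the generic models route. A minimal standalone sketch of the same logic, assuming the serverless endpoint `https://api-inference.huggingface.co` (the value of `INFERENCE_ENDPOINT` is an assumption used here for illustration):

```py
# Standalone sketch of the URL routing in `_resolve_url` (assumed endpoint constant).
INFERENCE_ENDPOINT = "https://api-inference.huggingface.co"  # assumption for illustration

def resolve_url(model: str, task: str) -> str:
    # URLs are passed through untouched.
    if model.startswith(("http://", "https://")):
        return model
    # feature-extraction and sentence-similarity are the two multi-task cases
    # that go through the /pipeline/<task>/ route.
    if task in ("feature-extraction", "sentence-similarity"):
        return f"{INFERENCE_ENDPOINT}/pipeline/{task}/{model}"
    return f"{INFERENCE_ENDPOINT}/models/{model}"

print(resolve_url("sentence-transformers/all-MiniLM-L6-v2", "feature-extraction"))
# -> https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2
print(resolve_url("gpt2", "text-generation"))
# -> https://api-inference.huggingface.co/models/gpt2
```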
diff --git a/spaces/DexterSptizu/drug_interaction/README.md b/spaces/DexterSptizu/drug_interaction/README.md
deleted file mode 100644
index 51a3e7da80c7f25d963ca1dd8cf7ecd0ad2fa567..0000000000000000000000000000000000000000
--- a/spaces/DexterSptizu/drug_interaction/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Drug Interaction
-emoji: 📊
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Djdjeuu/MGX-Midjourney-v4/app.py b/spaces/Djdjeuu/MGX-Midjourney-v4/app.py
deleted file mode 100644
index bea4accb45793c8e748731c184dee0ffaf509dd5..0000000000000000000000000000000000000000
--- a/spaces/Djdjeuu/MGX-Midjourney-v4/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-
-description = """
-
-
- """
-
-gr.Interface.load("models/prompthero/openjourney", description=description).launch()
\ No newline at end of file
diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py
deleted file mode 100644
index 93d429590ca4f357aff07989965b673bdf1e50fe..0000000000000000000000000000000000000000
--- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py
+++ /dev/null
@@ -1,1026 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-#
-# This file is adapted from https://github.com/huggingface/diffusers/blob/febaf863026bd014b7a14349336544fc109d0f57/examples/dreambooth/train_dreambooth_lora.py
-# The original license is as below:
-#
-# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import hashlib
-import logging
-import math
-import os
-import warnings
-from pathlib import Path
-from typing import Optional
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-from torch.utils.data import Dataset
-
-import datasets
-import diffusers
-import transformers
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from diffusers import (
- AutoencoderKL,
- DDPMScheduler,
- DiffusionPipeline,
- DPMSolverMultistepScheduler,
- UNet2DConditionModel,
-)
-from diffusers.loaders import AttnProcsLayers
-from diffusers.models.cross_attention import LoRACrossAttnProcessor
-from diffusers.optimization import get_scheduler
-from diffusers.utils import check_min_version, is_wandb_available
-from diffusers.utils.import_utils import is_xformers_available
-from huggingface_hub import HfFolder, Repository, create_repo, whoami
-from PIL import Image
-from torchvision import transforms
-from tqdm.auto import tqdm
-from transformers import AutoTokenizer, PretrainedConfig
-
-
-# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.12.0.dev0")
-
-logger = get_logger(__name__)
-
-
-def save_model_card(repo_name, images=None, base_model=str, prompt=str, repo_folder=None):
- img_str = ""
- for i, image in enumerate(images):
- image.save(os.path.join(repo_folder, f"image_{i}.png"))
- img_str += f"![img_{i}](./image_{i}.png)\n"
-
- yaml = f"""
----
-license: creativeml-openrail-m
-base_model: {base_model}
-tags:
-- stable-diffusion
-- stable-diffusion-diffusers
-- text-to-image
-- diffusers
-- lora
-inference: true
----
- """
- model_card = f"""
-# LoRA DreamBooth - {repo_name}
-
-These are LoRA adaption weights for {repo_name}. The weights were trained on {prompt} using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. \n
-{img_str}
-"""
- with open(os.path.join(repo_folder, "README.md"), "w") as f:
- f.write(yaml + model_card)
-
-
-def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str):
- text_encoder_config = PretrainedConfig.from_pretrained(
- pretrained_model_name_or_path,
- subfolder="text_encoder",
- revision=revision,
- )
- model_class = text_encoder_config.architectures[0]
-
- if model_class == "CLIPTextModel":
- from transformers import CLIPTextModel
-
- return CLIPTextModel
- elif model_class == "RobertaSeriesModelWithTransformation":
- from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation
-
- return RobertaSeriesModelWithTransformation
- else:
- raise ValueError(f"{model_class} is not supported.")
-
-
-def parse_args(input_args=None):
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
- parser.add_argument(
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--revision",
- type=str,
- default=None,
- required=False,
- help="Revision of pretrained model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--instance_data_dir",
- type=str,
- default=None,
- required=True,
- help="A folder containing the training data of instance images.",
- )
- parser.add_argument(
- "--class_data_dir",
- type=str,
- default=None,
- required=False,
- help="A folder containing the training data of class images.",
- )
- parser.add_argument(
- "--instance_prompt",
- type=str,
- default=None,
- required=True,
- help="The prompt with identifier specifying the instance",
- )
- parser.add_argument(
- "--class_prompt",
- type=str,
- default=None,
- help="The prompt to specify images in the same class as provided instance images.",
- )
- parser.add_argument(
- "--validation_prompt",
- type=str,
- default=None,
- help="A prompt that is used during validation to verify that the model is learning.",
- )
- parser.add_argument(
- "--num_validation_images",
- type=int,
- default=4,
- help="Number of images that should be generated during validation with `validation_prompt`.",
- )
- parser.add_argument(
- "--validation_epochs",
- type=int,
- default=50,
- help=(
- "Run dreambooth validation every X epochs. Dreambooth validation consists of running the prompt"
- " `args.validation_prompt` multiple times: `args.num_validation_images`."
- ),
- )
- parser.add_argument(
- "--with_prior_preservation",
- default=False,
- action="store_true",
- help="Flag to add prior preservation loss.",
- )
- parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
- parser.add_argument(
- "--num_class_images",
- type=int,
- default=100,
- help=(
- "Minimal class images for prior preservation loss. If there are not enough images already present in"
- " class_data_dir, additional images will be sampled with class_prompt."
- ),
- )
- parser.add_argument(
- "--output_dir",
- type=str,
- default="lora-dreambooth-model",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--resolution",
- type=int,
- default=512,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
- " resolution"
- ),
- )
- parser.add_argument(
- "--center_crop",
- default=False,
- action="store_true",
- help=(
- "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
- " cropped. The images will be resized to the resolution first before cropping."
- ),
- )
- parser.add_argument(
- "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument(
- "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
- )
- parser.add_argument("--num_train_epochs", type=int, default=1)
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--checkpointing_steps",
- type=int,
- default=500,
- help=(
- "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final"
- " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming"
- " training using `--resume_from_checkpoint`."
- ),
- )
- parser.add_argument(
- "--resume_from_checkpoint",
- type=str,
- default=None,
- help=(
- "Whether training should be resumed from a previous checkpoint. Use a path saved by"
- ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
- ),
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--gradient_checkpointing",
- action="store_true",
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-4,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="constant",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument(
- "--lr_num_cycles",
- type=int,
- default=1,
- help="Number of hard resets of the lr in cosine_with_restarts scheduler.",
- )
- parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.")
- parser.add_argument(
- "--dataloader_num_workers",
- type=int,
- default=0,
- help=(
- "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
- ),
- )
- parser.add_argument(
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
- )
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument(
- "--allow_tf32",
- action="store_true",
- help=(
- "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
- " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
- ),
- )
- parser.add_argument(
- "--report_to",
- type=str,
- default="tensorboard",
- help=(
- 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
- ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
- ),
- )
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default=None,
- choices=["no", "fp16", "bf16"],
- help=(
- "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
- " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the"
- " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
- ),
- )
- parser.add_argument(
- "--prior_generation_precision",
- type=str,
- default=None,
- choices=["no", "fp32", "fp16", "bf16"],
- help=(
- "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
- " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32."
- ),
- )
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
- parser.add_argument(
- "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
- )
-
- if input_args is not None:
- args = parser.parse_args(input_args)
- else:
- args = parser.parse_args()
-
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- if args.with_prior_preservation:
- if args.class_data_dir is None:
- raise ValueError("You must specify a data directory for class images.")
- if args.class_prompt is None:
- raise ValueError("You must specify prompt for class images.")
- else:
- # logger is not available yet
- if args.class_data_dir is not None:
- warnings.warn("You need not use --class_data_dir without --with_prior_preservation.")
- if args.class_prompt is not None:
- warnings.warn("You need not use --class_prompt without --with_prior_preservation.")
-
- return args
-
-
-class DreamBoothDataset(Dataset):
- """
- A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
- It pre-processes the images and tokenizes the prompts.
- """
-
- def __init__(
- self,
- instance_data_root,
- instance_prompt,
- tokenizer,
- class_data_root=None,
- class_prompt=None,
- size=512,
- center_crop=False,
- ):
- self.size = size
- self.center_crop = center_crop
- self.tokenizer = tokenizer
-
- self.instance_data_root = Path(instance_data_root)
- if not self.instance_data_root.exists():
- raise ValueError("Instance images root doesn't exists.")
-
- self.instance_images_path = list(Path(instance_data_root).iterdir())
- self.num_instance_images = len(self.instance_images_path)
- self.instance_prompt = instance_prompt
- self._length = self.num_instance_images
-
- if class_data_root is not None:
- self.class_data_root = Path(class_data_root)
- self.class_data_root.mkdir(parents=True, exist_ok=True)
- self.class_images_path = list(self.class_data_root.iterdir())
- self.num_class_images = len(self.class_images_path)
- self._length = max(self.num_class_images, self.num_instance_images)
- self.class_prompt = class_prompt
- else:
- self.class_data_root = None
-
- self.image_transforms = transforms.Compose(
- [
- transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
- transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
- transforms.ToTensor(),
- transforms.Normalize([0.5], [0.5]),
- ]
- )
-
- def __len__(self):
- return self._length
-
- def __getitem__(self, index):
- example = {}
- instance_image = Image.open(self.instance_images_path[index % self.num_instance_images])
- if not instance_image.mode == "RGB":
- instance_image = instance_image.convert("RGB")
- example["instance_images"] = self.image_transforms(instance_image)
- example["instance_prompt_ids"] = self.tokenizer(
- self.instance_prompt,
- truncation=True,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- ).input_ids
-
- if self.class_data_root:
- class_image = Image.open(self.class_images_path[index % self.num_class_images])
- if not class_image.mode == "RGB":
- class_image = class_image.convert("RGB")
- example["class_images"] = self.image_transforms(class_image)
- example["class_prompt_ids"] = self.tokenizer(
- self.class_prompt,
- truncation=True,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- ).input_ids
-
- return example
-
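As a rough usage sketch (the folder names, image counts and tokenizer below are assumptions, not part of the script): the dataset length is driven by the larger of the two image sets, and the shorter set is cycled through modulo indexing.

```py
# Hypothetical usage of DreamBoothDataset (assumed folders and tokenizer).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer", use_fast=False
)
dataset = DreamBoothDataset(
    instance_data_root="data/dog",        # e.g. 3 instance images
    instance_prompt="a photo of sks dog",
    tokenizer=tokenizer,
    class_data_root="data/dog_class",     # e.g. 100 class images
    class_prompt="a photo of a dog",
    size=512,
)
print(len(dataset))                        # 100 -> max(num_class, num_instance)
example = dataset[7]                       # instance image 7 % 3 == 1, class image 7
print(example["instance_images"].shape)    # torch.Size([3, 512, 512])
```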
-
-def collate_fn(examples, with_prior_preservation=False):
- input_ids = [example["instance_prompt_ids"] for example in examples]
- pixel_values = [example["instance_images"] for example in examples]
-
- # Concat class and instance examples for prior preservation.
- # We do this to avoid doing two forward passes.
- if with_prior_preservation:
- input_ids += [example["class_prompt_ids"] for example in examples]
- pixel_values += [example["class_images"] for example in examples]
-
- pixel_values = torch.stack(pixel_values)
- pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
-
- input_ids = torch.cat(input_ids, dim=0)
-
- batch = {
- "input_ids": input_ids,
- "pixel_values": pixel_values,
- }
- return batch
-
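A quick shape check (tensor sizes assumed for illustration) of what `collate_fn` returns when prior preservation is on: instance and class samples are concatenated into a single batch so the UNet only needs one forward pass.

```py
# Shape sketch for collate_fn with prior preservation (assumed tensor sizes).
import torch

examples = [
    {
        "instance_prompt_ids": torch.zeros(1, 77, dtype=torch.long),
        "instance_images": torch.zeros(3, 512, 512),
        "class_prompt_ids": torch.zeros(1, 77, dtype=torch.long),
        "class_images": torch.zeros(3, 512, 512),
    }
    for _ in range(4)
]
batch = collate_fn(examples, with_prior_preservation=True)
print(batch["pixel_values"].shape)  # torch.Size([8, 3, 512, 512])  -> 4 instance + 4 class
print(batch["input_ids"].shape)     # torch.Size([8, 77])
```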
-
-class PromptDataset(Dataset):
- "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
-
- def __init__(self, prompt, num_samples):
- self.prompt = prompt
- self.num_samples = num_samples
-
- def __len__(self):
- return self.num_samples
-
- def __getitem__(self, index):
- example = {}
- example["prompt"] = self.prompt
- example["index"] = index
- return example
-
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-
-def main(args):
- logging_dir = Path(args.output_dir, args.logging_dir)
-
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with=args.report_to,
- logging_dir=logging_dir,
- )
-
- if args.report_to == "wandb":
- if not is_wandb_available():
- raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
- import wandb
-
- # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
- # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
- # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- datasets.utils.logging.set_verbosity_warning()
- transformers.utils.logging.set_verbosity_warning()
- diffusers.utils.logging.set_verbosity_info()
- else:
- datasets.utils.logging.set_verbosity_error()
- transformers.utils.logging.set_verbosity_error()
- diffusers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if args.seed is not None:
- set_seed(args.seed)
-
- # Generate class images if prior preservation is enabled.
- if args.with_prior_preservation:
- class_images_dir = Path(args.class_data_dir)
- if not class_images_dir.exists():
- class_images_dir.mkdir(parents=True)
- cur_class_images = len(list(class_images_dir.iterdir()))
-
- if cur_class_images < args.num_class_images:
- torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
- if args.prior_generation_precision == "fp32":
- torch_dtype = torch.float32
- elif args.prior_generation_precision == "fp16":
- torch_dtype = torch.float16
- elif args.prior_generation_precision == "bf16":
- torch_dtype = torch.bfloat16
- pipeline = DiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- torch_dtype=torch_dtype,
- safety_checker=None,
- revision=args.revision,
- )
- pipeline.set_progress_bar_config(disable=True)
-
- num_new_images = args.num_class_images - cur_class_images
- logger.info(f"Number of class images to sample: {num_new_images}.")
-
- sample_dataset = PromptDataset(args.class_prompt, num_new_images)
- sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
-
- sample_dataloader = accelerator.prepare(sample_dataloader)
- pipeline.to(accelerator.device)
-
- for example in tqdm(
- sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
- ):
- images = pipeline(example["prompt"]).images
-
- for i, image in enumerate(images):
- hash_image = hashlib.sha1(image.tobytes()).hexdigest()
- image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg"
- image.save(image_filename)
-
- del pipeline
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
-
- create_repo(repo_name, exist_ok=True, token=args.hub_token)
- repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- # Load the tokenizer
- if args.tokenizer_name:
- tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False)
- elif args.pretrained_model_name_or_path:
- tokenizer = AutoTokenizer.from_pretrained(
- args.pretrained_model_name_or_path,
- subfolder="tokenizer",
- revision=args.revision,
- use_fast=False,
- )
-
- # import correct text encoder class
- text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision)
-
- # Load scheduler and models
- noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
- text_encoder = text_encoder_cls.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
- )
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
- unet = UNet2DConditionModel.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
- )
-
- # We only train the additional adapter LoRA layers
- vae.requires_grad_(False)
- text_encoder.requires_grad_(False)
- unet.requires_grad_(False)
-
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- weight_dtype = torch.float32
- if accelerator.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif accelerator.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
- # Move unet, vae and text_encoder to device and cast to weight_dtype
- unet.to(accelerator.device, dtype=weight_dtype)
- vae.to(accelerator.device, dtype=weight_dtype)
- text_encoder.to(accelerator.device, dtype=weight_dtype)
-
- if args.enable_xformers_memory_efficient_attention:
- if is_xformers_available():
- unet.enable_xformers_memory_efficient_attention()
- else:
- raise ValueError("xformers is not available. Make sure it is installed correctly")
-
- # now we will add new LoRA weights to the attention layers
- # It's important to realize here how many attention weights will be added and of which sizes
- # The sizes of the attention layers consist only of two different variables:
- # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`.
- # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`.
-
- # Let's first see how many attention processors we will have to set.
- # For Stable Diffusion, it should be equal to:
- # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12
- # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2
- # - up blocks (2x attention layers) * (3x transformer layers) * (3x up blocks) = 18
- # => 32 layers
-
- # Set correct lora layers
- lora_attn_procs = {}
- for name in unet.attn_processors.keys():
- cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
- if name.startswith("mid_block"):
- hidden_size = unet.config.block_out_channels[-1]
- elif name.startswith("up_blocks"):
- block_id = int(name[len("up_blocks.")])
- hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
- elif name.startswith("down_blocks"):
- block_id = int(name[len("down_blocks.")])
- hidden_size = unet.config.block_out_channels[block_id]
-
- lora_attn_procs[name] = LoRACrossAttnProcessor(
- hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
- )
-
- unet.set_attn_processor(lora_attn_procs)
- lora_layers = AttnProcsLayers(unet.attn_processors)
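To make the name-to-size mapping above concrete, here is a toy version of the lookup with the Stable Diffusion v1 channel configuration `(320, 640, 1280, 1280)` (an assumption for illustration; the script reads the real values from `unet.config.block_out_channels`):

```py
# Toy reproduction of the hidden_size lookup (assumed SD v1 block_out_channels).
block_out_channels = [320, 640, 1280, 1280]

def hidden_size_for(name: str) -> int:
    if name.startswith("mid_block"):
        return block_out_channels[-1]
    if name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        return list(reversed(block_out_channels))[block_id]
    if name.startswith("down_blocks"):
        block_id = int(name[len("down_blocks.")])
        return block_out_channels[block_id]
    raise ValueError(f"Unexpected processor name: {name}")

print(hidden_size_for("down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor"))  # 320
print(hidden_size_for("up_blocks.3.attentions.0.transformer_blocks.0.attn2.processor"))    # 320
print(hidden_size_for("mid_block.attentions.0.transformer_blocks.0.attn1.processor"))      # 1280
```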
-
- accelerator.register_for_checkpointing(lora_layers)
-
- if args.scale_lr:
- args.learning_rate = (
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
- )
-
- # Enable TF32 for faster training on Ampere GPUs,
- # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
- if args.allow_tf32:
- torch.backends.cuda.matmul.allow_tf32 = True
-
- # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
- if args.use_8bit_adam:
- try:
- import bitsandbytes as bnb
- except ImportError:
- raise ImportError(
- "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
- )
-
- optimizer_class = bnb.optim.AdamW8bit
- else:
- optimizer_class = torch.optim.AdamW
-
- # Optimizer creation
- optimizer = optimizer_class(
- lora_layers.parameters(),
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- # Dataset and DataLoaders creation:
- train_dataset = DreamBoothDataset(
- instance_data_root=args.instance_data_dir,
- instance_prompt=args.instance_prompt,
- class_data_root=args.class_data_dir if args.with_prior_preservation else None,
- class_prompt=args.class_prompt,
- tokenizer=tokenizer,
- size=args.resolution,
- center_crop=args.center_crop,
- )
-
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset,
- batch_size=args.train_batch_size,
- shuffle=True,
- collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation),
- num_workers=args.dataloader_num_workers,
- )
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- num_cycles=args.lr_num_cycles,
- power=args.lr_power,
- )
-
- # Prepare everything with our `accelerator`.
- lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- lora_layers, optimizer, train_dataloader, lr_scheduler
- )
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initialize automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("dreambooth-lora", config=vars(args))
-
- # Train!
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num batches each epoch = {len(train_dataloader)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- global_step = 0
- first_epoch = 0
-
- # Potentially load in the weights and states from a previous save
- if args.resume_from_checkpoint:
- if args.resume_from_checkpoint != "latest":
- path = os.path.basename(args.resume_from_checkpoint)
- else:
- # Get the most recent checkpoint
- dirs = os.listdir(args.output_dir)
- dirs = [d for d in dirs if d.startswith("checkpoint")]
- dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
- path = dirs[-1] if len(dirs) > 0 else None
-
- if path is None:
- accelerator.print(
- f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
- )
- args.resume_from_checkpoint = None
- else:
- accelerator.print(f"Resuming from checkpoint {path}")
- accelerator.load_state(os.path.join(args.output_dir, path))
- global_step = int(path.split("-")[1])
-
- resume_global_step = global_step * args.gradient_accumulation_steps
- first_epoch = global_step // num_update_steps_per_epoch
- resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
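A worked example of the resume arithmetic above, with made-up numbers:

```py
# Worked example (made-up numbers) of the checkpoint-resume arithmetic.
global_step = 250                    # optimizer steps already taken
num_update_steps_per_epoch = 100
gradient_accumulation_steps = 2

resume_global_step = global_step * gradient_accumulation_steps            # 500 dataloader steps
first_epoch = global_step // num_update_steps_per_epoch                   # 2
resume_step = resume_global_step % (num_update_steps_per_epoch * gradient_accumulation_steps)  # 100
# Training resumes in epoch 2 and skips the first 100 dataloader batches of that epoch.
```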
-
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process)
- progress_bar.set_description("Steps")
-
- for epoch in range(first_epoch, args.num_train_epochs):
- unet.train()
- for step, batch in enumerate(train_dataloader):
- # Skip steps until we reach the resumed step
- if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
- if step % args.gradient_accumulation_steps == 0:
- progress_bar.update(1)
- continue
-
- with accelerator.accumulate(unet):
- # Convert images to latent space
- latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample()
- latents = latents * 0.18215
-
- # Sample noise that we'll add to the latents
- noise = torch.randn_like(latents)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
- # Get the text embedding for conditioning
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
-
- # Predict the noise residual
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- # Get the target for loss depending on the prediction type
- if noise_scheduler.config.prediction_type == "epsilon":
- target = noise
- elif noise_scheduler.config.prediction_type == "v_prediction":
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
- else:
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
- if args.with_prior_preservation:
- # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
- target, target_prior = torch.chunk(target, 2, dim=0)
-
- # Compute instance loss
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- # Compute prior loss
- prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
-
- # Add the prior loss to the instance loss.
- loss = loss + args.prior_loss_weight * prior_loss
- else:
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- accelerator.backward(loss)
- if accelerator.sync_gradients:
- params_to_clip = lora_layers.parameters()
- accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- if global_step % args.checkpointing_steps == 0:
- if accelerator.is_main_process:
- save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
- accelerator.save_state(save_path)
- logger.info(f"Saved state to {save_path}")
-
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- accelerator.log(logs, step=global_step)
-
- if global_step >= args.max_train_steps:
- break
-
- if args.validation_prompt is not None and epoch % args.validation_epochs == 0:
- logger.info(
- f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
- f" {args.validation_prompt}."
- )
- # create pipeline
- pipeline = DiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- revision=args.revision,
- torch_dtype=weight_dtype,
- )
- pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
- pipeline = pipeline.to(accelerator.device)
- pipeline.set_progress_bar_config(disable=True)
-
- # run inference
- generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
- prompt = args.num_validation_images * [args.validation_prompt]
- images = pipeline(prompt, num_inference_steps=25, generator=generator).images
-
- for tracker in accelerator.trackers:
- if tracker.name == "tensorboard":
- np_images = np.stack([np.asarray(img) for img in images])
- tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC")
- if tracker.name == "wandb":
- tracker.log(
- {
- "validation": [
- wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
- for i, image in enumerate(images)
- ]
- }
- )
-
- del pipeline
- torch.cuda.empty_cache()
-
- # Save the lora layers
- accelerator.wait_for_everyone()
- if accelerator.is_main_process:
- unet = unet.to(torch.float32)
- unet.save_attn_procs(args.output_dir)
-
- # Final inference
- # Load previous pipeline
- pipeline = DiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype
- )
- pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
- pipeline = pipeline.to(accelerator.device)
-
- # load attention processors
- pipeline.unet.load_attn_procs(args.output_dir)
-
- # run inference
- if args.validation_prompt and args.num_validation_images > 0:
- generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None
- prompt = args.num_validation_images * [args.validation_prompt]
- images = pipeline(prompt, num_inference_steps=25, generator=generator).images
-
- test_image_dir = Path(args.output_dir) / 'test_images'
- test_image_dir.mkdir()
- for i, image in enumerate(images):
- out_path = test_image_dir / f'image_{i}.png'
- image.save(out_path)
-
- for tracker in accelerator.trackers:
- if tracker.name == "tensorboard":
- np_images = np.stack([np.asarray(img) for img in images])
- tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC")
- if tracker.name == "wandb":
- tracker.log(
- {
- "test": [
- wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
- for i, image in enumerate(images)
- ]
- }
- )
-
- if args.push_to_hub:
- save_model_card(
- repo_name,
- images=images,
- base_model=args.pretrained_model_name_or_path,
- prompt=args.instance_prompt,
- repo_folder=args.output_dir,
- )
- repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
-
- accelerator.end_training()
-
-
-if __name__ == "__main__":
- args = parse_args()
- main(args)
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py
deleted file mode 100644
index 823b44fb64898e8dcbb12180ba45d1718f9b03f7..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_537238KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 64)
- self.stg1_high_band_net = BaseASPPNet(2, 64)
-
- self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(32, 64)
-
- self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(64, 128)
-
- self.out = nn.Conv2d(128, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(64, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
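For orientation, a rough inference sketch (the `n_fft` value and spectrogram shape are assumptions, and the companion `layers_537238KB` module must be importable): `predict` returns the masked magnitude with `offset` frames trimmed from each side of the time axis.

```py
# Rough inference sketch for CascadedASPPNet (assumed n_fft and spectrogram shape).
import torch

net = CascadedASPPNet(n_fft=2048)
net.eval()

x_mag = torch.rand(1, 2, 1025, 512)   # (batch, channels, n_fft // 2 + 1 bins, frames)
with torch.no_grad():
    out = net.predict(x_mag)
print(out.shape)                       # time axis trimmed by 2 * offset -> (1, 2, 1025, 256)
```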
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_meta_info_pairdata.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_meta_info_pairdata.py
deleted file mode 100644
index 76dce7e41c803a8055f3627cccb98deb51419b09..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_meta_info_pairdata.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import argparse
-import glob
-import os
-
-
-def main(args):
- txt_file = open(args.meta_info, 'w')
- # scan images
- img_paths_gt = sorted(glob.glob(os.path.join(args.input[0], '*')))
- img_paths_lq = sorted(glob.glob(os.path.join(args.input[1], '*')))
-
- assert len(img_paths_gt) == len(img_paths_lq), ('GT folder and LQ folder should have the same length, but got '
- f'{len(img_paths_gt)} and {len(img_paths_lq)}.')
-
- for img_path_gt, img_path_lq in zip(img_paths_gt, img_paths_lq):
- # get the relative paths
- img_name_gt = os.path.relpath(img_path_gt, args.root[0])
- img_name_lq = os.path.relpath(img_path_lq, args.root[1])
- print(f'{img_name_gt}, {img_name_lq}')
- txt_file.write(f'{img_name_gt}, {img_name_lq}\n')
-
-
-if __name__ == '__main__':
- """This script is used to generate meta info (txt file) for paired images.
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--input',
- nargs='+',
- default=['datasets/DF2K/DIV2K_train_HR_sub', 'datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub'],
- help='Input folder, should be [gt_folder, lq_folder]')
- parser.add_argument('--root', nargs='+', default=[None, None], help='Folder root. If not set, the parent folder of the corresponding input folder is used.')
- parser.add_argument(
- '--meta_info',
- type=str,
- default='datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt',
- help='txt path for meta info')
- args = parser.parse_args()
-
- assert len(args.input) == 2, 'Input folder should have two elements: gt folder and lq folder'
- assert len(args.root) == 2, 'Root path should have two elements: root for gt folder and lq folder'
- os.makedirs(os.path.dirname(args.meta_info), exist_ok=True)
- for i in range(2):
- if args.input[i].endswith('/'):
- args.input[i] = args.input[i][:-1]
- if args.root[i] is None:
- args.root[i] = os.path.dirname(args.input[i])
-
- main(args)
diff --git a/spaces/Epitech/LinguaExpressus/README.md b/spaces/Epitech/LinguaExpressus/README.md
deleted file mode 100644
index 6639e3b9caf49dbf59a9721ac217a8f4de82a530..0000000000000000000000000000000000000000
--- a/spaces/Epitech/LinguaExpressus/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: LinguaExpressus
-emoji: 😻
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/FridaZuley/RVC_HFKawaii/configs/config.py b/spaces/FridaZuley/RVC_HFKawaii/configs/config.py
deleted file mode 100644
index e3b0205a1f0d62f674b9c3de2c5ab7ee90464945..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/configs/config.py
+++ /dev/null
@@ -1,265 +0,0 @@
-import argparse
-import os
-import sys
-import json
-from multiprocessing import cpu_count
-
-import torch
-
-try:
- import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
- if torch.xpu.is_available():
- from infer.modules.ipex import ipex_init
- ipex_init()
-except Exception:
- pass
-
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-version_config_list = [
- "v1/32k.json",
- "v1/40k.json",
- "v1/48k.json",
- "v2/48k.json",
- "v2/32k.json",
-]
-
-
-def singleton_variable(func):
- def wrapper(*args, **kwargs):
- if not wrapper.instance:
- wrapper.instance = func(*args, **kwargs)
- return wrapper.instance
-
- wrapper.instance = None
- return wrapper
-
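A toy illustration of `singleton_variable` (the `Counter` class here is hypothetical, not part of the module): after the first call, every call returns the cached instance, which is why `Config()` below always yields the same object.

```py
# Toy demonstration of singleton_variable (hypothetical class, not in this module).
@singleton_variable
class Counter:
    def __init__(self):
        self.value = 0

a = Counter()
b = Counter()
a.value = 5
print(a is b)     # True -> the second call returned the cached instance
print(b.value)    # 5
```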
-
-@singleton_variable
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.json_config = self.load_config_json()
- self.gpu_mem = None
- (
- self.python_cmd,
- self.listen_port,
- self.iscolab,
- self.noparallel,
- self.noautoopen,
- self.paperspace,
- self.is_cli,
- self.grtheme,
- self.dml,
- ) = self.arg_parse()
- self.instead = ""
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def load_config_json() -> dict:
- d = {}
- for config_file in version_config_list:
- with open(f"configs/{config_file}", "r") as f:
- d[config_file] = json.load(f)
- return d
-
- @staticmethod
- def arg_parse() -> tuple:
- exe = sys.executable or "python"
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- parser.add_argument(
- "--paperspace",
- action="store_true",
- help="Note that this argument just shares a gradio link for the web UI. Thus can be used on other non-local CLI systems.",
- )
- parser.add_argument(
- "--is_cli",
- action="store_true",
- help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!",
- )
-
- parser.add_argument(
- "-t",
- "--theme",
- help = "Theme for Gradio. Format - `JohnSmith9982/small_and_pretty` (no backticks)",
- default = "JohnSmith9982/small_and_pretty",
- type = str
- )
-
- parser.add_argument(
- "--dml",
- action="store_true",
- help="Use DirectML backend instead of CUDA."
- )
-
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- cmd_opts.paperspace,
- cmd_opts.is_cli,
- cmd_opts.theme,
- cmd_opts.dml,
- )
-
- # has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
- # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- @staticmethod
- def has_xpu() -> bool:
- if hasattr(torch, "xpu") and torch.xpu.is_available():
- return True
- else:
- return False
-
- def use_fp32_config(self):
- for config_file in version_config_list:
- self.json_config[config_file]["train"]["fp16_run"] = False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- if self.has_xpu():
- self.device = self.instead = "xpu:0"
- self.is_half = True
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "P10" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- logger.info("Found GPU %s, force to fp32", self.gpu_name)
- self.is_half = False
- self.use_fp32_config()
- else:
- logger.info("Found GPU %s", self.gpu_name)
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("infer/modules/train/preprocess.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("infer/modules/train/preprocess.py", "w") as f:
- f.write(strr)
- elif self.has_mps():
- logger.info("No supported Nvidia GPU found")
- self.device = self.instead = "mps"
- self.is_half = False
- self.use_fp32_config()
- else:
- logger.info("No supported Nvidia GPU found")
- self.device = self.instead = "cpu"
- self.is_half = False
- self.use_fp32_config()
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # configuration for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # configuration for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
- if self.dml:
- logger.info("Use DirectML instead")
- if (
- os.path.exists(
- "runtime\Lib\site-packages\onnxruntime\capi\DirectML.dll"
- )
- == False
- ):
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime",
- "runtime\Lib\site-packages\onnxruntime-cuda",
- )
- except:
- pass
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime-dml",
- "runtime\Lib\site-packages\onnxruntime",
- )
- except:
- pass
- # if self.device != "cpu":
- import torch_directml
-
- self.device = torch_directml.device(torch_directml.default_device())
- self.is_half = False
- else:
- if self.instead:
- logger.info(f"Use {self.instead} instead")
- if (
- os.path.exists(
- "runtime\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
- )
- == False
- ):
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime",
- "runtime\Lib\site-packages\onnxruntime-dml",
- )
- except:
- pass
- try:
- os.rename(
- "runtime\Lib\site-packages\onnxruntime-cuda",
- "runtime\Lib\site-packages\onnxruntime",
- )
- except:
- pass
- return x_pad, x_query, x_center, x_max
diff --git a/spaces/GMFTBY/PandaGPT/pretrained_ckpt/README.md b/spaces/GMFTBY/PandaGPT/pretrained_ckpt/README.md
deleted file mode 100644
index e42580270a86be1969864f67665904710d9c9516..0000000000000000000000000000000000000000
--- a/spaces/GMFTBY/PandaGPT/pretrained_ckpt/README.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# 1. Prepare Vicuna Checkpoint:
-
-The language decoder of PandaGPT is based on Vicuna version 0. Given the distribution license of LLaMA, you need to restore the weights of Vicuna manually. To restore the weights, please follow the instructions below. In the following, we showcase how to restore the 7B version of Vicuna v0; to obtain the 13B version, follow a similar procedure.
-
-## 1.1. Obtain LLaMA Weights:
-* Request the weights of LLaMA from Meta using [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform).
-* After obtaining the weights of a specific LLaMA (e.g. 7B, 13B), follow the [instructions](https://huggingface.co/docs/transformers/main/model_doc/llama) provided by Huggingface to convert them into the Huggingface format (a sketch of the conversion command is given at the end of this section).
-
-> **Note:** After conversion, the directory should look like:
-
- .
- └── ./{path_to_llama_weights}/
- ├── config.json
- ├── generation_config.json
- ├── pytorch_model-00001-of-00002.bin
- ├── pytorch_model-00002-of-00002.bin
- ├── pytorch_model.bin.index.json
- ├── special_tokens_map.json
- ├── tokenizer.model
- └── tokenizer_config.json
-
-`{path_to_llama_weights}` is where you store the checkpoints.
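-
-If you are unsure what that conversion step looks like, recent `transformers` releases ship a conversion script; a sketch of the call is shown below (the exact module path and flags can differ between releases, and `{path_to_downloaded_llama}` is a placeholder for wherever you saved Meta's original files):
-
-```bash
-python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir {path_to_downloaded_llama} --model_size 7B --output_dir {path_to_llama_weights}
-```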
-
-
-## 1.2. Obtain the Delta Weights of Vicuna:
-
-Then, you should download the delta weights of Vicuna provided by the original authors. You can find the corresponding links to 7B/13B Vicuna models in the table below.
-
-|**Model Size**|**Delta Weights Address**|**Version**|
-|:-------------:|:-------------:|:-------------:|
-|7B|[[Link]](https://huggingface.co/lmsys/vicuna-7b-delta-v0)|0|
-|13B|[[Link]](https://huggingface.co/lmsys/vicuna-13b-delta-v0)|0|
-
-
-
-> **Note:** After downloading, the directory should look like:
-
- .
- └── ./{path_to_delta_vicuna_weights}/
- ├── config.json
- ├── generation_config.json
- ├── pytorch_model-00001-of-00002.bin
- ├── pytorch_model-00002-of-00002.bin
- ├── pytorch_model.bin.index.json
- ├── special_tokens_map.json
- ├── tokenizer.model
- └── tokenizer_config.json
-
-`{path_to_delta_vicuna_weights}` is where you store the delta weights of Vicuna.
-
-## 1.3. Combine the Weights:
-
-When the two sets of weights are ready, you can combine them using tools from the Vicuna team.
-
-First, install the required library.
-```bash
-pip install git+https://github.com/lm-sys/FastChat.git@v0.1.10
-```
-
-Then, run the following command.
-```bash
-python -m fastchat.model.apply_delta --base {path_to_llama_weights} --target ./vicuna_ckpt/7b_v0/ --delta {path_to_delta_vicuna_weights}
-```
-
-> **Note:** Now, the final weights are ready as:
-
- .
- └── ./vicuna_ckpt/7b_v0/
- ├── config.json
- ├── generation_config.json
- ├── pytorch_model-00001-of-00002.bin
- ├── pytorch_model-00002-of-00002.bin
- ├── pytorch_model.bin.index.json
- ├── special_tokens_map.json
- ├── tokenizer.model
- └── tokenizer_config.json
-
-
diff --git a/spaces/GXSA/bingo/src/pages/api/blob.ts b/spaces/GXSA/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_lingunet_lat.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_lingunet_lat.py
deleted file mode 100644
index 74e9006ecd5eac1df433085427443ae15489734b..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_lingunet_lat.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-import cliport.utils.utils as utils
-from cliport.models.resnet import IdentityBlock, ConvBlock
-from cliport.models.core.unet import Up
-from cliport.models.core.clip import build_model, load_clip, tokenize
-
-from cliport.models.core import fusion
-from cliport.models.core.fusion import FusionConvLat
-
-
-class CLIPLingUNetLat(nn.Module):
- """ CLIP RN50 with U-Net skip connections and lateral connections """
-
- def __init__(self, input_shape, output_dim, cfg, device, preprocess):
- super(CLIPLingUNetLat, self).__init__()
- self.input_shape = input_shape
- self.output_dim = output_dim
- self.input_dim = 2048 # penultimate layer channel-size of CLIP-RN50
- self.cfg = cfg
- self.device = device
- self.batchnorm = self.cfg['train']['batchnorm']
- self.lang_fusion_type = self.cfg['train']['lang_fusion_type']
- self.bilinear = True
- self.up_factor = 2 if self.bilinear else 1
- self.preprocess = preprocess
-
- self._load_clip()
- self._build_decoder()
-
- def _load_clip(self):
- model, _ = load_clip("RN50", device=self.device)
- self.clip_rn50 = build_model(model.state_dict()).to(self.device)
- del model
-
- def _build_decoder(self):
- # language
- self.lang_fuser1 = fusion.names[self.lang_fusion_type](input_dim=self.input_dim // 2)
- self.lang_fuser2 = fusion.names[self.lang_fusion_type](input_dim=self.input_dim // 4)
- self.lang_fuser3 = fusion.names[self.lang_fusion_type](input_dim=self.input_dim // 8)
-
- self.proj_input_dim = 512 if 'word' in self.lang_fusion_type else 1024
- self.lang_proj1 = nn.Linear(self.proj_input_dim, 1024)
- self.lang_proj2 = nn.Linear(self.proj_input_dim, 512)
- self.lang_proj3 = nn.Linear(self.proj_input_dim, 256)
-
- # vision
- # self.conv1 = nn.Sequential(
- # nn.Conv2d(self.input_dim, 1024, kernel_size=3, stride=1, padding=1, bias=False),
- # nn.ReLU(True)
- # )
-
- # self.up1 = Up(2048, 1024 // self.up_factor, self.bilinear)
- # self.lat_fusion1 = FusionConvLat(input_dim=1024+512, output_dim=512)
-
- # self.up2 = Up(1024, 512 // self.up_factor, self.bilinear)
- # self.lat_fusion2 = FusionConvLat(input_dim=512+256, output_dim=256)
-
- self.conv1 = nn.Sequential(
- nn.Conv2d(self.input_dim, 256, kernel_size=3, stride=1, padding=1, bias=False),
- nn.ReLU(True)
- )
-
- self.up3 = Up(512, 256 // self.up_factor, self.bilinear)
- self.lat_fusion3 = FusionConvLat(input_dim=256+128, output_dim=128)
-
- self.layer1 = nn.Sequential(
- ConvBlock(128, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- IdentityBlock(64, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- nn.UpsamplingBilinear2d(scale_factor=2),
- )
- self.lat_fusion4 = FusionConvLat(input_dim=128+64, output_dim=64)
-
- self.layer2 = nn.Sequential(
- ConvBlock(64, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- IdentityBlock(32, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- nn.UpsamplingBilinear2d(scale_factor=2),
- )
- self.lat_fusion5 = FusionConvLat(input_dim=64+32, output_dim=32)
-
- self.layer3 = nn.Sequential(
- ConvBlock(32, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- IdentityBlock(16, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm),
- nn.UpsamplingBilinear2d(scale_factor=2),
- )
- self.lat_fusion6 = FusionConvLat(input_dim=32+16, output_dim=16)
-
- self.conv2 = nn.Sequential(
- nn.Conv2d(16, self.output_dim, kernel_size=1)
- )
-
- def encode_image(self, img):
- with torch.no_grad():
- img_encoding, img_im = self.clip_rn50.visual.prepool_im(img)
- return img_encoding, img_im
-
- def encode_text(self, x):
- with torch.no_grad():
- tokens = tokenize(x).to(self.device)
- text_feat, text_emb = self.clip_rn50.encode_text_with_embeddings(tokens)
-
- text_mask = torch.where(tokens==0, tokens, 1) # [1, max_token_len]
- return text_feat, text_emb, text_mask
-
- def forward(self, x, lat, l):
- x = self.preprocess(x, dist='clip')
-
- in_type = x.dtype
- in_shape = x.shape
- x = x[:,:3] # select RGB
- x, im = self.encode_image(x)
- x = x.to(in_type)
-
- l_enc, l_emb, l_mask = self.encode_text(l)
- l_input = l_emb if 'word' in self.lang_fusion_type else l_enc
- l_input = l_input.to(dtype=x.dtype)
-
- assert x.shape[1] == self.input_dim
- x = self.conv1(x)
-
- # x = self.lang_fuser1(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj1)
- # x = self.up1(x, im[-2])
- # x = self.lat_fusion1(x, lat[-6])
-
- # x = self.lang_fuser2(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj2)
- # x = self.up2(x, im[-3])
- # x = self.lat_fusion2(x, lat[-5])
- if (x.shape[0] > 8) and ((x.shape[0] % 36) == 0):
- l_input = l_input.repeat_interleave(36, dim=0)
-
- x = self.lang_fuser3(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj3)
- x = self.up3(x, im[-4])
- x = self.lat_fusion3(x, lat[-4])
-
- x = self.layer1(x)
- x = self.lat_fusion4(x, lat[-3])
-
- x = self.layer2(x)
- x = self.lat_fusion5(x, lat[-2])
-
- x = self.layer3(x)
- x = self.lat_fusion6(x, lat[-1])
-
- x = self.conv2(x)
-
- x = F.interpolate(x, size=(in_shape[-2], in_shape[-1]), mode='bilinear')
- return x
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train20_gptmixcliport5_new_pickplace_demo10.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train20_gptmixcliport5_new_pickplace_demo10.sh
deleted file mode 100644
index 05dc4c65d971c214036ba642772e0f639607fd6b..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train20_gptmixcliport5_new_pickplace_demo10.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-STEPS=${1-'50000'}
-
-
-sh scripts/traintest_scripts/train_test_multi_task_goal_demo10.sh data \
- "[stack-block-pyramid,align-box-corner,put-block-in-bowl,packing-boxes,block-insertion,color_linked_ball_bowl_ordering,color_specific_container_fill,insert_blocks_into_fixture,sort_insert_color_coordinated_blocks,color_ordered_blocks_on_pallet,color-coordinated-sphere-insertion,rainbow-stack,put-block-in-bowl,vertical-insertion-blocks,stack-blocks-in-container,'Four-corner-pyramid-challenge','create-pyramid-with-color-coded-ells','align-balls-in-colored-zones','construct-corner-blocks','color-linked-ball-bowl-ordering','create-pyramid-blocks-and-container','color-specific-container-fill','color-ordered-container-arrangement','pyramid-blocks-assemble']" \
- "[stack-block-pyramid,put-block-in-bowl,align-box-corner,packing-boxes,block-insertion]" \
- gpt10_mixcliport5_task_new
diff --git a/spaces/GeorgeOrville/bingo/cloudflare/worker.js b/spaces/GeorgeOrville/bingo/cloudflare/worker.js
deleted file mode 100644
index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/cloudflare/worker.js
+++ /dev/null
@@ -1,18 +0,0 @@
-const TRAGET_HOST='hf4all-bingo.hf.space' // Replace this with your own domain; you can find it under Settings > Site domain.
-
-export default {
- async fetch(request) {
- const uri = new URL(request.url);
- if (uri.protocol === 'http:') {
- uri.protocol = 'https:';
- return new Response('', {
- status: 301,
- headers: {
- location: uri.toString(),
- },
- })
- }
- uri.host = TRAGET_HOST
- return fetch(new Request(uri.toString(), request));
- },
-};
diff --git a/spaces/GilbertClaus/VideoCutter/app.py b/spaces/GilbertClaus/VideoCutter/app.py
deleted file mode 100644
index f4ae2705538839d94e917e276a8123bce6ec0b97..0000000000000000000000000000000000000000
--- a/spaces/GilbertClaus/VideoCutter/app.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import os
-import streamlit as st
-from streamlit_option_menu import option_menu
-from youtube import youtube, download_youtube
-from pornhub import pornhub
-from iwara import iwara
-# from megaDL import mega_dl
-from rule34 import rule34
-from paipancon import paipancon
-from trailer import trailer
-from others import *
-
-# Sidebar navigation
-options = ['Youtube', 'Pornhub', 'Iwara', 'Mega', 'Rule34', 'Paipancon', 'Trailer']
-with st.sidebar:
- selected = option_menu("Video Downloader", options,
- icons=['play', 'fire', 'star', 'moon','gear', 'house', 'lightning'], menu_icon="cast", default_index=0)
-
-functions = [youtube, pornhub, iwara, download_youtube, rule34, paipancon, trailer]
-
-if selected:
- index = options.index(selected)
- fungsi = functions[index]
- st.title(f"{selected} Video Downloader and Cutter")
- st.write(f"Download dan potong sebagian video {selected}.")
- if selected == 'Youtube' or selected == 'Pornhub':
- video_link = st.text_input("Link Video", value='https://www.youtube.com/watch?v=ZGltvcmVSAk')
- resolution = st.selectbox("Pilih Resolusi", (360, 480, 720), 2)
- elif selected == 'Iwara' or selected == 'Mega':
- name = st.text_input("Nama File")
- video_link = st.text_input("Link Video")
- else:
- video_link = st.text_input("Link Video")
-
- choice = st.radio('Pilih Proses:', ['Potong Video', 'Compress Video', 'Cuma Download'], 2)
-
- if choice == 'Potong Video':
- start_time = st.text_input("Start Time", value='00:07:12.000')
- end_time = st.text_input("End Time", value='00:07:31.000')
-
- if st.button(f"Download and Cut {selected}"):
- if selected == 'Youtube' or selected == 'Pornhub':
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, resolution)
- elif selected == 'Iwara' or selected == 'Mega':
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, name)
- else:
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link)
- video_file = cut_video(video_file, judul_video, start_time, end_time)
- file_size = os.path.getsize(video_file)
- session(video_info, video_file, thumbnail_file, choice)
- st.text_input(f"Video '{judul_video}' setelah diproses:", convert_size(file_size))
-
- elif choice == 'Compress Video':
- compress = st.selectbox("Pilih Resolusi Compress", (360, 480, 720), 2)
-
- if st.button(f"Download and Compress {selected}"):
- if selected == 'Youtube' or selected == 'Pornhub':
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, resolution)
- elif selected == 'Iwara' or selected == 'Mega':
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, name)
- else:
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link)
- video_file = convert_videos(compress, video_file)
- file_size = os.path.getsize(video_file)
- session(video_info, video_file, thumbnail_file, choice)
- st.text_input(f"Video '{judul_video}' setelah diproses:", convert_size(file_size))
-
- else:
- if st.button(f"Download {selected}"):
- if selected == 'Youtube' or selected == 'Pornhub':
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, resolution)
- elif selected == 'Iwara' or selected == 'Mega':
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, name)
- else:
- video_file, judul_video, video_info, thumbnail_file = fungsi(video_link)
- file_size = os.path.getsize(video_file)
- session(video_info, video_file, thumbnail_file, choice)
- st.text_input(f"Video '{judul_video}' setelah diproses:", convert_size(file_size))
diff --git a/spaces/GilbertClaus/VideoCutter/iwara.py b/spaces/GilbertClaus/VideoCutter/iwara.py
deleted file mode 100644
index 5ea61228cfffe4c9215f1394c4db518d3c86e571..0000000000000000000000000000000000000000
--- a/spaces/GilbertClaus/VideoCutter/iwara.py
+++ /dev/null
@@ -1,376 +0,0 @@
-import requests, hashlib, os
-from others import *
-
-api_url = 'https://api.iwara.tv'
-file_url = 'https://files.iwara.tv'
-
-class BearerAuth(requests.auth.AuthBase):
- """Bearer Authentication"""
- def __init__(self, token):
- self.token = token
-
- def __call__(self, r):
- r.headers['Authorization'] = 'Bearer ' + self.token
- return r
-
-class ApiClient:
- def __init__(self, email, password):
- self.email = email
- self.password = password
-
- # self.headers = {
- # 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36',
- # 'X-Version': 's'
- # }
-
- # API
- self.api_url = api_url
- self.file_url = file_url
- self.timeout = 30
- # self.max_retries = 5
- self.download_timeout = 300
- self.token = None
-
- # HTML
- # self.html_url = html_url
-
- # Cloudscraper
- # self.scraper = cloudscraper.create_scraper(browser={'browser': 'firefox','platform': 'windows','mobile': False},
- # # interpreter = 'nodejs'
- # )
- # Requests-html
- # self.session = HTMLSession()
-
- def login(self) -> requests.Response:
- url = self.api_url + '/user/login'
- json = {'email': self.email, 'password': self.password}
- r = requests.post(url, json=json, timeout=self.timeout)
- try:
- self.token = r.json()['token']
- print('API Login success')
- except:
- print('API Login failed')
-
- # try:
- # # Cloudscraper
- # # r = self.scraper.post(url, json=json, headers=self.headers, timeout=self.timeout)
-
- # # Requests-html
- # r = self.session.post(url, json=json, headers=self.headers, timeout=self.timeout)
- # except:
- # print('BS4 Login failed')
-
- return r
-
- # limit query is not working
- def get_videos(self, sort = 'date', rating = 'all', page = 0, limit = 32, subscribed = False) -> requests.Response:
- """# Get new videos from iwara.tv
- - sort: date, trending, popularity, views, likes
- - rating: all, general, ecchi
- """
- url = self.api_url + '/videos'
- params = {'sort': sort,
- 'rating': rating,
- 'page': page,
- 'limit': limit,
- 'subscribed': 'true' if subscribed else 'false',
- }
- if self.token is None:
- r = requests.get(url, params=params, timeout=self.timeout)
- else:
-
- # Verbose Debug
- # request = requests.Request('GET', url, params=params, auth=BearerAuth(self.token))
- # print(request.prepare().method, request.prepare().url, request.prepare().headers, request.prepare().body, sep='\n')
- # r = requests.Session().send(request.prepare())
-
- r = requests.get(url, params=params, auth=BearerAuth(self.token), timeout=self.timeout)
-
- #Debug
- print("[DEBUG] get_videos response:", r)
-
- return r
-
- def get_video(self, video_id) -> requests.Response:
- """# Get video info from iwara.tv
- """
- url = self.api_url + '/video/' + video_id
-
- if self.token is None:
- r = requests.get(url, timeout=self.timeout)
- else:
- r = requests.get(url, auth=BearerAuth(self.token), timeout=self.timeout)
-
- #Debug
- print("[DEBUG] get_video response:", r)
-
- return r
-
- def download_video_thumbnail(self, video_id) -> str:
- """# Download video thumbnail from iwara.tv
- """
- video = self.get_video(video_id).json()
-
- file_id = video['file']['id']
- thumbnail_id = video['thumbnail']
-
- url = self.file_url + '/image/original/' + file_id + '/thumbnail-{:02d}.jpg'.format(thumbnail_id)
-
- thumbnail_file_name = video_id + '.jpg'
-
- if (os.path.exists(thumbnail_file_name)):
- print(f"Video ID {video_id} thumbnail already downloaded, skipped downloading. ")
- return thumbnail_file_name
-
- print(f"Downloading thumbnail for video ID: {video_id} ...")
- with open(thumbnail_file_name, "wb") as f:
- for chunk in requests.get(url).iter_content(chunk_size=1024):
- if chunk:
- f.write(chunk)
- f.flush()
-
- return thumbnail_file_name
-
- def download_video(self, video_id) -> str:
- """# Download video from iwara.tv
- """
-
- # html
- # url = self.html_url + '/video/' + video_id
-
- # Cloudscraer
- # html = self.scraper.get(url, auth=BearerAuth(self.token), timeout=self.timeout).text
-
- # Requests-html
- # html = self.session.get(url, auth=BearerAuth(self.token), timeout=self.timeout).text
-
- # print(html)
- # html = BeautifulSoup(, 'html.parser')
- # downloadLink = html.find('div', class_='dropdown_content')
- # print(downloadLink)
-
- # API
- try:
- video = self.get_video(video_id).json()
- except Exception as e:
- raise Exception(f"Failed to get video info for video ID: {video_id}, error: {e}")
-
- #Debug
- print(video)
-
- url = video['fileUrl']
- file_id = video['file']['id']
- expires = url.split('/')[4].split('?')[1].split('&')[0].split('=')[1]
-
- # IMPORTANT: This might change in the future.
- SHA_postfix = "_5nFp9kmbNnHdAFhaqMvt"
-
- SHA_key = file_id + "_" + expires + SHA_postfix
- hash = hashlib.sha1(SHA_key.encode('utf-8')).hexdigest()
-
- headers = {"X-Version": hash}
-
- resources = requests.get(url, headers=headers, auth=BearerAuth(self.token), timeout=self.timeout).json()
-
- #Debug
- print(resources)
-
- resources_by_quality = [None for i in range(10)]
-
- for resource in resources:
- if resource['name'] == 'Source':
- resources_by_quality[0] = resource
- # elif resource['name'] == '1080':
- # resources_by_quality[1] = resource
- # elif resource['name'] == '720':
- # resources_by_quality[2] = resource
- # elif resource['name'] == '480':
- # resources_by_quality[3] = resource
- # elif resource['name'] == '540':
- # resources_by_quality[4] = resource
- # elif resource['name'] == '360':
- # resources_by_quality[5] = resource
-
- for resource in resources_by_quality:
- if resource is not None:
- #Debug
- print(resource)
-
- download_link = "https:" + resource['src']['download']
- file_type = resource['type'].split('/')[1]
-
- video_file_name = video_id + '.' + file_type
-
- if (os.path.exists(video_file_name)):
- print(f"Video ID {video_id} Already downloaded, skipped downloading. ")
- return video_file_name
-
- print(f"Downloading video ID: {video_id} ...")
- try:
- with open(video_file_name, "wb") as f:
- for chunk in requests.get(download_link).iter_content(chunk_size=1024):
- if chunk:
- f.write(chunk)
- f.flush()
- return video_file_name
- except Exception as e:
- os.remove(video_file_name)
- raise Exception(f"Failed to download video ID: {video_id}, error: {e}")
-
-
- raise Exception("No video with Source quality found")
-
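-# ApiClient above wires together login, video metadata lookup and the hashed X-Version
-# header, but it is never instantiated in this file. The sketch below is an illustration
-# only and not part of the original module: the credentials, video id and the helper
-# name are placeholders.
-def _example_api_usage():
-    client = ApiClient("you@example.com", "your-password")
-    client.login()  # obtains the bearer token reused by the calls below
-    video_path = client.download_video("some-video-id")  # writes <id>.mp4 to the working directory
-    thumb_path = client.download_video_thumbnail("some-video-id")  # writes <id>.jpg
-    return video_path, thumb_path
-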
-# -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-
-
-# -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-### download video from iwara.tv
-### usage: python iwara [url]
-### by AngelBottomless @ github
-# download from iwara page
-import requests
-# use selenium to get video url
-from selenium import webdriver
-import argparse
-
-def download_video(url):
- # save video to local
- filename = url.split('/')[-1] + '.mp4'
- # get video
- driver = run_webdriver(url)
- click_accept(driver)
- driver.implicitly_wait(2)
- click_play(driver)
- url = find_video_url(driver)
- # download video
- r = requests.get(url)
- with open(filename, 'wb') as f:
- f.write(r.content)
- # close driver
- driver.close()
-
-def download_with_retry(url, retry=3):
- # retry download
- for _ in range(retry):
- try:
- download_video(url)
- return True
- except:
- print('download failed, retrying...')
- continue
- return False
-
-def run_webdriver(url):
- # use selenium to get video url
- # mute chrome
- chrome_options = webdriver.ChromeOptions()
- chrome_options.add_argument("--mute-audio")
- # run webdriver
- driver = webdriver.Chrome(options=chrome_options)
- driver.get(url)
- driver.implicitly_wait(4)
- return driver
-
-def click_accept(driver):
- # xpath = /html/body/div[3]/div/div[2]/button[1]
- button = driver.find_element('xpath', '/html/body/div[3]/div/div[2]/button[1]')
- button.click()
-def click_play(driver):
- # xpath = //*[@id="vjs_video_3"]/button
- button = driver.find_element('xpath', '//*[@id="vjs_video_3"]/button')
- button.click()
-
-def find_video_url(driver):
- # xpath //*[@id="vjs_video_3_html5_api"]
- #access 'src'
- video = driver.find_element('xpath', '//*[@id="vjs_video_3_html5_api"]')
- video_url = video.get_attribute('src')
- return video_url
-
-def track_clipboard():
- import pyperclip
- import time
- import subprocess
- failed_urls = []
- success_urls = set()
- print('tracking clipboard...')
- # loop to track clipboard
- # if clipboard contains url, download video
- # track every 1 second
- previous = ''
- # expect KeyboardInterrupt and return 0
- try:
- while True:
- # get clipboard
- clipboard = pyperclip.paste()
- if clipboard != previous:
- # if clipboard contains url
- if 'iwara.tv' in clipboard:
- print('url detected, downloading...')
- # use subprocess to download video in background
- # ['python', '-m', 'iwara', clipboard]
- subprocess.Popen(['python', '-m', 'iwara', clipboard])
- print('download complete')
- previous = clipboard
- time.sleep(1)
- except KeyboardInterrupt:
- print('exiting...')
- return 0
-
-if __name__ == '__main__':
- failed_urls = []
- success_urls = set()
- import sys
- # parse args
- parser = argparse.ArgumentParser()
- # track clipboard option, when 'track' is used, url is not required
- parser.add_argument('-t', '--track', action='store_true', help='track clipboard for iwara url')
- # add url argument, if not specified, use ''
- parser.add_argument('url', nargs='?', default='', help='iwara url')
- args = parser.parse_args()
- # download video
- if args.track:
- track_clipboard()
- elif 'iwara.tv' in args.url:
- result = download_with_retry(args.url)
- if not result:
- print('download failed')
- failed_urls.append(args.url)
- else:
- print('download complete')
- success_urls.add(args.url)
- if len(failed_urls) > 0:
- print('failed urls:')
- for url in failed_urls:
- print(url)
- # write in ./failed.txt
- with open('failed.txt', 'a') as f:
- f.write(url + '\n')
- sys.exit(1)
- else:
- print('invalid url')
- sys.exit(1)
-
-# -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-def iwara(video_url, judul):
- # Set the path to the thumbnail directory
- directory = "/home/user/app/Iwara"
- if not os.path.exists(directory):
- os.makedirs(directory)
-
- judul = judul.replace('_',' ').title().replace('Mmd','MMD').replace('/',' ').replace('Nikke','NIKKE').replace('Fate','FATE').replace('】','】 ').replace(' ', ' ')
- thumbnail_url = 'https://saradahentai.com/wp-content/uploads/2023/03/Live-Footage-of-Ashley-Graham-Captured-by-fugtrup-Resident-Evil-4.jpg'
-
- thumbnail_file = download_file(thumbnail_url, judul, directory)
- video_file = download_file(video_url, judul, directory)
-
- # Mengkonversi video
- video_file = convert_videos(720, video_file)
-
-
- video_info = f"Judul: {judul}\n"
-
- return video_file, judul, video_info, thumbnail_file
\ No newline at end of file
diff --git a/spaces/Gmq-x/gpt-academic/request_llm/bridge_tgui.py b/spaces/Gmq-x/gpt-academic/request_llm/bridge_tgui.py
deleted file mode 100644
index fcf852f0474892bd179843ece3f4a83110bd7756..0000000000000000000000000000000000000000
--- a/spaces/Gmq-x/gpt-academic/request_llm/bridge_tgui.py
+++ /dev/null
@@ -1,171 +0,0 @@
-'''
-Contributed by SagsMug. Modified by binary-husky
-https://github.com/oobabooga/text-generation-webui/pull/175
-'''
-
-import asyncio
-import json
-import random
-import string
-import websockets
-import logging
-import time
-import threading
-import importlib
-from toolbox import get_conf, update_ui
-
-
-def random_hash():
- letters = string.ascii_lowercase + string.digits
- return ''.join(random.choice(letters) for i in range(9))
-
-async def run(context, max_token, temperature, top_p, addr, port):
- params = {
- 'max_new_tokens': max_token,
- 'do_sample': True,
- 'temperature': temperature,
- 'top_p': top_p,
- 'typical_p': 1,
- 'repetition_penalty': 1.05,
- 'encoder_repetition_penalty': 1.0,
- 'top_k': 0,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': True,
- 'seed': -1,
- }
- session = random_hash()
-
- async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket:
- while content := json.loads(await websocket.recv()):
- #Python3.10 syntax, replace with if elif on older
- if content["msg"] == "send_hash":
- await websocket.send(json.dumps({
- "session_hash": session,
- "fn_index": 12
- }))
- elif content["msg"] == "estimation":
- pass
- elif content["msg"] == "send_data":
- await websocket.send(json.dumps({
- "session_hash": session,
- "fn_index": 12,
- "data": [
- context,
- params['max_new_tokens'],
- params['do_sample'],
- params['temperature'],
- params['top_p'],
- params['typical_p'],
- params['repetition_penalty'],
- params['encoder_repetition_penalty'],
- params['top_k'],
- params['min_length'],
- params['no_repeat_ngram_size'],
- params['num_beams'],
- params['penalty_alpha'],
- params['length_penalty'],
- params['early_stopping'],
- params['seed'],
- ]
- }))
- elif content["msg"] == "process_starts":
- pass
- elif content["msg"] in ["process_generating", "process_completed"]:
- yield content["output"]["data"][0]
- # You can search for your desired end indicator and
- # stop generation by closing the websocket here
- if (content["msg"] == "process_completed"):
- break
-
-
-
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- Send the query to chatGPT and fetch the output as a stream.
- Used for the basic chat functionality.
- inputs is the input of this query.
- top_p and temperature are chatGPT's internal tuning parameters.
- history is the list of previous messages (note that if either inputs or history is too long, a token-overflow error will be raised).
- chatbot is the conversation list displayed in the WebUI; modify it and then yield it to update the chat interface directly.
- additional_fn indicates which button was clicked; the buttons are defined in functional.py.
- """
- if additional_fn is not None:
- import core_functional
- importlib.reload(core_functional) # hot-reload the prompt definitions
- core_functional = core_functional.get_core_functions()
- if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
- inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
- raw_input = "What I would like to say is the following: " + inputs
- history.extend([inputs, ""])
- chatbot.append([inputs, ""])
- yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # refresh the UI
-
- prompt = raw_input
- tgui_say = ""
-
- model_name, addr_port = llm_kwargs['llm_model'].split('@')
- assert ':' in addr_port, "LLM_MODEL 格式不正确!" + llm_kwargs['llm_model']
- addr, port = addr_port.split(':')
-
-
- mutable = ["", time.time()]
- def run_coorotine(mutable):
- async def get_result(mutable):
- # "tgui:galactica-1.3b@localhost:7860"
-
- async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
- temperature=llm_kwargs['temperature'],
- top_p=llm_kwargs['top_p'], addr=addr, port=port):
- print(response[len(mutable[0]):])
- mutable[0] = response
- if (time.time() - mutable[1]) > 3:
- print('exit when no listener')
- break
- asyncio.run(get_result(mutable))
-
- thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True)
- thread_listen.start()
-
- while thread_listen.is_alive():
- time.sleep(1)
- mutable[1] = time.time()
- # Print intermediate steps
- if tgui_say != mutable[0]:
- tgui_say = mutable[0]
- history[-1] = tgui_say
- chatbot[-1] = (history[-2], history[-1])
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False):
- raw_input = "What I would like to say is the following: " + inputs
- prompt = raw_input
- tgui_say = ""
- model_name, addr_port = llm_kwargs['llm_model'].split('@')
- assert ':' in addr_port, "LLM_MODEL 格式不正确!" + llm_kwargs['llm_model']
- addr, port = addr_port.split(':')
-
-
- def run_coorotine(observe_window):
- async def get_result(observe_window):
- async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
- temperature=llm_kwargs['temperature'],
- top_p=llm_kwargs['top_p'], addr=addr, port=port):
- print(response[len(observe_window[0]):])
- observe_window[0] = response
- if (time.time() - observe_window[1]) > 5:
- print('exit when no listener')
- break
- asyncio.run(get_result(observe_window))
- thread_listen = threading.Thread(target=run_coorotine, args=(observe_window,))
- thread_listen.start()
- return observe_window[0]
diff --git a/spaces/Gradio-Blocks/HairCLIP/README.md b/spaces/Gradio-Blocks/HairCLIP/README.md
deleted file mode 100644
index 9c9da54e2e5933a67a6e566ef48e7ad4852d107e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/HairCLIP/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: HairCLIP
-emoji: ⚡
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-suggested_hardware: t4-small
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
-
-https://arxiv.org/abs/2112.05142
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py
deleted file mode 100644
index 5ff05aa595399d77ee51552c243e489f395a820e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py
+++ /dev/null
@@ -1,8 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_hr18.py', '../_base_/datasets/pascal_context.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=60),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/drop.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/drop.py
deleted file mode 100644
index 4520b0ff407d2a95a864086bdbca0065f222aa63..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/drop.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""Modified from https://github.com/rwightman/pytorch-image-
-models/blob/master/timm/models/layers/drop.py."""
-
-import torch
-from torch import nn
-
-
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of
- residual blocks).
-
- Args:
- drop_prob (float): Drop rate for paths of model. Dropout rate has
- to be between 0 and 1. Default: 0.
- """
-
- def __init__(self, drop_prob=0.):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
- self.keep_prob = 1 - drop_prob
-
- def forward(self, x):
- if self.drop_prob == 0. or not self.training:
- return x
- shape = (x.shape[0], ) + (1, ) * (
- x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets
- random_tensor = self.keep_prob + torch.rand(
- shape, dtype=x.dtype, device=x.device)
- random_tensor.floor_() # binarize
- output = x.div(self.keep_prob) * random_tensor
- return output
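-
-# Minimal usage sketch (illustration only, not part of the original file): DropPath is meant
-# to wrap the residual branch of a block so that, during training, individual samples skip
-# that branch while the identity path is kept. The toy block below is a placeholder.
-class _ToyResidualBlock(nn.Module):
-    def __init__(self, dim, drop_prob=0.1):
-        super(_ToyResidualBlock, self).__init__()
-        self.fc = nn.Linear(dim, dim)
-        self.drop_path = DropPath(drop_prob)
-
-    def forward(self, x):
-        # the identity path always survives; the fc branch is stochastically dropped per sample
-        return x + self.drop_path(self.fc(x))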
diff --git a/spaces/GuXiaoBei/wechat-chatbot/channel/wechat/wechat_channel.py b/spaces/GuXiaoBei/wechat-chatbot/channel/wechat/wechat_channel.py
deleted file mode 100644
index b800fc43753fad893a485eb214cc9602a7f69af9..0000000000000000000000000000000000000000
--- a/spaces/GuXiaoBei/wechat-chatbot/channel/wechat/wechat_channel.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# encoding:utf-8
-
-"""
-wechat channel
-"""
-import itchat
-import json
-from itchat.content import *
-from channel.channel import Channel
-from concurrent.futures import ThreadPoolExecutor
-from common.log import logger
-from config import conf
-import requests
-import io
-
-thread_pool = ThreadPoolExecutor(max_workers=8)
-
-
-class WechatChannel(Channel):
-
- qrcode = b''
-
- newInstance=None
-
- def __init__(self):
- pass
-
- def startup(self):
- # login by scan QRCode
- newInstance = itchat.load_sync_itchat()
- self.newInstance = newInstance
-
- @newInstance.msg_register(TEXT)
- def handler_single_msg(msg):
- self.handle(msg)
- return None
-
- @newInstance.msg_register(TEXT, isGroupChat=True)
- def handler_group_msg(msg):
- self.handle_group(msg)
- return None
-
- newInstance.auto_login(qrCallback=self.qrCallback)
- # start message listener
- newInstance.run()
-
- def qrCallback(self, uuid, status, qrcode):
- self.qrcode = qrcode
-
- def getQrCode(self):
- return self.qrcode
-
- def handle(self, msg):
- logger.debug("[WX]receive msg: " + json.dumps(msg, ensure_ascii=False))
- from_user_id = msg['FromUserName']
- to_user_id = msg['ToUserName'] # recipient id
- other_user_id = msg['User']['UserName'] # the other party's id
- content = msg['Text']
- match_prefix = self.check_prefix(content, conf().get('single_chat_prefix'))
- if from_user_id == other_user_id and match_prefix is not None:
- # a friend sent a message to me
- if match_prefix != '':
- str_list = content.split(match_prefix, 1)
- if len(str_list) == 2:
- content = str_list[1].strip()
-
- img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
- if img_match_prefix:
- content = content.split(img_match_prefix, 1)[1].strip()
- thread_pool.submit(self._do_send_img, content, from_user_id)
- else:
- thread_pool.submit(self._do_send, content, from_user_id)
-
- elif to_user_id == other_user_id and match_prefix:
- # 自己给好友发送消息
- str_list = content.split(match_prefix, 1)
- if len(str_list) == 2:
- content = str_list[1].strip()
- img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
- if img_match_prefix:
- content = content.split(img_match_prefix, 1)[1].strip()
- thread_pool.submit(self._do_send_img, content, to_user_id)
- else:
- thread_pool.submit(self._do_send, content, to_user_id)
-
-
- def handle_group(self, msg):
- logger.debug("[WX]receive group msg: " + json.dumps(msg, ensure_ascii=False))
- group_name = msg['User'].get('NickName', None)
- group_id = msg['User'].get('UserName', None)
- if not group_name:
- return ""
- origin_content = msg['Content']
- content = msg['Content']
- content_list = content.split(' ', 1)
- context_special_list = content.split('\u2005', 1)
- if len(context_special_list) == 2:
- content = context_special_list[1]
- elif len(content_list) == 2:
- content = content_list[1]
-
- config = conf()
- match_prefix = (msg['IsAt'] and not config.get("group_at_off", False)) or self.check_prefix(origin_content, config.get('group_chat_prefix')) \
- or self.check_contain(origin_content, config.get('group_chat_keyword'))
- if ('ALL_GROUP' in config.get('group_name_white_list') or group_name in config.get('group_name_white_list') or self.check_contain(group_name, config.get('group_name_keyword_white_list'))) and match_prefix:
- img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
- if img_match_prefix:
- content = content.split(img_match_prefix, 1)[1].strip()
- thread_pool.submit(self._do_send_img, content, group_id)
- else:
- thread_pool.submit(self._do_send_group, content, msg)
-
- def send(self, msg, receiver):
- logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver))
- self.newInstance.send(msg, toUserName=receiver)
-
- def _do_send(self, query, reply_user_id):
- try:
- if not query:
- return
- context = dict()
- context['from_user_id'] = reply_user_id
- reply_text = super().build_reply_content(query, context)
- if reply_text:
- self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id)
- except Exception as e:
- logger.exception(e)
-
- def _do_send_img(self, query, reply_user_id):
- try:
- if not query:
- return
- context = dict()
- context['type'] = 'IMAGE_CREATE'
- img_url = super().build_reply_content(query, context)
- if not img_url:
- return
-
- # download the image
- pic_res = requests.get(img_url, stream=True)
- image_storage = io.BytesIO()
- for block in pic_res.iter_content(1024):
- image_storage.write(block)
- image_storage.seek(0)
-
- # send the image
- logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
- self.newInstance.send_image(image_storage, reply_user_id)
- except Exception as e:
- logger.exception(e)
-
- def _do_send_group(self, query, msg):
- if not query:
- return
- context = dict()
- context['from_user_id'] = msg['ActualUserName']
- reply_text = super().build_reply_content(query, context)
- if reply_text:
- reply_text = '@' + msg['ActualNickName'] + ' ' + reply_text.strip()
- self.send(conf().get("group_chat_reply_prefix", "") + reply_text, msg['User']['UserName'])
-
-
- def check_prefix(self, content, prefix_list):
- for prefix in prefix_list:
- if content.startswith(prefix):
- return prefix
- return None
-
-
- def check_contain(self, content, keyword_list):
- if not keyword_list:
- return None
- for ky in keyword_list:
- if content.find(ky) != -1:
- return True
- return None
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/registry.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/registry.py
deleted file mode 100644
index c46cf61c598be620d973391a92072eb781aac99e..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/registry.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# --------------------------------------------------------
-# Based on timm and MAE-priv code bases
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/BUPT-PRIV/MAE-priv
-# --------------------------------------------------------
-""" Model Registry
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-
-import fnmatch
-import re
-import sys
-from collections import defaultdict
-from copy import deepcopy
-
-__all__ = ['list_models', 'is_model', 'model_entrypoint', 'list_modules', 'is_model_in_modules',
- 'is_model_default_key', 'has_model_default_key', 'get_model_default_value', 'is_model_pretrained']
-
-_module_to_models = defaultdict(set) # dict of sets to check membership of model in module
-_model_to_module = {} # mapping of model names to module names
-_model_entrypoints = {} # mapping of model names to entrypoint fns
-_model_has_pretrained = set() # set of model names that have pretrained weight url present
-_model_default_cfgs = dict() # central repo for model default_cfgs
-
-
-def register_model(fn):
- # lookup containing module
- mod = sys.modules[fn.__module__]
- module_name_split = fn.__module__.split('.')
- module_name = module_name_split[-1] if len(module_name_split) else ''
-
- # add model to __all__ in module
- model_name = fn.__name__
- if hasattr(mod, '__all__'):
- mod.__all__.append(model_name)
- else:
- mod.__all__ = [model_name]
-
- # add entries to registry dict/sets
- _model_entrypoints[model_name] = fn
- _model_to_module[model_name] = module_name
- _module_to_models[module_name].add(model_name)
- has_pretrained = False # check if model has a pretrained url to allow filtering on this
- if hasattr(mod, 'default_cfgs') and model_name in mod.default_cfgs:
- # this will catch all models that have entrypoint matching cfg key, but miss any aliasing
- # entrypoints or non-matching combos
- has_pretrained = 'url' in mod.default_cfgs[model_name] and 'http' in mod.default_cfgs[model_name]['url']
- _model_default_cfgs[model_name] = deepcopy(mod.default_cfgs[model_name])
- if has_pretrained:
- _model_has_pretrained.add(model_name)
- return fn
-
-
-def _natural_key(string_):
- return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())]
-
-
-def list_models(filter='', module='', pretrained=False, exclude_filters='', name_matches_cfg=False):
- """ Return list of available model names, sorted alphabetically
-
- Args:
- filter (str) - Wildcard filter string that works with fnmatch
- module (str) - Limit model selection to a specific sub-module (ie 'gen_efficientnet')
- pretrained (bool) - Include only models with pretrained weights if True
- exclude_filters (str or list[str]) - Wildcard filters to exclude models after including them with filter
- name_matches_cfg (bool) - Include only models w/ model_name matching default_cfg name (excludes some aliases)
-
- Example:
- model_list('gluon_resnet*') -- returns all models starting with 'gluon_resnet'
- model_list('*resnext*', 'resnet') -- returns all models with 'resnext' in the 'resnet' module
- """
- if module:
- all_models = list(_module_to_models[module])
- else:
- all_models = _model_entrypoints.keys()
- if filter:
- models = []
- include_filters = filter if isinstance(filter, (tuple, list)) else [filter]
- for f in include_filters:
- include_models = fnmatch.filter(all_models, f) # include these models
- if len(include_models):
- models = set(models).union(include_models)
- else:
- models = all_models
- if exclude_filters:
- if not isinstance(exclude_filters, (tuple, list)):
- exclude_filters = [exclude_filters]
- for xf in exclude_filters:
- exclude_models = fnmatch.filter(models, xf) # exclude these models
- if len(exclude_models):
- models = set(models).difference(exclude_models)
- if pretrained:
- models = _model_has_pretrained.intersection(models)
- if name_matches_cfg:
- models = set(_model_default_cfgs).intersection(models)
- return list(sorted(models, key=_natural_key))
-
-
-def is_model(model_name):
- """ Check if a model name exists
- """
- return model_name in _model_entrypoints
-
-
-def model_entrypoint(model_name):
- """Fetch a model entrypoint for specified model name
- """
- return _model_entrypoints[model_name]
-
-
-def list_modules():
- """ Return list of module names that contain models / model entrypoints
- """
- modules = _module_to_models.keys()
- return list(sorted(modules))
-
-
-def is_model_in_modules(model_name, module_names):
- """Check if a model exists within a subset of modules
- Args:
- model_name (str) - name of model to check
- module_names (tuple, list, set) - names of modules to search in
- """
- assert isinstance(module_names, (tuple, list, set))
- return any(model_name in _module_to_models[n] for n in module_names)
-
-
-def has_model_default_key(model_name, cfg_key):
- """ Query model default_cfgs for existence of a specific key.
- """
- if model_name in _model_default_cfgs and cfg_key in _model_default_cfgs[model_name]:
- return True
- return False
-
-
-def is_model_default_key(model_name, cfg_key):
- """ Return truthy value for specified model default_cfg key, False if does not exist.
- """
- if model_name in _model_default_cfgs and _model_default_cfgs[model_name].get(cfg_key, False):
- return True
- return False
-
-
-def get_model_default_value(model_name, cfg_key):
- """ Get a specific model default_cfg value by key. None if it doesn't exist.
- """
- if model_name in _model_default_cfgs:
- return _model_default_cfgs[model_name].get(cfg_key, None)
- else:
- return None
-
-
-def is_model_pretrained(model_name):
- return model_name in _model_has_pretrained
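-
-# Minimal usage sketch (illustration only, not part of the original file): model factories
-# are registered with @register_model and later discovered through the query helpers.
-# The 'toy_net' name and this helper are placeholders.
-def _registry_usage_example():
-    @register_model
-    def toy_net(pretrained=False, **kwargs):
-        return None  # a real entrypoint would build and return the model here
-
-    assert is_model('toy_net') and 'toy_net' in list_models()
-    return model_entrypoint('toy_net')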
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/sampler.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/sampler.py
deleted file mode 100644
index 72f1b46da117403c7f6ddcc1877bd9d70ded962b..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/sampler.py
+++ /dev/null
@@ -1,134 +0,0 @@
-'''
-A sampler is just a list of integers listing the indexes of the
-inputs in a data set to sample. For reproducibility, the
-FixedRandomSubsetSampler uses a seeded prng to produce the same
-sequence always. FixedSubsetSampler is just a wrapper for an
-explicit list of integers.
-
-coordinate_sample solves another sampling problem: when testing
-convolutional outputs, we can avoid a data explosion by sampling
-random points of the feature map rather than the entire feature map.
-coordinate_sample does this in a deterministic way that is also
-resolution-independent.
-'''
-
-import numpy
-import random
-from torch.utils.data.sampler import Sampler
-
-class FixedSubsetSampler(Sampler):
- """Represents a fixed sequence of data set indices.
- Subsets can be created by specifying a subset of output indexes.
- """
- def __init__(self, samples):
- self.samples = samples
-
- def __iter__(self):
- return iter(self.samples)
-
- def __len__(self):
- return len(self.samples)
-
- def __getitem__(self, key):
- return self.samples[key]
-
- def subset(self, new_subset):
- return FixedSubsetSampler(self.dereference(new_subset))
-
- def dereference(self, indices):
- '''
- Translate output sample indices (small numbers indexing the sample)
- to input sample indices (larger number indexing the original full set)
- '''
- return [self.samples[i] for i in indices]
-
-
-class FixedRandomSubsetSampler(FixedSubsetSampler):
- """Samples a fixed number of samples from the dataset, deterministically.
- Arguments:
- data_source,
- sample_size,
- seed (optional)
- """
- def __init__(self, data_source, start=None, end=None, seed=1):
- rng = random.Random(seed)
- shuffled = list(range(len(data_source)))
- rng.shuffle(shuffled)
- self.data_source = data_source
- super(FixedRandomSubsetSampler, self).__init__(shuffled[start:end])
-
- def class_subset(self, class_filter):
- '''
- Returns only the subset matching the given rule.
- '''
- if isinstance(class_filter, int):
- rule = lambda d: d[1] == class_filter
- else:
- rule = class_filter
- return self.subset([i for i, j in enumerate(self.samples)
- if rule(self.data_source[j])])
-
-def coordinate_sample(shape, sample_size, seeds, grid=13, seed=1, flat=False):
- '''
- Returns len(seeds) sets of sample_size grid points within
- the shape given. If the shape dimensions are a multiple of 'grid',
- then sampled points within the same row will never be duplicated.
- '''
- if flat:
- sampind = numpy.zeros((len(seeds), sample_size), dtype=int)
- else:
- sampind = numpy.zeros((len(seeds), 2, sample_size), dtype=int)
- assert sample_size <= grid
- for j, seed in enumerate(seeds):
- rng = numpy.random.RandomState(seed)
- # Shuffle the 169 random grid squares, and pick :sample_size.
- square_count = grid ** len(shape)
- square = numpy.stack(numpy.unravel_index(
- rng.choice(square_count, square_count)[:sample_size],
- (grid,) * len(shape)))
- # Then add a random offset to each x, y and put in the range [0...1)
- # Notice this selects the same locations regardless of resolution.
- uniform = (square + rng.uniform(size=square.shape)) / grid
- # TODO: support affine scaling so that we can align receptive field
- # centers exactly when sampling neurons in different layers.
- coords = (uniform * numpy.array(shape)[:,None]).astype(int)
- # Now take sample_size without replacement. We do this in a way
- # such that if sample_size is decreased or increased up to 'grid',
- # the selected points become a subset, not totally different points.
- if flat:
- sampind[j] = numpy.ravel_multi_index(coords, dims=shape)
- else:
- sampind[j] = coords
- return sampind
-
-if __name__ == '__main__':
- from numpy.testing import assert_almost_equal
- # Test that coordinate_sample is deterministic, in-range, and scalable.
- assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102)),
- [[[14, 0, 12, 11, 8, 13, 11, 20, 7, 20],
- [ 9, 22, 7, 11, 23, 18, 21, 15, 2, 5]]])
- assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 102)),
- [[[ 7, 0, 6, 5, 4, 6, 5, 10, 3, 20 // 2],
- [ 4, 11, 3, 5, 11, 9, 10, 7, 1, 5 // 2]]])
- assert_almost_equal(coordinate_sample((13, 13), 10, range(100, 102),
- flat=True),
- [[ 8, 24, 67, 103, 87, 79, 138, 94, 98, 53],
- [ 95, 11, 81, 70, 63, 87, 75, 137, 40, 2+10*13]])
- assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 103),
- flat=True),
- [[ 95, 11, 81, 70, 63, 87, 75, 137, 40, 132],
- [ 0, 78, 114, 111, 66, 45, 72, 73, 79, 135]])
- assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102),
- flat=True),
- [[373, 22, 319, 297, 231, 356, 307, 535, 184, 5+20*26]])
- # Test FixedRandomSubsetSampler
- fss = FixedRandomSubsetSampler(range(10))
- assert len(fss) == 10
- assert_almost_equal(list(fss), [8, 0, 3, 4, 5, 2, 9, 6, 7, 1])
- fss = FixedRandomSubsetSampler(range(10), 3, 8)
- assert len(fss) == 5
- assert_almost_equal(list(fss), [4, 5, 2, 9, 6])
-    fss = FixedRandomSubsetSampler(
-        [(i, i % 3) for i in range(10)]).class_subset(class_filter=1)
- assert len(fss) == 3
- assert_almost_equal(list(fss), [4, 7, 1])
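
A minimal usage sketch of the deterministic-subset idea behind the deleted sampler.py above: a seeded shuffle of all indices, sliced to a fixed window, gives the same subset on every run and can be handed to a DataLoader. The TensorDataset, sizes, and seed are made-up stand-ins, and this relies on recent PyTorch versions accepting a plain list of indices as the sampler argument.

import random
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(100).float().unsqueeze(1))
rng = random.Random(1)                 # fixed seed -> deterministic order
indices = list(range(len(dataset)))
rng.shuffle(indices)
subset = indices[:10]                  # same 10 items on every run

loader = DataLoader(dataset, batch_size=5, sampler=subset)
for batch, in loader:
    print(batch.squeeze(1).tolist())
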
diff --git a/spaces/Hallucinate/demo/ldm/modules/losses/contperceptual.py b/spaces/Hallucinate/demo/ldm/modules/losses/contperceptual.py
deleted file mode 100644
index 672c1e32a1389def02461c0781339681060c540e..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/ldm/modules/losses/contperceptual.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import torch
-import torch.nn as nn
-
-from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no?
-
-
-class LPIPSWithDiscriminator(nn.Module):
- def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0,
- disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
- perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
- disc_loss="hinge"):
-
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- self.kl_weight = kl_weight
- self.pixel_weight = pixelloss_weight
- self.perceptual_loss = LPIPS().eval()
- self.perceptual_weight = perceptual_weight
- # output log variance
- self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init)
-
- self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
-
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(self, inputs, reconstructions, posteriors, optimizer_idx,
- global_step, last_layer=None, cond=None, split="train",
- weights=None):
- rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
- rec_loss = rec_loss + self.perceptual_weight * p_loss
-
- nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar
- weighted_nll_loss = nll_loss
- if weights is not None:
- weighted_nll_loss = weights*nll_loss
- weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0]
- nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- kl_loss = posteriors.kl()
- kl_loss = torch.sum(kl_loss) / kl_loss.shape[0]
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
- g_loss = -torch.mean(logits_fake)
-
- if self.disc_factor > 0.0:
- try:
- d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
- else:
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss
-
- log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(),
- "{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean()
- }
- return d_loss, log
-
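
calculate_adaptive_weight above rescales the GAN gradient so it has roughly the magnitude of the reconstruction gradient at the decoder's last layer. A self-contained sketch of that balancing rule; the one-parameter linear "decoder" and the two losses are toy stand-ins for the autoencoder terms.

import torch

torch.manual_seed(0)
last_layer = torch.nn.Parameter(torch.randn(4, 4))   # stands in for the decoder's last weight
x = torch.randn(2, 4)
recon = x @ last_layer

nll_loss = (recon - x).abs().mean()    # reconstruction term
g_loss = -recon.mean()                 # stand-in generator (GAN) term

nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]

# Same rule as calculate_adaptive_weight: ratio of gradient norms, clamped and detached.
d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
print(float(d_weight))
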
diff --git a/spaces/Hallucinate/demo/midas/blocks.py b/spaces/Hallucinate/demo/midas/blocks.py
deleted file mode 100644
index 6d87a00680bb6ed9a6d7c3043ea30a1e90361794..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/midas/blocks.py
+++ /dev/null
@@ -1,439 +0,0 @@
-import torch
-import torch.nn as nn
-
-from .backbones.beit import (
- _make_pretrained_beitl16_512,
- _make_pretrained_beitl16_384,
- _make_pretrained_beitb16_384,
- forward_beit,
-)
-from .backbones.swin_common import (
- forward_swin,
-)
-from .backbones.swin2 import (
- _make_pretrained_swin2l24_384,
- _make_pretrained_swin2b24_384,
- _make_pretrained_swin2t16_256,
-)
-from .backbones.swin import (
- _make_pretrained_swinl12_384,
-)
-from .backbones.levit import (
- _make_pretrained_levit_384,
- forward_levit,
-)
-from .backbones.vit import (
- _make_pretrained_vitb_rn50_384,
- _make_pretrained_vitl16_384,
- _make_pretrained_vitb16_384,
- forward_vit,
-)
-
-def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None,
- use_vit_only=False, use_readout="ignore", in_features=[96, 256, 512, 1024]):
- if backbone == "beitl16_512":
- pretrained = _make_pretrained_beitl16_512(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [256, 512, 1024, 1024], features, groups=groups, expand=expand
- ) # BEiT_512-L (backbone)
- elif backbone == "beitl16_384":
- pretrained = _make_pretrained_beitl16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [256, 512, 1024, 1024], features, groups=groups, expand=expand
- ) # BEiT_384-L (backbone)
- elif backbone == "beitb16_384":
- pretrained = _make_pretrained_beitb16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [96, 192, 384, 768], features, groups=groups, expand=expand
- ) # BEiT_384-B (backbone)
- elif backbone == "swin2l24_384":
- pretrained = _make_pretrained_swin2l24_384(
- use_pretrained, hooks=hooks
- )
- scratch = _make_scratch(
- [192, 384, 768, 1536], features, groups=groups, expand=expand
- ) # Swin2-L/12to24 (backbone)
- elif backbone == "swin2b24_384":
- pretrained = _make_pretrained_swin2b24_384(
- use_pretrained, hooks=hooks
- )
- scratch = _make_scratch(
- [128, 256, 512, 1024], features, groups=groups, expand=expand
- ) # Swin2-B/12to24 (backbone)
- elif backbone == "swin2t16_256":
- pretrained = _make_pretrained_swin2t16_256(
- use_pretrained, hooks=hooks
- )
- scratch = _make_scratch(
- [96, 192, 384, 768], features, groups=groups, expand=expand
- ) # Swin2-T/16 (backbone)
- elif backbone == "swinl12_384":
- pretrained = _make_pretrained_swinl12_384(
- use_pretrained, hooks=hooks
- )
- scratch = _make_scratch(
- [192, 384, 768, 1536], features, groups=groups, expand=expand
- ) # Swin-L/12 (backbone)
- elif backbone == "next_vit_large_6m":
- from .backbones.next_vit import _make_pretrained_next_vit_large_6m
- pretrained = _make_pretrained_next_vit_large_6m(hooks=hooks)
- scratch = _make_scratch(
- in_features, features, groups=groups, expand=expand
- ) # Next-ViT-L on ImageNet-1K-6M (backbone)
- elif backbone == "levit_384":
- pretrained = _make_pretrained_levit_384(
- use_pretrained, hooks=hooks
- )
- scratch = _make_scratch(
- [384, 512, 768], features, groups=groups, expand=expand
- ) # LeViT 384 (backbone)
- elif backbone == "vitl16_384":
- pretrained = _make_pretrained_vitl16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [256, 512, 1024, 1024], features, groups=groups, expand=expand
- ) # ViT-L/16 - 85.0% Top1 (backbone)
- elif backbone == "vitb_rn50_384":
- pretrained = _make_pretrained_vitb_rn50_384(
- use_pretrained,
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
- scratch = _make_scratch(
- [256, 512, 768, 768], features, groups=groups, expand=expand
- ) # ViT-H/16 - 85.0% Top1 (backbone)
- elif backbone == "vitb16_384":
- pretrained = _make_pretrained_vitb16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [96, 192, 384, 768], features, groups=groups, expand=expand
- ) # ViT-B/16 - 84.6% Top1 (backbone)
- elif backbone == "resnext101_wsl":
- pretrained = _make_pretrained_resnext101_wsl(use_pretrained)
-        scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # resnext101_wsl
- elif backbone == "efficientnet_lite3":
- pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable)
- scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3
- else:
- print(f"Backbone '{backbone}' not implemented")
- assert False
-
- return pretrained, scratch
-
-
-def _make_scratch(in_shape, out_shape, groups=1, expand=False):
- scratch = nn.Module()
-
- out_shape1 = out_shape
- out_shape2 = out_shape
- out_shape3 = out_shape
- if len(in_shape) >= 4:
- out_shape4 = out_shape
-
- if expand:
- out_shape1 = out_shape
- out_shape2 = out_shape*2
- out_shape3 = out_shape*4
- if len(in_shape) >= 4:
- out_shape4 = out_shape*8
-
- scratch.layer1_rn = nn.Conv2d(
- in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer2_rn = nn.Conv2d(
- in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer3_rn = nn.Conv2d(
- in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- if len(in_shape) >= 4:
- scratch.layer4_rn = nn.Conv2d(
- in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
-
- return scratch
-
-
-def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False):
- efficientnet = torch.hub.load(
- "rwightman/gen-efficientnet-pytorch",
- "tf_efficientnet_lite3",
- pretrained=use_pretrained,
- exportable=exportable
- )
- return _make_efficientnet_backbone(efficientnet)
-
-
-def _make_efficientnet_backbone(effnet):
- pretrained = nn.Module()
-
- pretrained.layer1 = nn.Sequential(
- effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2]
- )
- pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3])
- pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5])
- pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9])
-
- return pretrained
-
-
-def _make_resnet_backbone(resnet):
- pretrained = nn.Module()
- pretrained.layer1 = nn.Sequential(
- resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
- )
-
- pretrained.layer2 = resnet.layer2
- pretrained.layer3 = resnet.layer3
- pretrained.layer4 = resnet.layer4
-
- return pretrained
-
-
-def _make_pretrained_resnext101_wsl(use_pretrained):
- resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
- return _make_resnet_backbone(resnet)
-
-
-
-class Interpolate(nn.Module):
- """Interpolation module.
- """
-
- def __init__(self, scale_factor, mode, align_corners=False):
- """Init.
-
- Args:
- scale_factor (float): scaling
- mode (str): interpolation mode
- """
- super(Interpolate, self).__init__()
-
- self.interp = nn.functional.interpolate
- self.scale_factor = scale_factor
- self.mode = mode
- self.align_corners = align_corners
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: interpolated data
- """
-
- x = self.interp(
- x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
- )
-
- return x
-
-
-class ResidualConvUnit(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super().__init__()
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
- out = self.relu(x)
- out = self.conv1(out)
- out = self.relu(out)
- out = self.conv2(out)
-
- return out + x
-
-
-class FeatureFusionBlock(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super(FeatureFusionBlock, self).__init__()
-
- self.resConfUnit1 = ResidualConvUnit(features)
- self.resConfUnit2 = ResidualConvUnit(features)
-
- def forward(self, *xs):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- output += self.resConfUnit1(xs[1])
-
- output = self.resConfUnit2(output)
-
- output = nn.functional.interpolate(
- output, scale_factor=2, mode="bilinear", align_corners=True
- )
-
- return output
-
-
-
-
-class ResidualConvUnit_custom(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features, activation, bn):
- """Init.
-
- Args:
- features (int): number of features
- """
- super().__init__()
-
- self.bn = bn
-
- self.groups=1
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
- if self.bn==True:
- self.bn1 = nn.BatchNorm2d(features)
- self.bn2 = nn.BatchNorm2d(features)
-
- self.activation = activation
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
-
- out = self.activation(x)
- out = self.conv1(out)
- if self.bn==True:
- out = self.bn1(out)
-
- out = self.activation(out)
- out = self.conv2(out)
- if self.bn==True:
- out = self.bn2(out)
-
- if self.groups > 1:
- out = self.conv_merge(out)
-
- return self.skip_add.add(out, x)
-
- # return out + x
-
-
-class FeatureFusionBlock_custom(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True, size=None):
- """Init.
-
- Args:
- features (int): number of features
- """
- super(FeatureFusionBlock_custom, self).__init__()
-
- self.deconv = deconv
- self.align_corners = align_corners
-
- self.groups=1
-
- self.expand = expand
- out_features = features
- if self.expand==True:
- out_features = features//2
-
- self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)
-
- self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn)
- self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn)
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- self.size=size
-
- def forward(self, *xs, size=None):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- res = self.resConfUnit1(xs[1])
- output = self.skip_add.add(output, res)
- # output += res
-
- output = self.resConfUnit2(output)
-
- if (size is None) and (self.size is None):
- modifier = {"scale_factor": 2}
- elif size is None:
- modifier = {"size": self.size}
- else:
- modifier = {"size": size}
-
- output = nn.functional.interpolate(
- output, **modifier, mode="bilinear", align_corners=self.align_corners
- )
-
- output = self.out_conv(output)
-
- return output
-
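
_make_scratch above only builds per-stage 3x3 convolutions that project backbone feature maps with different channel counts onto a common width so the fusion blocks can add them. A stand-alone sketch of that projection step; the channel counts mirror the vitb16_384 branch and the spatial sizes are invented.

import torch
import torch.nn as nn

in_shape, features = [96, 192, 384, 768], 256
layers = nn.ModuleList([
    nn.Conv2d(c, features, kernel_size=3, stride=1, padding=1, bias=False)
    for c in in_shape
])

# Fake multi-scale backbone features at decreasing resolution.
feats = [torch.randn(1, c, s, s) for c, s in zip(in_shape, [96, 48, 24, 12])]
projected = [layer(f) for layer, f in zip(layers, feats)]
print([tuple(p.shape) for p in projected])   # every stage now has 256 channels
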
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/modules/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/modules/__init__.py
deleted file mode 100644
index f5ea180f9b4cdb27cd553439b6df9d743105f18c..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/modules/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import os
-import importlib
-from fairseq import registry
-
-(
- build_monotonic_attention,
- register_monotonic_attention,
- MONOTONIC_ATTENTION_REGISTRY,
- _,
-) = registry.setup_registry("--simul-type")
-
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- model_name = file[: file.find(".py")]
- importlib.import_module(
- "examples.simultaneous_translation.modules." + model_name
- )
diff --git a/spaces/HarshulNanda/HARM_ML_web_app/categoryPredictor.py b/spaces/HarshulNanda/HARM_ML_web_app/categoryPredictor.py
deleted file mode 100644
index 95936bdd5cfd7a4da0e6f4b2fc3a965c4e712e41..0000000000000000000000000000000000000000
--- a/spaces/HarshulNanda/HARM_ML_web_app/categoryPredictor.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from youtubesearchpython import Video, ResultMode
-from colors import colorOf, dataset
-
-import numpy as np
-import matplotlib.pyplot as plt
-import requests
-import pickle
-import warnings
-warnings.filterwarnings("ignore")
-
-def predictCategoryFor(url):
- try:
-
- video = Video.getInfo(url, mode = ResultMode.json)
-
- text = [video["title"] + " " + video["description"]]
-
- categories = sorted(list(dataset.keys()))
-
- education_model = pickle.load(open("./models/educated_model.pkl", "rb"))
- education_prediction = education_model.predict(text)[0]
-
- if education_prediction == 0:
-
- category_classifier = pickle.load(open("./models/cat_model.pkl", "rb"))
- category_prediction = categories[category_classifier.predict(text)[0]]
-
- sub_cat_clf = pickle.load(open(f"./models/{category_prediction.lower().replace(' ', '_')}_model.pkl", "rb"))
- sub_cat_pred = sub_cat_clf.predict_proba(text)[0]
- sub_cat_pred *= 100
- subs = sorted(dataset[category_prediction])
-
- return ("Educational", category_prediction, subs, sub_cat_pred)
-
- else:
-
- return ("Non Educational", "", [], [])
-
-    except Exception:
-        return ("There was an error getting the title and description of the video.", "", [], [])
-
-
-# print(predictCategoryFor(url="https://www.youtube.com/watch?v=bdCX8Nb_2Mg"))
-
diff --git a/spaces/Hexamind/swarms/filter_wrap.py b/spaces/Hexamind/swarms/filter_wrap.py
deleted file mode 100644
index 3eb10533ae0b9eb92e184ab82a6b140b4aafe7d1..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/swarms/filter_wrap.py
+++ /dev/null
@@ -1,95 +0,0 @@
-
-import numpy as np
-from gym import spaces, Wrapper
-
-
-class FilterWrapper(Wrapper):
- """
- :param env: (gym.Env) Gym environment that will be wrapped
- """
-
- def __init__(self, env):
-
- self.nb_blues, self.nb_reds = env.nb_blues, env.nb_reds
-
- self.blue_deads = np.full((self.nb_blues,), False)
- self.red_deads = np.full((self.nb_reds,), False)
-
- env.observation_space = spaces.Tuple((
- spaces.Box(low=0, high=1, shape=(self.nb_blues, 6), dtype=np.float32),
- spaces.Box(low=0, high=1, shape=(self.nb_reds, 6), dtype=np.float32),
- spaces.Box(low=0, high=1, shape=(self.nb_blues, self.nb_reds), dtype=np.float32),
- spaces.Box(low=0, high=1, shape=(self.nb_reds, self.nb_blues), dtype=np.float32),
- spaces.Discrete(1),
- spaces.Discrete(1)))
-
- super(FilterWrapper, self).__init__(env)
-
- def reset(self):
- """
- Reset the environment
- """
- obs = self.env.reset()
-
- return self._sort_obs(obs)
-
- def step(self, action):
- """
- :param action: ([float] or int) Action taken by the agent
-        :return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
- """
-
- blue_action, red_action = action
-
- new_ba = []
- index = 0
- for count, alive in enumerate(~self.blue_deads):
- if alive:
- new_ba.append(blue_action[index])
- index += 1
- else:
- new_ba.append(np.array([0, 0, 0]))
- blue_action = new_ba
-
- new_ra = []
- index = 0
- for count, alive in enumerate(~self.red_deads):
- if alive:
- new_ra.append(red_action[index])
- index += 1
- else:
- new_ra.append(np.array([0, 0, 0]))
- red_action = new_ra
-
- action = blue_action, red_action
-
- obs, reward, done, info = self.env.step(action)
-
- obs = self._sort_obs(obs)
-
- return obs, reward, done, info
-
- def _sort_obs(self, obs):
-
- blue_obs, red_obs, blues_fire, reds_fire, blue_deads, red_deads = obs
-
- self.blue_deads = blue_deads
- self.red_deads = red_deads
-
- blue_obs = np.vstack((blue_obs[~self.blue_deads], blue_obs[self.blue_deads]))
- red_obs = np.vstack((red_obs[~self.red_deads], red_obs[self.red_deads]))
-
- blues_fire = self.fire_sort(self.blue_deads, self.red_deads, blues_fire)
- reds_fire = self.fire_sort(self.red_deads, self.blue_deads, reds_fire)
-
- sort_obs = blue_obs, red_obs, blues_fire, reds_fire, sum(blue_deads), sum(red_deads)
-
- return sort_obs
-
- def fire_sort(self, dead_friends, dead_foes, friends_fire):
-
- friends_fire_big = np.zeros_like(friends_fire)
- friends_fire = np.compress(~dead_friends, friends_fire, axis=0)
- friends_fire = np.compress(~dead_foes, friends_fire, axis=1)
- friends_fire_big[:friends_fire.shape[0], :friends_fire.shape[1]] = friends_fire
- return friends_fire_big
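
fire_sort above packs the fire matrix down to the surviving agents and then pads it back to a fixed shape so the observation size never changes. A stand-alone numpy sketch of the same compress-and-pad trick; the masks and values are made up.

import numpy as np

fire = np.arange(12).reshape(3, 4)            # 3 friends x 4 foes
dead_friends = np.array([False, True, False])
dead_foes = np.array([False, False, True, False])

packed = np.compress(~dead_friends, fire, axis=0)   # drop dead friends' rows
packed = np.compress(~dead_foes, packed, axis=1)    # drop dead foes' columns

fixed = np.zeros_like(fire)                   # pad back to the fixed shape
fixed[:packed.shape[0], :packed.shape[1]] = packed
print(fixed)
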
diff --git a/spaces/Hina4867/bingo/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/Hina4867/bingo/src/lib/hooks/use-copy-to-clipboard.tsx
deleted file mode 100644
index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/lib/hooks/use-copy-to-clipboard.tsx
+++ /dev/null
@@ -1,33 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-export interface useCopyToClipboardProps {
- timeout?: number
-}
-
-export function useCopyToClipboard({
- timeout = 2000
-}: useCopyToClipboardProps) {
- const [isCopied, setIsCopied] = React.useState(false)
-
- const copyToClipboard = (value: string) => {
- if (typeof window === 'undefined' || !navigator.clipboard?.writeText) {
- return
- }
-
- if (!value) {
- return
- }
-
- navigator.clipboard.writeText(value).then(() => {
- setIsCopied(true)
-
- setTimeout(() => {
- setIsCopied(false)
- }, timeout)
- })
- }
-
- return { isCopied, copyToClipboard }
-}
diff --git a/spaces/HusseinHE/psis/share_btn.py b/spaces/HusseinHE/psis/share_btn.py
deleted file mode 100644
index 5d4dc51b883650ed947e7dea90f677d817725198..0000000000000000000000000000000000000000
--- a/spaces/HusseinHE/psis/share_btn.py
+++ /dev/null
@@ -1,83 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
-
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
-      const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
-      const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
-
- const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app');
-
- const inputPrompt = gradioEl.querySelector('#prompt textarea').value;
- const negativePrompt = gradioEl.querySelector('#negative_prompt textarea').value;
- const illusionStrength = gradioEl.querySelector('#illusion_strength input[type="number"]').value;
- const controlImage = gradioEl.querySelector('#control_image img');
- const outputImgEl = gradioEl.querySelector('#output img');
-
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
-
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
-
- const inputFile = await getInputImgFile(outputImgEl);
- const urlInputImg = await uploadFile(inputFile);
-
- const controlFile = await getInputImgFile(controlImage);
- const urlControlImg = await uploadFile(controlFile);
-
- const descriptionMd = `
-### Prompt
-- *Prompt*: ${inputPrompt}
-- *Negative prompt*: ${negativePrompt}
-- *Illusion strength*: ${illusionStrength}
-#### Generated Image:
-<img src="${urlInputImg}" />
-
-#### Control Image:
-<img src="${urlControlImg}" />
-`;
- const params = new URLSearchParams({
- title: inputPrompt,
- description: descriptionMd,
- preview: true
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/AP123/IllusionDiffusion/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py b/spaces/ICML2022/OFA/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py
deleted file mode 100644
index 0e7fbba888c1ddd118da8238d644b4ab571177ff..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py
+++ /dev/null
@@ -1,475 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-
-import itertools
-import logging
-import os
-
-import numpy as np
-import torch
-
-from fairseq import metrics
-from fairseq.data import (
- ConcatDataset,
- ConcatSentencesDataset,
- data_utils,
- Dictionary,
- IdDataset,
- indexed_dataset,
- NestedDictionaryDataset,
- NumSamplesDataset,
- NumelDataset,
- PrependTokenDataset,
- RawLabelDataset,
- RightPadDataset,
- SortDataset,
- TruncateDataset,
- TokenBlockDataset,
-)
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-from fairseq.tasks import FairseqTask, register_task
-from omegaconf import II, MISSING
-
-
-EVAL_BLEU_ORDER = 4
-TARGET_METRIC_CHOICES = ChoiceEnum(["bleu", "ter"])
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class DiscriminativeRerankingNMTConfig(FairseqDataclass):
- data: str = field(default=MISSING, metadata={"help": "path to data directory"})
- num_data_splits: int = field(
- default=1, metadata={"help": "total number of data splits"}
- )
- no_shuffle: bool = field(
- default=False, metadata={"help": "do not shuffle training data"}
- )
- max_positions: int = field(
- default=512, metadata={"help": "number of positional embeddings to learn"}
- )
- include_src: bool = field(
- default=False, metadata={"help": "include source sentence"}
- )
- mt_beam: int = field(default=50, metadata={"help": "beam size of input hypotheses"})
- eval_target_metric: bool = field(
- default=False,
- metadata={"help": "evaluation with the target metric during validation"},
- )
- target_metric: TARGET_METRIC_CHOICES = field(
- default="bleu", metadata={"help": "name of the target metric to optimize for"}
- )
- train_subset: str = field(
- default=II("dataset.train_subset"),
- metadata={"help": "data subset to use for training (e.g. train, valid, test)"},
- )
- seed: int = field(
- default=II("common.seed"),
- metadata={"help": "pseudo random number generator seed"},
- )
-
-
-class RerankerScorer(object):
- """Scores the target for a given (source (optional), target) input."""
-
- def __init__(self, args, mt_beam):
- self.mt_beam = mt_beam
-
- @torch.no_grad()
- def generate(self, models, sample, **kwargs):
- """Score a batch of translations."""
- net_input = sample["net_input"]
-
- assert len(models) == 1, "does not support model ensemble"
- model = models[0]
-
- bs = net_input["src_tokens"].shape[0]
- assert (
- model.joint_classification == "none" or bs % self.mt_beam == 0
- ), f"invalid batch size ({bs}) for joint classification with beam size ({self.mt_beam})"
-
- model.eval()
- logits = model(**net_input)
-
- batch_out = model.sentence_forward(logits, net_input["src_tokens"])
- if model.joint_classification == "sent":
- batch_out = model.joint_forward(
- batch_out.view(self.mt_beam, bs // self.mt_beam, -1)
- )
- scores = model.classification_forward(
- batch_out.view(bs, 1, -1)
- ) # input: B x T x C
-
- return scores
-
-
-@register_task(
- "discriminative_reranking_nmt", dataclass=DiscriminativeRerankingNMTConfig
-)
-class DiscriminativeRerankingNMTTask(FairseqTask):
- """
- Translation rerank task.
- The input can be either (src, tgt) sentence pairs or tgt sentence only.
- """
-
- cfg: DiscriminativeRerankingNMTConfig
-
- def __init__(self, cfg: DiscriminativeRerankingNMTConfig, data_dictionary=None):
- super().__init__(cfg)
- self.dictionary = data_dictionary
- self._max_positions = cfg.max_positions
- # args.tokens_per_sample = self._max_positions
- # self.num_classes = 1 # for model
-
- @classmethod
- def load_dictionary(cls, cfg, filename):
- """Load the dictionary from the filename"""
- dictionary = Dictionary.load(filename)
- dictionary.add_symbol("") # for loading pretrained XLMR model
-
- return dictionary
-
- @classmethod
- def setup_task(cls, cfg: DiscriminativeRerankingNMTConfig, **kwargs):
- # load data dictionary (assume joint dictionary)
- data_path = cfg.data
- data_dict = cls.load_dictionary(
- cfg, os.path.join(data_path, "input_src/dict.txt")
- )
-
- logger.info("[input] src dictionary: {} types".format(len(data_dict)))
-
- return DiscriminativeRerankingNMTTask(cfg, data_dict)
-
- def load_dataset(self, split, epoch=0, combine=False, **kwargs):
- """Load a given dataset split (e.g., train, valid, test)."""
- if self.cfg.data.endswith("1"):
- data_shard = (epoch - 1) % self.cfg.num_data_splits + 1
- data_path = self.cfg.data[:-1] + str(data_shard)
- else:
- data_path = self.cfg.data
-
- def get_path(type, data_split):
- return os.path.join(data_path, str(type), data_split)
-
- def make_dataset(type, dictionary, data_split, combine):
- split_path = get_path(type, data_split)
-
- dataset = data_utils.load_indexed_dataset(
- split_path, dictionary, combine=combine,
- )
- return dataset
-
- def load_split(data_split, metric):
- input_src = None
- if self.cfg.include_src:
- input_src = make_dataset(
- "input_src", self.dictionary, data_split, combine=False
- )
- assert input_src is not None, "could not find dataset: {}".format(
- get_path("input_src", data_split)
- )
-
- input_tgt = make_dataset(
- "input_tgt", self.dictionary, data_split, combine=False
- )
- assert input_tgt is not None, "could not find dataset: {}".format(
- get_path("input_tgt", data_split)
- )
-
- label_path = f"{get_path(metric, data_split)}.{metric}"
- assert os.path.exists(label_path), f"could not find dataset: {label_path}"
-
- np_labels = np.loadtxt(label_path)
- if self.cfg.target_metric == "ter":
- np_labels = -np_labels
- label = RawLabelDataset(np_labels)
-
- return input_src, input_tgt, label
-
- src_datasets = []
- tgt_datasets = []
- label_datasets = []
-
- if split == self.cfg.train_subset:
- for k in itertools.count():
- split_k = "train" + (str(k) if k > 0 else "")
- prefix = os.path.join(data_path, "input_tgt", split_k)
- if not indexed_dataset.dataset_exists(prefix, impl=None):
- if k > 0:
- break
- else:
- raise FileNotFoundError(f"Dataset not found: {prefix}")
- input_src, input_tgt, label = load_split(
- split_k, self.cfg.target_metric
- )
- src_datasets.append(input_src)
- tgt_datasets.append(input_tgt)
- label_datasets.append(label)
- else:
- input_src, input_tgt, label = load_split(split, self.cfg.target_metric)
- src_datasets.append(input_src)
- tgt_datasets.append(input_tgt)
- label_datasets.append(label)
-
- if len(tgt_datasets) == 1:
- input_tgt, label = tgt_datasets[0], label_datasets[0]
- if self.cfg.include_src:
- input_src = src_datasets[0]
- else:
- input_tgt = ConcatDataset(tgt_datasets)
- label = ConcatDataset(label_datasets)
- if self.cfg.include_src:
- input_src = ConcatDataset(src_datasets)
-
- input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions)
- if self.cfg.include_src:
- input_src = PrependTokenDataset(input_src, self.dictionary.bos())
- input_src = TruncateDataset(input_src, self.cfg.max_positions)
- src_lengths = NumelDataset(input_src, reduce=False)
- src_tokens = ConcatSentencesDataset(input_src, input_tgt)
- else:
- src_tokens = PrependTokenDataset(input_tgt, self.dictionary.bos())
- src_lengths = NumelDataset(src_tokens, reduce=False)
-
- dataset = {
- "id": IdDataset(),
- "net_input": {
- "src_tokens": RightPadDataset(
- src_tokens, pad_idx=self.source_dictionary.pad(),
- ),
- "src_lengths": src_lengths,
- },
- "nsentences": NumSamplesDataset(),
- "ntokens": NumelDataset(src_tokens, reduce=True),
- "target": label,
- }
-
- dataset = NestedDictionaryDataset(dataset, sizes=[src_tokens.sizes],)
-
- assert len(dataset) % self.cfg.mt_beam == 0, (
- "dataset size (%d) is not a multiple of beam size (%d)"
- % (len(dataset), self.cfg.mt_beam)
- )
-
- # no need to shuffle valid/test sets
- if not self.cfg.no_shuffle and split == self.cfg.train_subset:
-
- # need to keep all hypothese together
- start_idx = np.arange(0, len(dataset), self.cfg.mt_beam)
- with data_utils.numpy_seed(self.cfg.seed + epoch):
- np.random.shuffle(start_idx)
-
- idx = np.arange(0, self.cfg.mt_beam)
- shuffle = np.tile(idx, (len(start_idx), 1)).reshape(-1) + np.tile(
- start_idx, (self.cfg.mt_beam, 1)
- ).transpose().reshape(-1)
-
- dataset = SortDataset(dataset, sort_order=[shuffle],)
-
- logger.info(f"Loaded {split} with #samples: {len(dataset)}")
-
- self.datasets[split] = dataset
- return self.datasets[split]
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs):
- assert not self.cfg.include_src or len(src_tokens[0]) == 2
- input_src = None
- if self.cfg.include_src:
- input_src = TokenBlockDataset(
- [t[0] for t in src_tokens],
- [l[0] for l in src_lengths],
- block_size=None, # ignored for "eos" break mode
- pad=self.source_dictionary.pad(),
- eos=self.source_dictionary.eos(),
- break_mode="eos",
- )
- input_src = PrependTokenDataset(input_src, self.dictionary.bos())
- input_src = TruncateDataset(input_src, self.cfg.max_positions)
-
- input_tgt = TokenBlockDataset(
- [t[-1] for t in src_tokens],
- [l[-1] for l in src_lengths],
- block_size=None, # ignored for "eos" break mode
- pad=self.source_dictionary.pad(),
- eos=self.source_dictionary.eos(),
- break_mode="eos",
- )
- input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions)
- if self.cfg.include_src:
- src_tokens = ConcatSentencesDataset(input_src, input_tgt)
- src_lengths = NumelDataset(input_src, reduce=False)
- else:
- input_tgt = PrependTokenDataset(input_tgt, self.dictionary.bos())
- src_tokens = input_tgt
- src_lengths = NumelDataset(src_tokens, reduce=False)
-
- dataset = {
- "id": IdDataset(),
- "net_input": {
- "src_tokens": RightPadDataset(
- src_tokens, pad_idx=self.source_dictionary.pad(),
- ),
- "src_lengths": src_lengths,
- },
- "nsentences": NumSamplesDataset(),
- "ntokens": NumelDataset(src_tokens, reduce=True),
- }
-
- return NestedDictionaryDataset(dataset, sizes=[src_tokens.sizes],)
-
- def build_model(self, cfg: FairseqDataclass):
- return super().build_model(cfg)
-
- def build_generator(self, args):
- return RerankerScorer(args, mt_beam=self.cfg.mt_beam)
-
- def max_positions(self):
- return self._max_positions
-
- @property
- def source_dictionary(self):
- return self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
-
- def create_dummy_batch(self, device):
- dummy_target = (
- torch.zeros(self.cfg.mt_beam, EVAL_BLEU_ORDER * 2 + 3).long().to(device)
-            if self.cfg.target_metric != "ter"
- else torch.zeros(self.cfg.mt_beam, 3).long().to(device)
- )
-
- return {
- "id": torch.zeros(self.cfg.mt_beam, 1).long().to(device),
- "net_input": {
- "src_tokens": torch.zeros(self.cfg.mt_beam, 4).long().to(device),
- "src_lengths": torch.ones(self.cfg.mt_beam, 1).long().to(device),
- },
- "nsentences": 0,
- "ntokens": 0,
- "target": dummy_target,
- }
-
- def train_step(
- self, sample, model, criterion, optimizer, update_num, ignore_grad=False
- ):
- if ignore_grad and sample is None:
- sample = self.create_dummy_batch(model.device)
-
- return super().train_step(
- sample, model, criterion, optimizer, update_num, ignore_grad
- )
-
- def valid_step(self, sample, model, criterion):
- if sample is None:
- sample = self.create_dummy_batch(model.device)
-
- loss, sample_size, logging_output = super().valid_step(sample, model, criterion)
-
- if not self.cfg.eval_target_metric:
- return loss, sample_size, logging_output
-
- scores = logging_output["scores"]
-
- if self.cfg.target_metric == "bleu":
- assert sample["target"].shape[1] == EVAL_BLEU_ORDER * 2 + 3, (
- "target does not contain enough information ("
- + str(sample["target"].shape[1])
- + "for evaluating BLEU"
- )
-
- max_id = torch.argmax(scores, dim=1)
- select_id = max_id + torch.arange(
- 0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam
- ).to(max_id.device)
- bleu_data = sample["target"][select_id, 1:].sum(0).data
-
- logging_output["_bleu_sys_len"] = bleu_data[0]
- logging_output["_bleu_ref_len"] = bleu_data[1]
-
- for i in range(EVAL_BLEU_ORDER):
- logging_output["_bleu_counts_" + str(i)] = bleu_data[2 + i]
- logging_output["_bleu_totals_" + str(i)] = bleu_data[
- 2 + EVAL_BLEU_ORDER + i
- ]
-
- elif self.cfg.target_metric == "ter":
- assert sample["target"].shape[1] == 3, (
- "target does not contain enough information ("
- + str(sample["target"].shape[1])
- + "for evaluating TER"
- )
-
- max_id = torch.argmax(scores, dim=1)
- select_id = max_id + torch.arange(
- 0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam
- ).to(max_id.device)
- ter_data = sample["target"][select_id, 1:].sum(0).data
-
- logging_output["_ter_num_edits"] = -ter_data[0]
- logging_output["_ter_ref_len"] = -ter_data[1]
-
- return loss, sample_size, logging_output
-
- def reduce_metrics(self, logging_outputs, criterion):
- super().reduce_metrics(logging_outputs, criterion)
-
- if not self.cfg.eval_target_metric:
- return
-
- def sum_logs(key):
- return sum(log.get(key, 0) for log in logging_outputs)
-
- if self.cfg.target_metric == "bleu":
- counts, totals = [], []
- for i in range(EVAL_BLEU_ORDER):
- counts.append(sum_logs("_bleu_counts_" + str(i)))
- totals.append(sum_logs("_bleu_totals_" + str(i)))
-
- if max(totals) > 0:
- # log counts as numpy arrays -- log_scalar will sum them correctly
- metrics.log_scalar("_bleu_counts", np.array(counts))
- metrics.log_scalar("_bleu_totals", np.array(totals))
- metrics.log_scalar("_bleu_sys_len", sum_logs("_bleu_sys_len"))
- metrics.log_scalar("_bleu_ref_len", sum_logs("_bleu_ref_len"))
-
- def compute_bleu(meters):
- import inspect
- import sacrebleu
-
- fn_sig = inspect.getfullargspec(sacrebleu.compute_bleu)[0]
- if "smooth_method" in fn_sig:
- smooth = {"smooth_method": "exp"}
- else:
- smooth = {"smooth": "exp"}
- bleu = sacrebleu.compute_bleu(
- correct=meters["_bleu_counts"].sum,
- total=meters["_bleu_totals"].sum,
- sys_len=meters["_bleu_sys_len"].sum,
- ref_len=meters["_bleu_ref_len"].sum,
- **smooth,
- )
- return round(bleu.score, 2)
-
- metrics.log_derived("bleu", compute_bleu)
- elif self.cfg.target_metric == "ter":
- num_edits = sum_logs("_ter_num_edits")
- ref_len = sum_logs("_ter_ref_len")
-
- if ref_len > 0:
- metrics.log_scalar("_ter_num_edits", num_edits)
- metrics.log_scalar("_ter_ref_len", ref_len)
-
- def compute_ter(meters):
- score = meters["_ter_num_edits"].sum / meters["_ter_ref_len"].sum
- return round(score.item(), 2)
-
- metrics.log_derived("ter", compute_ter)
diff --git a/spaces/Iceclear/StableSR/StableSR/taming/data/faceshq.py b/spaces/Iceclear/StableSR/StableSR/taming/data/faceshq.py
deleted file mode 100644
index 6912d04b66a6d464c1078e4b51d5da290f5e767e..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/taming/data/faceshq.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import os
-import numpy as np
-import albumentations
-from torch.utils.data import Dataset
-
-from taming.data.base import ImagePaths, NumpyPaths, ConcatDatasetWithIndex
-
-
-class FacesBase(Dataset):
- def __init__(self, *args, **kwargs):
- super().__init__()
- self.data = None
- self.keys = None
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- example = self.data[i]
- ex = {}
- if self.keys is not None:
- for k in self.keys:
- ex[k] = example[k]
- else:
- ex = example
- return ex
-
-
-class CelebAHQTrain(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/celebahq"
- with open("data/celebahqtrain.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = NumpyPaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class CelebAHQValidation(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/celebahq"
- with open("data/celebahqvalidation.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = NumpyPaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class FFHQTrain(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/ffhq"
- with open("data/ffhqtrain.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = ImagePaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class FFHQValidation(FacesBase):
- def __init__(self, size, keys=None):
- super().__init__()
- root = "data/ffhq"
- with open("data/ffhqvalidation.txt", "r") as f:
- relpaths = f.read().splitlines()
- paths = [os.path.join(root, relpath) for relpath in relpaths]
- self.data = ImagePaths(paths=paths, size=size, random_crop=False)
- self.keys = keys
-
-
-class FacesHQTrain(Dataset):
- # CelebAHQ [0] + FFHQ [1]
- def __init__(self, size, keys=None, crop_size=None, coord=False):
- d1 = CelebAHQTrain(size=size, keys=keys)
- d2 = FFHQTrain(size=size, keys=keys)
- self.data = ConcatDatasetWithIndex([d1, d2])
- self.coord = coord
- if crop_size is not None:
- self.cropper = albumentations.RandomCrop(height=crop_size,width=crop_size)
- if self.coord:
- self.cropper = albumentations.Compose([self.cropper],
- additional_targets={"coord": "image"})
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- ex, y = self.data[i]
- if hasattr(self, "cropper"):
- if not self.coord:
- out = self.cropper(image=ex["image"])
- ex["image"] = out["image"]
- else:
- h,w,_ = ex["image"].shape
- coord = np.arange(h*w).reshape(h,w,1)/(h*w)
- out = self.cropper(image=ex["image"], coord=coord)
- ex["image"] = out["image"]
- ex["coord"] = out["coord"]
- ex["class"] = y
- return ex
-
-
-class FacesHQValidation(Dataset):
- # CelebAHQ [0] + FFHQ [1]
- def __init__(self, size, keys=None, crop_size=None, coord=False):
- d1 = CelebAHQValidation(size=size, keys=keys)
- d2 = FFHQValidation(size=size, keys=keys)
- self.data = ConcatDatasetWithIndex([d1, d2])
- self.coord = coord
- if crop_size is not None:
- self.cropper = albumentations.CenterCrop(height=crop_size,width=crop_size)
- if self.coord:
- self.cropper = albumentations.Compose([self.cropper],
- additional_targets={"coord": "image"})
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- ex, y = self.data[i]
- if hasattr(self, "cropper"):
- if not self.coord:
- out = self.cropper(image=ex["image"])
- ex["image"] = out["image"]
- else:
- h,w,_ = ex["image"].shape
- coord = np.arange(h*w).reshape(h,w,1)/(h*w)
- out = self.cropper(image=ex["image"], coord=coord)
- ex["image"] = out["image"]
- ex["coord"] = out["coord"]
- ex["class"] = y
- return ex
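
FacesHQTrain above attaches a normalised raster-coordinate map to each image and crops both with the same window, so the model can still tell where a patch came from. A plain-numpy sketch of that idea without the albumentations dependency; the image size and crop window are invented.

import numpy as np

h, w = 6, 8
image = np.random.rand(h, w, 3)
coord = np.arange(h * w).reshape(h, w, 1) / (h * w)   # same map as in __getitem__

top, left, crop = 2, 3, 4                     # made-up crop window
img_crop = image[top:top + crop, left:left + crop]
coord_crop = coord[top:top + crop, left:left + crop]
print(img_crop.shape, float(coord_crop[0, 0, 0]))     # crop keeps image and coord aligned
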
diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py
deleted file mode 100644
index 4d18f0f7816431bed6af9d58319c6435bdf5c971..0000000000000000000000000000000000000000
--- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import numpy as np
-
-from basicsr.utils.matlab_functions import bgr2ycbcr
-
-
-def reorder_image(img, input_order='HWC'):
- """Reorder images to 'HWC' order.
-
- If the input_order is (h, w), return (h, w, 1);
- If the input_order is (c, h, w), return (h, w, c);
- If the input_order is (h, w, c), return as it is.
-
- Args:
- img (ndarray): Input image.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- If the input image shape is (h, w), input_order will not have
- effects. Default: 'HWC'.
-
- Returns:
- ndarray: reordered image.
- """
-
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' "'HWC' and 'CHW'")
- if len(img.shape) == 2:
- img = img[..., None]
- if input_order == 'CHW':
- img = img.transpose(1, 2, 0)
- return img
-
-
-def to_y_channel(img):
- """Change to Y channel of YCbCr.
-
- Args:
- img (ndarray): Images with range [0, 255].
-
- Returns:
- (ndarray): Images with range [0, 255] (float type) without round.
- """
- img = img.astype(np.float32) / 255.
- if img.ndim == 3 and img.shape[2] == 3:
- img = bgr2ycbcr(img, y_only=True)
- img = img[..., None]
- return img * 255.
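
to_y_channel above delegates the colour conversion to basicsr's bgr2ycbcr. A stand-alone sketch of the luma it is expected to produce, using the common ITU-R BT.601 / MATLAB constants; exact agreement with bgr2ycbcr is assumed here rather than verified.

import numpy as np

img = np.random.rand(8, 8, 3).astype(np.float32) * 255.    # BGR image in [0, 255]
b, g, r = img[..., 0], img[..., 1], img[..., 2]
y = (65.481 * r + 128.553 * g + 24.966 * b) / 255. + 16.   # luma, roughly in [16, 235]
print(y.shape, float(y.min()), float(y.max()))
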
diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/scripts/download_pretrained_models.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/scripts/download_pretrained_models.py
deleted file mode 100644
index daa6e8ca14ea91c89a318e85d9f182eb7d1bf025..0000000000000000000000000000000000000000
--- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/scripts/download_pretrained_models.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import argparse
-import os
-from os import path as osp
-
-from basicsr.utils.download_util import load_file_from_url
-
-
-def download_pretrained_models(method, file_urls):
- save_path_root = f'./weights/{method}'
- os.makedirs(save_path_root, exist_ok=True)
-
- for file_name, file_url in file_urls.items():
- save_path = load_file_from_url(url=file_url, model_dir=save_path_root, progress=True, file_name=file_name)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- 'method',
- type=str,
- help=("Options: 'CodeFormer' 'facelib'. Set to 'all' to download all the models."))
- args = parser.parse_args()
-
- file_urls = {
- 'CodeFormer': {
- 'codeformer.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth'
- },
- 'facelib': {
- # 'yolov5l-face.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5l-face.pth',
- 'detection_Resnet50_Final.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth',
- 'parsing_parsenet.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth'
- }
- }
-
- if args.method == 'all':
- for method in file_urls.keys():
- download_pretrained_models(method, file_urls[method])
- else:
- download_pretrained_models(args.method, file_urls[args.method])
\ No newline at end of file
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/julius/bands.py b/spaces/Kangarroar/ApplioRVC-Inference/julius/bands.py
deleted file mode 100644
index ef2162440b69e960770aa7bf81b9aaec48a63243..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/julius/bands.py
+++ /dev/null
@@ -1,119 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-"""
-Decomposition of a signal over frequency bands in the waveform domain.
-"""
-from typing import Optional, Sequence
-import torch
-
-from .core import mel_frequencies
-from .lowpass import LowPassFilters
-from .utils import simple_repr
-
-
-class SplitBands(torch.nn.Module):
- """
- Decomposes a signal over the given frequency bands in the waveform domain using
- a cascade of low pass filters as implemented by `julius.lowpass.LowPassFilters`.
-    You can either specify explicitly the frequency cutoffs, or just the number of bands,
- in which case the frequency cutoffs will be spread out evenly in mel scale.
-
- Args:
- sample_rate (float): Sample rate of the input signal in Hz.
-        n_bands (int or None): number of bands, when not giving them explicitly with `cutoffs`.
- In that case, the cutoff frequencies will be evenly spaced in mel-space.
- cutoffs (list[float] or None): list of frequency cutoffs in Hz.
- pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`,
- the output will have the same length as the input.
-        zeros (float): Number of zero crossings to keep. See `LowPassFilters` for more information.
- fft (bool or None): See `LowPassFilters` for more info.
-
- ..note::
- The sum of all the bands will always be the input signal.
-
- ..warning::
-        Unlike `julius.lowpass.LowPassFilters`, the cutoff frequencies must be provided in Hz along
- with the sample rate.
-
- Shape:
-
- - Input: `[*, T]`
- - Output: `[B, *, T']`, with `T'=T` if `pad` is True.
- If `n_bands` was provided, `B = n_bands` otherwise `B = len(cutoffs) + 1`
-
- >>> bands = SplitBands(sample_rate=128, n_bands=10)
- >>> x = torch.randn(6, 4, 1024)
- >>> list(bands(x).shape)
- [10, 6, 4, 1024]
- """
-
- def __init__(self, sample_rate: float, n_bands: Optional[int] = None,
- cutoffs: Optional[Sequence[float]] = None, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- super().__init__()
- if (cutoffs is None) + (n_bands is None) != 1:
- raise ValueError("You must provide either n_bands, or cutoffs, but not boths.")
-
- self.sample_rate = sample_rate
- self.n_bands = n_bands
- self._cutoffs = list(cutoffs) if cutoffs is not None else None
- self.pad = pad
- self.zeros = zeros
- self.fft = fft
-
- if cutoffs is None:
- if n_bands is None:
- raise ValueError("You must provide one of n_bands or cutoffs.")
- if not n_bands >= 1:
- raise ValueError(f"n_bands must be greater than one (got {n_bands})")
- cutoffs = mel_frequencies(n_bands + 1, 0, sample_rate / 2)[1:-1]
- else:
- if max(cutoffs) > 0.5 * sample_rate:
- raise ValueError("A cutoff above sample_rate/2 does not make sense.")
- if len(cutoffs) > 0:
- self.lowpass = LowPassFilters(
- [c / sample_rate for c in cutoffs], pad=pad, zeros=zeros, fft=fft)
- else:
- # Here I cannot make both TorchScript and MyPy happy.
- # I miss the good old times, before all this madness was created.
- self.lowpass = None # type: ignore
-
- def forward(self, input):
- if self.lowpass is None:
- return input[None]
- lows = self.lowpass(input)
- low = lows[0]
- bands = [low]
- for low_and_band in lows[1:]:
-            # Get a bandpass filter by subtracting lowpasses
- band = low_and_band - low
- bands.append(band)
- low = low_and_band
- # Last band is whatever is left in the signal
- bands.append(input - low)
- return torch.stack(bands)
-
- @property
- def cutoffs(self):
- if self._cutoffs is not None:
- return self._cutoffs
- elif self.lowpass is not None:
- return [c * self.sample_rate for c in self.lowpass.cutoffs]
- else:
- return []
-
- def __repr__(self):
- return simple_repr(self, overrides={"cutoffs": self._cutoffs})
-
-
-def split_bands(signal: torch.Tensor, sample_rate: float, n_bands: Optional[int] = None,
- cutoffs: Optional[Sequence[float]] = None, pad: bool = True,
- zeros: float = 8, fft: Optional[bool] = None):
- """
- Functional version of `SplitBands`, refer to this class for more information.
-
- >>> x = torch.randn(6, 4, 1024)
- >>> list(split_bands(x, sample_rate=64, cutoffs=[12, 24]).shape)
- [3, 6, 4, 1024]
- """
- return SplitBands(sample_rate, n_bands, cutoffs, pad, zeros, fft).to(signal)(signal)
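
SplitBands recovers band-pass components by subtracting successive low-pass outputs and keeps the residual as the top band, so the stacked bands always sum back to the input. A pure-torch sketch of that cascade; the box filters are crude stand-ins for julius' windowed-sinc low-pass filters.

import torch

torch.manual_seed(0)
x = torch.randn(1, 1, 256)

def box_lowpass(signal, width):
    kernel = torch.ones(1, 1, width) / width          # simple moving average
    return torch.nn.functional.conv1d(signal, kernel, padding=width // 2)[..., :signal.shape[-1]]

low1 = box_lowpass(x, 17)        # strongest smoothing -> lowest band
low2 = box_lowpass(x, 5)         # milder smoothing
bands = torch.stack([low1, low2 - low1, x - low2])
print(bool(torch.allclose(bands.sum(0), x, atol=1e-6)))   # True: bands sum to the input
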
diff --git a/spaces/Kelas/translation/app.py b/spaces/Kelas/translation/app.py
deleted file mode 100644
index deb6cdab995737080dec5625e32ae3193d7a4ed4..0000000000000000000000000000000000000000
--- a/spaces/Kelas/translation/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import streamlit as st
-from transformers import pipeline
-
-translator = pipeline("translation_en_to_de", model="t5-base")
-def main():
-    st.title("Translate English to German")
-
- with st.form("text_field"):
- text = st.text_area('enter some text:')
- # clicked==True only when the button is clicked
- clicked = st.form_submit_button("Submit")
- if clicked:
-            results = translator([text])
- st.json(results)
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/encoder_layer.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/encoder_layer.py
deleted file mode 100644
index 750a32e4ef22ed5c2ca74aa364d1e8a3470e4016..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/encoder_layer.py
+++ /dev/null
@@ -1,152 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Johns Hopkins University (Shinji Watanabe)
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""Encoder self-attention layer definition."""
-
-import torch
-
-from torch import nn
-
-from .layer_norm import LayerNorm
-
-
-class EncoderLayer(nn.Module):
- """Encoder layer module.
-
- :param int size: input dim
- :param espnet.nets.pytorch_backend.transformer.attention.
- MultiHeadedAttention self_attn: self attention module
- RelPositionMultiHeadedAttention self_attn: self attention module
- :param espnet.nets.pytorch_backend.transformer.positionwise_feed_forward.
- PositionwiseFeedForward feed_forward:
- feed forward module
- :param espnet.nets.pytorch_backend.transformer.positionwise_feed_forward
- for macaron style
- PositionwiseFeedForward feed_forward:
- feed forward module
- :param espnet.nets.pytorch_backend.conformer.convolution.
- ConvolutionModule feed_foreard:
- feed forward module
- :param float dropout_rate: dropout rate
- :param bool normalize_before: whether to use layer_norm before the first block
- :param bool concat_after: whether to concat attention layer's input and output
- if True, additional linear will be applied.
- i.e. x -> x + linear(concat(x, att(x)))
- if False, no additional linear will be applied. i.e. x -> x + att(x)
-
- """
-
- def __init__(
- self,
- size,
- self_attn,
- feed_forward,
- feed_forward_macaron,
- conv_module,
- dropout_rate,
- normalize_before=True,
- concat_after=False,
- ):
- """Construct an EncoderLayer object."""
- super(EncoderLayer, self).__init__()
- self.self_attn = self_attn
- self.feed_forward = feed_forward
- self.feed_forward_macaron = feed_forward_macaron
- self.conv_module = conv_module
- self.norm_ff = LayerNorm(size) # for the FNN module
- self.norm_mha = LayerNorm(size) # for the MHA module
- if feed_forward_macaron is not None:
- self.norm_ff_macaron = LayerNorm(size)
- self.ff_scale = 0.5
- else:
- self.ff_scale = 1.0
- if self.conv_module is not None:
- self.norm_conv = LayerNorm(size) # for the CNN module
- self.norm_final = LayerNorm(size) # for the final output of the block
- self.dropout = nn.Dropout(dropout_rate)
- self.size = size
- self.normalize_before = normalize_before
- self.concat_after = concat_after
- if self.concat_after:
- self.concat_linear = nn.Linear(size + size, size)
-
- def forward(self, x_input, mask, cache=None):
- """Compute encoded features.
-
- :param torch.Tensor x_input: encoded source features, w/o pos_emb
- tuple((batch, max_time_in, size), (1, max_time_in, size))
- or (batch, max_time_in, size)
- :param torch.Tensor mask: mask for x (batch, max_time_in)
- :param torch.Tensor cache: cache for x (batch, max_time_in - 1, size)
- :rtype: Tuple[torch.Tensor, torch.Tensor]
- """
- if isinstance(x_input, tuple):
- x, pos_emb = x_input[0], x_input[1]
- else:
- x, pos_emb = x_input, None
-
- # whether to use macaron style
- if self.feed_forward_macaron is not None:
- residual = x
- if self.normalize_before:
- x = self.norm_ff_macaron(x)
- x = residual + self.ff_scale * self.dropout(self.feed_forward_macaron(x))
- if not self.normalize_before:
- x = self.norm_ff_macaron(x)
-
- # multi-headed self-attention module
- residual = x
- if self.normalize_before:
- x = self.norm_mha(x)
-
- if cache is None:
- x_q = x
- else:
- assert cache.shape == (x.shape[0], x.shape[1] - 1, self.size)
- x_q = x[:, -1:, :]
- residual = residual[:, -1:, :]
- mask = None if mask is None else mask[:, -1:, :]
-
- if pos_emb is not None:
- x_att = self.self_attn(x_q, x, x, pos_emb, mask)
- else:
- x_att = self.self_attn(x_q, x, x, mask)
-
- if self.concat_after:
- x_concat = torch.cat((x, x_att), dim=-1)
- x = residual + self.concat_linear(x_concat)
- else:
- x = residual + self.dropout(x_att)
- if not self.normalize_before:
- x = self.norm_mha(x)
-
- # convolution module
- if self.conv_module is not None:
- residual = x
- if self.normalize_before:
- x = self.norm_conv(x)
- x = residual + self.dropout(self.conv_module(x))
- if not self.normalize_before:
- x = self.norm_conv(x)
-
- # feed forward module
- residual = x
- if self.normalize_before:
- x = self.norm_ff(x)
- x = residual + self.ff_scale * self.dropout(self.feed_forward(x))
- if not self.normalize_before:
- x = self.norm_ff(x)
-
- if self.conv_module is not None:
- x = self.norm_final(x)
-
- if cache is not None:
- x = torch.cat([cache, x], dim=1)
-
- if pos_emb is not None:
- return (x, pos_emb), mask
-
- return x, mask
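
The block above follows the Conformer/macaron ordering: a half-weighted feed-forward, self-attention, convolution, a second half-weighted feed-forward, then a final LayerNorm. Below is a shape-level sketch of that ordering with plain torch modules standing in for the ESPnet ones; all module choices are illustrative stand-ins (dropout and masking omitted), not the repo's actual submodules:

import torch
from torch import nn

d, heads = 256, 4
x = torch.randn(2, 50, d)                                   # (batch, time, dim)

ff_macaron = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
ff = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
mha = nn.MultiheadAttention(d, heads, batch_first=True)
conv = nn.Conv1d(d, d, kernel_size=3, padding=1, groups=d)  # depthwise stand-in
norm_ff_macaron, norm_mha, norm_conv, norm_ff, norm_final = (nn.LayerNorm(d) for _ in range(5))

# Pre-norm residual blocks in the same order as EncoderLayer.forward above.
x = x + 0.5 * ff_macaron(norm_ff_macaron(x))
y = norm_mha(x)
x = x + mha(y, y, y)[0]
x = x + conv(norm_conv(x).transpose(1, 2)).transpose(1, 2)
x = x + 0.5 * ff(norm_ff(x))
x = norm_final(x)
print(x.shape)                                              # torch.Size([2, 50, 256])
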
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/model.py b/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/model.py
deleted file mode 100644
index e050d3204d8f1becdf0f8b3133470708e5420cea..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/model.py
+++ /dev/null
@@ -1,135 +0,0 @@
-from encoder.params_model import *
-from encoder.params_data import *
-from scipy.interpolate import interp1d
-from sklearn.metrics import roc_curve
-from torch.nn.utils import clip_grad_norm_
-from scipy.optimize import brentq
-from torch import nn
-import numpy as np
-import torch
-
-
-class SpeakerEncoder(nn.Module):
- def __init__(self, device, loss_device):
- super().__init__()
- self.loss_device = loss_device
-
-        # Network definition
- self.lstm = nn.LSTM(input_size=mel_n_channels,
- hidden_size=model_hidden_size,
- num_layers=model_num_layers,
- batch_first=True).to(device)
- self.linear = nn.Linear(in_features=model_hidden_size,
- out_features=model_embedding_size).to(device)
- self.relu = torch.nn.ReLU().to(device)
-
- # Cosine similarity scaling (with fixed initial parameter values)
- self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device)
- self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device)
-
- # Loss
- self.loss_fn = nn.CrossEntropyLoss().to(loss_device)
-
- def do_gradient_ops(self):
- # Gradient scale
- self.similarity_weight.grad *= 0.01
- self.similarity_bias.grad *= 0.01
-
- # Gradient clipping
- clip_grad_norm_(self.parameters(), 3, norm_type=2)
-
- def forward(self, utterances, hidden_init=None):
- """
- Computes the embeddings of a batch of utterance spectrograms.
-
- :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape
- (batch_size, n_frames, n_channels)
- :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers,
- batch_size, hidden_size). Will default to a tensor of zeros if None.
- :return: the embeddings as a tensor of shape (batch_size, embedding_size)
- """
- # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state
- # and the final cell state.
- out, (hidden, cell) = self.lstm(utterances, hidden_init)
-
- # We take only the hidden state of the last layer
- embeds_raw = self.relu(self.linear(hidden[-1]))
-
- # L2-normalize it
- embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5)
-
- return embeds
-
- def similarity_matrix(self, embeds):
- """
-        Computes the similarity matrix according to section 2.1 of GE2E.
-
- :param embeds: the embeddings as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, embedding_size)
- :return: the similarity matrix as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, speakers_per_batch)
- """
- speakers_per_batch, utterances_per_speaker = embeds.shape[:2]
-
- # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation
- centroids_incl = torch.mean(embeds, dim=1, keepdim=True)
- centroids_incl = centroids_incl.clone() / (torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5)
-
- # Exclusive centroids (1 per utterance)
- centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds)
- centroids_excl /= (utterances_per_speaker - 1)
- centroids_excl = centroids_excl.clone() / (torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5)
-
- # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot
- # product of these vectors (which is just an element-wise multiplication reduced by a sum).
- # We vectorize the computation for efficiency.
- sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker,
- speakers_per_batch).to(self.loss_device)
-        mask_matrix = 1 - np.eye(speakers_per_batch, dtype=int)  # np.int was removed in recent NumPy
- for j in range(speakers_per_batch):
- mask = np.where(mask_matrix[j])[0]
- sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2)
- sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1)
-
- ## Even more vectorized version (slower maybe because of transpose)
- # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker
- # ).to(self.loss_device)
- # eye = np.eye(speakers_per_batch, dtype=np.int)
- # mask = np.where(1 - eye)
- # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2)
- # mask = np.where(eye)
- # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2)
- # sim_matrix2 = sim_matrix2.transpose(1, 2)
-
- sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias
- return sim_matrix
-
- def loss(self, embeds):
- """
-        Computes the softmax loss according to section 2.1 of GE2E.
-
- :param embeds: the embeddings as a tensor of shape (speakers_per_batch,
- utterances_per_speaker, embedding_size)
- :return: the loss and the EER for this batch of embeddings.
- """
- speakers_per_batch, utterances_per_speaker = embeds.shape[:2]
-
- # Loss
- sim_matrix = self.similarity_matrix(embeds)
- sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker,
- speakers_per_batch))
- ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker)
- target = torch.from_numpy(ground_truth).long().to(self.loss_device)
- loss = self.loss_fn(sim_matrix, target)
-
- # EER (not backpropagated)
- with torch.no_grad():
-            inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=int)[0]
- labels = np.array([inv_argmax(i) for i in ground_truth])
- preds = sim_matrix.detach().cpu().numpy()
-
- # Snippet from https://yangcha.github.io/EER-ROC/
- fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten())
- eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.)
-
- return loss, eer
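
The similarity matrix above is the core of GE2E: every utterance embedding is scored against every speaker centroid, and against a leave-one-out (exclusive) centroid for its own speaker. Below is a small shape walkthrough with random embeddings, skipping the centroid re-normalization and the learned scale/bias for brevity; all sizes are illustrative:

import torch

speakers, utterances, emb = 4, 5, 256
embeds = torch.randn(speakers, utterances, emb)
embeds = embeds / embeds.norm(dim=2, keepdim=True)         # GE2E works on L2-normed embeddings

centroids_incl = embeds.mean(dim=1)                         # (4, 256), one per speaker
centroids_excl = (embeds.sum(dim=1, keepdim=True) - embeds) / (utterances - 1)

# Cosine similarity of each utterance against each inclusive centroid ...
sim = torch.einsum('sue,ce->suc', embeds, centroids_incl)   # (4, 5, 4)
# ... except on the diagonal, where an utterance's own speaker uses the exclusive centroid.
for s in range(speakers):
    sim[s, :, s] = (embeds[s] * centroids_excl[s]).sum(dim=1)
print(sim.shape)                                            # torch.Size([4, 5, 4])
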
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/conditional_detr_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/conditional_detr_head.py
deleted file mode 100644
index cc2df2c215667121c5fe329f369510ecd4666faf..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/conditional_detr_head.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Tuple
-
-import torch
-import torch.nn as nn
-from mmengine.model import bias_init_with_prob
-from torch import Tensor
-
-from mmdet.models.layers.transformer import inverse_sigmoid
-from mmdet.registry import MODELS
-from mmdet.structures import SampleList
-from mmdet.utils import InstanceList
-from .detr_head import DETRHead
-
-
-@MODELS.register_module()
-class ConditionalDETRHead(DETRHead):
- """Head of Conditional DETR. Conditional DETR: Conditional DETR for Fast
- Training Convergence. More details can be found in the `paper.
-
- `_ .
- """
-
- def init_weights(self):
- """Initialize weights of the transformer head."""
- super().init_weights()
- # The initialization below for transformer head is very
- # important as we use Focal_loss for loss_cls
- if self.loss_cls.use_sigmoid:
- bias_init = bias_init_with_prob(0.01)
- nn.init.constant_(self.fc_cls.bias, bias_init)
-
- def forward(self, hidden_states: Tensor,
- references: Tensor) -> Tuple[Tensor, Tensor]:
- """"Forward function.
-
- Args:
- hidden_states (Tensor): Features from transformer decoder. If
- `return_intermediate_dec` is True output has shape
- (num_decoder_layers, bs, num_queries, dim), else has shape (1,
- bs, num_queries, dim) which only contains the last layer
- outputs.
- references (Tensor): References from transformer decoder, has
- shape (bs, num_queries, 2).
- Returns:
- tuple[Tensor]: results of head containing the following tensor.
-
- - layers_cls_scores (Tensor): Outputs from the classification head,
- shape (num_decoder_layers, bs, num_queries, cls_out_channels).
- Note cls_out_channels should include background.
- - layers_bbox_preds (Tensor): Sigmoid outputs from the regression
- head with normalized coordinate format (cx, cy, w, h), has shape
- (num_decoder_layers, bs, num_queries, 4).
- """
-
- references_unsigmoid = inverse_sigmoid(references)
- layers_bbox_preds = []
- for layer_id in range(hidden_states.shape[0]):
- tmp_reg_preds = self.fc_reg(
- self.activate(self.reg_ffn(hidden_states[layer_id])))
- tmp_reg_preds[..., :2] += references_unsigmoid
- outputs_coord = tmp_reg_preds.sigmoid()
- layers_bbox_preds.append(outputs_coord)
- layers_bbox_preds = torch.stack(layers_bbox_preds)
-
- layers_cls_scores = self.fc_cls(hidden_states)
- return layers_cls_scores, layers_bbox_preds
-
- def loss(self, hidden_states: Tensor, references: Tensor,
- batch_data_samples: SampleList) -> dict:
- """Perform forward propagation and loss calculation of the detection
- head on the features of the upstream network.
-
- Args:
- hidden_states (Tensor): Features from the transformer decoder, has
- shape (num_decoder_layers, bs, num_queries, dim).
- references (Tensor): References from the transformer decoder, has
- shape (num_decoder_layers, bs, num_queries, 2).
- batch_data_samples (List[:obj:`DetDataSample`]): The Data
- Samples. It usually includes information such as
- `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`.
-
- Returns:
- dict: A dictionary of loss components.
- """
- batch_gt_instances = []
- batch_img_metas = []
- for data_sample in batch_data_samples:
- batch_img_metas.append(data_sample.metainfo)
- batch_gt_instances.append(data_sample.gt_instances)
-
- outs = self(hidden_states, references)
- loss_inputs = outs + (batch_gt_instances, batch_img_metas)
- losses = self.loss_by_feat(*loss_inputs)
- return losses
-
- def loss_and_predict(
- self, hidden_states: Tensor, references: Tensor,
- batch_data_samples: SampleList) -> Tuple[dict, InstanceList]:
- """Perform forward propagation of the head, then calculate loss and
- predictions from the features and data samples. Over-write because
- img_metas are needed as inputs for bbox_head.
-
- Args:
- hidden_states (Tensor): Features from the transformer decoder, has
- shape (num_decoder_layers, bs, num_queries, dim).
- references (Tensor): References from the transformer decoder, has
- shape (num_decoder_layers, bs, num_queries, 2).
- batch_data_samples (list[:obj:`DetDataSample`]): Each item contains
- the meta information of each image and corresponding
- annotations.
-
- Returns:
- tuple: The return value is a tuple contains:
-
- - losses: (dict[str, Tensor]): A dictionary of loss components.
- - predictions (list[:obj:`InstanceData`]): Detection
- results of each image after the post process.
- """
- batch_gt_instances = []
- batch_img_metas = []
- for data_sample in batch_data_samples:
- batch_img_metas.append(data_sample.metainfo)
- batch_gt_instances.append(data_sample.gt_instances)
-
- outs = self(hidden_states, references)
- loss_inputs = outs + (batch_gt_instances, batch_img_metas)
- losses = self.loss_by_feat(*loss_inputs)
-
- predictions = self.predict_by_feat(
- *outs, batch_img_metas=batch_img_metas)
- return losses, predictions
-
- def predict(self,
- hidden_states: Tensor,
- references: Tensor,
- batch_data_samples: SampleList,
- rescale: bool = True) -> InstanceList:
- """Perform forward propagation of the detection head and predict
- detection results on the features of the upstream network. Over-write
- because img_metas are needed as inputs for bbox_head.
-
- Args:
- hidden_states (Tensor): Features from the transformer decoder, has
- shape (num_decoder_layers, bs, num_queries, dim).
- references (Tensor): References from the transformer decoder, has
- shape (num_decoder_layers, bs, num_queries, 2).
- batch_data_samples (List[:obj:`DetDataSample`]): The Data
- Samples. It usually includes information such as
- `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`.
- rescale (bool, optional): Whether to rescale the results.
- Defaults to True.
-
- Returns:
- list[obj:`InstanceData`]: Detection results of each image
- after the post process.
- """
- batch_img_metas = [
- data_samples.metainfo for data_samples in batch_data_samples
- ]
-
- last_layer_hidden_state = hidden_states[-1].unsqueeze(0)
- outs = self(last_layer_hidden_state, references)
-
- predictions = self.predict_by_feat(
- *outs, batch_img_metas=batch_img_metas, rescale=rescale)
-
- return predictions
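
The head-specific twist in the forward pass above is that the regression branch predicts offsets relative to the decoder's reference points: the inverse-sigmoid of the references is added to the (cx, cy) logits before the final sigmoid. Below is a shape-level sketch with random tensors and a single linear layer standing in for reg_ffn + fc_reg; all sizes and modules are illustrative and there is no mmdet dependency:

import torch

num_layers, bs, num_queries, dim = 6, 2, 100, 256
hidden_states = torch.randn(num_layers, bs, num_queries, dim)
references = torch.rand(bs, num_queries, 2)            # normalized (cx, cy) reference points

def inverse_sigmoid(x, eps=1e-5):
    x = x.clamp(eps, 1 - eps)
    return torch.log(x / (1 - x))

reg = torch.nn.Linear(dim, 4)                           # stand-in for activate(reg_ffn) + fc_reg
ref_unsig = inverse_sigmoid(references)

layer_preds = []
for layer_id in range(num_layers):
    tmp = reg(hidden_states[layer_id])                  # (bs, num_queries, 4)
    tmp[..., :2] = tmp[..., :2] + ref_unsig             # offsets are relative to the references
    layer_preds.append(tmp.sigmoid())
print(torch.stack(layer_preds).shape)                   # torch.Size([6, 2, 100, 4])
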
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/cascade_roi_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/cascade_roi_head.py
deleted file mode 100644
index 81db671113a63beb7849abdc0e432a738ee46f5e..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/cascade_roi_head.py
+++ /dev/null
@@ -1,568 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List, Sequence, Tuple, Union
-
-import torch
-import torch.nn as nn
-from mmengine.model import ModuleList
-from mmengine.structures import InstanceData
-from torch import Tensor
-
-from mmdet.models.task_modules.samplers import SamplingResult
-from mmdet.models.test_time_augs import merge_aug_masks
-from mmdet.registry import MODELS, TASK_UTILS
-from mmdet.structures import SampleList
-from mmdet.structures.bbox import bbox2roi, get_box_tensor
-from mmdet.utils import (ConfigType, InstanceList, MultiConfig, OptConfigType,
- OptMultiConfig)
-from ..utils.misc import empty_instances, unpack_gt_instances
-from .base_roi_head import BaseRoIHead
-
-
-@MODELS.register_module()
-class CascadeRoIHead(BaseRoIHead):
- """Cascade roi head including one bbox head and one mask head.
-
- https://arxiv.org/abs/1712.00726
- """
-
- def __init__(self,
- num_stages: int,
- stage_loss_weights: Union[List[float], Tuple[float]],
- bbox_roi_extractor: OptMultiConfig = None,
- bbox_head: OptMultiConfig = None,
- mask_roi_extractor: OptMultiConfig = None,
- mask_head: OptMultiConfig = None,
- shared_head: OptConfigType = None,
- train_cfg: OptConfigType = None,
- test_cfg: OptConfigType = None,
- init_cfg: OptMultiConfig = None) -> None:
- assert bbox_roi_extractor is not None
- assert bbox_head is not None
- assert shared_head is None, \
- 'Shared head is not supported in Cascade RCNN anymore'
-
- self.num_stages = num_stages
- self.stage_loss_weights = stage_loss_weights
- super().__init__(
- bbox_roi_extractor=bbox_roi_extractor,
- bbox_head=bbox_head,
- mask_roi_extractor=mask_roi_extractor,
- mask_head=mask_head,
- shared_head=shared_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- init_cfg=init_cfg)
-
- def init_bbox_head(self, bbox_roi_extractor: MultiConfig,
- bbox_head: MultiConfig) -> None:
- """Initialize box head and box roi extractor.
-
- Args:
- bbox_roi_extractor (:obj:`ConfigDict`, dict or list):
- Config of box roi extractor.
- bbox_head (:obj:`ConfigDict`, dict or list): Config
- of box in box head.
- """
- self.bbox_roi_extractor = ModuleList()
- self.bbox_head = ModuleList()
- if not isinstance(bbox_roi_extractor, list):
- bbox_roi_extractor = [
- bbox_roi_extractor for _ in range(self.num_stages)
- ]
- if not isinstance(bbox_head, list):
- bbox_head = [bbox_head for _ in range(self.num_stages)]
- assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages
- for roi_extractor, head in zip(bbox_roi_extractor, bbox_head):
- self.bbox_roi_extractor.append(MODELS.build(roi_extractor))
- self.bbox_head.append(MODELS.build(head))
-
- def init_mask_head(self, mask_roi_extractor: MultiConfig,
- mask_head: MultiConfig) -> None:
- """Initialize mask head and mask roi extractor.
-
- Args:
- mask_head (dict): Config of mask in mask head.
- mask_roi_extractor (:obj:`ConfigDict`, dict or list):
- Config of mask roi extractor.
- """
- self.mask_head = nn.ModuleList()
- if not isinstance(mask_head, list):
- mask_head = [mask_head for _ in range(self.num_stages)]
- assert len(mask_head) == self.num_stages
- for head in mask_head:
- self.mask_head.append(MODELS.build(head))
- if mask_roi_extractor is not None:
- self.share_roi_extractor = False
- self.mask_roi_extractor = ModuleList()
- if not isinstance(mask_roi_extractor, list):
- mask_roi_extractor = [
- mask_roi_extractor for _ in range(self.num_stages)
- ]
- assert len(mask_roi_extractor) == self.num_stages
- for roi_extractor in mask_roi_extractor:
- self.mask_roi_extractor.append(MODELS.build(roi_extractor))
- else:
- self.share_roi_extractor = True
- self.mask_roi_extractor = self.bbox_roi_extractor
-
- def init_assigner_sampler(self) -> None:
- """Initialize assigner and sampler for each stage."""
- self.bbox_assigner = []
- self.bbox_sampler = []
- if self.train_cfg is not None:
- for idx, rcnn_train_cfg in enumerate(self.train_cfg):
- self.bbox_assigner.append(
- TASK_UTILS.build(rcnn_train_cfg.assigner))
- self.current_stage = idx
- self.bbox_sampler.append(
- TASK_UTILS.build(
- rcnn_train_cfg.sampler,
- default_args=dict(context=self)))
-
- def _bbox_forward(self, stage: int, x: Tuple[Tensor],
- rois: Tensor) -> dict:
- """Box head forward function used in both training and testing.
-
- Args:
- stage (int): The current stage in Cascade RoI Head.
- x (tuple[Tensor]): List of multi-level img features.
- rois (Tensor): RoIs with the shape (n, 5) where the first
- column indicates batch id of each RoI.
-
- Returns:
- dict[str, Tensor]: Usually returns a dictionary with keys:
-
- - `cls_score` (Tensor): Classification scores.
- - `bbox_pred` (Tensor): Box energies / deltas.
- - `bbox_feats` (Tensor): Extract bbox RoI features.
- """
- bbox_roi_extractor = self.bbox_roi_extractor[stage]
- bbox_head = self.bbox_head[stage]
- bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs],
- rois)
- # do not support caffe_c4 model anymore
- cls_score, bbox_pred = bbox_head(bbox_feats)
-
- bbox_results = dict(
- cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats)
- return bbox_results
-
- def bbox_loss(self, stage: int, x: Tuple[Tensor],
- sampling_results: List[SamplingResult]) -> dict:
- """Run forward function and calculate loss for box head in training.
-
- Args:
- stage (int): The current stage in Cascade RoI Head.
- x (tuple[Tensor]): List of multi-level img features.
- sampling_results (list["obj:`SamplingResult`]): Sampling results.
-
- Returns:
- dict: Usually returns a dictionary with keys:
-
- - `cls_score` (Tensor): Classification scores.
- - `bbox_pred` (Tensor): Box energies / deltas.
- - `bbox_feats` (Tensor): Extract bbox RoI features.
- - `loss_bbox` (dict): A dictionary of bbox loss components.
- - `rois` (Tensor): RoIs with the shape (n, 5) where the first
- column indicates batch id of each RoI.
- - `bbox_targets` (tuple): Ground truth for proposals in a
- single image. Containing the following list of Tensors:
- (labels, label_weights, bbox_targets, bbox_weights)
- """
- bbox_head = self.bbox_head[stage]
- rois = bbox2roi([res.priors for res in sampling_results])
- bbox_results = self._bbox_forward(stage, x, rois)
- bbox_results.update(rois=rois)
-
- bbox_loss_and_target = bbox_head.loss_and_target(
- cls_score=bbox_results['cls_score'],
- bbox_pred=bbox_results['bbox_pred'],
- rois=rois,
- sampling_results=sampling_results,
- rcnn_train_cfg=self.train_cfg[stage])
- bbox_results.update(bbox_loss_and_target)
-
- return bbox_results
-
- def _mask_forward(self, stage: int, x: Tuple[Tensor],
- rois: Tensor) -> dict:
- """Mask head forward function used in both training and testing.
-
- Args:
- stage (int): The current stage in Cascade RoI Head.
- x (tuple[Tensor]): Tuple of multi-level img features.
- rois (Tensor): RoIs with the shape (n, 5) where the first
- column indicates batch id of each RoI.
-
- Returns:
- dict: Usually returns a dictionary with keys:
-
- - `mask_preds` (Tensor): Mask prediction.
- """
- mask_roi_extractor = self.mask_roi_extractor[stage]
- mask_head = self.mask_head[stage]
- mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs],
- rois)
- # do not support caffe_c4 model anymore
- mask_preds = mask_head(mask_feats)
-
- mask_results = dict(mask_preds=mask_preds)
- return mask_results
-
- def mask_loss(self, stage: int, x: Tuple[Tensor],
- sampling_results: List[SamplingResult],
- batch_gt_instances: InstanceList) -> dict:
- """Run forward function and calculate loss for mask head in training.
-
- Args:
- stage (int): The current stage in Cascade RoI Head.
- x (tuple[Tensor]): Tuple of multi-level img features.
- sampling_results (list["obj:`SamplingResult`]): Sampling results.
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes``, ``labels``, and
- ``masks`` attributes.
-
- Returns:
- dict: Usually returns a dictionary with keys:
-
- - `mask_preds` (Tensor): Mask prediction.
- - `loss_mask` (dict): A dictionary of mask loss components.
- """
- pos_rois = bbox2roi([res.pos_priors for res in sampling_results])
- mask_results = self._mask_forward(stage, x, pos_rois)
-
- mask_head = self.mask_head[stage]
-
- mask_loss_and_target = mask_head.loss_and_target(
- mask_preds=mask_results['mask_preds'],
- sampling_results=sampling_results,
- batch_gt_instances=batch_gt_instances,
- rcnn_train_cfg=self.train_cfg[stage])
- mask_results.update(mask_loss_and_target)
-
- return mask_results
-
- def loss(self, x: Tuple[Tensor], rpn_results_list: InstanceList,
- batch_data_samples: SampleList) -> dict:
- """Perform forward propagation and loss calculation of the detection
- roi on the features of the upstream network.
-
- Args:
- x (tuple[Tensor]): List of multi-level img features.
- rpn_results_list (list[:obj:`InstanceData`]): List of region
- proposals.
- batch_data_samples (list[:obj:`DetDataSample`]): The batch
- data samples. It usually includes information such
- as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components
- """
- # TODO: May add a new function in baseroihead
- assert len(rpn_results_list) == len(batch_data_samples)
- outputs = unpack_gt_instances(batch_data_samples)
- batch_gt_instances, batch_gt_instances_ignore, batch_img_metas \
- = outputs
-
- num_imgs = len(batch_data_samples)
- losses = dict()
- results_list = rpn_results_list
- for stage in range(self.num_stages):
- self.current_stage = stage
-
- stage_loss_weight = self.stage_loss_weights[stage]
-
- # assign gts and sample proposals
- sampling_results = []
- if self.with_bbox or self.with_mask:
- bbox_assigner = self.bbox_assigner[stage]
- bbox_sampler = self.bbox_sampler[stage]
-
- for i in range(num_imgs):
- results = results_list[i]
- # rename rpn_results.bboxes to rpn_results.priors
- results.priors = results.pop('bboxes')
-
- assign_result = bbox_assigner.assign(
- results, batch_gt_instances[i],
- batch_gt_instances_ignore[i])
-
- sampling_result = bbox_sampler.sample(
- assign_result,
- results,
- batch_gt_instances[i],
- feats=[lvl_feat[i][None] for lvl_feat in x])
- sampling_results.append(sampling_result)
-
- # bbox head forward and loss
- bbox_results = self.bbox_loss(stage, x, sampling_results)
-
- for name, value in bbox_results['loss_bbox'].items():
- losses[f's{stage}.{name}'] = (
- value * stage_loss_weight if 'loss' in name else value)
-
- # mask head forward and loss
- if self.with_mask:
- mask_results = self.mask_loss(stage, x, sampling_results,
- batch_gt_instances)
- for name, value in mask_results['loss_mask'].items():
- losses[f's{stage}.{name}'] = (
- value * stage_loss_weight if 'loss' in name else value)
-
- # refine bboxes
- if stage < self.num_stages - 1:
- bbox_head = self.bbox_head[stage]
- with torch.no_grad():
- results_list = bbox_head.refine_bboxes(
- sampling_results, bbox_results, batch_img_metas)
- # Empty proposal
- if results_list is None:
- break
- return losses
-
- def predict_bbox(self,
- x: Tuple[Tensor],
- batch_img_metas: List[dict],
- rpn_results_list: InstanceList,
- rcnn_test_cfg: ConfigType,
- rescale: bool = False,
- **kwargs) -> InstanceList:
- """Perform forward propagation of the bbox head and predict detection
- results on the features of the upstream network.
-
- Args:
- x (tuple[Tensor]): Feature maps of all scale level.
- batch_img_metas (list[dict]): List of image information.
- rpn_results_list (list[:obj:`InstanceData`]): List of region
- proposals.
- rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN.
- rescale (bool): If True, return boxes in original image space.
- Defaults to False.
-
- Returns:
- list[:obj:`InstanceData`]: Detection results of each image
- after the post process.
- Each item usually contains following keys.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- """
- proposals = [res.bboxes for res in rpn_results_list]
- num_proposals_per_img = tuple(len(p) for p in proposals)
- rois = bbox2roi(proposals)
-
- if rois.shape[0] == 0:
- return empty_instances(
- batch_img_metas,
- rois.device,
- task_type='bbox',
- box_type=self.bbox_head[-1].predict_box_type,
- num_classes=self.bbox_head[-1].num_classes,
- score_per_cls=rcnn_test_cfg is None)
-
- rois, cls_scores, bbox_preds = self._refine_roi(
- x=x,
- rois=rois,
- batch_img_metas=batch_img_metas,
- num_proposals_per_img=num_proposals_per_img,
- **kwargs)
-
- results_list = self.bbox_head[-1].predict_by_feat(
- rois=rois,
- cls_scores=cls_scores,
- bbox_preds=bbox_preds,
- batch_img_metas=batch_img_metas,
- rescale=rescale,
- rcnn_test_cfg=rcnn_test_cfg)
- return results_list
-
- def predict_mask(self,
- x: Tuple[Tensor],
- batch_img_metas: List[dict],
- results_list: List[InstanceData],
- rescale: bool = False) -> List[InstanceData]:
- """Perform forward propagation of the mask head and predict detection
- results on the features of the upstream network.
-
- Args:
- x (tuple[Tensor]): Feature maps of all scale level.
- batch_img_metas (list[dict]): List of image information.
- results_list (list[:obj:`InstanceData`]): Detection results of
- each image.
- rescale (bool): If True, return boxes in original image space.
- Defaults to False.
-
- Returns:
- list[:obj:`InstanceData`]: Detection results of each image
- after the post process.
- Each item usually contains following keys.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- - masks (Tensor): Has a shape (num_instances, H, W).
- """
- bboxes = [res.bboxes for res in results_list]
- mask_rois = bbox2roi(bboxes)
- if mask_rois.shape[0] == 0:
- results_list = empty_instances(
- batch_img_metas,
- mask_rois.device,
- task_type='mask',
- instance_results=results_list,
- mask_thr_binary=self.test_cfg.mask_thr_binary)
- return results_list
-
- num_mask_rois_per_img = [len(res) for res in results_list]
- aug_masks = []
- for stage in range(self.num_stages):
- mask_results = self._mask_forward(stage, x, mask_rois)
- mask_preds = mask_results['mask_preds']
- # split batch mask prediction back to each image
- mask_preds = mask_preds.split(num_mask_rois_per_img, 0)
- aug_masks.append([m.sigmoid().detach() for m in mask_preds])
-
- merged_masks = []
- for i in range(len(batch_img_metas)):
- aug_mask = [mask[i] for mask in aug_masks]
- merged_mask = merge_aug_masks(aug_mask, batch_img_metas[i])
- merged_masks.append(merged_mask)
- results_list = self.mask_head[-1].predict_by_feat(
- mask_preds=merged_masks,
- results_list=results_list,
- batch_img_metas=batch_img_metas,
- rcnn_test_cfg=self.test_cfg,
- rescale=rescale,
- activate_map=True)
- return results_list
-
- def _refine_roi(self, x: Tuple[Tensor], rois: Tensor,
- batch_img_metas: List[dict],
- num_proposals_per_img: Sequence[int], **kwargs) -> tuple:
- """Multi-stage refinement of RoI.
-
- Args:
- x (tuple[Tensor]): List of multi-level img features.
- rois (Tensor): shape (n, 5), [batch_ind, x1, y1, x2, y2]
- batch_img_metas (list[dict]): List of image information.
- num_proposals_per_img (sequence[int]): number of proposals
- in each image.
-
- Returns:
- tuple:
-
- - rois (Tensor): Refined RoI.
- - cls_scores (list[Tensor]): Average predicted
- cls score per image.
- - bbox_preds (list[Tensor]): Bbox branch predictions
- for the last stage of per image.
- """
- # "ms" in variable names means multi-stage
- ms_scores = []
- for stage in range(self.num_stages):
- bbox_results = self._bbox_forward(
- stage=stage, x=x, rois=rois, **kwargs)
-
- # split batch bbox prediction back to each image
- cls_scores = bbox_results['cls_score']
- bbox_preds = bbox_results['bbox_pred']
-
- rois = rois.split(num_proposals_per_img, 0)
- cls_scores = cls_scores.split(num_proposals_per_img, 0)
- ms_scores.append(cls_scores)
-
- # some detector with_reg is False, bbox_preds will be None
- if bbox_preds is not None:
- # TODO move this to a sabl_roi_head
- # the bbox prediction of some detectors like SABL is not Tensor
- if isinstance(bbox_preds, torch.Tensor):
- bbox_preds = bbox_preds.split(num_proposals_per_img, 0)
- else:
- bbox_preds = self.bbox_head[stage].bbox_pred_split(
- bbox_preds, num_proposals_per_img)
- else:
- bbox_preds = (None, ) * len(batch_img_metas)
-
- if stage < self.num_stages - 1:
- bbox_head = self.bbox_head[stage]
- if bbox_head.custom_activation:
- cls_scores = [
- bbox_head.loss_cls.get_activation(s)
- for s in cls_scores
- ]
- refine_rois_list = []
- for i in range(len(batch_img_metas)):
- if rois[i].shape[0] > 0:
- bbox_label = cls_scores[i][:, :-1].argmax(dim=1)
- # Refactor `bbox_head.regress_by_class` to only accept
- # box tensor without img_idx concatenated.
- refined_bboxes = bbox_head.regress_by_class(
- rois[i][:, 1:], bbox_label, bbox_preds[i],
- batch_img_metas[i])
- refined_bboxes = get_box_tensor(refined_bboxes)
- refined_rois = torch.cat(
- [rois[i][:, [0]], refined_bboxes], dim=1)
- refine_rois_list.append(refined_rois)
- rois = torch.cat(refine_rois_list)
-
- # average scores of each image by stages
- cls_scores = [
- sum([score[i] for score in ms_scores]) / float(len(ms_scores))
- for i in range(len(batch_img_metas))
- ]
- return rois, cls_scores, bbox_preds
-
- def forward(self, x: Tuple[Tensor], rpn_results_list: InstanceList,
- batch_data_samples: SampleList) -> tuple:
- """Network forward process. Usually includes backbone, neck and head
- forward without any post-processing.
-
- Args:
- x (List[Tensor]): Multi-level features that may have different
- resolutions.
- rpn_results_list (list[:obj:`InstanceData`]): List of region
- proposals.
- batch_data_samples (list[:obj:`DetDataSample`]): Each item contains
- the meta information of each image and corresponding
- annotations.
-
-        Returns:
- tuple: A tuple of features from ``bbox_head`` and ``mask_head``
- forward.
- """
- results = ()
- batch_img_metas = [
- data_samples.metainfo for data_samples in batch_data_samples
- ]
- proposals = [rpn_results.bboxes for rpn_results in rpn_results_list]
- num_proposals_per_img = tuple(len(p) for p in proposals)
- rois = bbox2roi(proposals)
- # bbox head
- if self.with_bbox:
- rois, cls_scores, bbox_preds = self._refine_roi(
- x, rois, batch_img_metas, num_proposals_per_img)
- results = results + (cls_scores, bbox_preds)
- # mask head
- if self.with_mask:
- aug_masks = []
- rois = torch.cat(rois)
- for stage in range(self.num_stages):
- mask_results = self._mask_forward(stage, x, rois)
- mask_preds = mask_results['mask_preds']
- mask_preds = mask_preds.split(num_proposals_per_img, 0)
- aug_masks.append([m.sigmoid().detach() for m in mask_preds])
-
- merged_masks = []
- for i in range(len(batch_img_metas)):
- aug_mask = [mask[i] for mask in aug_masks]
- merged_mask = merge_aug_masks(aug_mask, batch_img_metas[i])
- merged_masks.append(merged_mask)
- results = results + (merged_masks, )
- return results
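
In the training loop above each stage contributes its own losses, prefixed with the stage index and scaled by stage_loss_weights, while non-loss metrics such as accuracy pass through unscaled. Here is a toy sketch of just that bookkeeping, with dummy loss values and the usual three-stage weights; the numbers are illustrative:

# Mirrors the `losses[f's{stage}.{name}']` accumulation in CascadeRoIHead.loss.
stage_loss_weights = [1.0, 0.5, 0.25]
losses = {}
for stage, weight in enumerate(stage_loss_weights):
    stage_out = {'loss_cls': 0.8, 'loss_bbox': 0.4, 'acc': 0.9}   # dummy per-stage outputs
    for name, value in stage_out.items():
        losses[f's{stage}.{name}'] = value * weight if 'loss' in name else value
print(losses)   # {'s0.loss_cls': 0.8, 's0.loss_bbox': 0.4, 's0.acc': 0.9, ..., 's2.loss_bbox': 0.1, 's2.acc': 0.9}
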
diff --git a/spaces/Lippmann/White-box-Cartoonization/wbc/cartoonize.py b/spaces/Lippmann/White-box-Cartoonization/wbc/cartoonize.py
deleted file mode 100644
index 25faf1ceb95aaed9a3f7a7982d17a03dc6bc32b1..0000000000000000000000000000000000000000
--- a/spaces/Lippmann/White-box-Cartoonization/wbc/cartoonize.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import os
-import cv2
-import numpy as np
-import tensorflow as tf
-import wbc.network as network
-import wbc.guided_filter as guided_filter
-from tqdm import tqdm
-
-
-def resize_crop(image):
- h, w, c = np.shape(image)
- if min(h, w) > 720:
- if h > w:
- h, w = int(720 * h / w), 720
- else:
- h, w = 720, int(720 * w / h)
- image = cv2.resize(image, (w, h),
- interpolation=cv2.INTER_AREA)
- h, w = (h // 8) * 8, (w // 8) * 8
- image = image[:h, :w, :]
- return image
-
-
-def cartoonize(load_folder, save_folder, model_path):
- print(model_path)
- input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
- network_out = network.unet_generator(input_photo)
- final_out = guided_filter.guided_filter(input_photo, network_out, r=1, eps=5e-3)
-
- all_vars = tf.trainable_variables()
- gene_vars = [var for var in all_vars if 'generator' in var.name]
- saver = tf.train.Saver(var_list=gene_vars)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- sess = tf.Session(config=config)
-
- sess.run(tf.global_variables_initializer())
- saver.restore(sess, tf.train.latest_checkpoint(model_path))
- name_list = os.listdir(load_folder)
- for name in tqdm(name_list):
- try:
- load_path = os.path.join(load_folder, name)
- save_path = os.path.join(save_folder, name)
- image = cv2.imread(load_path)
- image = resize_crop(image)
- batch_image = image.astype(np.float32) / 127.5 - 1
- batch_image = np.expand_dims(batch_image, axis=0)
- output = sess.run(final_out, feed_dict={input_photo: batch_image})
- output = (np.squeeze(output) + 1) * 127.5
- output = np.clip(output, 0, 255).astype(np.uint8)
- cv2.imwrite(save_path, output)
-        except Exception:
- print('cartoonize {} failed'.format(load_path))
-
-
-class Cartoonize:
- def __init__(self, model_path):
- print(model_path)
- self.input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
- network_out = network.unet_generator(self.input_photo)
- self.final_out = guided_filter.guided_filter(self.input_photo, network_out, r=1, eps=5e-3)
-
- all_vars = tf.trainable_variables()
- gene_vars = [var for var in all_vars if 'generator' in var.name]
- saver = tf.train.Saver(var_list=gene_vars)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- self.sess = tf.Session(config=config)
-
- self.sess.run(tf.global_variables_initializer())
- saver.restore(self.sess, tf.train.latest_checkpoint(model_path))
-
- def run(self, load_folder, save_folder):
- name_list = os.listdir(load_folder)
- for name in tqdm(name_list):
- try:
- load_path = os.path.join(load_folder, name)
- save_path = os.path.join(save_folder, name)
- image = cv2.imread(load_path)
- image = resize_crop(image)
- batch_image = image.astype(np.float32) / 127.5 - 1
- batch_image = np.expand_dims(batch_image, axis=0)
- output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image})
- output = (np.squeeze(output) + 1) * 127.5
- output = np.clip(output, 0, 255).astype(np.uint8)
- cv2.imwrite(save_path, output)
-            except Exception:
- print('cartoonize {} failed'.format(load_path))
-
- def run_sigle(self, load_path, save_path):
- try:
- image = cv2.imread(load_path)
- image = resize_crop(image)
- batch_image = image.astype(np.float32) / 127.5 - 1
- batch_image = np.expand_dims(batch_image, axis=0)
- output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image})
- output = (np.squeeze(output) + 1) * 127.5
- output = np.clip(output, 0, 255).astype(np.uint8)
- cv2.imwrite(save_path, output)
-        except Exception:
- print('cartoonize {} failed'.format(load_path))
-
-
-if __name__ == '__main__':
- model_path = 'saved_models'
- load_folder = 'test_images'
- save_folder = 'cartoonized_images'
- if not os.path.exists(save_folder):
- os.mkdir(save_folder)
- cartoonize(load_folder, save_folder, model_path)
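
The Cartoonize class builds the TF1 graph and restores the generator checkpoint once, so many images can be stylized without reloading. A minimal usage sketch follows; the checkpoint directory and file names are placeholders:

from wbc.cartoonize import Cartoonize

cartoonizer = Cartoonize(model_path='saved_models')     # restores the generator weights once
cartoonizer.run_sigle('photo.jpg', 'cartoon.jpg')       # single image (the method is spelled run_sigle)
cartoonizer.run('test_images', 'cartoonized_images')    # or a whole input folder
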
diff --git a/spaces/LuxOAI/ChatGpt-Web/app/locales/de.ts b/spaces/LuxOAI/ChatGpt-Web/app/locales/de.ts
deleted file mode 100644
index 477228928f82f1763450dc7c8303c63f1c04f74f..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/ChatGpt-Web/app/locales/de.ts
+++ /dev/null
@@ -1,247 +0,0 @@
-import { SubmitKey } from "../store/config";
-import type { LocaleType } from "./index";
-
-const de: LocaleType = {
- WIP: "In Bearbeitung...",
- Error: {
- Unauthorized:
- "Unbefugter Zugriff, bitte geben Sie den Zugangscode auf der Einstellungsseite ein.",
- },
- ChatItem: {
- ChatItemCount: (count: number) => `${count} Nachrichten`,
- },
- Chat: {
- SubTitle: (count: number) => `${count} Nachrichten mit ChatGPT`,
- Actions: {
- ChatList: "Zur Chat-Liste gehen",
- CompressedHistory: "Komprimierter Gedächtnis-Prompt",
- Export: "Alle Nachrichten als Markdown exportieren",
- Copy: "Kopieren",
- Stop: "Stop",
- Retry: "Wiederholen",
- Delete: "Delete",
- },
- Rename: "Chat umbenennen",
- Typing: "Tippen...",
- Input: (submitKey: string) => {
- var inputHints = `${submitKey} um zu Senden`;
- if (submitKey === String(SubmitKey.Enter)) {
- inputHints += ", Umschalt + Eingabe für Zeilenumbruch";
- }
- return inputHints + ", / zum Durchsuchen von Prompts";
- },
- Send: "Senden",
- Config: {
- Reset: "Reset to Default",
- SaveAs: "Save as Mask",
- },
- },
- Export: {
- Title: "Alle Nachrichten",
- Copy: "Alles kopieren",
- Download: "Herunterladen",
- MessageFromYou: "Deine Nachricht",
- MessageFromChatGPT: "Nachricht von ChatGPT",
- },
- Memory: {
- Title: "Gedächtnis-Prompt",
- EmptyContent: "Noch nichts.",
- Send: "Gedächtnis senden",
- Copy: "Gedächtnis kopieren",
- Reset: "Sitzung zurücksetzen",
- ResetConfirm:
- "Das Zurücksetzen löscht den aktuellen Gesprächsverlauf und das Langzeit-Gedächtnis. Möchten Sie wirklich zurücksetzen?",
- },
- Home: {
- NewChat: "Neuer Chat",
- DeleteChat: "Bestätigen Sie, um das ausgewählte Gespräch zu löschen?",
- DeleteToast: "Chat gelöscht",
- Revert: "Zurücksetzen",
- },
- Settings: {
- Title: "Einstellungen",
- SubTitle: "Alle Einstellungen",
- Actions: {
- ClearAll: "Alle Daten löschen",
- ResetAll: "Alle Einstellungen zurücksetzen",
- Close: "Schließen",
- ConfirmResetAll:
- "Möchten Sie wirklich alle Konfigurationen zurücksetzen?",
- ConfirmClearAll: "Möchten Sie wirklich alle Chats zurücksetzen?",
- },
- Lang: {
- Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language`
- All: "All Languages",
- Options: {
- cn: "简体中文",
- en: "English",
- tw: "繁體中文",
- es: "Español",
- it: "Italiano",
- tr: "Türkçe",
- jp: "日本語",
- de: "Deutsch",
- },
- },
- Avatar: "Avatar",
- FontSize: {
- Title: "Schriftgröße",
- SubTitle: "Schriftgröße des Chat-Inhalts anpassen",
- },
- Update: {
- Version: (x: string) => `Version: ${x}`,
- IsLatest: "Neueste Version",
- CheckUpdate: "Update prüfen",
- IsChecking: "Update wird geprüft...",
- FoundUpdate: (x: string) => `Neue Version gefunden: ${x}`,
- GoToUpdate: "Aktualisieren",
- },
- SendKey: "Senden-Taste",
- Theme: "Erscheinungsbild",
- TightBorder: "Enger Rahmen",
- SendPreviewBubble: {
- Title: "Vorschau-Bubble senden",
- SubTitle: "Preview markdown in bubble",
- },
- Mask: {
- Title: "Mask Splash Screen",
- SubTitle: "Show a mask splash screen before starting new chat",
- },
- Prompt: {
- Disable: {
- Title: "Autovervollständigung deaktivieren",
- SubTitle: "Autovervollständigung mit / starten",
- },
- List: "Prompt-Liste",
- ListCount: (builtin: number, custom: number) =>
- `${builtin} integriert, ${custom} benutzerdefiniert`,
- Edit: "Bearbeiten",
- Modal: {
- Title: "Prompt List",
- Add: "Add One",
- Search: "Search Prompts",
- },
- EditModal: {
- Title: "Edit Prompt",
- },
- },
- HistoryCount: {
- Title: "Anzahl der angehängten Nachrichten",
- SubTitle: "Anzahl der pro Anfrage angehängten gesendeten Nachrichten",
- },
- CompressThreshold: {
- Title: "Schwellenwert für Verlaufskomprimierung",
- SubTitle:
- "Komprimierung, wenn die Länge der unkomprimierten Nachrichten den Wert überschreitet",
- },
- Token: {
- Title: "API-Schlüssel",
- SubTitle:
- "Verwenden Sie Ihren Schlüssel, um das Zugangscode-Limit zu ignorieren",
- Placeholder: "OpenAI API-Schlüssel",
- },
- Usage: {
- Title: "Kontostand",
- SubTitle(used: any, total: any) {
- return `Diesen Monat ausgegeben $${used}, Abonnement $${total}`;
- },
- IsChecking: "Wird überprüft...",
- Check: "Erneut prüfen",
- NoAccess: "API-Schlüssel eingeben, um den Kontostand zu überprüfen",
- },
- AccessCode: {
- Title: "Zugangscode",
- SubTitle: "Zugangskontrolle aktiviert",
- Placeholder: "Zugangscode erforderlich",
- },
- Bot: "KI-Anbieter (bot)",
- Model: "Modell",
- Temperature: {
- Title: "Temperature", //Temperatur
- SubTitle: "Ein größerer Wert führt zu zufälligeren Antworten",
- },
- MaxTokens: {
- Title: "Max Tokens", //Maximale Token
- SubTitle: "Maximale Anzahl der Anfrage- plus Antwort-Token",
- },
- PresencePenlty: {
- Title: "Presence Penalty", //Anwesenheitsstrafe
- SubTitle:
- "Ein größerer Wert erhöht die Wahrscheinlichkeit, dass über neue Themen gesprochen wird",
- },
- },
- Store: {
- DefaultTopic: "Neues Gespräch",
- BotHello: "Hallo! Wie kann ich Ihnen heute helfen?",
- Error:
- "Etwas ist schief gelaufen, bitte versuchen Sie es später noch einmal.",
- Prompt: {
- History: (content: string) =>
- "Dies ist eine Zusammenfassung des Chatverlaufs zwischen dem KI und dem Benutzer als Rückblick: " +
- content,
- Topic:
- "Bitte erstellen Sie einen vier- bis fünfwörtigen Titel, der unser Gespräch zusammenfasst, ohne Einleitung, Zeichensetzung, Anführungszeichen, Punkte, Symbole oder zusätzlichen Text. Entfernen Sie Anführungszeichen.",
- Summarize:
- "Fassen Sie unsere Diskussion kurz in 200 Wörtern oder weniger zusammen, um sie als Pronpt für zukünftige Gespräche zu verwenden.",
- },
- },
- Copy: {
- Success: "In die Zwischenablage kopiert",
- Failed:
- "Kopieren fehlgeschlagen, bitte geben Sie die Berechtigung zum Zugriff auf die Zwischenablage frei",
- },
- Context: {
- Toast: (x: any) => `Mit ${x} Kontext-Prompts`,
- Edit: "Kontext- und Gedächtnis-Prompts",
- Add: "Hinzufügen",
- },
- Plugin: {
- Name: "Plugin",
- },
- Mask: {
- Name: "Mask",
- Page: {
- Title: "Prompt Template",
- SubTitle: (count: number) => `${count} prompt templates`,
- Search: "Search Templates",
- Create: "Create",
- },
- Item: {
- Info: (count: number) => `${count} prompts`,
- Chat: "Chat",
- View: "View",
- Edit: "Edit",
- Delete: "Delete",
- DeleteConfirm: "Confirm to delete?",
- },
- EditModal: {
- Title: (readonly: boolean) =>
- `Edit Prompt Template ${readonly ? "(readonly)" : ""}`,
- Download: "Download",
- Clone: "Clone",
- },
- Config: {
- Avatar: "Bot Avatar",
- Name: "Bot Name",
- },
- },
- NewChat: {
- Return: "Return",
- Skip: "Skip",
- Title: "Pick a Mask",
- SubTitle: "Chat with the Soul behind the Mask",
- More: "Find More",
- NotShow: "Not Show Again",
-    ConfirmNoShow: "Confirm to disable? You can enable it in settings later.",
- },
-
- UI: {
- Confirm: "Confirm",
- Cancel: "Cancel",
- Close: "Close",
- Create: "Create",
- Edit: "Edit",
- },
-};
-
-export default de;
diff --git a/spaces/MWilinski/bot/data/get_hugging_face_repositories.py b/spaces/MWilinski/bot/data/get_hugging_face_repositories.py
deleted file mode 100644
index 26ddcb7d9e790fe3a2b8e6114004fbfcb4c5419f..0000000000000000000000000000000000000000
--- a/spaces/MWilinski/bot/data/get_hugging_face_repositories.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import json
-import argparse
-import requests
-from typing import List
-
-
-def get_repositories_names(token):
-    url = 'https://api.github.com/orgs/huggingface/repos?per_page=100'  # the GitHub API caps per_page at 100
- headers = {'Authorization': f'token {token}'}
- response = requests.get(url, headers=headers)
- if response.status_code == 200:
- repos = json.loads(response.content)
- repo_names = [
- repo['full_name'] for repo in repos
- if repo['stargazers_count'] >= 100
- ]
- return repo_names
- else:
- return 'Error: '+str(response.status_code)
-
-
-def save_repositories_urls(repositories_names: List[str], output_filename: str):
- urls = ['https://github.com/'+repo_name for repo_name in repositories_names]
- data = {"urls": urls}
- with open(output_filename, 'w') as f:
- json.dump(data, f, indent=4)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--token', type=str)
- args = parser.parse_args()
- repositories = get_repositories_names(token=args.token)
- save_repositories_urls(repositories, 'datasets/hf_repositories_urls_scraped.json')
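
Besides the --token CLI shown above, the two helpers can be driven programmatically. In the sketch below the import path and the environment-variable token are assumptions for illustration, not part of the script:

import os
from data.get_hugging_face_repositories import get_repositories_names, save_repositories_urls

names = get_repositories_names(token=os.environ["GITHUB_TOKEN"])            # repos with >= 100 stars
save_repositories_urls(names, "datasets/hf_repositories_urls_scraped.json")
# The output file looks like {"urls": ["https://github.com/huggingface/...", ...]}
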
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/resnext.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/resnext.py
deleted file mode 100644
index 962249ad6fd9b50960ad6426f7ce3cac6ed8c5bc..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/resnext.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import math
-
-from annotator.uniformer.mmcv.cnn import build_conv_layer, build_norm_layer
-
-from ..builder import BACKBONES
-from ..utils import ResLayer
-from .resnet import Bottleneck as _Bottleneck
-from .resnet import ResNet
-
-
-class Bottleneck(_Bottleneck):
- """Bottleneck block for ResNeXt.
-
- If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is
- "caffe", the stride-two layer is the first 1x1 conv layer.
- """
-
- def __init__(self,
- inplanes,
- planes,
- groups=1,
- base_width=4,
- base_channels=64,
- **kwargs):
- super(Bottleneck, self).__init__(inplanes, planes, **kwargs)
-
- if groups == 1:
- width = self.planes
- else:
- width = math.floor(self.planes *
- (base_width / base_channels)) * groups
-
- self.norm1_name, norm1 = build_norm_layer(
- self.norm_cfg, width, postfix=1)
- self.norm2_name, norm2 = build_norm_layer(
- self.norm_cfg, width, postfix=2)
- self.norm3_name, norm3 = build_norm_layer(
- self.norm_cfg, self.planes * self.expansion, postfix=3)
-
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- self.inplanes,
- width,
- kernel_size=1,
- stride=self.conv1_stride,
- bias=False)
- self.add_module(self.norm1_name, norm1)
- fallback_on_stride = False
- self.with_modulated_dcn = False
- if self.with_dcn:
- fallback_on_stride = self.dcn.pop('fallback_on_stride', False)
- if not self.with_dcn or fallback_on_stride:
- self.conv2 = build_conv_layer(
- self.conv_cfg,
- width,
- width,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=self.dilation,
- dilation=self.dilation,
- groups=groups,
- bias=False)
- else:
- assert self.conv_cfg is None, 'conv_cfg must be None for DCN'
- self.conv2 = build_conv_layer(
- self.dcn,
- width,
- width,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=self.dilation,
- dilation=self.dilation,
- groups=groups,
- bias=False)
-
- self.add_module(self.norm2_name, norm2)
- self.conv3 = build_conv_layer(
- self.conv_cfg,
- width,
- self.planes * self.expansion,
- kernel_size=1,
- bias=False)
- self.add_module(self.norm3_name, norm3)
-
-
-@BACKBONES.register_module()
-class ResNeXt(ResNet):
- """ResNeXt backbone.
-
- Args:
- depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
- in_channels (int): Number of input image channels. Normally 3.
- num_stages (int): Resnet stages, normally 4.
- groups (int): Group of resnext.
- base_width (int): Base width of resnext.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- frozen_stages (int): Stages to be frozen (all param fixed). -1 means
- not freezing any parameters.
- norm_cfg (dict): dictionary to construct and config norm layer.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- zero_init_residual (bool): whether to use zero init for last norm layer
- in resblocks to let them behave as identity.
-
- Example:
- >>> from annotator.uniformer.mmseg.models import ResNeXt
- >>> import torch
- >>> self = ResNeXt(depth=50)
- >>> self.eval()
- >>> inputs = torch.rand(1, 3, 32, 32)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 256, 8, 8)
- (1, 512, 4, 4)
- (1, 1024, 2, 2)
- (1, 2048, 1, 1)
- """
-
- arch_settings = {
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3))
- }
-
- def __init__(self, groups=1, base_width=4, **kwargs):
- self.groups = groups
- self.base_width = base_width
- super(ResNeXt, self).__init__(**kwargs)
-
- def make_res_layer(self, **kwargs):
- """Pack all blocks in a stage into a ``ResLayer``"""
- return ResLayer(
- groups=self.groups,
- base_width=self.base_width,
- base_channels=self.base_channels,
- **kwargs)
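
The grouped 3x3 convolution in the bottleneck runs at width = floor(planes * base_width / base_channels) * groups rather than at planes itself. A quick numeric check for the standard 32x4d setting; the printed stage widths are the usual ResNeXt-50 values:

import math

groups, base_width, base_channels = 32, 4, 64
for planes in (64, 128, 256, 512):          # per-stage `planes` before the 4x expansion
    width = math.floor(planes * (base_width / base_channels)) * groups
    print(planes, '->', width)
# 64 -> 128, 128 -> 256, 256 -> 512, 512 -> 1024
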
diff --git a/spaces/MetaWabbit/Auto-GPT/BULLETIN.md b/spaces/MetaWabbit/Auto-GPT/BULLETIN.md
deleted file mode 100644
index 735048ddc87a914987c6bd70ccdb231a80242ae3..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/BULLETIN.md
+++ /dev/null
@@ -1,2 +0,0 @@
-Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here.
-If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
\ No newline at end of file
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/dictionary/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/dictionary/__init__.py
deleted file mode 100644
index 9ad0ab306f183192aa5c8464eee5947e13d294e6..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/dictionary/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-
-from .dictionary import Dictionary
-
-__all__ = ['Dictionary']
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/__init__.py
deleted file mode 100644
index a1fa8af5586145c8e31c463e6d0620c9f1af2e3b..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .conv_layer import BasicBlock, Bottleneck
-from .dot_product_attention_layer import DotProductAttentionLayer
-from .lstm_layer import BidirectionalLSTM
-from .position_aware_layer import PositionAwareLayer
-from .robust_scanner_fusion_layer import RobustScannerFusionLayer
-from .satrn_layers import Adaptive2DPositionalEncoding, SATRNEncoderLayer
-
-__all__ = [
- 'BidirectionalLSTM', 'Adaptive2DPositionalEncoding', 'BasicBlock',
- 'Bottleneck', 'RobustScannerFusionLayer', 'DotProductAttentionLayer',
- 'PositionAwareLayer', 'SATRNEncoderLayer'
-]
diff --git a/spaces/NATSpeech/PortaSpeech/utils/commons/hparams.py b/spaces/NATSpeech/PortaSpeech/utils/commons/hparams.py
deleted file mode 100644
index 356fe306b0be82040ae1e938d3fca0e2567ae7c2..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/utils/commons/hparams.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import argparse
-import os
-import yaml
-
-from utils.os_utils import remove_file
-
-global_print_hparams = True
-hparams = {}
-
-
-class Args:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- self.__setattr__(k, v)
-
-
-def override_config(old_config: dict, new_config: dict):
- for k, v in new_config.items():
- if isinstance(v, dict) and k in old_config:
- override_config(old_config[k], new_config[k])
- else:
- old_config[k] = v
-
-
-def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True):
- if config == '' and exp_name == '':
- parser = argparse.ArgumentParser(description='')
- parser.add_argument('--config', type=str, default='',
- help='location of the data corpus')
- parser.add_argument('--exp_name', type=str, default='', help='exp_name')
- parser.add_argument('-hp', '--hparams', type=str, default='',
-                            help='hparams overrides, e.g. "a=1,b.c=2"')
- parser.add_argument('--infer', action='store_true', help='infer')
- parser.add_argument('--validate', action='store_true', help='validate')
- parser.add_argument('--reset', action='store_true', help='reset hparams')
- parser.add_argument('--remove', action='store_true', help='remove old ckpt')
- parser.add_argument('--debug', action='store_true', help='debug')
- args, unknown = parser.parse_known_args()
- print("| Unknow hparams: ", unknown)
- else:
- args = Args(config=config, exp_name=exp_name, hparams=hparams_str,
- infer=False, validate=False, reset=False, debug=False, remove=False)
- global hparams
- assert args.config != '' or args.exp_name != ''
- if args.config != '':
- assert os.path.exists(args.config)
-
- config_chains = []
- loaded_config = set()
-
- def load_config(config_fn):
-        # depth-first inheritance; avoid visiting the same config file twice
- if not os.path.exists(config_fn):
- return {}
- with open(config_fn) as f:
- hparams_ = yaml.safe_load(f)
- loaded_config.add(config_fn)
- if 'base_config' in hparams_:
- ret_hparams = {}
- if not isinstance(hparams_['base_config'], list):
- hparams_['base_config'] = [hparams_['base_config']]
- for c in hparams_['base_config']:
- if c.startswith('.'):
- c = f'{os.path.dirname(config_fn)}/{c}'
- c = os.path.normpath(c)
- if c not in loaded_config:
- override_config(ret_hparams, load_config(c))
- override_config(ret_hparams, hparams_)
- else:
- ret_hparams = hparams_
- config_chains.append(config_fn)
- return ret_hparams
-
- saved_hparams = {}
- args_work_dir = ''
- if args.exp_name != '':
- args_work_dir = f'checkpoints/{args.exp_name}'
- ckpt_config_path = f'{args_work_dir}/config.yaml'
- if os.path.exists(ckpt_config_path):
- with open(ckpt_config_path) as f:
- saved_hparams_ = yaml.safe_load(f)
- if saved_hparams_ is not None:
- saved_hparams.update(saved_hparams_)
- hparams_ = {}
- if args.config != '':
- hparams_.update(load_config(args.config))
- if not args.reset:
- hparams_.update(saved_hparams)
- hparams_['work_dir'] = args_work_dir
-
- # Support config overriding in command line. Support list type config overriding.
- # Examples: --hparams="a=1,b.c=2,d=[1 1 1]"
- if args.hparams != "":
- for new_hparam in args.hparams.split(","):
- k, v = new_hparam.split("=")
- v = v.strip("\'\" ")
- config_node = hparams_
- for k_ in k.split(".")[:-1]:
- config_node = config_node[k_]
- k = k.split(".")[-1]
- if v in ['True', 'False'] or type(config_node[k]) in [bool, list, dict]:
- if type(config_node[k]) == list:
- v = v.replace(" ", ",")
- config_node[k] = eval(v)
- else:
- config_node[k] = type(config_node[k])(v)
- if args_work_dir != '' and args.remove:
- answer = input("REMOVE old checkpoint? Y/N [Default: N]: ")
- if answer.lower() == "y":
- remove_file(args_work_dir)
- if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer:
- os.makedirs(hparams_['work_dir'], exist_ok=True)
- with open(ckpt_config_path, 'w') as f:
- yaml.safe_dump(hparams_, f)
-
- hparams_['infer'] = args.infer
- hparams_['debug'] = args.debug
- hparams_['validate'] = args.validate
- hparams_['exp_name'] = args.exp_name
- global global_print_hparams
- if global_hparams:
- hparams.clear()
- hparams.update(hparams_)
- if print_hparams and global_print_hparams and global_hparams:
- print('| Hparams chains: ', config_chains)
- print('| Hparams: ')
- for i, (k, v) in enumerate(sorted(hparams_.items())):
- print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "")
- print("")
- global_print_hparams = False
- return hparams_
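
For readers skimming this deleted `hparams.py`, here is a small, hypothetical usage sketch of the `set_hparams` helper above. It is not part of the original repository: the config path, experiment name, and override keys (`lr`, `model.hidden_size`) are made-up examples, and the import path simply mirrors where the file lived in this Space.

```python
# Hypothetical sketch only: driving the removed set_hparams helper from Python.
# Assumes a YAML file configs/base.yaml that defines `lr` and `model.hidden_size`.
from utils.commons.hparams import set_hparams

hp = set_hparams(
    config='configs/base.yaml',
    exp_name='demo_exp',
    hparams_str='lr=0.0005,model.hidden_size=256',  # same "a=1,b.c=2" syntax as the --hparams CLI flag
    print_hparams=False,
)
print(hp['lr'], hp['model']['hidden_size'])
```

Note that overriding a nested key such as `model.hidden_size` only works if the key already exists in the loaded YAML, because the helper walks the existing config dict and casts the new value to the old value's type. With an `exp_name` set, it will typically also create `checkpoints/demo_exp/` and write the merged config there (unless a saved config already exists).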
diff --git a/spaces/NATSpeech/PortaSpeech/utils/nn/schedulers.py b/spaces/NATSpeech/PortaSpeech/utils/nn/schedulers.py
deleted file mode 100644
index c91969dd8e01a8342488e060592700f3957c3651..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/utils/nn/schedulers.py
+++ /dev/null
@@ -1,57 +0,0 @@
-class NoneSchedule(object):
- def __init__(self, optimizer, lr):
- self.optimizer = optimizer
- self.constant_lr = lr
- self.step(0)
-
- def step(self, num_updates):
- self.lr = self.constant_lr
- for param_group in self.optimizer.param_groups:
- param_group['lr'] = self.lr
- return self.lr
-
- def get_lr(self):
- return self.optimizer.param_groups[0]['lr']
-
- def get_last_lr(self):
- return self.get_lr()
-
-
-class RSQRTSchedule(NoneSchedule):
- def __init__(self, optimizer, lr, warmup_updates, hidden_size):
- self.optimizer = optimizer
- self.constant_lr = lr
- self.warmup_updates = warmup_updates
- self.hidden_size = hidden_size
- self.lr = lr
- for param_group in optimizer.param_groups:
- param_group['lr'] = self.lr
- self.step(0)
-
- def step(self, num_updates):
- constant_lr = self.constant_lr
- warmup = min(num_updates / self.warmup_updates, 1.0)
- rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5
- rsqrt_hidden = self.hidden_size ** -0.5
- self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7)
- for param_group in self.optimizer.param_groups:
- param_group['lr'] = self.lr
- return self.lr
-
-
-class WarmupSchedule(NoneSchedule):
- def __init__(self, optimizer, lr, warmup_updates):
- self.optimizer = optimizer
- self.constant_lr = self.lr = lr
- self.warmup_updates = warmup_updates
- for param_group in optimizer.param_groups:
- param_group['lr'] = self.lr
- self.step(0)
-
- def step(self, num_updates):
- constant_lr = self.constant_lr
- warmup = min(num_updates / self.warmup_updates, 1.0)
- self.lr = max(constant_lr * warmup, 1e-7)
- for param_group in self.optimizer.param_groups:
- param_group['lr'] = self.lr
- return self.lr
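
As a rough illustration (not from the original repository), the deleted `RSQRTSchedule` above implements the usual Transformer-style linear warmup followed by inverse-square-root decay. A minimal sketch of attaching it to a PyTorch optimizer might look like the following; the model, learning rate, warmup length, and hidden size are placeholder values, and the class is assumed to be importable from its original location (`utils.nn.schedulers`).

```python
# Hypothetical sketch: using the removed RSQRTSchedule with a PyTorch optimizer.
# All hyperparameters below are illustrative placeholders.
import torch
from utils.nn.schedulers import RSQRTSchedule  # original module path in this Space

model = torch.nn.Linear(256, 256)
optimizer = torch.optim.AdamW(model.parameters(), lr=1.0)  # lr is overwritten by the scheduler
scheduler = RSQRTSchedule(optimizer, lr=2.0, warmup_updates=4000, hidden_size=256)

for num_updates in range(1, 10_001):
    # ... forward / backward / optimizer.step() would go here ...
    lr = scheduler.step(num_updates)  # linear warmup, then ~1/sqrt(num_updates) decay
```

The learning rate never drops below the `1e-7` floor hard-coded in `step`.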
diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/data/data_utils_test.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/data/data_utils_test.py
deleted file mode 100644
index 7d225ef08c0bfaa36b2ae32469ca1e3946e3b41a..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/data/data_utils_test.py
+++ /dev/null
@@ -1,200 +0,0 @@
-# Copyright 2017 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Tests for data_utils."""
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-# Dependency imports
-
-import tensorflow as tf
-
-from data import data_utils
-
-data = data_utils
-
-
-class SequenceWrapperTest(tf.test.TestCase):
-
- def testDefaultTimesteps(self):
- seq = data.SequenceWrapper()
- t1 = seq.add_timestep()
- _ = seq.add_timestep()
- self.assertEqual(len(seq), 2)
-
- self.assertEqual(t1.weight, 0.0)
- self.assertEqual(t1.label, 0)
- self.assertEqual(t1.token, 0)
-
- def testSettersAndGetters(self):
- ts = data.SequenceWrapper().add_timestep()
- ts.set_token(3)
- ts.set_label(4)
- ts.set_weight(2.0)
- self.assertEqual(ts.token, 3)
- self.assertEqual(ts.label, 4)
- self.assertEqual(ts.weight, 2.0)
-
- def testTimestepIteration(self):
- seq = data.SequenceWrapper()
- seq.add_timestep().set_token(0)
- seq.add_timestep().set_token(1)
- seq.add_timestep().set_token(2)
- for i, ts in enumerate(seq):
- self.assertEqual(ts.token, i)
-
- def testFillsSequenceExampleCorrectly(self):
- seq = data.SequenceWrapper()
- seq.add_timestep().set_token(1).set_label(2).set_weight(3.0)
- seq.add_timestep().set_token(10).set_label(20).set_weight(30.0)
-
- seq_ex = seq.seq
- fl = seq_ex.feature_lists.feature_list
- fl_token = fl[data.SequenceWrapper.F_TOKEN_ID].feature
- fl_label = fl[data.SequenceWrapper.F_LABEL].feature
- fl_weight = fl[data.SequenceWrapper.F_WEIGHT].feature
- _ = [self.assertEqual(len(f), 2) for f in [fl_token, fl_label, fl_weight]]
- self.assertAllEqual([f.int64_list.value[0] for f in fl_token], [1, 10])
- self.assertAllEqual([f.int64_list.value[0] for f in fl_label], [2, 20])
- self.assertAllEqual([f.float_list.value[0] for f in fl_weight], [3.0, 30.0])
-
-
-class DataUtilsTest(tf.test.TestCase):
-
- def testSplitByPunct(self):
- output = data.split_by_punct(
- 'hello! world, i\'ve been\nwaiting\tfor\ryou for.a long time')
- expected = [
- 'hello', 'world', 'i', 've', 'been', 'waiting', 'for', 'you', 'for',
- 'a', 'long', 'time'
- ]
- self.assertListEqual(output, expected)
-
- def _buildDummySequence(self):
- seq = data.SequenceWrapper()
- for i in range(10):
- seq.add_timestep().set_token(i)
- return seq
-
- def testBuildLMSeq(self):
- seq = self._buildDummySequence()
- lm_seq = data.build_lm_sequence(seq)
- for i, ts in enumerate(lm_seq):
- # For end of sequence, the token and label should be same, and weight
- # should be 0.0.
- if i == len(lm_seq) - 1:
- self.assertEqual(ts.token, i)
- self.assertEqual(ts.label, i)
- self.assertEqual(ts.weight, 0.0)
- else:
- self.assertEqual(ts.token, i)
- self.assertEqual(ts.label, i + 1)
- self.assertEqual(ts.weight, 1.0)
-
- def testBuildSAESeq(self):
- seq = self._buildDummySequence()
- sa_seq = data.build_seq_ae_sequence(seq)
-
- self.assertEqual(len(sa_seq), len(seq) * 2 - 1)
-
- # Tokens should be sequence twice, minus the EOS token at the end
- for i, ts in enumerate(sa_seq):
- self.assertEqual(ts.token, seq[i % 10].token)
-
- # Weights should be len-1 0.0's and len 1.0's.
- for i in range(len(seq) - 1):
- self.assertEqual(sa_seq[i].weight, 0.0)
- for i in range(len(seq) - 1, len(sa_seq)):
- self.assertEqual(sa_seq[i].weight, 1.0)
-
- # Labels should be len-1 0's, and then the sequence
- for i in range(len(seq) - 1):
- self.assertEqual(sa_seq[i].label, 0)
- for i in range(len(seq) - 1, len(sa_seq)):
- self.assertEqual(sa_seq[i].label, seq[i - (len(seq) - 1)].token)
-
- def testBuildLabelSeq(self):
- seq = self._buildDummySequence()
- eos_id = len(seq) - 1
- label_seq = data.build_labeled_sequence(seq, True)
- for i, ts in enumerate(label_seq[:-1]):
- self.assertEqual(ts.token, i)
- self.assertEqual(ts.label, 0)
- self.assertEqual(ts.weight, 0.0)
-
- final_timestep = label_seq[-1]
- self.assertEqual(final_timestep.token, eos_id)
- self.assertEqual(final_timestep.label, 1)
- self.assertEqual(final_timestep.weight, 1.0)
-
- def testBuildBidirLabelSeq(self):
- seq = self._buildDummySequence()
- reverse_seq = data.build_reverse_sequence(seq)
- bidir_seq = data.build_bidirectional_seq(seq, reverse_seq)
- label_seq = data.build_labeled_sequence(bidir_seq, True)
-
- for (i, ts), j in zip(
- enumerate(label_seq[:-1]), reversed(range(len(seq) - 1))):
- self.assertAllEqual(ts.tokens, [i, j])
- self.assertEqual(ts.label, 0)
- self.assertEqual(ts.weight, 0.0)
-
- final_timestep = label_seq[-1]
- eos_id = len(seq) - 1
- self.assertAllEqual(final_timestep.tokens, [eos_id, eos_id])
- self.assertEqual(final_timestep.label, 1)
- self.assertEqual(final_timestep.weight, 1.0)
-
- def testReverseSeq(self):
- seq = self._buildDummySequence()
- reverse_seq = data.build_reverse_sequence(seq)
- for i, ts in enumerate(reversed(reverse_seq[:-1])):
- self.assertEqual(ts.token, i)
- self.assertEqual(ts.label, 0)
- self.assertEqual(ts.weight, 0.0)
-
- final_timestep = reverse_seq[-1]
- eos_id = len(seq) - 1
- self.assertEqual(final_timestep.token, eos_id)
- self.assertEqual(final_timestep.label, 0)
- self.assertEqual(final_timestep.weight, 0.0)
-
- def testBidirSeq(self):
- seq = self._buildDummySequence()
- reverse_seq = data.build_reverse_sequence(seq)
- bidir_seq = data.build_bidirectional_seq(seq, reverse_seq)
- for (i, ts), j in zip(
- enumerate(bidir_seq[:-1]), reversed(range(len(seq) - 1))):
- self.assertAllEqual(ts.tokens, [i, j])
- self.assertEqual(ts.label, 0)
- self.assertEqual(ts.weight, 0.0)
-
- final_timestep = bidir_seq[-1]
- eos_id = len(seq) - 1
- self.assertAllEqual(final_timestep.tokens, [eos_id, eos_id])
- self.assertEqual(final_timestep.label, 0)
- self.assertEqual(final_timestep.weight, 0.0)
-
- def testLabelGain(self):
- seq = self._buildDummySequence()
- label_seq = data.build_labeled_sequence(seq, True, label_gain=True)
- for i, ts in enumerate(label_seq):
- self.assertEqual(ts.token, i)
- self.assertEqual(ts.label, 1)
- self.assertNear(ts.weight, float(i) / (len(seq) - 1), 1e-3)
-
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/Nultx/VITS-TTS/ONNXVITS_modules.py b/spaces/Nultx/VITS-TTS/ONNXVITS_modules.py
deleted file mode 100644
index 6cf676ce37c1eaf8428c4094e749f862182cb0c3..0000000000000000000000000000000000000000
--- a/spaces/Nultx/VITS-TTS/ONNXVITS_modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from ONNXVITS_transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/Nyashi/rvc-models-epic/config.py b/spaces/Nyashi/rvc-models-epic/config.py
deleted file mode 100644
index 7a9f9b01d62c30aabf20358ff1607de20a88af27..0000000000000000000000000000000000000000
--- a/spaces/Nyashi/rvc-models-epic/config.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import argparse
-import torch
-from multiprocessing import cpu_count
-
-
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- (
- self.python_cmd,
- self.listen_port,
- self.colab,
- self.noparallel,
- self.noautoopen,
- self.api,
- self.json
- ) = self.arg_parse()
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def arg_parse() -> tuple:
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument(
- "--pycmd", type=str, default="python", help="Python command"
- )
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- parser.add_argument('--api', action="store_true", default=True)
- parser.add_argument("--json", action="store_true", default=False, help="use model_info.json")
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- cmd_opts.api,
- cmd_opts.json
- )
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
-                print("Forcing single precision for 16-series/10-series GPUs and P40")
- self.is_half = False
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(f"configs/{config_file}", "r") as f:
- strr = f.read().replace("true", "false")
- with open(f"configs/{config_file}", "w") as f:
- f.write(strr)
- with open("trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- else:
- self.gpu_name = None
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- elif torch.backends.mps.is_available():
-            print("No supported NVIDIA GPU found, using MPS for inference")
- self.device = "mps"
- self.is_half = False
- else:
-            print("No supported NVIDIA GPU found, using CPU for inference")
- self.device = "cpu"
- self.is_half = False
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
-            # configuration for 6GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
-            # configuration for 5GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
-        if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py
deleted file mode 100644
index 5b7e1e968564b84c47049c5cc69c9d6b8fafe0e9..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py
+++ /dev/null
@@ -1,69 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import torchaudio
-import argparse
-import json
-import pathlib
-
-
-def get_args():
- parser = argparse.ArgumentParser(
-        "Ensuring generated audio has the same length as ground-truth audio")
- parser.add_argument('--samples_dir', required=True, type=str)
- parser.add_argument('--out_dir', required=True, type=str)
- parser.add_argument('--prompts_description', required=True, type=str)
- return parser.parse_args()
-
-
-def cut(src, tgt, l):
- x, sr = torchaudio.load(str(src))
- assert sr == 16_000
-
- x = x.squeeze()
- target_frames = int(l * sr)
-
- flag = 0
- if target_frames <= x.size(0):
- x = x[:target_frames]
- flag = 1
- else:
- flag = 0
- torchaudio.save(str(tgt), x.unsqueeze(0), sr)
- return flag
-
-
-def main():
- args = get_args()
- tgt_dir = pathlib.Path(args.out_dir)
- tgt_dir.mkdir(exist_ok=True, parents=True)
-
- total_files, sufficiently_long = 0, 0
-
- with open(args.prompts_description, 'r') as f:
- description = json.loads(f.read())
-
- for src_f in pathlib.Path(args.samples_dir).glob('*.wav'):
- name_prompt = src_f.with_suffix('').name.split('__')[0]
-
- assert name_prompt in description, f'Cannot find {name_prompt}!'
-
- target_length = description[name_prompt][0]
- tgt_f = tgt_dir / (src_f.name)
-
- is_long_enough = cut(src_f, tgt_f, target_length)
- sufficiently_long += is_long_enough
- if not is_long_enough:
- print(f'{src_f} is not long enough')
-
- total_files += 1
-
- print(
- f'Total files: {total_files}; sufficiently long: {sufficiently_long}')
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py
deleted file mode 100644
index 27792ebda842057e33fed3dc53dd9d8a594d0483..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py
+++ /dev/null
@@ -1,637 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-from enum import Enum, auto
-import math
-import numpy as np
-from typing import Tuple, List, Optional, Dict
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import autograd
-
-from fairseq import checkpoint_utils, utils
-from fairseq.dataclass import FairseqDataclass
-from fairseq.models import BaseFairseqModel, register_model
-from fairseq.modules import (
- SamePad,
- TransposeLast,
-)
-
-
-class SegmentationType(Enum):
- NONE = auto()
- RANDOM = auto()
- UNIFORM_RANDOM = auto()
- UNIFORM_RANDOM_JOIN = auto()
- JOIN = auto()
-
-
-@dataclass
-class SegmentationConfig(FairseqDataclass):
- type: SegmentationType = SegmentationType.NONE
- subsample_rate: float = 0.25
- mean_pool: bool = True
- mean_pool_join: bool = False
- remove_zeros: bool = False
-
-
-@dataclass
-class Wav2vec_UConfig(FairseqDataclass):
-
- discriminator_kernel: int = 3
- discriminator_dilation: int = 1
- discriminator_dim: int = 256
- discriminator_causal: bool = True
- discriminator_linear_emb: bool = False
- discriminator_depth: int = 1
- discriminator_max_pool: bool = False
- discriminator_act_after_linear: bool = False
- discriminator_dropout: float = 0.0
- discriminator_spectral_norm: bool = False
- discriminator_weight_norm: bool = False
-
- generator_kernel: int = 4
- generator_dilation: int = 1
- generator_stride: int = 1
- generator_bias: bool = False
- generator_dropout: float = 0.0
-
- blank_weight: float = 0
- blank_mode: str = "add"
- blank_is_sil: bool = False
- no_softmax: bool = False
-
- smoothness_weight: float = 0.0
- smoothing: float = 0.0
- smoothing_one_sided: bool = False
- gradient_penalty: float = 0.0
- probabilistic_grad_penalty_slicing: bool = False
- code_penalty: float = 0.0
- gumbel: bool = False
- hard_gumbel: bool = True
- temp: Tuple[float, float, float] = (2, 0.1, 0.99995)
- input_dim: int = 128
-
- segmentation: SegmentationConfig = SegmentationConfig()
-
-
-class Segmenter(nn.Module):
- cfg: SegmentationConfig
-
- def __init__(self, cfg: SegmentationConfig):
- super().__init__()
- self.cfg = cfg
- self.subsample_rate = cfg.subsample_rate
-
- def pre_segment(self, dense_x, dense_padding_mask):
- return dense_x, dense_padding_mask
-
- def logit_segment(self, logits, padding_mask):
- return logits, padding_mask
-
-
-class RandomSegmenter(Segmenter):
- def pre_segment(self, dense_x, dense_padding_mask):
- target_num = math.ceil(dense_x.size(1) * self.subsample_rate)
- ones = torch.ones(dense_x.shape[:-1], device=dense_x.device)
- indices, _ = ones.multinomial(target_num).sort(dim=-1)
- indices_ld = indices.unsqueeze(-1).expand(-1, -1, dense_x.size(-1))
- dense_x = dense_x.gather(1, indices_ld)
- dense_padding_mask = dense_padding_mask.gather(1, index=indices)
- return dense_x, dense_padding_mask
-
-
-class UniformRandomSegmenter(Segmenter):
- def pre_segment(self, dense_x, dense_padding_mask):
- bsz, tsz, fsz = dense_x.shape
-
- target_num = math.ceil(tsz * self.subsample_rate)
-
- rem = tsz % target_num
-
- if rem > 0:
- dense_x = F.pad(dense_x, [0, 0, 0, target_num - rem])
- dense_padding_mask = F.pad(
- dense_padding_mask, [0, target_num - rem], value=True
- )
-
- dense_x = dense_x.view(bsz, target_num, -1, fsz)
- dense_padding_mask = dense_padding_mask.view(bsz, target_num, -1)
-
- if self.cfg.mean_pool:
- dense_x = dense_x.mean(dim=-2)
- dense_padding_mask = dense_padding_mask.all(dim=-1)
- else:
- ones = torch.ones((bsz, dense_x.size(2)), device=dense_x.device)
- indices = ones.multinomial(1)
- indices = indices.unsqueeze(-1).expand(-1, target_num, -1)
- indices_ld = indices.unsqueeze(-1).expand(-1, -1, -1, fsz)
- dense_x = dense_x.gather(2, indices_ld).reshape(bsz, -1, fsz)
- dense_padding_mask = dense_padding_mask.gather(2, index=indices).reshape(
- bsz, -1
- )
- return dense_x, dense_padding_mask
-
-
-class JoinSegmenter(Segmenter):
- def logit_segment(self, logits, padding_mask):
- preds = logits.argmax(dim=-1)
-
- if padding_mask.any():
- preds[padding_mask] = -1 # mark pad
- uniques = []
-
- bsz, tsz, csz = logits.shape
-
- for p in preds:
- uniques.append(
- p.cpu().unique_consecutive(return_inverse=True, return_counts=True)
- )
-
- new_tsz = max(u[0].numel() for u in uniques)
- new_logits = logits.new_zeros(bsz, new_tsz, csz)
- new_pad = padding_mask.new_zeros(bsz, new_tsz)
-
- for b in range(bsz):
- u, idx, c = uniques[b]
- keep = u != -1
-
- if self.cfg.remove_zeros:
- keep.logical_and_(u != 0)
-
- if self.training and not self.cfg.mean_pool_join:
- u[0] = 0
- u[1:] = c.cumsum(0)[:-1]
- m = c > 1
- r = torch.rand(m.sum())
- o = (c[m] * r).long()
- u[m] += o
- new_logits[b, : u.numel()] = logits[b, u]
- else:
- new_logits[b].index_add_(
- dim=0, index=idx.to(new_logits.device), source=logits[b]
- )
- new_logits[b, : c.numel()] /= c.unsqueeze(-1).to(new_logits.device)
-
- new_sz = keep.sum()
- if not keep.all():
- kept_logits = new_logits[b, : c.numel()][keep]
- new_logits[b, :new_sz] = kept_logits
-
- if new_sz < new_tsz:
- pad = new_tsz - new_sz
- new_logits[b, -pad:] = 0
- new_pad[b, -pad:] = True
-
- return new_logits, new_pad
-
-
-class UniformRandomJoinSegmenter(UniformRandomSegmenter, JoinSegmenter):
- pass
-
-
-SEGMENT_FACTORY = {
- SegmentationType.NONE: Segmenter,
- SegmentationType.RANDOM: RandomSegmenter,
- SegmentationType.UNIFORM_RANDOM: UniformRandomSegmenter,
- SegmentationType.UNIFORM_RANDOM_JOIN: UniformRandomJoinSegmenter,
- SegmentationType.JOIN: JoinSegmenter,
-}
-
-
-class Discriminator(nn.Module):
- def __init__(self, dim, cfg: Wav2vec_UConfig):
- super().__init__()
-
- inner_dim = cfg.discriminator_dim
- kernel = cfg.discriminator_kernel
- dilation = cfg.discriminator_dilation
- self.max_pool = cfg.discriminator_max_pool
-
- if cfg.discriminator_causal:
- padding = kernel - 1
- else:
- padding = kernel // 2
-
- def make_conv(in_d, out_d, k, p=0, has_dilation=True):
- conv = nn.Conv1d(
- in_d,
- out_d,
- kernel_size=k,
- padding=p,
- dilation=dilation if has_dilation else 1,
- )
- if cfg.discriminator_spectral_norm:
- conv = nn.utils.spectral_norm(conv)
- elif cfg.discriminator_weight_norm:
- conv = nn.utils.weight_norm(conv)
- return conv
-
- inner_net = [
- nn.Sequential(
- make_conv(inner_dim, inner_dim, kernel, padding),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- nn.Dropout(cfg.discriminator_dropout),
- nn.GELU(),
- )
- for _ in range(cfg.discriminator_depth - 1)
- ] + [
- make_conv(inner_dim, 1, kernel, padding, has_dilation=False),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- ]
-
- if cfg.discriminator_linear_emb:
- emb_net = [make_conv(dim, inner_dim, 1)]
- else:
- emb_net = [
- make_conv(dim, inner_dim, kernel, padding),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- ]
-
- if cfg.discriminator_act_after_linear:
- emb_net.append(nn.GELU())
-
- self.net = nn.Sequential(
- *emb_net,
- nn.Dropout(cfg.discriminator_dropout),
- *inner_net,
- )
-
- def forward(self, x, padding_mask):
- x = x.transpose(1, 2) # BTC -> BCT
- x = self.net(x)
- x = x.transpose(1, 2)
- x_sz = x.size(1)
- if padding_mask is not None and padding_mask.any() and padding_mask.dim() > 1:
- padding_mask = padding_mask[:, : x.size(1)]
- x[padding_mask] = float("-inf") if self.max_pool else 0
- x_sz = x_sz - padding_mask.sum(dim=-1)
- x = x.squeeze(-1)
- if self.max_pool:
- x, _ = x.max(dim=-1)
- else:
- x = x.sum(dim=-1)
- x = x / x_sz
- return x
-
-
-class Generator(nn.Module):
- def __init__(self, input_dim, output_dim, cfg: Wav2vec_UConfig):
- super().__init__()
-
- self.cfg = cfg
- self.output_dim = output_dim
- self.stride = cfg.generator_stride
- self.dropout = nn.Dropout(cfg.generator_dropout)
-
- padding = cfg.generator_kernel // 2
- self.proj = nn.Sequential(
- TransposeLast(),
- nn.Conv1d(
- input_dim,
- output_dim,
- kernel_size=cfg.generator_kernel,
- stride=cfg.generator_stride,
- dilation=cfg.generator_dilation,
- padding=padding,
- bias=cfg.generator_bias,
- ),
- TransposeLast(),
- )
-
- def forward(self, dense_x, tokens, dense_padding_mask):
- dense_x = self.dropout(dense_x)
-
- dense_x = self.proj(dense_x)
- if self.stride > 1:
- dense_padding_mask = dense_padding_mask[:, :: self.stride]
-
- if dense_padding_mask.size(1) != dense_x.size(1):
- new_padding = dense_padding_mask.new_zeros(dense_x.shape[:-1])
- diff = new_padding.size(1) - dense_padding_mask.size(1)
- assert (
- diff > 0
- ), f"{new_padding.shape}, {dense_padding_mask.shape}, {dense_x.shape}, {diff}"
- if diff > 0:
- new_padding[:, diff:] = dense_padding_mask
- else:
- assert diff < 0
- new_padding = dense_padding_mask[:, :diff]
-
- dense_padding_mask = new_padding
-
- result = {}
-
- token_x = None
- if tokens is not None:
- token_x = dense_x.new_zeros(tokens.numel(), self.output_dim)
- token_x.scatter_(1, tokens.view(-1, 1).long(), 1)
- token_x = token_x.view(tokens.shape + (self.output_dim,))
-
- result["dense_x"] = dense_x
- result["token_x"] = token_x
- result["dense_padding_mask"] = dense_padding_mask
-
- return result
-
-
-@register_model("wav2vec_u", dataclass=Wav2vec_UConfig)
-class Wav2vec_U(BaseFairseqModel):
- def calc_gradient_penalty(self, real_data, fake_data):
-
- b_size = min(real_data.size(0), fake_data.size(0))
- t_size = min(real_data.size(1), fake_data.size(1))
-
- if self.cfg.probabilistic_grad_penalty_slicing:
-
- def get_slice(data, dim, target_size):
-
- size = data.size(dim)
- diff = size - target_size
- if diff <= 0:
- return data
-
- start = np.random.randint(0, diff + 1)
- return data.narrow(dim=dim, start=start, length=target_size)
-
- real_data = get_slice(real_data, 0, b_size)
- real_data = get_slice(real_data, 1, t_size)
- fake_data = get_slice(fake_data, 0, b_size)
- fake_data = get_slice(fake_data, 1, t_size)
-
- else:
- real_data = real_data[:b_size, :t_size]
- fake_data = fake_data[:b_size, :t_size]
-
- alpha = torch.rand(real_data.size(0), 1, 1)
- alpha = alpha.expand(real_data.size())
- alpha = alpha.to(real_data.device)
-
- interpolates = alpha * real_data + ((1 - alpha) * fake_data)
-
- disc_interpolates = self.discriminator(interpolates, None)
-
- gradients = autograd.grad(
- outputs=disc_interpolates,
- inputs=interpolates,
- grad_outputs=torch.ones(disc_interpolates.size(), device=real_data.device),
- create_graph=True,
- retain_graph=True,
- only_inputs=True,
- )[0]
-
- gradient_penalty = (gradients.norm(2, dim=1) - 1) ** 2
- return gradient_penalty
-
- def set_num_updates(self, num_updates):
- super().set_num_updates(num_updates)
- self.update_num = num_updates
- self.curr_temp = max(
- self.max_temp * self.temp_decay ** num_updates, self.min_temp
- )
-
- def discrim_step(self, num_updates):
- return num_updates % 2 == 1
-
- def get_groups_for_update(self, num_updates):
- return "discriminator" if self.discrim_step(num_updates) else "generator"
-
- def __init__(self, cfg: Wav2vec_UConfig, target_dict):
- super().__init__()
-
- self.cfg = cfg
- self.zero_index = target_dict.index("") if "" in target_dict else 0
- self.smoothness_weight = cfg.smoothness_weight
-
- output_size = len(target_dict)
- self.pad = target_dict.pad()
- self.eos = target_dict.eos()
- self.smoothing = cfg.smoothing
- self.smoothing_one_sided = cfg.smoothing_one_sided
- self.no_softmax = cfg.no_softmax
- self.gumbel = cfg.gumbel
- self.hard_gumbel = cfg.hard_gumbel
- self.last_acc = None
-
- self.gradient_penalty = cfg.gradient_penalty
- self.code_penalty = cfg.code_penalty
- self.blank_weight = cfg.blank_weight
- self.blank_mode = cfg.blank_mode
- self.blank_index = target_dict.index("") if cfg.blank_is_sil else 0
- assert self.blank_index != target_dict.unk()
-
- self.discriminator = Discriminator(output_size, cfg)
- for p in self.discriminator.parameters():
- p.param_group = "discriminator"
-
- self.pca_A = self.pca_b = None
- d = cfg.input_dim
-
- self.segmenter = SEGMENT_FACTORY[cfg.segmentation.type](cfg.segmentation)
-
- self.generator = Generator(d, output_size, cfg)
-
- for p in self.generator.parameters():
- p.param_group = "generator"
-
- for p in self.segmenter.parameters():
- p.param_group = "generator"
-
- self.max_temp, self.min_temp, self.temp_decay = cfg.temp
- self.curr_temp = self.max_temp
- self.update_num = 0
-
- @classmethod
- def build_model(cls, cfg, task):
- return cls(cfg, task.target_dictionary)
-
- def get_logits(
- self,
- net_output: Optional[Dict[str, List[Optional[torch.Tensor]]]],
- normalize: bool = False,
- ):
- logits = net_output["logits"]
-
- if self.blank_weight != 0:
- if self.blank_mode == "add":
- logits[..., self.blank_index] += self.blank_weight
- elif self.blank_mode == "set":
- logits[..., self.blank_index] = self.blank_weight
- else:
- raise Exception(f"invalid blank mode {self.blank_mode}")
-
- padding = net_output["padding_mask"]
- if padding.any():
- logits[padding] = float("-inf")
- logits[padding][..., self.blank_index] = float("inf")
-
- if normalize:
- logits = utils.log_softmax(logits.float(), dim=-1)
-
- return logits.transpose(0, 1)
-
- def get_normalized_probs(
- self,
- net_output: Tuple[
- torch.Tensor, Optional[Dict[str, List[Optional[torch.Tensor]]]]
- ],
- log_probs: bool,
- sample: Optional[Dict[str, torch.Tensor]] = None,
- ):
- logits = self.get_logits(net_output)
-
- probs = super().get_normalized_probs(logits, log_probs, sample)
- # BTC -> TBC for ctc
- probs = probs.transpose(0, 1)
- return probs
-
- def normalize(self, dense_x):
-
- bsz, tsz, csz = dense_x.shape
-
- if dense_x.numel() == 0:
- raise Exception(dense_x.shape)
- _, k = dense_x.max(-1)
- hard_x = (
- dense_x.new_zeros(bsz * tsz, csz)
- .scatter_(-1, k.view(-1, 1), 1.0)
- .view(-1, csz)
- )
- hard_probs = torch.mean(hard_x.float(), dim=0)
- code_perplexity = torch.exp(
- -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1)
- )
-
- avg_probs = torch.softmax(dense_x.reshape(-1, csz).float(), dim=-1).mean(dim=0)
- prob_perplexity = torch.exp(
- -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1)
- )
-
- if not self.no_softmax:
- if self.training and self.gumbel:
- dense_x = F.gumbel_softmax(
- dense_x.float(), tau=self.curr_temp, hard=self.hard_gumbel
- ).type_as(dense_x)
- else:
- dense_x = dense_x.softmax(-1)
-
- return dense_x, code_perplexity, prob_perplexity
-
- def forward(
- self,
- features,
- padding_mask,
- random_label=None,
- dense_x_only=False,
- segment=True,
- ):
- if segment:
- features, padding_mask = self.segmenter.pre_segment(features, padding_mask)
-
- orig_size = features.size(0) * features.size(1) - padding_mask.sum()
-
- gen_result = self.generator(features, random_label, padding_mask)
-
- orig_dense_x, token_x = gen_result["dense_x"], gen_result["token_x"]
- orig_dense_padding_mask = gen_result["dense_padding_mask"]
-
- if segment:
- dense_x, dense_padding_mask = self.segmenter.logit_segment(
- orig_dense_x, orig_dense_padding_mask
- )
- else:
- dense_x = orig_dense_x
- dense_padding_mask = orig_dense_padding_mask
-
- dense_logits = dense_x
- prob_perplexity = None
- code_perplexity = None
-
- if not (self.no_softmax and dense_x_only):
- dense_x, code_perplexity, prob_perplexity = self.normalize(dense_logits)
-
- if dense_x_only or self.discriminator is None:
- return {
- "logits": dense_x,
- "padding_mask": dense_padding_mask,
- }
-
- token_padding_mask = random_label == self.pad
-
- dense_y = self.discriminator(dense_x, dense_padding_mask)
- token_y = self.discriminator(token_x, token_padding_mask)
-
- sample_size = features.size(0)
-
- d_step = self.discrim_step(self.update_num)
-
- fake_smooth = self.smoothing
- real_smooth = self.smoothing
- if self.smoothing_one_sided:
- fake_smooth = 0
-
- zero_loss = None
- smoothness_loss = None
- code_pen = None
-
- if d_step:
- loss_dense = F.binary_cross_entropy_with_logits(
- dense_y,
- dense_y.new_ones(dense_y.shape) - fake_smooth,
- reduction="sum",
- )
- loss_token = F.binary_cross_entropy_with_logits(
- token_y,
- token_y.new_zeros(token_y.shape) + real_smooth,
- reduction="sum",
- )
- if self.training and self.gradient_penalty > 0:
- grad_pen = self.calc_gradient_penalty(token_x, dense_x)
- grad_pen = grad_pen.sum() * self.gradient_penalty
- else:
- grad_pen = None
- else:
- grad_pen = None
- loss_token = None
- loss_dense = F.binary_cross_entropy_with_logits(
- dense_y,
- dense_y.new_zeros(dense_y.shape) + fake_smooth,
- reduction="sum",
- )
- num_vars = dense_x.size(-1)
- if prob_perplexity is not None:
- code_pen = (num_vars - prob_perplexity) / num_vars
- code_pen = code_pen * sample_size * self.code_penalty
-
- if self.smoothness_weight > 0:
- smoothness_loss = F.mse_loss(
- dense_logits[:, :-1], dense_logits[:, 1:], reduction="none"
- )
- smoothness_loss[dense_padding_mask[:, 1:]] = 0
- smoothness_loss = (
- smoothness_loss.mean() * sample_size * self.smoothness_weight
- )
-
- result = {
- "losses": {
- "grad_pen": grad_pen,
- "code_pen": code_pen,
- "smoothness": smoothness_loss,
- },
- "temp": self.curr_temp,
- "code_ppl": code_perplexity,
- "prob_ppl": prob_perplexity,
- "d_steps": int(d_step),
- "sample_size": sample_size,
- }
-
- suff = "_d" if d_step else "_g"
- result["losses"]["dense" + suff] = loss_dense
- result["losses"]["token" + suff] = loss_token
-
- return result
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/scripts/compound_split_bleu.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/scripts/compound_split_bleu.sh
deleted file mode 100644
index 1972fddcebff9a43a70bcf14c287175c68f60e3f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/scripts/compound_split_bleu.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/bin/bash
-
-if [ $# -ne 1 ]; then
- echo "usage: $0 GENERATE_PY_OUTPUT"
- exit 1
-fi
-
-GEN=$1
-
-SYS=$GEN.sys
-REF=$GEN.ref
-
-if [ $(tail -n 1 $GEN | grep BLEU | wc -l) -ne 1 ]; then
- echo "not done generating"
- exit
-fi
-
-grep ^H $GEN | awk -F '\t' '{print $NF}' | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $SYS
-grep ^T $GEN | cut -f2- | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $REF
-fairseq-score --sys $SYS --ref $REF
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/__init__.py
deleted file mode 100644
index 1e9ce844f59a4211061392084cc81075e6bab19f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/__init__.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-
-# automatically import any Python files in the utils/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- module = file[: file.find(".py")]
- importlib.import_module("examples.simultaneous_translation.utils." + module)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md
deleted file mode 100644
index aa2560f0453403fb5846c387848c78b037c79cb2..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# ABX-based evaluation
-
-ABX is used to evaluate the quality of the obtained discrete units.
-
-The ABX-based evaluation of the Speech-to-Unit pipeline consists of the following steps:
-1. Train an acoustic model (or use an existing one) ([description](./../..))
-2. Quantize speech by learning a K-means clustering model ([description](./../..))
-3. Compute discrete features for ABX computation using the learned clusters
-4. Compute the ABX score over the discrete features, taking advantage of [libri-light's ABX evaluation script][ll-abx]
-
-Here we assume that you have already gone through the first two steps and focus solely on extracting features and computing ABX scores.
-
-## Libri-light setup
-
-Follow [libri-light's instructions][ll-instructions] for installation and [ABX evaluation setup][ll-abx] (including the download of the data items required for ABX computation).
-
-## Computing ABX
-
-### Dumping quantized features
-
-The first step for the ABX computation is to dump the quantized representations corresponding to the test files.
-
-```shell
-TYPE="hubert"
-LAYER=6
-CKPT_PATH=""
-KM_MODEL_PATH=""
-
-SUBSET="dev-clean"
-MANIFEST=""
-DATA_DIR="/$SUBSET"
-
-PYTHONPATH=. python examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py \
- --feature_type $TYPE \
- --kmeans_model_path $KM_MODEL_PATH \
- --checkpoint_path $CKPT_PATH \
- --layer $LAYER \
- --manifest_path $MANIFEST \
- --out_dir_path $DATA_DIR \
- --extension ".flac"
-```
-
-Again, the manifest file follows the same structure as elsewhere in the codebase (see the manifest sketch after this README).
-
-### Compute ABX with Libri-light
-
-Use libri-light's `eval_ABX.py` script (with the appropriate environment set up) as follows:
-
-```shell
-LIBRILIGHT_ROOT=""
-
-SUBSET="dev-clean"
-DATA_DIR="/$SUBSET"
-ITEM_FILE_PATH="$LIBRILIGHT_ROOT/eval/ABX_data/$SUBSET.item"
-OUT_DIR="/$SUBSET"
-
-FILE_EXTENSION=".npy"
-FEATURE_SIZE=0.02 # depends on the model used
-
-PYTHONPATH=$LIBRILIGHT_ROOT \
- python $LIBRILIGHT_ROOT/eval/eval_ABX.py \
- $DATA_DIR \
- $ITEM_FILE_PATH \
- --file_extension $FILE_EXTENSION \
- --feature_size $FEATURE_SIZE \
- --out $OUT_DIR \
- --mode "all"
-```
-
-Note that `FEATURE_SIZE` will depend on the model type you are using to extract the acoustic features:
-* For HuBERT and Wav2Vec2.0, use `FEATURE_SIZE=0.02`
-* For CPC and Log Mel, use `FEATURE_SIZE=0.01`
-
-If you have a GPU available, make sure you add the `--cuda` flag for faster computation.
-
-[ll-instructions]: https://github.com/facebookresearch/libri-light
-[ll-abx]: https://github.com/facebookresearch/libri-light/tree/master/eval#abx
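
Since the README above only says that the manifest "follows the same structure as elsewhere in the codebase", here is a hedged sketch of the usual fairseq wav2vec-style `.tsv` layout it refers to: the first line is the audio root directory, and each following line is `<relative path>\t<number of samples>`. The paths and sample counts below are made up for illustration.

```python
# Hypothetical sketch: writing a fairseq wav2vec-style manifest (.tsv) by hand.
# First line: audio root; then one "<relative path>\t<num samples>" entry per file.
root = "/data/LibriSpeech/dev-clean"            # placeholder root directory
entries = [
    ("84/121123/84-121123-0000.flac", 243840),  # made-up sample counts
    ("84/121123/84-121123-0001.flac", 162560),
]
with open("dev-clean.tsv", "w") as f:
    f.write(root + "\n")
    for rel_path, n_samples in entries:
        f.write(f"{rel_path}\t{n_samples}\n")
```

In practice such manifests are usually generated with fairseq's `examples/wav2vec/wav2vec_manifest.py` script rather than written by hand.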
diff --git a/spaces/OIUGLK/bingo/src/components/theme-toggle.tsx b/spaces/OIUGLK/bingo/src/components/theme-toggle.tsx
deleted file mode 100644
index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/src/components/theme-toggle.tsx
+++ /dev/null
@@ -1,31 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useTheme } from 'next-themes'
-
-import { Button } from '@/components/ui/button'
-import { IconMoon, IconSun } from '@/components/ui/icons'
-
-export function ThemeToggle() {
- const { setTheme, theme } = useTheme()
- const [_, startTransition] = React.useTransition()
-
- return (
-
- )
-}
diff --git a/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/README.md b/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/README.md
deleted file mode 100644
index 37e232057b542c5e49c037d0a2d1ba9416a08814..0000000000000000000000000000000000000000
--- a/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Ehartford WizardLM 13B Uncensored
-emoji: 🐢
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Olivernyu/sentiment_analysis_app/README.md b/spaces/Olivernyu/sentiment_analysis_app/README.md
deleted file mode 100644
index c7d2d9b118bbea0339e0e08a369452afdf6d26e6..0000000000000000000000000000000000000000
--- a/spaces/Olivernyu/sentiment_analysis_app/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sentiment Analysis App
-emoji: 🌖
-colorFrom: blue
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/visualize.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/visualize.py
deleted file mode 100644
index 7cc9c6cd9f77ef8f031aa4a9f2fe5926f6b84272..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/visualize.py
+++ /dev/null
@@ -1,747 +0,0 @@
-from operator import mod
-import os
-# from cv2 import CAP_PROP_INTELPERC_DEPTH_LOW_CONFIDENCE_VALUE
-import imageio
-import shutil
-import numpy as np
-import torch
-from tqdm import tqdm
-
-from scipy.spatial.transform import Rotation as R
-from mGPT.render.renderer import get_renderer
-from mGPT.render.rendermotion import render_video
-# from mld.utils.img_utils import convert_img
-# from mld.utils.uicap_utils import output_pkl
-
-
-def parsename(path):
-    basename = os.path.basename(path)
-    base = os.path.splitext(basename)[0]
- strs = base.split('_')
- key = strs[-2]
- action = strs[-1]
- return key, action
-
-
-def load_anim(path, timesize=None):
- data = np.array(imageio.mimread(path, memtest=False)) #[..., :3]
- if timesize is None:
- return data
-
-    # take the last frame and repeat it, but with a slight shadow (disabled below)
- # lastframe = add_shadow(data[-1])
- # alldata = np.tile(lastframe, (timesize, 1, 1, 1))
- alldata = data
-
- # debug fix mat dim
- if len(data.shape) == 3 and len(alldata.shape) == 4:
- data = data[:, None, :, :]
-
- # copy the first frames
- lenanim = data.shape[0]
- alldata[:lenanim] = data[:lenanim]
- return alldata
-
-
-def plot_3d_motion_dico(x):
- motion, length, save_path, params, kargs = x
- plot_3d_motion(motion, length, save_path, params, **kargs)
-
-
-def plot_3d_motion(motion,
- length,
- save_path,
- params,
- title="",
- interval=50,
- pred_cam=None,
- imgs=None,
- bbox=None,
- side=None):
- # render smpl
- # [nframes, nVs, 3]
- if motion.shape[1] == 6890:
- # width = 250
- # height = 250
- width = 600
- height = 600
- if pred_cam is None:
- # cam=(0.75, 0.75, 0, 0.1)
- cam = (0.8, 0.8, 0, 0.1)
- # cam=(0.9, 0.9, 0, 0.1)
- else:
- assert bbox is not None
- assert imgs is not None
-
- # Tmp visulize
- # weak perspective camera parameters in cropped image space (s,tx,ty)
- # to
- # weak perspective camera parameters in original image space (sx,sy,tx,ty)
- cam = np.concatenate(
- (pred_cam[:, [0]], pred_cam[:, [0]], pred_cam[:, 1:3]), axis=1)
-
- # ToDo convert to original cam
- # load original img?
- # calculate cam after padding???
- #
- # cam = convert_crop_cam_to_orig_img(
- # cam=pred_cam,
- # bbox=bbox,
- # img_width=width,
- # img_height=height
- # )
- cam_pose = np.eye(4)
- cam_pose[0:3, 0:3] = R.from_euler('x', -90, degrees=True).as_matrix()
- cam_pose[0:3, 3] = [0, 0, 0]
- if side:
- rz = np.eye(4)
- rz[0:3, 0:3] = R.from_euler('z', -90, degrees=True).as_matrix()
- cam_pose = np.matmul(rz, cam_pose)
-
- # # reshape input imgs
- # if imgs is not None:
- # imgs = convert_img(imgs.unsqueeze(0), height)[:,0]
- backgrounds = imgs if imgs is not None else np.ones(
- (height, width, 3)) * 255
- renderer = get_renderer(width, height, cam_pose)
-
- # [nframes, nVs, 3]
- meshes = motion
- key, action = parsename(save_path)
- render_video(meshes,
- key,
- action,
- renderer,
- save_path,
- backgrounds,
- cam_pose,
- cams=cam)
- return
-
-
-def stack_images(real, real_gens, gen, real_imgs=None):
- # change to 3 channel
- # print(real.shape)
- # print(real_gens.shape)
- # print(real_gens.shape)
- # real = real[:3]
- # real_gens = real_gens[:3]
- # gen = gen[:3]
-
- nleft_cols = len(real_gens) + 1
- print("Stacking frames..")
- allframes = np.concatenate(
- (real[:, None, ...], *[x[:, None, ...] for x in real_gens], gen), 1)
- nframes, nspa, nats, h, w, pix = allframes.shape
-
- blackborder = np.zeros((w // 30, h * nats, pix), dtype=allframes.dtype)
- # blackborder = np.ones((w//30, h*nats, pix), dtype=allframes.dtype)*255
- frames = []
- for frame_idx in tqdm(range(nframes)):
- columns = np.vstack(allframes[frame_idx].transpose(1, 2, 3, 4,
- 0)).transpose(
- 3, 1, 0, 2)
- frame = np.concatenate(
- (*columns[0:nleft_cols], blackborder, *columns[nleft_cols:]),
- 0).transpose(1, 0, 2)
-
- frames.append(frame)
-
- if real_imgs is not None:
- resize_imgs = convert_img(real_imgs, h)[:nframes, ...]
-
- for i in range(len(frames)):
- imgs = np.vstack(resize_imgs[i, ...])
- imgs4 = np.ones(
- (imgs.shape[0], imgs.shape[1], 4), dtype=np.uint8) * 255
- imgs4[:, :, :3] = imgs
- #imgs = torch2numpy(imgs)
- frames[i] = np.concatenate((imgs4, frames[i]), 1)
- return np.stack(frames)
-
-
-def stack_images_gen(gen, real_imgs=None):
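-    # Generation-only variant of stack_images: lay the generated samples out in a grid with a
-    # black separator strip, optionally prepending the resized input images.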
- print("Stacking frames..")
- allframes = gen
- nframes, nspa, nats, h, w, pix = allframes.shape
- blackborder = np.zeros((w * nspa, h // 30, pix), dtype=allframes.dtype)
- blackborder = blackborder[None, ...].repeat(nats,
- axis=0).transpose(0, 2, 1, 3)
-
- frames = []
- for frame_idx in tqdm(range(nframes)):
- rows = np.vstack(allframes[frame_idx].transpose(0, 3, 2, 4,
- 1)).transpose(
- 3, 1, 0, 2)
- rows = np.concatenate((rows, blackborder), 1)
- frame = np.concatenate(rows, 0)
- frames.append(frame)
-
- if real_imgs is not None:
- # ToDo Add images
- resize_imgs = convert_img(real_imgs, h)[:nframes, ...]
- for i in range(len(frames)):
- imgs = np.vstack(resize_imgs[i, ...])
- #imgs = torch2numpy(imgs)
- frames[i] = np.concatenate((imgs, frames[i]), 1)
- return np.stack(frames)
-
-
-def generate_by_video(visualization, reconstructions, generation,
- label_to_action_name, params, nats, nspa, tmp_path):
- # shape : (17, 3, 4, 480, 640, 3)
- # (nframes, row, column, h, w, 3)
- fps = params["fps"]
-
- params = params.copy()
-
- gen_only = False
- if visualization is None:
- gen_only = True
- outputkey = "output_vertices"
- params["pose_rep"] = "vertices"
- elif "output_vertices" in visualization:
- outputkey = "output_vertices"
- params["pose_rep"] = "vertices"
- elif "output_xyz" in visualization:
- outputkey = "output_xyz"
- params["pose_rep"] = "xyz"
- else:
- outputkey = "poses"
-
- keep = [outputkey, 'lengths', "y"]
- gener = {key: generation[key].data.cpu().numpy() for key in keep}
- if not gen_only:
- visu = {key: visualization[key].data.cpu().numpy() for key in keep}
- recons = {}
- # visualize regressor results
- if 'vertices_hat' in reconstructions['ntf']:
- recons['regressor'] = {
- 'output_vertices':
- reconstructions['ntf']['vertices_hat'].data.cpu().numpy(),
- 'lengths':
- reconstructions['ntf']['lengths'].data.cpu().numpy(),
- 'y':
- reconstructions['ntf']['y'].data.cpu().numpy()
- }
-
- recons['regressor_side'] = {
- 'output_vertices':
- reconstructions['ntf']['vertices_hat'].data.cpu().numpy(),
- 'lengths':
- reconstructions['ntf']['lengths'].data.cpu().numpy(),
- 'y':
- reconstructions['ntf']['y'].data.cpu().numpy(),
- 'side':
- True
- }
- # ToDo rendering overlap results
- # recons['overlap'] = {'output_vertices':reconstructions['ntf']['vertices_hat'].data.cpu().numpy(),
- # 'lengths':reconstructions['ntf']['lengths'].data.cpu().numpy(),
- # 'y':reconstructions['ntf']['y'].data.cpu().numpy(),
- # 'imgs':reconstructions['ntf']['imgs'],
- # 'bbox':reconstructions['ntf']['bbox'].data.cpu().numpy(),
- # 'cam':reconstructions['ntf']['preds'][0]['cam'].data.cpu().numpy()}
- for mode, reconstruction in reconstructions.items():
- recons[mode] = {
- key: reconstruction[key].data.cpu().numpy()
- for key in keep
- }
- recons[mode + '_side'] = {
- key: reconstruction[key].data.cpu().numpy()
- for key in keep
- }
- recons[mode + '_side']['side'] = True
-
- # lenmax = max(gener['lengths'].max(), visu['lengths'].max())
-    # timesize = lenmax + 5  # use a longer visualization window
- lenmax = gener['lengths'].max()
- timesize = lenmax
-
- import multiprocessing
-
- def pool_job_with_desc(pool, iterator, desc, max_, save_path_format, isij):
- with tqdm(total=max_, desc=desc.format("Render")) as pbar:
- for data in iterator:
- plot_3d_motion_dico(data)
- # for _ in pool.imap_unordered(plot_3d_motion_dico, iterator):
- # pbar.update()
- if isij:
- array = np.stack([[
- load_anim(save_path_format.format(i, j), timesize)
- for j in range(nats)
- ] for i in tqdm(range(nspa), desc=desc.format("Load"))])
- return array.transpose(2, 0, 1, 3, 4, 5)
- else:
- array = np.stack([
- load_anim(save_path_format.format(i), timesize)
- for i in tqdm(range(nats), desc=desc.format("Load"))
- ])
- return array.transpose(1, 0, 2, 3, 4)
-
- pool = None
- # if True:
- with multiprocessing.Pool() as pool:
- # Generated samples
- save_path_format = os.path.join(tmp_path, "gen_{}_{}.gif")
- iterator = ((gener[outputkey][i, j], gener['lengths'][i, j],
- save_path_format.format(i, j), params, {
- "title":
- f"gen: {label_to_action_name(gener['y'][i, j])}",
- "interval": 1000 / fps
- }) for j in range(nats) for i in range(nspa))
- gener["frames"] = pool_job_with_desc(pool, iterator,
- "{} the generated samples",
- nats * nspa, save_path_format,
- True)
- if not gen_only:
- # Real samples
- save_path_format = os.path.join(tmp_path, "real_{}.gif")
- iterator = ((visu[outputkey][i], visu['lengths'][i],
- save_path_format.format(i), params, {
- "title":
- f"real: {label_to_action_name(visu['y'][i])}",
- "interval": 1000 / fps
- }) for i in range(nats))
- visu["frames"] = pool_job_with_desc(pool, iterator,
- "{} the real samples", nats,
- save_path_format, False)
- for mode, recon in recons.items():
- # Reconstructed samples
- save_path_format = os.path.join(
- tmp_path, f"reconstructed_{mode}_" + "{}.gif")
- if mode == 'overlap':
- iterator = ((
- recon[outputkey][i], recon['lengths'][i],
- save_path_format.format(i), params, {
- "title":
- f"recons: {label_to_action_name(recon['y'][i])}",
- "interval": 1000 / fps,
- "pred_cam": recon['cam'][i],
- "imgs": recon['imgs'][i],
- "bbox": recon['bbox'][i]
- }) for i in range(nats))
- else:
- side = True if 'side' in recon.keys() else False
- iterator = ((
- recon[outputkey][i], recon['lengths'][i],
- save_path_format.format(i), params, {
- "title":
- f"recons: {label_to_action_name(recon['y'][i])}",
- "interval": 1000 / fps,
- "side": side
- }) for i in range(nats))
- recon["frames"] = pool_job_with_desc(
- pool, iterator, "{} the reconstructed samples", nats,
- save_path_format, False)
- # vis img in visu
- if not gen_only:
- input_imgs = visualization["imgs"] if visualization[
- "imgs"] is not None else None
- vis = visu["frames"] if not gen_only else None
- rec = [recon["frames"]
- for recon in recons.values()] if not gen_only else None
- gen = gener["frames"]
- frames = stack_images(vis, rec, gen, input_imgs)
- else:
- gen = gener["frames"]
- frames = stack_images_gen(gen)
- return frames
-
-
-def viz_epoch(model,
- dataset,
- epoch,
- params,
- folder,
- module=None,
- writer=None,
- exps=''):
- """ Generate & viz samples """
- module = model if module is None else module
-
- # visualize with joints3D
- model.outputxyz = True
-
- print(f"Visualization of the epoch {epoch}")
-
- noise_same_action = params["noise_same_action"]
- noise_diff_action = params["noise_diff_action"]
- duration_mode = params["duration_mode"]
- reconstruction_mode = params["reconstruction_mode"]
- decoder_test = params["decoder_test"]
-
- fact = params["fact_latent"]
- figname = params["figname"].format(epoch)
-
- nspa = params["num_samples_per_action"]
- nats = params["num_actions_to_sample"]
-
- num_classes = params["num_classes"]
- # nats = min(num_classes, nats)
-
- # define some classes
- classes = torch.randperm(num_classes)[:nats]
- # duplicate same classes when sampling too much
- if nats > num_classes:
- classes = classes.expand(nats)
-
- meandurations = torch.from_numpy(
- np.array([
- round(dataset.get_mean_length_label(cl.item())) for cl in classes
- ]))
-
- if duration_mode == "interpolate" or decoder_test == "diffduration":
- points, step = np.linspace(-nspa, nspa, nspa, retstep=True)
- # points = np.round(10*points/step).astype(int)
- points = np.array([5, 10, 16, 30, 60, 80]).astype(int)
- # gendurations = meandurations.repeat((nspa, 1)) + points[:, None]
- gendurations = torch.from_numpy(points[:, None]).expand(
- (nspa, 1)).repeat((1, nats))
- else:
- gendurations = meandurations.repeat((nspa, 1))
- print("Duration time: ")
- print(gendurations[:, 0])
-
- # extract the real samples
- # real_samples, real_theta, mask_real, real_lengths, imgs, paths
- batch = dataset.get_label_sample_batch(classes.numpy())
-
- # ToDo
- # clean these data
-    # Visualization of real samples
- visualization = {
- "x": batch['x'].to(model.device),
- "y": classes.to(model.device),
- "mask": batch['mask'].to(model.device),
- 'lengths': batch['lengths'].to(model.device),
- "output": batch['x'].to(model.device),
- "theta":
- batch['theta'].to(model.device) if 'theta' in batch.keys() else None,
- "imgs":
- batch['imgs'].to(model.device) if 'imgs' in batch.keys() else None,
- "paths": batch['paths'] if 'paths' in batch.keys() else None,
- }
-
-    # Reconstruction settings for the real samples
- if reconstruction_mode == "both":
- reconstructions = {
- "tf": {
- "x":
- batch['x'].to(model.device),
- "y":
- classes.to(model.device),
- 'lengths':
- batch['lengths'].to(model.device),
- "mask":
- batch['mask'].to(model.device),
- "teacher_force":
- True,
- "theta":
- batch['theta'].to(model.device)
- if 'theta' in batch.keys() else None
- },
- "ntf": {
- "x":
- batch['x'].to(model.device),
- "y":
- classes.to(model.device),
- 'lengths':
- batch['lengths'].to(model.device),
- "mask":
- batch['mask'].to(model.device),
- "theta":
- batch['theta'].to(model.device)
- if 'theta' in batch.keys() else None
- }
- }
- else:
- reconstructions = {
- reconstruction_mode: {
- "x":
- batch['x'].to(model.device),
- "y":
- classes.to(model.device),
- 'lengths':
- batch['lengths'].to(model.device),
- "mask":
- batch['mask'].to(model.device),
- "teacher_force":
- reconstruction_mode == "tf",
- "imgs":
- batch['imgs'].to(model.device)
- if 'imgs' in batch.keys() else None,
- "theta":
- batch['theta'].to(model.device)
- if 'theta' in batch.keys() else None,
- "bbox":
- batch['bbox'] if 'bbox' in batch.keys() else None
- }
- }
- print("Computing the samples poses..")
-
- # generate the repr (joints3D/pose etc)
- model.eval()
- with torch.no_grad():
- # Reconstruction of the real data
- for mode in reconstructions:
- # update reconstruction dicts
- reconstructions[mode] = model(reconstructions[mode])
- reconstruction = reconstructions[list(reconstructions.keys())[0]]
-
- if decoder_test == "gt":
- # Generate the new data
- gt_input = {
- "x": batch['x'].repeat(nspa, 1, 1, 1).to(model.device),
- "y": classes.repeat(nspa).to(model.device),
- "mask": batch['mask'].repeat(nspa, 1).to(model.device),
- 'lengths': batch['lengths'].repeat(nspa).to(model.device)
- }
- generation = model(gt_input)
- if decoder_test == "new":
- # Generate the new data
- generation = module.generate(gendurations,
- classes=classes,
- nspa=nspa,
- noise_same_action=noise_same_action,
- noise_diff_action=noise_diff_action,
- fact=fact)
- elif decoder_test == "diffaction":
- assert nats == nspa
- # keep the same noise for each "sample"
- z = reconstruction["z"].repeat((nspa, 1))
- mask = reconstruction["mask"].repeat((nspa, 1))
- lengths = reconstruction['lengths'].repeat(nspa)
- # but use other labels
- y = classes.repeat_interleave(nspa).to(model.device)
- generation = {"z": z, "y": y, "mask": mask, 'lengths': lengths}
- model.decoder(generation)
-
- elif decoder_test == "diffduration":
- z = reconstruction["z"].repeat((nspa, 1))
- lengths = gendurations.reshape(-1).to(model.device)
- mask = model.lengths_to_mask(lengths)
- y = classes.repeat(nspa).to(model.device)
- generation = {"z": z, "y": y, "mask": mask, 'lengths': lengths}
- model.decoder(generation)
-
- elif decoder_test == "interpolate_action":
- assert nats == nspa
- # same noise for each sample
- z_diff_action = torch.randn(1,
- model.latent_dim,
- device=model.device).repeat(nats, 1)
- z = z_diff_action.repeat((nspa, 1))
-
- # but use combination of labels and labels below
- y = F.one_hot(classes.to(model.device),
- model.num_classes).to(model.device)
- y_below = F.one_hot(torch.cat((classes[1:], classes[0:1])),
- model.num_classes).to(model.device)
- convex_factors = torch.linspace(0, 1, nspa, device=model.device)
- y_mixed = torch.einsum("nk,m->mnk", y, 1-convex_factors) + \
- torch.einsum("nk,m->mnk", y_below, convex_factors)
- y_mixed = y_mixed.reshape(nspa * nats, y_mixed.shape[-1])
-
- durations = gendurations[0].to(model.device)
- durations_below = torch.cat((durations[1:], durations[0:1]))
-
- gendurations = torch.einsum("l,k->kl", durations, 1-convex_factors) + \
- torch.einsum("l,k->kl", durations_below, convex_factors)
- gendurations = gendurations.to(dtype=durations.dtype)
-
- lengths = gendurations.to(model.device).reshape(z.shape[0])
- mask = model.lengths_to_mask(lengths)
-
- generation = {
- "z": z,
- "y": y_mixed,
- "mask": mask,
- 'lengths': lengths
- }
- generation = model.decoder(generation)
-
- visualization = module.prepare(visualization)
- visualization["output_xyz"] = visualization["x_xyz"]
- visualization["output_vertices"] = visualization["x_vertices"]
- # Get xyz for the real ones
- # visualization["output_xyz"] = module.rot2xyz(visualization["output"], visualization["mask"], jointstype="smpl")
- # # Get smpl vertices for the real ones
- # if module.cvae.pose_rep != "xyz":
- # visualization["output_vertices"] = module.rot2xyz(visualization["output"], visualization["mask"], jointstype="vertices")
-
- for key, val in generation.items():
- if len(generation[key].shape) == 1:
- generation[key] = val.reshape(nspa, nats)
- else:
- generation[key] = val.reshape(nspa, nats, *val.shape[1:])
-
- finalpath = os.path.join(folder, figname + exps + ".gif")
- tmp_path = os.path.join(folder, f"subfigures_{figname}")
- os.makedirs(tmp_path, exist_ok=True)
-
- print("Generate the videos..")
- frames = generate_by_video(visualization, reconstructions, generation,
- dataset.label_to_action_name, params, nats,
- nspa, tmp_path)
-
- print(f"Writing video {finalpath}")
- imageio.mimsave(finalpath.replace('gif', 'mp4'), frames, fps=params["fps"])
- shutil.rmtree(tmp_path)
-
- # output npy
- output = {
- "data_id": batch['id'],
- "paths": batch['paths'],
- "x": batch['x'].cpu().numpy(),
- "x_vertices": visualization["x_vertices"].cpu().numpy(),
- "output_vertices":
- reconstructions['ntf']["output_vertices"].cpu().numpy(),
- "gen_vertices": generation["output_vertices"].cpu().numpy()
- }
-
- outputpath = finalpath.replace('gif', 'npy')
- np.save(outputpath, output)
-
- # output pkl
- batch_recon = reconstructions["ntf"]
- outputpath = finalpath.replace('gif', 'pkl')
- # output_pkl([batch_recon], outputpath)
-
- if writer is not None:
- writer.add_video(f"Video/Epoch {epoch}",
- frames.transpose(0, 3, 1, 2)[None],
- epoch,
- fps=params["fps"])
- return finalpath
-
-
-def viz_dataset(dataset, params, folder):
- """ Generate & viz samples """
- print("Visualization of the dataset")
-
- nspa = params["num_samples_per_action"]
- nats = params["num_actions_to_sample"]
-
- num_classes = params["num_classes"]
-
- figname = "{}_{}_numframes_{}_sampling_{}_step_{}".format(
- params["dataset"], params["pose_rep"], params["num_frames"],
- params["sampling"], params["sampling_step"])
-
- # define some classes
- classes = torch.randperm(num_classes)[:nats]
-
- allclasses = classes.repeat(nspa, 1).reshape(nspa * nats)
- # extract the real samples
- real_samples, mask_real, real_lengths = dataset.get_label_sample_batch(
- allclasses.numpy())
- # to visualize directly
-
-    # Visualization of real samples
- visualization = {
- "x": real_samples,
- "y": allclasses,
- "mask": mask_real,
- 'lengths': real_lengths,
- "output": real_samples
- }
-
- from mGPT.models.rotation2xyz import Rotation2xyz
-
- device = params["device"]
- rot2xyz = Rotation2xyz(device=device)
-
- rot2xyz_params = {
- "pose_rep": params["pose_rep"],
- "glob_rot": params["glob_rot"],
- "glob": params["glob"],
- "jointstype": params["jointstype"],
- "translation": params["translation"]
- }
-
- output = visualization["output"]
- visualization["output_xyz"] = rot2xyz(output.to(device),
- visualization["mask"].to(device),
- **rot2xyz_params)
-
- for key, val in visualization.items():
- if len(visualization[key].shape) == 1:
- visualization[key] = val.reshape(nspa, nats)
- else:
- visualization[key] = val.reshape(nspa, nats, *val.shape[1:])
-
- finalpath = os.path.join(folder, figname + ".gif")
- tmp_path = os.path.join(folder, f"subfigures_{figname}")
- os.makedirs(tmp_path, exist_ok=True)
-
- print("Generate the videos..")
- frames = generate_by_video_sequences(visualization,
- dataset.label_to_action_name, params,
- nats, nspa, tmp_path)
-
- print(f"Writing video {finalpath}..")
- imageio.mimsave(finalpath, frames, fps=params["fps"])
-
-
-def generate_by_video_sequences(visualization, label_to_action_name, params,
- nats, nspa, tmp_path):
- # shape : (17, 3, 4, 480, 640, 3)
- # (nframes, row, column, h, w, 3)
- fps = params["fps"]
- if "output_vetices" in visualization:
- outputkey = "output_vetices"
- params["pose_rep"] = "vertices"
- elif "output_xyz" in visualization:
- outputkey = "output_xyz"
- params["pose_rep"] = "xyz"
- else:
- outputkey = "poses"
-
- keep = [outputkey, 'lengths', "y"]
- visu = {key: visualization[key].data.cpu().numpy() for key in keep}
- lenmax = visu['lengths'].max()
-
- timesize = lenmax + 5
-
- # import multiprocessing
-
- def pool_job_with_desc(pool, iterator, desc, max_, save_path_format):
- for data in iterator:
- plot_3d_motion_dico(data)
- # with tqdm(total=max_, desc=desc.format("Render")) as pbar:
- # for _ in pool.imap_unordered(plot_3d_motion_dico, iterator):
- # pbar.update()
- array = np.stack([[
- load_anim(save_path_format.format(i, j), timesize)
- for j in range(nats)
- ] for i in tqdm(range(nspa), desc=desc.format("Load"))])
- return array.transpose(2, 0, 1, 3, 4, 5)
-
- pool = None
- # with multiprocessing.Pool() as pool:
- # Real samples
- save_path_format = os.path.join(tmp_path, "real_{}_{}.gif")
- iterator = ((visu[outputkey][i, j], visu['lengths'][i, j],
- save_path_format.format(i, j), params, {
- "title": f"real: {label_to_action_name(visu['y'][i, j])}",
- "interval": 1000 / fps
- }) for j in range(nats) for i in range(nspa))
- visu["frames"] = pool_job_with_desc(pool, iterator, "{} the real samples",
- nats, save_path_format)
- frames = stack_images_sequence(visu["frames"])
- return frames
-
-
-def stack_images_sequence(visu):
- print("Stacking frames..")
- allframes = visu
- nframes, nspa, nats, h, w, pix = allframes.shape
- frames = []
- for frame_idx in tqdm(range(nframes)):
- columns = np.vstack(allframes[frame_idx].transpose(1, 2, 3, 4,
- 0)).transpose(
- 3, 1, 0, 2)
- frame = np.concatenate(columns).transpose(1, 0, 2)
- frames.append(frame)
- return np.stack(frames)
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/etf2ly.py b/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/etf2ly.py
deleted file mode 100644
index cafcf24a71004f83e94978c0f2829fb991dae047..0000000000000000000000000000000000000000
--- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/etf2ly.py
+++ /dev/null
@@ -1,1326 +0,0 @@
-#!/home/lily/lilypond-2.24.2/release/binaries/dependencies/install/Python-3.10.8/bin/python3.10
-
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 2001--2022 Han-Wen Nienhuys
-# Jan Nieuwenhuizen
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond. If not, see .
-
-# info mostly taken from looking at files. See also
-# https://www.gnu.org/software/lilypond/src/Developers/Details/etfformat.html
-
-# This supports
-#
-# * notes
-# * rests
-# * ties
-# * slurs
-# * lyrics
-# * articulation
-# * grace notes
-# * tuplets
-#
-
-# todo:
-# * slur/stem directions
-# * voices (2nd half of frame?)
-# * more intelligent lyrics
-# * beams (better use autobeam?)
-# * more robust: try entertainer.etf (freenote)
-# * dynamics
-# * empty measures (eg. twopt03.etf from freenote)
-#
-
-
-import __main__
-import getopt
-import gettext
-import os
-import re
-import sys
-
-authors = ('Jan Nieuwenhuizen ',
- 'Han-Wen Nienhuys ')
-
-version = '2.24.2'
-if version == '@' + 'TOPLEVEL_VERSION' + '@':
- version = '(unknown version)' # uGUHGUHGHGUGH
-
-"""
-
-# relocate-preamble.py.in
-#
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 2007--2022 Han-Wen Nienhuys
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond. If not, see .
-#
-
-This is generic code, used for all python scripts.
-
-The quotes are to ensure that the source .py file can still be
-run as a python script, but does not include any sys.path handling.
-Otherwise, the lilypond-book calls inside the build
-might modify installed .pyc files.
-
-"""
-
-# This is needed for installations with a non-default layout, ie where share/
-# is not next to bin/.
-sys.path.insert (0, os.path.join ('/home/lily/lilypond-2.24.2/release/binaries/mingw/lilypond/install/share/lilypond/2.24.2', 'python'))
-
-# Dynamic relocation, for installations with a default layout including GUB,
-# but also for execution from the build directory.
-bindir = os.path.abspath (os.path.dirname (sys.argv[0]))
-topdir = os.path.dirname (bindir)
-if bindir.endswith (r'/scripts/out'):
- topdir = os.path.join (os.path.dirname (topdir), 'out')
-datadir = os.path.abspath (os.path.join (topdir, 'share', 'lilypond'))
-for v in [ 'current', '2.24.2' ]:
- sys.path.insert (0, os.path.join (datadir, v, 'python'))
-
-"""
-"""
-
-################################################################
-# Load translation and install _() into Python's builtins namespace.
-gettext.install('lilypond', '/home/lily/lilypond-2.24.2/release/binaries/mingw/lilypond/install/share/locale')
-
-import lilylib as ly
-
-finale_clefs = ['treble', 'alto', 'tenor', 'bass',
- 'percussion', 'treble_8', 'bass_8', 'baritone']
-
-
-def lily_clef(fin):
- try:
- return finale_clefs[fin]
- except IndexError:
- sys.stderr.write('\nHuh? Found clef number %d\n' % fin)
-
- return 'treble'
-
-
-def gulp_file(f):
- return open(f, encoding='utf-8').read()
-
-
-# notename 0 == central C
-distances = [0, 2, 4, 5, 7, 9, 11, 12]
-
-
-def semitones(name, acc):
- return (name / 7) * 12 + distances[name % 7] + acc
-
-# represent pitches as (notename, alteration), relative to C-major scale
-
-
-def transpose(orig, delta):
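-    # Transpose a (notename, alteration) pitch by a (notename, alteration) interval, adjusting
-    # the alteration so the resulting semitone distance matches the interval exactly.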
- (oname, oacc) = orig
- (dname, dacc) = delta
-
- old_pitch = semitones(oname, oacc)
- delta_pitch = semitones(dname, dacc)
- nname = (oname + dname)
- nacc = oacc
- new_pitch = semitones(nname, nacc)
-
- nacc = nacc - (new_pitch - old_pitch - delta_pitch)
-
- return (nname, nacc)
-
-
-def interpret_finale_key_sig(finale_id):
- """
-find the transposition of C-major scale that belongs here.
-
-we are not going to insert the correct major/minor, we only want to
-have the correct number of accidentals
-"""
-
- p = (0, 0)
-
- bank_number = finale_id >> 8
- accidental_bits = finale_id & 0xff
-
- if 0 <= accidental_bits < 7:
- while accidental_bits > 0:
- p = transpose(p, (4, 0)) # a fifth up
- accidental_bits = accidental_bits - 1
- elif 248 < accidental_bits <= 255:
- while accidental_bits < 256:
- p = transpose(p, (3, 0))
- accidental_bits = accidental_bits + 1
-
- if bank_number == 1:
- # minor scale
- p = transpose(p, (5, 0))
- p = (p[0] % 7, p[1])
-
- return KeySignature(p, bank_number)
-
-# should cache this.
-
-
-def find_scale(keysig):
- cscale = [(x, 0) for x in range(0, 7)]
-# print "cscale: ", cscale
- ascale = [(x, 0) for x in range(-2, 5)]
-# print "ascale: ", ascale
- transposition = keysig.pitch
- if keysig.sig_type == 1:
- transposition = transpose(transposition, (2, -1))
- transposition = (transposition[0] % 7, transposition[1])
- trscale = list(map(lambda x, k=transposition: transpose(x, k), ascale))
- else:
- trscale = list(map(lambda x, k=transposition: transpose(x, k), cscale))
-# print "trscale: ", trscale
- return trscale
-
-
-def EDU_to_duration(edu):
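-    # Convert a Finale EDU duration into a (log, dots) pair, where `log` is the LilyPond
-    # duration number (1 = whole, 2 = half, ...). 4096 EDUs make a whole note (1024 per
-    # quarter), and a remainder of 1/2 or 3/4 of the base value adds one or two dots.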
- log = 1
- d = 4096
- while d > edu:
- d = d >> 1
- log = log << 1
-
- edu = edu - d
- dots = 0
- if edu == d / 2:
- dots = 1
- elif edu == d*3/4:
- dots = 2
- return (log, dots)
-
-
-def rational_to_lily_skip(rat):
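-    # Render a rational length (in whole notes) as a LilyPond skip,
-    # e.g. (1, 4) -> "s4", (3, 4) -> "s4*3", (1, 3) -> "s1/3".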
- (n, d) = rat
-
- basedur = 1
- while d and d % 2 == 0:
- basedur = basedur << 1
- d = d >> 1
-
- str = 's%d' % basedur
- if n != 1:
- str = str + '*%d' % n
- if d != 1:
- str = str + '/%d' % d
-
- return str
-
-
-def gcd(a, b):
- if b == 0:
- return a
- c = a
- while c:
- c = a % b
- a = b
- b = c
- return a
-
-
-def rat_simplify(r):
- (n, d) = r
- if d < 0:
- d = -d
- n = -n
- if n == 0:
- return (0, 1)
- else:
- g = gcd(n, d)
- return (n/g, d/g)
-
-
-def rat_multiply(a, b):
- (x, y) = a
- (p, q) = b
-
- return rat_simplify((x*p, y*q))
-
-
-def rat_add(a, b):
- (x, y) = a
- (p, q) = b
-
- return rat_simplify((x*q + p*y, y*q))
-
-
-def rat_neg(a):
- (p, q) = a
- return (-p, q)
-
-
-def rat_subtract(a, b):
- return rat_add(a, rat_neg(b))
-
-
-def lily_notename(tuple2):
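-    # Map a (diatonic step, alteration) pair to a Dutch LilyPond note name,
-    # e.g. (0, 0) -> "c", (6, -1) -> "bes", (3, 1) -> "fis".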
- (n, a) = tuple2
- nn = chr((n + 2) % 7 + ord('a'))
-
- return nn + {-2: 'eses', -1: 'es', 0: '', 1: 'is', 2: 'isis'}[a]
-
-
-class Tuplet:
- def __init__(self, number):
- self.start_note = number
- self.finale = []
-
- def append_finale(self, fin):
- self.finale.append(fin)
-
- def factor(self):
- n = self.finale[0][2]*self.finale[0][3]
- d = self.finale[0][0]*self.finale[0][1]
- return rat_simplify((n, d))
-
- def dump_start(self):
- return '\\times %d/%d { ' % self.factor()
-
- def dump_end(self):
- return ' }'
-
- def calculate(self, chords):
- edu_left = self.finale[0][0] * self.finale[0][1]
-
- startch = chords[self.start_note]
- c = startch
- while c and edu_left:
- c.tuplet = self
- if c == startch:
- c.chord_prefix = self.dump_start() + c.chord_prefix
-
- if not c.grace:
- edu_left = edu_left - c.EDU_duration()
- if edu_left == 0:
- c.chord_suffix = c.chord_suffix + self.dump_end()
-            c = c.next  # Chord links use the plain `next` attribute set in unthread_entries
-
- if edu_left:
- sys.stderr.write(
- "\nHuh? Tuplet starting at entry %d was too short." % self.start_note)
-
-
-class Slur:
- def __init__(self, number, params):
- self.number = number
- self.finale = params
-
- def append_entry(self, finale_e):
- self.finale.append(finale_e)
-
- def calculate(self, chords):
- startnote = self.finale[5]
- endnote = self.finale[3*6 + 2]
- try:
- cs = chords[startnote]
- ce = chords[endnote]
-
- if not cs or not ce:
- raise IndexError
-
- cs.note_suffix = '-(' + cs.note_suffix
- ce.note_suffix = ce.note_suffix + '-)'
-
- except IndexError:
- sys.stderr.write("""\nHuh? Slur no %d between (%d,%d), with %d notes""" % (
- self.number, startnote, endnote, len(chords)))
-
-
-class Global_measure:
- def __init__(self, number):
- self.timesig = ''
- self.number = number
- self.key_signature = None
- self.scale = None
- self.force_break = 0
-
- self.repeats = []
- self.finale = []
-
- def __str__(self):
- return repr(self.finale)
-
- def set_timesig(self, finale):
- (beats, fdur) = finale
- (log, dots) = EDU_to_duration(fdur)
-
- if dots == 1:
- beats = beats * 3
- log = log * 2
- dots = 0
-
- if dots != 0:
- sys.stderr.write(
- "\nHuh? Beat duration has dots? (EDU Duration = %d)" % fdur)
- self.timesig = (beats, log)
-
- def length(self):
- return self.timesig
-
- def set_key_sig(self, finale):
- k = interpret_finale_key_sig(finale)
- self.key_signature = k
- self.scale = find_scale(k)
-
- def set_flags(self, flag1, flag2):
-
- # flag1 isn't all that interesting.
- if flag2 & 0x8000:
- self.force_break = 1
-
- if flag2 & 0x0008:
- self.repeats.append('start')
- if flag2 & 0x0004:
- self.repeats.append('stop')
-
- if flag2 & 0x0002:
- if flag2 & 0x0004:
- self.repeats.append('bracket')
-
-
-articulation_dict = {
- 94: '^',
- 109: '\\prall',
- 84: '\\turn',
- 62: '\\mordent',
- 85: '\\fermata',
- 46: '.',
- # 3: '>',
- # 18: '\arpeggio' ,
-}
-
-
-class Articulation_def:
- def __init__(self, n, a, b):
- self.finale_glyph = a & 0xff
- self.number = n
-
- def dump(self):
- try:
- return articulation_dict[self.finale_glyph]
- except KeyError:
- sys.stderr.write("\nUnknown articulation no. %d" %
- self.finale_glyph)
- sys.stderr.write(
- "\nPlease add an entry to articulation_dict in the Python source")
- return None
-
-
-class Articulation:
- def __init__(self, a, b, finale):
- self.definition = finale[0]
- self.notenumber = b
-
- def calculate(self, chords, defs):
- c = chords[self.notenumber]
-
- adef = defs[self.definition]
- lystr = adef.dump()
- if lystr is None:
- lystr = '"art"'
- sys.stderr.write("\nThis happened on note %d" % self.notenumber)
-
- c.note_suffix = '-' + lystr
-
-
-class Syllable:
- def __init__(self, a, b, finale):
- self.chordnum = b
- self.syllable = finale[1]
- self.verse = finale[0]
-
- def calculate(self, chords, lyrics):
- self.chord = chords[self.chordnum]
-
-
-class Verse:
- def __init__(self, number, body):
- self.body = body
- self.number = number
- self.split_syllables()
-
- def split_syllables(self):
- ss = re.split('(-| +)', self.body)
-
- sep = 0
- syls = [None]
- for s in ss:
- if sep:
- septor = re.sub(" +", "", s)
- septor = re.sub("-", " -- ", septor)
- syls[-1] = syls[-1] + septor
- else:
- syls.append(s)
-
- sep = not sep
-
- self.syllables = syls
-
- def dump(self):
- str = ''
- line = ''
- for s in self.syllables[1:]:
- line = line + ' ' + s
- if len(line) > 72:
- str = str + ' ' * 4 + line + '\n'
- line = ''
-
- str = """\nverse%s = \\lyricmode {\n %s }\n""" % (
- encodeint(self.number - 1), str)
- return str
-
-
-class KeySignature:
- def __init__(self, pitch, sig_type=0):
- self.pitch = pitch
- self.sig_type = sig_type
-
- def signature_type(self):
- if self.sig_type == 1:
- return "\\minor"
- else:
- # really only for 0, but we only know about 0 and 1
- return "\\major"
-
- def equal(self, other):
- if other and other.pitch == self.pitch and other.sig_type == self.sig_type:
- return 1
- else:
- return 0
-
-
-class Measure:
- def __init__(self, no):
- self.number = no
- self.frames = [0] * 4
- self.flags = 0
- self.clef = 0
- self.finale = []
- self.global_measure = None
- self.staff = None
- self.valid = 1
-
- def valid(self):
- return self.valid
-
- def calculate(self):
- fs = []
-
- if len(self.finale) < 2:
- fs = self.finale[0]
-
-            self.clef = fs[1]
-            self.frames = [fs[0]]
- else:
- fs = self.finale
- self.clef = fs[0]
- self.flags = fs[1]
- self.frames = fs[2:]
-
-
-class Frame:
- def __init__(self, finale):
- self.measure = None
- self.finale = finale
- (number, start, end) = finale
- self.number = number
- self.start = start
- self.end = end
- self.chords = []
-
- def set_measure(self, m):
- self.measure = m
-
- def calculate(self):
-
- # do grace notes.
- lastch = None
- in_grace = 0
- for c in self.chords:
- if c.grace and (lastch is None or (not lastch.grace)):
- c.chord_prefix = r'\grace {' + c.chord_prefix
- in_grace = 1
- elif not c.grace and lastch and lastch.grace:
- lastch.chord_suffix = lastch.chord_suffix + ' } '
- in_grace = 0
-
- lastch = c
-
- if lastch and in_grace:
- lastch.chord_suffix += '}'
-
- def dump(self):
- str = '%% FR(%d)\n' % self.number
- left = self.measure.global_measure.length()
-
- ln = ''
- for c in self.chords:
- add = c.ly_string() + ' '
- if len(ln) + len(add) > 72:
- str = str + ln + '\n'
- ln = ''
- ln = ln + add
- left = rat_subtract(left, c.length())
-
- str = str + ln
-
- if left[0] < 0:
- sys.stderr.write("""\nHuh? Going backwards in frame no %d, start/end (%d,%d)""" %
- (self.number, self.start, self.end))
- left = (0, 1)
- if left[0]:
- str = str + rational_to_lily_skip(left)
-
- str = str + ' |\n'
- return str
-
-
-def encodeint(i):
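-    # Encode a small integer as an uppercase letter (0 -> 'A', 1 -> 'B', ...); LilyPond
-    # identifiers may not contain digits, so staff/layer/verse numbers are spelled with letters.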
- return chr(i + ord('A'))
-
-
-class Staff:
- def __init__(self, number):
- self.number = number
- self.measures = []
-
- def get_measure(self, no):
- fill_list_to(self.measures, no)
-
- if self.measures[no] is None:
- m = Measure(no)
- self.measures[no] = m
- m.staff = self
-
- return self.measures[no]
-
- def staffid(self):
- return 'staff' + encodeint(self.number - 1)
-
- def layerid(self, l):
- return self.staffid() + 'layer%s' % chr(l - 1 + ord('A'))
-
- def dump_time_key_sigs(self):
- k = ''
- last_key = None
- last_time = None
- last_clef = None
- gap = (0, 1)
- for m in self.measures[1:]:
- if not m or not m.valid:
- continue # ugh.
-
- g = m.global_measure
- e = ''
-
- if g:
- if g.key_signature and not g.key_signature.equal(last_key):
- pitch = g.key_signature.pitch
- e = e + "\\key %s %s " % (lily_notename(pitch),
- g.key_signature.signature_type())
-
- last_key = g.key_signature
- if last_time != g.timesig:
- e = e + "\\time %d/%d " % g.timesig
- last_time = g.timesig
-
- if 'start' in g.repeats:
- e = e + ' \\bar ".|:" '
-
- # we don't attempt voltas since they fail easily.
- if 0: # and g.repeat_bar == '|:' or g.repeat_bar == ':|:' or g.bracket:
- strs = []
- if g.repeat_bar == '|:' or g.repeat_bar == ':|:' or g.bracket == 'end':
- strs.append('#f')
-
- if g.bracket == 'start':
- strs.append('"0."')
-
- str = ' '.join(['(volta %s)' % x for x in strs])
-
- e = e + ' \\set Score.repeatCommands = #\'(%s) ' % str
-
- if g.force_break:
- e = e + ' \\break '
-
- if last_clef != m.clef:
- e = e + '\\clef "%s"' % lily_clef(m.clef)
- last_clef = m.clef
- if e:
- if gap != (0, 1):
- k = k + ' ' + rational_to_lily_skip(gap) + '\n'
- gap = (0, 1)
- k = k + e
-
- if g:
- gap = rat_add(gap, g.length())
- if 'stop' in g.repeats:
- k = k + ' \\bar ":|." '
-
- k = '%sglobal = { %s }\n\n ' % (self.staffid(), k)
- return k
-
- def dump(self):
- str = ''
-
- layerids = []
- for x in range(1, 5): # 4 layers.
- laystr = ''
- last_frame = None
- first_frame = None
- gap = (0, 1)
- for m in self.measures[1:]:
- if not m or not m.valid:
- sys.stderr.write(
-                        "Skipping non-existent or invalid measure\n")
- continue
-
- fr = None
- try:
- fr = m.frames[x]
- except IndexError:
- sys.stderr.write("Skipping nonexistent frame %d\n" % x)
- laystr = laystr + \
- "%% non existent frame %d (skipped)\n" % x
- if fr:
- first_frame = fr
- if gap != (0, 1):
- laystr = laystr + \
- '} %s {\n ' % rational_to_lily_skip(gap)
- gap = (0, 1)
- laystr = laystr + fr.dump()
- else:
- if m.global_measure:
- gap = rat_add(gap, m.global_measure.length())
- else:
- sys.stderr.write(
- "No global measure for staff %d measure %d\n"
- % (self.number, m.number))
- if first_frame:
- l = self.layerid(x)
- laystr = '%s = { { %s } }\n\n' % (l, laystr)
- str = str + laystr
- layerids.append(l)
-
- str = str + self.dump_time_key_sigs()
- stafdef = '\\%sglobal' % self.staffid()
- for i in layerids:
- stafdef = stafdef + ' \\' + i
-
- str = str + '%s = \\context Staff = %s <<\n %s\n >>\n' % \
- (self.staffid(), self.staffid(), stafdef)
- return str
-
-
-def ziplist(l):
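-    # Group a flat list into consecutive pairs: [a, b, c, d] -> [(a, b), (c, d)].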
- if len(l) < 2:
- return []
- else:
- return [(l[0], l[1])] + ziplist(l[2:])
-
-
-class Chord:
- def __init__(self, number, contents):
- self.pitches = []
- self.frame = None
- self.finale = contents[:7]
-
- self.notelist = ziplist(contents[7:])
- self.duration = None
- self.next = None
- self.prev = None
- self.number = number
- self.note_prefix = ''
- self.note_suffix = ''
- self.chord_suffix = ''
- self.chord_prefix = ''
- self.tuplet = None
- self.grace = 0
-
- def measure(self):
- if not self.frame:
- return None
- return self.frame.measure
-
- def length(self):
- if self.grace:
- return (0, 1)
-
- l = (1, self.duration[0])
-
- d = 1 << self.duration[1]
-
- dotfact = rat_subtract((2, 1), (1, d))
- mylen = rat_multiply(dotfact, l)
-
- if self.tuplet:
- mylen = rat_multiply(mylen, self.tuplet.factor())
- return mylen
-
- def EDU_duration(self):
- return self.finale[2]
-
- def set_duration(self):
- self.duration = EDU_to_duration(self.EDU_duration())
-
- def calculate(self):
- self.find_realpitch()
- self.set_duration()
-
- flag = self.finale[4]
- if Chord.GRACE_MASK & flag:
- self.grace = 1
-
- def find_realpitch(self):
-
- meas = self.measure()
- tiestart = 0
- if not meas or not meas.global_measure:
- sys.stderr.write('note %d not in measure\n' % self.number)
- elif not meas.global_measure.scale:
- sys.stderr.write(
- 'note %d: no scale in this measure.' % self.number)
- else:
-
- for p in self.notelist:
- (pitch, flag) = p
-
- nib1 = pitch & 0x0f
-
- if nib1 > 8:
- nib1 = -(nib1 - 8)
- rest = pitch / 16
-
- scale = meas.global_measure.scale
- (sn, sa) = scale[rest % 7]
- sn = sn + (rest - (rest % 7)) + 7
- acc = sa + nib1
- self.pitches.append((sn, acc))
- tiestart = tiestart or (flag & Chord.TIE_START_MASK)
- if tiestart:
- self.chord_suffix = self.chord_suffix + ' ~ '
-
- REST_MASK = 0x40000000
- TIE_START_MASK = 0x40000000
- GRACE_MASK = 0x00800000
-
- def ly_string(self):
- s = ''
-
- rest = ''
-
- if not (self.finale[4] & Chord.REST_MASK):
- rest = 'r'
-
- for p in self.pitches:
- (n, a) = p
- o = n / 7
- n = n % 7
-
- nn = lily_notename((n, a))
-
- if o < 0:
- nn = nn + (',' * -o)
- elif o > 0:
- nn = nn + ('\'' * o)
-
- if s:
- s = s + ' '
-
- if rest:
- nn = rest
-
- s = s + nn
-
- if not self.pitches:
- s = 'r'
- if len(self.pitches) > 1:
- s = '<%s>' % s
-
- s = s + '%d%s' % (self.duration[0], '.' * self.duration[1])
- s = self.note_prefix + s + self.note_suffix
-
- s = self.chord_prefix + s + self.chord_suffix
-
- return s
-
-
-def fill_list_to(list, no):
- """
-Add None to LIST until it contains entry number NO.
- """
- while len(list) <= no:
- list.extend([None] * (no - len(list) + 1))
- return list
-
-
-def read_finale_value(str):
- """
-Pry off one value from STR. The value may be $hex, decimal, or "string".
-Return: (value, rest-of-STR)
- """
- while str and str[0] in ' \t\n':
- str = str[1:]
-
- if not str:
- return (None, str)
-
- if str[0] == '$':
- str = str[1:]
-
- hex = ''
- while str and str[0] in '0123456789ABCDEF':
- hex = hex + str[0]
- str = str[1:]
-
- return (int(hex, 16), str)
- elif str[0] == '"':
- str = str[1:]
- s = ''
- while str and str[0] != '"':
- s = s + str[0]
- str = str[1:]
-
- return (s, str)
- elif str[0] in '-0123456789':
- dec = ''
- while str and str[0] in '-0123456789':
- dec = dec + str[0]
- str = str[1:]
-
- return (int(dec), str)
- else:
- sys.stderr.write("cannot convert `%s'\n" % str)
- return (None, str)
-
-
-def parse_etf_file(fn, tag_dict):
- """ Read FN, putting ETF info into
- a giant dictionary. The keys of TAG_DICT indicate which tags
- to put into the dict.
- """
-
- sys.stderr.write('parsing ... ')
- f = open(fn, encoding='utf-8')
-
- gulp = re.sub('[\n\r]+', '\n', f.read())
- ls = gulp.split('\n^')
-
- etf_file_dict = {}
- for k in tag_dict:
- etf_file_dict[k] = {}
-
- last_tag = None
- last_numbers = None
-
- for l in ls:
- m = re.match(r'^([a-zA-Z0-9&]+)\(([^)]+)\)', l)
- if m and m.group(1) in tag_dict:
- tag = m.group(1)
-
- indices = tuple([int(s) for s in m.group(2).split(',')])
- content = l[m.end(2)+1:]
-
- tdict = etf_file_dict[tag]
- if indices not in tdict:
- tdict[indices] = []
-
- parsed = []
-
- if tag == 'verse' or tag == 'block':
- m2 = re.match(r'(.*)\^end', content)
- if m2:
- parsed = [m2.group(1)]
- else:
- while content:
- (v, content) = read_finale_value(content)
- if v is not None:
- parsed.append(v)
-
- tdict[indices].extend(parsed)
-
- last_indices = indices
- last_tag = tag
-
- continue
-
-# let's not do this: it gets really confusing when an eE record happens to come before a ^text.
-# if last_tag and last_indices:
-# etf_file_dict[last_tag][last_indices].append (l)
-
- sys.stderr.write('\n')
- return etf_file_dict
-
-
-class Etf_file:
- def __init__(self, name):
- self.measures = [None]
- self.chords = [None]
- self.frames = [None]
- self.tuplets = [None]
- self.staffs = [None]
- self.slurs = [None]
- self.articulations = [None]
- self.syllables = [None]
- self.verses = [None]
- self.articulation_defs = [None]
-
- # do it
- self.parse(name)
-
- def get_global_measure(self, no):
- fill_list_to(self.measures, no)
- if self.measures[no] is None:
- self.measures[no] = Global_measure(no)
-
- return self.measures[no]
-
- def get_staff(self, staffno):
- fill_list_to(self.staffs, staffno)
- if self.staffs[staffno] is None:
- self.staffs[staffno] = Staff(staffno)
-
- return self.staffs[staffno]
-
- # staff-spec
- def try_IS(self, indices, contents):
- pass
-
- def try_BC(self, indices, contents):
- bn = indices[0]
- where = contents[0] / 1024.0
-
- def try_TP(self, indices, contents):
- (nil, num) = indices
-
- if self.tuplets[-1] is None or num != self.tuplets[-1].start_note:
- self.tuplets.append(Tuplet(num))
-
- self.tuplets[-1].append_finale(contents)
-
- def try_IM(self, indices, contents):
- (a, b) = indices
- fin = contents
- self.articulations.append(Articulation(a, b, fin))
-
- def try_verse(self, indices, contents):
- a = indices[0]
- body = contents[0]
-
- body = re.sub(r"""\^[a-z]+\([^)]+\)""", "", body)
- body = re.sub(r"\^[a-z]+", "", body)
- self.verses.append(Verse(a, body))
-
- def try_ve(self, indices, contents):
- (a, b) = indices
- self.syllables.append(Syllable(a, b, contents))
-
- def try_eE(self, indices, contents):
- no = indices[0]
- (prev, next, dur, pos, entryflag, extended, follow) = contents[:7]
-
- fill_list_to(self.chords, no)
- self.chords[no] = Chord(no, contents)
-
- def try_Sx(self, indices, contents):
- slurno = indices[0]
- fill_list_to(self.slurs, slurno)
- self.slurs[slurno] = Slur(slurno, contents)
-
- def try_IX(self, indices, contents):
- n = indices[0]
- a = contents[0]
- b = contents[1]
-
- ix = None
- try:
- ix = self.articulation_defs[n]
- except IndexError:
- ix = Articulation_def(n, a, b)
- self.articulation_defs.append(Articulation_def(n, a, b))
-
- def try_GF(self, indices, contents):
- (staffno, measno) = indices
-
- st = self.get_staff(staffno)
- meas = st.get_measure(measno)
- meas.finale = contents
-
- def try_FR(self, indices, contents):
- frameno = indices[0]
-
- startnote = contents[0]
- endnote = contents[1]
-
- fill_list_to(self.frames, frameno)
-
- self.frames[frameno] = Frame((frameno, startnote, endnote))
-
- def try_MS(self, indices, contents):
- measno = indices[0]
- keynum = contents[1]
-        meas = self.get_global_measure(measno)
-
- meas.set_key_sig(keynum)
-
- beats = contents[2]
- beatlen = contents[3]
- meas.set_timesig((beats, beatlen))
-
- meas_flag1 = contents[4]
- meas_flag2 = contents[5]
-
- meas.set_flags(meas_flag1, meas_flag2)
-
- routine_dict = {
- 'MS': try_MS,
- 'FR': try_FR,
- 'GF': try_GF,
- 'IX': try_IX,
- 'Sx': try_Sx,
- 'eE': try_eE,
- 'verse': try_verse,
- 've': try_ve,
- 'IM': try_IM,
- 'TP': try_TP,
- 'BC': try_BC,
- 'IS': try_IS,
- }
-
- def parse(self, etf_dict):
- sys.stderr.write('reconstructing ...')
- sys.stderr.flush()
-
- for (tag, routine) in list(Etf_file.routine_dict.items()):
- ks = list(etf_dict[tag].keys())
- ks.sort()
- for k in ks:
- routine(self, k, etf_dict[tag][k])
-
- sys.stderr.write('processing ...')
- sys.stderr.flush()
-
- self.unthread_entries()
-
- for st in self.staffs[1:]:
- if not st:
- continue
- mno = 1
- for m in st.measures[1:]:
- if not m:
- continue
-
- m.calculate()
- try:
- m.global_measure = self.measures[mno]
- except IndexError:
- sys.stderr.write("Non-existent global measure %d" % mno)
- continue
-
- frame_obj_list = [None]
- for frno in m.frames:
- try:
- fr = self.frames[frno]
- frame_obj_list.append(fr)
- except IndexError:
- sys.stderr.write("\nNon-existent frame %d" % frno)
-
- m.frames = frame_obj_list
- for fr in frame_obj_list[1:]:
- if not fr:
- continue
-
- fr.set_measure(m)
-
- fr.chords = self.get_thread(fr.start, fr.end)
- for c in fr.chords:
- c.frame = fr
- mno = mno + 1
-
- for c in self.chords[1:]:
- if c:
- c.calculate()
-
- for f in self.frames[1:]:
- if f:
- f.calculate()
-
- for t in self.tuplets[1:]:
- t.calculate(self.chords)
-
- for s in self.slurs[1:]:
- if s:
- s.calculate(self.chords)
-
- for s in self.articulations[1:]:
- s.calculate(self.chords, self.articulation_defs)
-
- def get_thread(self, startno, endno):
-
- thread = []
-
- c = None
- try:
- c = self.chords[startno]
- except IndexError:
- sys.stderr.write(
- "Huh? Frame has invalid bounds (%d,%d)\n" % (startno, endno))
- return []
-
- while c and c.number != endno:
- d = c # hack to avoid problem with scripts/build/grand-replace.py
- thread.append(d)
-            c = c.next
-
- if c:
- d = c # hack to avoid problem with scripts/build/grand-replace.py
- thread.append(d)
-
- return thread
-
- def dump(self):
- str = ''
- staffs = []
- for s in self.staffs[1:]:
- if s:
- str = str + '\n\n' + s.dump()
- staffs.append('\\' + s.staffid())
-
- # should use \addlyrics ?
-
- for v in self.verses[1:]:
- str = str + v.dump()
-
- if len(self.verses) > 1:
- sys.stderr.write(
- "\nLyrics found; edit to use \\addlyrics to couple to a staff\n")
-
- if staffs:
- str += '\\version "2.3.25"\n'
- str = str + '<<\n %s\n>> } ' % ' '.join(staffs)
-
- return str
-
- def __str__(self):
- return 'ETF FILE %s %s' % (self.measures, self.entries)
-
- def unthread_entries(self):
- for e in self.chords[1:]:
- if not e:
- continue
-
- e.prev = self.chords[e.finale[0]]
- e.next = self.chords[e.finale[1]]
-
-
-def identify():
- sys.stderr.write("%s from LilyPond %s\n" % (ly.program_name, version))
-
-
-def warranty():
- identify()
- sys.stdout.write('''
-%s
-
- %s
-
-%s
-%s
-''' % (_('Copyright (c) %s by') % '2001--2023',
- '\n '.join(authors),
- _('Distributed under terms of the GNU General Public License.'),
- _('It comes with NO WARRANTY.')))
-
-
-def get_option_parser():
- p = ly.get_option_parser(usage=_("%s [OPTION]... ETF-FILE") % 'etf2ly',
- description=_("""Enigma Transport Format is a format used by Coda Music Technology's
-Finale product. etf2ly converts a subset of ETF to a ready-to-use LilyPond file.
-"""),
- add_help_option=False)
- p.add_option("-h", "--help",
- action="help",
- help=_("show this help and exit"))
- p.version = "etf2ly (LilyPond) 2.24.2"
- p.add_option("--version",
- action="version",
- help=_("show version number and exit"))
- p.add_option('-o', '--output', help=_("write output to FILE"),
- metavar=_("FILE"),
- action='store')
- p.add_option('-w', '--warranty', help=_("show warranty and copyright"),
- action='store_true',
- ),
-
- p.add_option_group('',
- description=(
- _('Report bugs via %s')
- % 'bug-lilypond@gnu.org') + '\n')
- return p
-
-
-def do_options():
- opt_parser = get_option_parser()
- (options, args) = opt_parser.parse_args()
- if options.warranty:
- warranty()
- sys.exit(0)
-
- return (options, args)
-
-
-(options, files) = do_options()
-identify()
-
-out_filename = options.output
-
-e = None
-for f in files:
- if f == '-':
- f = ''
-
- sys.stderr.write('Processing `%s\'\n' % f)
-
- dict = parse_etf_file(f, Etf_file.routine_dict)
- e = Etf_file(dict)
- if not out_filename:
- out_filename = os.path.basename(re.sub('(?i).etf$', '.ly', f))
-
- if out_filename == f:
- out_filename = os.path.basename(f + '.ly')
-
- sys.stderr.write('Writing `%s\'' % out_filename)
- ly = e.dump()
-
- fo = open(out_filename, 'w', encoding='utf-8')
- fo.write('%% lily was here -- automatically converted by etf2ly from %s\n' % f)
- fo.write(ly)
- fo.close()
diff --git a/spaces/PaulEdwards/StarWords/app_old.py b/spaces/PaulEdwards/StarWords/app_old.py
deleted file mode 100644
index 685e64ec352c47f8c178849b3eb32824331cadd0..0000000000000000000000000000000000000000
--- a/spaces/PaulEdwards/StarWords/app_old.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-title = "📗❤️-Story Generator❤️📗- 🦄Myths and Legends🦸"
-examples = [
- ["Cernunnos the Gaelic god of beasts and wild places"],
- ["Often called the Horned One, Cernunnos was a mediator of man and nature"],
- ["able to tame predator and prey so they might lie down together"],
- ["He remains a mysterious deity, as his original mythos has been lost to history"],
- ["It was believed that ringing a bell on Samhain kept away evil spirits"],
- ["Burying animal bones in front of your house on the night of Samhain will"],
- ["keep evil away, according to some legends of eastern Europe"],
- ["Samhain is a good time of year to work on communicating with the spirit world"],
- ["In some Pacific Northwest tribes, elk are also considered to be"],
- ["particular protectors of women, and in some legends elk lead women who had been "],
- ["captured by enemy warriors back to their homes"],
- ["In Plains Indian tribes, elk were associated with masculinity, endurance, and bravery, and elks eyeteeth were highly valued both as objects of adornment and as the symbol of a mans hunting prowess."],
- ["In some Plains tribes, men saved the eyeteeth from their first elk kill to make into engagement jewelry for their sweetheart. In others, the number of elk teeth sewn onto a womans dress showed off the wealth and skill of her husband or father."],
- ["Ah Puch is one of the names associated with a god of death in the ancient Mayan religion. He was known as a god of death, darkness, and disaster. But he was also a god of childbirth and beginnings. The Quiche Maya believed that he ruled over Metnal, the underworld and the Yucatec Maya believed that he was just one of the lords of Xibaba, that translates to place of fear in the underworld."],
- ["Nuwa was the one who patched the holes in Heaven with five colored stones, and she used the legs of a tortoise to mend the pillars. There are many instances of her in literature across China which detail her in creation stories, and today remains a figure important to Chinese culture."]
-]
-from gradio import inputs
-from gradio.inputs import Textbox
-from gradio import outputs
-
-generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
-generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-generator1 = gr.Interface.load("huggingface/gpt2-large")
-
-#gr.Parallel(generator1, generator2, generator3, inputs=gr.inputs.Textbox(lines=6, label="Enter a sentence to get another sentence."),title=title, examples=examples).launch()
-
-def complete_with_gpt(text):
- # Use the last 50 characters of the text as context
- return text[:-50] + generator1(text[-50:])
-
-with gr.Blocks() as demo:
- textbox = gr.Textbox(placeholder="Type here and press enter...", lines=4)
- btn = gr.Button("Generate")
-
- btn.click(complete_with_gpt, textbox, textbox)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/PeepDaSlan9/chatbot-arena/index.html b/spaces/PeepDaSlan9/chatbot-arena/index.html
deleted file mode 100644
index b8e4df94bb5bf9644fda5057d1316c00f2e4ffbf..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/chatbot-arena/index.html
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
-
-
-
-
- Chat and Battle with Open LLMs
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Pengyey/bingo-chuchu/src/app/loading.css b/spaces/Pengyey/bingo-chuchu/src/app/loading.css
deleted file mode 100644
index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/app/loading.css
+++ /dev/null
@@ -1,68 +0,0 @@
-::-webkit-scrollbar {
- width: 10px;
- height: 10px;
- display: none;
-}
-
-::-webkit-scrollbar-button:start:decrement,
-::-webkit-scrollbar-button:end:increment {
- height: 30px;
- background-color: transparent;
-}
-
-::-webkit-scrollbar-track-piece {
- background-color: #3b3b3b;
- -webkit-border-radius: 16px;
-}
-
-::-webkit-scrollbar-thumb:vertical {
- height: 50px;
- background-color: #666;
- border: 1px solid #eee;
- -webkit-border-radius: 6px;
-}
-
-/* loading start */
-.loading-spinner {
- display: flex;
- justify-content: center;
- align-items: center;
- height: 100vh;
- opacity: 1;
- transition: opacity .8s ease-out;
-}
-
-.loading-spinner.hidden {
- opacity: 0;
-}
-
-.loading-spinner>div {
- width: 30px;
- height: 30px;
- background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%);
-
- border-radius: 100%;
- display: inline-block;
- animation: sk-bouncedelay 1.4s infinite ease-in-out both;
-}
-
-.loading-spinner .bounce1 {
- animation-delay: -0.32s;
-}
-
-.loading-spinner .bounce2 {
- animation-delay: -0.16s;
-}
-
-@keyframes sk-bouncedelay {
-
- 0%,
- 80%,
- 100% {
- transform: scale(0);
- }
-
- 40% {
- transform: scale(1.0);
- }
-}
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/caption.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/caption.py
deleted file mode 100644
index c5e0d4c82d49da7fac0022333e8edb994e8dcdd2..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/caption.py
+++ /dev/null
@@ -1,279 +0,0 @@
-import torch
-import torch.distributed as dist
-import time
-from torchvision.ops import nms
-import random
-import numpy as np
-from PIL import Image, ImageDraw
-import pdb
-from maskrcnn_benchmark.structures.bounding_box import BoxList
-from .modulated_coco import ConvertCocoPolysToMask
-from .tsv import ODTSVDataset, TSVYamlDataset
-from .od_to_grounding import sanity_check_target_after_processing
-
-class CaptionTSV(TSVYamlDataset):
- def __init__(self,
- yaml_file,
- transforms,
- return_tokens,
- return_masks,
- tokenizer,
- caption_min_box=1,
- replace_clean_label=False,
- further_screen=False,
- caption_conf=0.5,
- caption_nms=-1,
- pack_random_caption_number=0,
- inference_caption=False,
- sample_negative_for_grounding_data=-1,
- random_pack_prob=-1.0,
- no_random_pack_probability=0.0,
- safeguard_positive_caption=True,
- mlm_obj_for_only_positive=False,
- caption_format_version="v1",
- local_debug=False,
- max_query_len=256,
- **kwargs
- ):
- super(CaptionTSV, self).__init__(yaml_file, None, replace_clean_label)
- self.yaml_file = yaml_file
- self._transforms = transforms
- self.max_query_len = max_query_len
- self.prepare = ConvertCocoPolysToMask(return_masks=return_masks,
- return_tokens=return_tokens,
- tokenizer=tokenizer,
- max_query_len=max_query_len)
- self.tokenizer = tokenizer
- self.caption_min_box = caption_min_box
- self.replace_clean_label = replace_clean_label
- self.further_screen = further_screen
- self.pack_random_caption_number = pack_random_caption_number
- self.caption_format_version = caption_format_version
-
- self.caption_conf = caption_conf
- self.caption_nms = caption_nms
- self.inference_caption = inference_caption
- self.sample_negative_for_grounding_data = sample_negative_for_grounding_data
- self.random_pack_prob = random_pack_prob
- self.no_random_pack_probability = no_random_pack_probability
- self.safeguard_positive_caption = safeguard_positive_caption
- self.mlm_obj_for_only_positive = mlm_obj_for_only_positive
- try:
- self.rank = dist.get_rank()
- except:
- self.rank = 0
-
- def __len__(self):
- return super(CaptionTSV, self).__len__()
-
- def pack_caption(self, positive_caption, negative_captions, original_tokens_positive):
- if len(negative_captions) == 0:
- return positive_caption, original_tokens_positive, [(0, len(positive_caption))]
- if self.safeguard_positive_caption:
- length_of_each_caption = []
- for caption in negative_captions + [positive_caption]:
- tokenized = self.tokenizer(caption, return_tensors="pt")
- length_of_each_caption.append(tokenized.input_ids.size(-1))
- max_length = self.max_query_len - length_of_each_caption[-1]
- indexes = list(range(len(negative_captions)))
- random.shuffle(indexes)
- new_caption_list = [positive_caption]
- for i in indexes:
- if length_of_each_caption[i] < max_length:
- new_caption_list.append(negative_captions[i])
- max_length -= length_of_each_caption[i]
- else:
- new_caption_list = [positive_caption] + negative_captions
- random.shuffle(new_caption_list)
-
- new_caption = ''
-
- for i in new_caption_list:
- if i == positive_caption:
- start_position = len(new_caption)
- new_caption += i
- if not i.endswith("."):
- new_caption += "."
- new_caption += " "
-
- # shift the token positions the boxes are aligned to
- for index, i in enumerate(original_tokens_positive):
- original_tokens_positive[index] = [tuple(j) for j in i]
- for i in original_tokens_positive:
- for index, j in enumerate(i):
- i[index] = (j[0] + start_position, j[1] + start_position)
-
- return new_caption, original_tokens_positive, [(start_position, start_position + len(positive_caption))]
-
- def __get_negative_captions__(self, idx, negative_size=7):
- negative_captions = []
- for i in range(negative_size):
- img, anno, _, scale = super(CaptionTSV, self).__getitem__(np.random.choice(len(self)))
- caption = anno["caption"]
- negative_captions.append(caption)
-
- return negative_captions
-
- def __getitem__(self, idx):
- try:
- img, anno, _, scale = super(CaptionTSV, self).__getitem__(idx)
- if self.inference_caption:
- caption = None
- if isinstance(anno, list):
- caption = anno[0]["caption"] # inference mode for bing
- anno = []
- elif len(anno) == 1:
- caption = anno["caption"] # inference mode for googlecc
- anno = []
- else:
- caption = " ".join(anno["captions"])
- anno = []
- else:
- '''
- An example
- {'img_h': 1154, 'img_w': 1600, 'caption': 'xxx', 'tokens_positive': [[[47, 50], [51, 53], [54, 59]], [[32, 35], [36, 41]], [[32, 35], [36, 41]], [[0, 3], [3, 6], [6, 10], [11, 16], [17, 19], [20, 23]], [[32, 35], [36, 41]], [[32, 35], [36, 41]]], 'bboxes': [[7.344961166381836, 10.479412078857422, 1592.2679443359375, 1090.0028076171875], [950.32861328125, 346.572021484375, 1333.2373046875, 679.3215942382812], [927.44140625, 342.7712707519531, 1389.833984375, 719.5758666992188], [90.48786163330078, 363.67572021484375, 1381.8631591796875, 1078.687744140625], [122.84217071533203, 422.6786193847656, 507.845703125, 667.2651977539062], [80.62384033203125, 416.500244140625, 563.1666259765625, 734.603271484375]], 'scores': [0.7966700196266174, 0.8952182531356812, 0.8186006546020508, 0.9995516538619995, 0.8021856546401978, 0.8923134803771973]}
- '''
- if len(anno["bboxes"]) < self.caption_min_box: # Retry triggered!
- return self[np.random.choice(len(self))]
-
- if self.caption_format_version == "v2":
- anno = self.convert_anno_from_v2_to_v1(anno)
-
- try:
- if self.further_screen:
- conf = self.caption_conf
- nms_thre = self.caption_nms
-
- bboxes = torch.as_tensor(anno["bboxes"]).float()
- scores = torch.as_tensor(anno["scores"])
- tokens_positive = anno["tokens_positive"]
-
- # print("\n\n\n\n tokens_positive in original data", tokens_positive)
-
- keep = scores > conf
- scores = scores[keep]
- bboxes = bboxes[keep]
- tokens_positive = [i for index, i in enumerate(tokens_positive) if keep[index]]
-
- assert (len(tokens_positive) == len(bboxes) == len(scores))
-
- if len(bboxes) < self.caption_min_box: # Retry triggered!
- return self[np.random.choice(len(self))]
-
- if nms_thre > 0:
- keep = nms(boxes=bboxes, scores=scores, iou_threshold=nms_thre)
- scores = scores[keep]
- bboxes = bboxes[keep]
- tokens_positive = [tokens_positive[i] for i in keep]
- assert (len(tokens_positive) == len(bboxes) == len(scores))
-
- # Write back
- anno["bboxes"] = bboxes.tolist()
- anno["scores"] = scores.tolist()
- anno["tokens_positive"] = tokens_positive
-
- boxes = torch.as_tensor(anno["bboxes"])
-
- if len(boxes) < self.caption_min_box: # Retry triggered!
- return self[np.random.choice(len(self))]
-
- target = BoxList(boxes, (anno["img_w"], anno["img_h"]), mode="xyxy")
- target = target.clip_to_image(remove_empty=True)
-
- caption = anno["caption"]
- # print("original caption", caption)
- empty_everything = False
- if self.sample_negative_for_grounding_data != -1:
- if random.random() < self.sample_negative_for_grounding_data:
- empty_everything = True
-
- if empty_everything:
- caption = self.__get_negative_captions__(idx, negative_size=1)[0]
-
- if self.pack_random_caption_number != 0:
- if self.random_pack_prob != -1.0:
- if random.random() < self.no_random_pack_probability:
- negative_pack_number = 0
- elif random.random() < self.random_pack_prob:
- negative_pack_number = self.pack_random_caption_number
- else:
- negative_pack_number = np.random.choice(self.pack_random_caption_number)
- else:
- negative_pack_number = self.pack_random_caption_number
-
- negative_captions = self.__get_negative_captions__(idx, negative_size=negative_pack_number)
-
- caption, anno["tokens_positive"], greenlight_span_for_masked_lm_objective = self.pack_caption(
- caption, negative_captions, anno["tokens_positive"])
- else:
- greenlight_span_for_masked_lm_objective = [(0, len(caption))]
-
- if not self.mlm_obj_for_only_positive:
- greenlight_span_for_masked_lm_objective = [(0, len(caption))]
-
- new_anno = []
- areas = target.area()
- for i in range(len(target)):
- new_anno_i = {}
- new_anno_i["area"] = areas[i]
- new_anno_i["iscrowd"] = 0
- new_anno_i["image_id"] = idx
- new_anno_i["category_id"] = 1 # following vg and others
- new_anno_i["id"] = None
- new_anno_i['bbox'] = target.bbox[i].numpy().tolist()
- new_anno_i["tokens_positive"] = anno["tokens_positive"][i]
- new_anno.append(new_anno_i)
-
- except:
- return self[np.random.choice(len(self))]
-
- anno = new_anno
- if empty_everything:
- anno = []
-
- annotations = {"image_id": idx, "annotations": anno, "caption": caption}
- annotations["greenlight_span_for_masked_lm_objective"] = greenlight_span_for_masked_lm_objective
- img, annotations = self.prepare(img, annotations, box_format="xyxy")
-
- if self._transforms is not None:
- img, target = self._transforms(img, target)
-
- # add additional property
- for ann in annotations:
- target.add_field(ann, annotations[ann])
- except:
- print("Outter Retry triggered!!")
- return self[np.random.choice(len(self))]
-
- sanity_check_target_after_processing(target)
-
- return img, target, idx
-
- def convert_anno_from_v2_to_v1(self, anno):
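-        # v2 groups boxes per entity; flatten to one box / token-span / score
-        # per row so the rest of the pipeline can treat it as v1.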
-        flattened_bboxes = []
-        flattened_tokens_positive = []
-        flattened_bboxes_scores = []
-        for i in range(len(anno["bboxes"])):
-            # i is the index for the entity
-            for j in range(len(anno["bboxes"][i])):
-                # j is the index for each box
-                flattened_bboxes.append(anno["bboxes"][i][j])
-                flattened_tokens_positive.append(
-                    anno["tokens_positive"][i])  # Assume this box corresponds to all the token_spans for this entity
-                flattened_bboxes_scores.append(anno["scores"][i][j])
-        anno["bboxes"] = flattened_bboxes
-        anno["tokens_positive"] = flattened_tokens_positive
-        anno["scores"] = flattened_bboxes_scores
- return anno
-
-
- def get_raw_image(self, idx):
- image, *_ = super(CaptionTSV, self).__getitem__(idx)
- return image
-
- def get_img_id(self, idx):
- line_no = self.get_line_no(idx)
- if self.label_tsv is not None:
- row = self.label_tsv.seek(line_no)
- img_id = row[0]
- return img_id
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/language_backbone/rnn_model.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/language_backbone/rnn_model.py
deleted file mode 100644
index 2b690ca8520695ab77572679e06b2f90971bec16..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/language_backbone/rnn_model.py
+++ /dev/null
@@ -1,115 +0,0 @@
-from copy import deepcopy
-import numpy as np
-import torch
-from torch import nn
-
-
-class RNNEnoder(nn.Module):
- def __init__(self, cfg):
- super(RNNEnoder, self).__init__()
- self.cfg = cfg
-
- self.rnn_type = cfg.MODEL.LANGUAGE_BACKBONE.RNN_TYPE
- self.variable_length = cfg.MODEL.LANGUAGE_BACKBONE.VARIABLE_LENGTH
- self.word_embedding_size = cfg.MODEL.LANGUAGE_BACKBONE.WORD_EMBEDDING_SIZE
- self.word_vec_size = cfg.MODEL.LANGUAGE_BACKBONE.WORD_VEC_SIZE
- self.hidden_size = cfg.MODEL.LANGUAGE_BACKBONE.HIDDEN_SIZE
- self.bidirectional = cfg.MODEL.LANGUAGE_BACKBONE.BIDIRECTIONAL
- self.input_dropout_p = cfg.MODEL.LANGUAGE_BACKBONE.INPUT_DROPOUT_P
- self.dropout_p = cfg.MODEL.LANGUAGE_BACKBONE.DROPOUT_P
- self.n_layers = cfg.MODEL.LANGUAGE_BACKBONE.N_LAYERS
- self.corpus_path = cfg.MODEL.LANGUAGE_BACKBONE.CORPUS_PATH
- self.vocab_size = cfg.MODEL.LANGUAGE_BACKBONE.VOCAB_SIZE
-
- # language encoder
- self.embedding = nn.Embedding(self.vocab_size, self.word_embedding_size)
- self.input_dropout = nn.Dropout(self.input_dropout_p)
- self.mlp = nn.Sequential(nn.Linear(self.word_embedding_size, self.word_vec_size), nn.ReLU())
- self.rnn = getattr(nn, self.rnn_type.upper())(self.word_vec_size,
- self.hidden_size,
- self.n_layers,
- batch_first=True,
- bidirectional=self.bidirectional,
- dropout=self.dropout_p)
- self.num_dirs = 2 if self.bidirectional else 1
-
- def forward(self, input, mask=None):
- word_id = input
- max_len = (word_id != 0).sum(1).max().item()
- word_id = word_id[:, :max_len] # mask zero
- # embedding
-        output, hidden, embedded, final_output = self.encode(word_id)
- return {
- 'hidden': hidden,
- 'output': output,
- 'embedded': embedded,
- 'final_output': final_output,
- }
-
- def encode(self, input_labels):
- """
- Inputs:
- - input_labels: Variable long (batch, seq_len)
- Outputs:
- - output : Variable float (batch, max_len, hidden_size * num_dirs)
- - hidden : Variable float (batch, num_layers * num_dirs * hidden_size)
- - embedded: Variable float (batch, max_len, word_vec_size)
- """
- device = input_labels.device
- if self.variable_length:
- input_lengths_list, sorted_lengths_list, sort_idxs, recover_idxs = self.sort_inputs(input_labels)
- input_labels = input_labels[sort_idxs]
-
- embedded = self.embedding(input_labels) # (n, seq_len, word_embedding_size)
- embedded = self.input_dropout(embedded) # (n, seq_len, word_embedding_size)
- embedded = self.mlp(embedded) # (n, seq_len, word_vec_size)
-
-        if self.variable_length:
-            # pack the padded batch so the RNN skips the padding steps
-            embedded = nn.utils.rnn.pack_padded_sequence(embedded,
-                                                         sorted_lengths_list,
-                                                         batch_first=True)
- # forward rnn
- self.rnn.flatten_parameters()
- output, hidden = self.rnn(embedded)
-
- # recover
- if self.variable_length:
- # recover embedded
- embedded, _ = nn.utils.rnn.pad_packed_sequence(embedded,
- batch_first=True) # (batch, max_len, word_vec_size)
- embedded = embedded[recover_idxs]
-
- # recover output
- output, _ = nn.utils.rnn.pad_packed_sequence(output,
- batch_first=True) # (batch, max_len, hidden_size * num_dir)
- output = output[recover_idxs]
-
- # recover hidden
- if self.rnn_type == 'lstm':
- hidden = hidden[0] # hidden state
- hidden = hidden[:, recover_idxs, :] # (num_layers * num_dirs, batch, hidden_size)
- hidden = hidden.transpose(0, 1).contiguous() # (batch, num_layers * num_dirs, hidden_size)
- hidden = hidden.view(hidden.size(0), -1) # (batch, num_layers * num_dirs * hidden_size)
-
- # final output
-        final_output = []
-        for ii in range(output.shape[0]):
-            final_output.append(output[ii, int(input_lengths_list[ii] - 1), :])
-        final_output = torch.stack(final_output, dim=0)  # (batch, num_dirs * hidden_size)
-
-        return output, hidden, embedded, final_output
-
-    def sort_inputs(self, input_labels):  # sort inputs by descending length
- device = input_labels.device
- input_lengths = (input_labels != 0).sum(1)
- input_lengths_list = input_lengths.data.cpu().numpy().tolist()
- sorted_input_lengths_list = np.sort(input_lengths_list)[::-1].tolist() # list of sorted input_lengths
- sort_idxs = np.argsort(input_lengths_list)[::-1].tolist()
- s2r = {s: r for r, s in enumerate(sort_idxs)}
- recover_idxs = [s2r[s] for s in range(len(input_lengths_list))]
- assert max(input_lengths_list) == input_labels.size(1)
- # move to long tensor
- sort_idxs = input_labels.data.new(sort_idxs).long().to(device) # Variable long
- recover_idxs = input_labels.data.new(recover_idxs).long().to(device) # Variable long
- return input_lengths_list, sorted_input_lengths_list, sort_idxs, recover_idxs
diff --git a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups.py b/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups.py
deleted file mode 100644
index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import shutil
-import hashlib
-import time
-import base64
-
-
-
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- weights_exist = False
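-    # Restore log files into LOGS_FOLDER and .pth checkpoints under weights/
-    # into WEIGHTS_FOLDER, preserving the relative folder structure.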
- for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
- for filename in files:
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- print(f'Imported file from Google Drive backup: {filename}')
- elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'):
- weights_exist = True
- weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights')))
- weights_folderpath = os.path.dirname(weights_filepath)
- if not os.path.exists(weights_folderpath):
- os.makedirs(weights_folderpath)
- print(f'Created weights folder: {weights_folderpath}', flush=True)
- shutil.copy2(filepath, weights_filepath) # copy file with metadata
- print(f'Imported file from weights: {filename}')
- if weights_exist:
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("No weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def get_md5_hash(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
-
-def copy_weights_folder_to_drive():
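-    # Copy any .pth checkpoints that are not yet present in the Drive backup.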
- destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights')
- try:
- if not os.path.exists(destination_folder):
- os.makedirs(destination_folder)
-
- num_copied = 0
- for filename in os.listdir(WEIGHTS_FOLDER):
- if filename.endswith('.pth'):
- source_file = os.path.join(WEIGHTS_FOLDER, filename)
- destination_file = os.path.join(destination_folder, filename)
- if not os.path.exists(destination_file):
- shutil.copy2(source_file, destination_file)
- num_copied += 1
- print(f"Copied {filename} to Google Drive!")
-
- if num_copied == 0:
- print("No new finished models found for copying.")
- else:
- print(f"Finished copying {num_copied} files to Google Drive!")
-
- except Exception as e:
- print(f"An error occurred while copying weights: {str(e)}")
- # You can log the error or take appropriate actions here.
-
-def backup_files():
- print("\nStarting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
-
- while True:
- try:
- updated = False # flag to check if any files were updated
- last_backup_timestamps = {}
-
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except FileNotFoundError:
- pass # File does not exist yet, which is fine
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- if last_backup_timestamp is None:
- print(f'Backed up file: {filename}')
- else:
- print(f'Updating backed up file: {filename}')
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- os.remove(backup_filepath)
- print(f'Deleted file: {filepath}')
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
- sleep_time = 15
- else:
- sleep_time = 0.1
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
-
- time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups
-
- except Exception as e:
- print(f"An error occurred: {str(e)}")
- # You can log the error or take appropriate actions here.
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/build_tracker.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/build_tracker.py
deleted file mode 100644
index 6621549b8449130d2d01ebac0a3649d8b70c4f91..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/build_tracker.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import contextlib
-import hashlib
-import logging
-import os
-from types import TracebackType
-from typing import Dict, Generator, Optional, Set, Type, Union
-
-from pip._internal.models.link import Link
-from pip._internal.req.req_install import InstallRequirement
-from pip._internal.utils.temp_dir import TempDirectory
-
-logger = logging.getLogger(__name__)
-
-
-@contextlib.contextmanager
-def update_env_context_manager(**changes: str) -> Generator[None, None, None]:
- target = os.environ
-
- # Save values from the target and change them.
- non_existent_marker = object()
- saved_values: Dict[str, Union[object, str]] = {}
- for name, new_value in changes.items():
- try:
- saved_values[name] = target[name]
- except KeyError:
- saved_values[name] = non_existent_marker
- target[name] = new_value
-
- try:
- yield
- finally:
- # Restore original values in the target.
- for name, original_value in saved_values.items():
- if original_value is non_existent_marker:
- del target[name]
- else:
- assert isinstance(original_value, str) # for mypy
- target[name] = original_value
-
-
-@contextlib.contextmanager
-def get_build_tracker() -> Generator["BuildTracker", None, None]:
- root = os.environ.get("PIP_BUILD_TRACKER")
- with contextlib.ExitStack() as ctx:
- if root is None:
- root = ctx.enter_context(TempDirectory(kind="build-tracker")).path
- ctx.enter_context(update_env_context_manager(PIP_BUILD_TRACKER=root))
- logger.debug("Initialized build tracking at %s", root)
-
- with BuildTracker(root) as tracker:
- yield tracker
-
-
-class BuildTracker:
- def __init__(self, root: str) -> None:
- self._root = root
- self._entries: Set[InstallRequirement] = set()
- logger.debug("Created build tracker: %s", self._root)
-
- def __enter__(self) -> "BuildTracker":
- logger.debug("Entered build tracker: %s", self._root)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self.cleanup()
-
- def _entry_path(self, link: Link) -> str:
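-        # One tracker file per requirement, named by a hash of its link URL.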
- hashed = hashlib.sha224(link.url_without_fragment.encode()).hexdigest()
- return os.path.join(self._root, hashed)
-
- def add(self, req: InstallRequirement) -> None:
- """Add an InstallRequirement to build tracking."""
-
- assert req.link
- # Get the file to write information about this requirement.
- entry_path = self._entry_path(req.link)
-
- # Try reading from the file. If it exists and can be read from, a build
- # is already in progress, so a LookupError is raised.
- try:
- with open(entry_path) as fp:
- contents = fp.read()
- except FileNotFoundError:
- pass
- else:
- message = "{} is already being built: {}".format(req.link, contents)
- raise LookupError(message)
-
- # If we're here, req should really not be building already.
- assert req not in self._entries
-
- # Start tracking this requirement.
- with open(entry_path, "w", encoding="utf-8") as fp:
- fp.write(str(req))
- self._entries.add(req)
-
- logger.debug("Added %s to build tracker %r", req, self._root)
-
- def remove(self, req: InstallRequirement) -> None:
- """Remove an InstallRequirement from build tracking."""
-
- assert req.link
- # Delete the created file and the corresponding entries.
- os.unlink(self._entry_path(req.link))
- self._entries.remove(req)
-
- logger.debug("Removed %s from build tracker %r", req, self._root)
-
- def cleanup(self) -> None:
- for req in set(self._entries):
- self.remove(req)
-
- logger.debug("Removed build tracker: %r", self._root)
-
- @contextlib.contextmanager
- def track(self, req: InstallRequirement) -> Generator[None, None, None]:
- self.add(req)
- yield
- self.remove(req)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/api.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/api.py
deleted file mode 100644
index 2f71aaed1afc2f43ae5a58d951896b91e0327abc..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/api.py
+++ /dev/null
@@ -1,157 +0,0 @@
-"""
-requests.api
-~~~~~~~~~~~~
-
-This module implements the Requests API.
-
-:copyright: (c) 2012 by Kenneth Reitz.
-:license: Apache2, see LICENSE for more details.
-"""
-
-from . import sessions
-
-
-def request(method, url, **kwargs):
- """Constructs and sends a :class:`Request `.
-
- :param method: method for the new :class:`Request` object: ``GET``, ``OPTIONS``, ``HEAD``, ``POST``, ``PUT``, ``PATCH``, or ``DELETE``.
- :param url: URL for the new :class:`Request` object.
- :param params: (optional) Dictionary, list of tuples or bytes to send
- in the query string for the :class:`Request`.
- :param data: (optional) Dictionary, list of tuples, bytes, or file-like
- object to send in the body of the :class:`Request`.
- :param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`.
- :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
- :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
- :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
- ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
- or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
- defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
- to add for the file.
- :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
- :param timeout: (optional) How many seconds to wait for the server to send data
- before giving up, as a float, or a :ref:`(connect timeout, read
- timeout) ` tuple.
- :type timeout: float or tuple
- :param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
- :type allow_redirects: bool
- :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
- :param verify: (optional) Either a boolean, in which case it controls whether we verify
- the server's TLS certificate, or a string, in which case it must be a path
- to a CA bundle to use. Defaults to ``True``.
- :param stream: (optional) if ``False``, the response content will be immediately downloaded.
- :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
- :return: :class:`Response ` object
- :rtype: requests.Response
-
- Usage::
-
- >>> import requests
- >>> req = requests.request('GET', 'https://httpbin.org/get')
- >>> req
-
- """
-
- # By using the 'with' statement we are sure the session is closed, thus we
- # avoid leaving sockets open which can trigger a ResourceWarning in some
- # cases, and look like a memory leak in others.
- with sessions.Session() as session:
- return session.request(method=method, url=url, **kwargs)
-
-
-def get(url, params=None, **kwargs):
- r"""Sends a GET request.
-
- :param url: URL for the new :class:`Request` object.
- :param params: (optional) Dictionary, list of tuples or bytes to send
- in the query string for the :class:`Request`.
- :param \*\*kwargs: Optional arguments that ``request`` takes.
- :return: :class:`Response ` object
- :rtype: requests.Response
- """
-
- return request("get", url, params=params, **kwargs)
-
-
-def options(url, **kwargs):
- r"""Sends an OPTIONS request.
-
- :param url: URL for the new :class:`Request` object.
- :param \*\*kwargs: Optional arguments that ``request`` takes.
- :return: :class:`Response ` object
- :rtype: requests.Response
- """
-
- return request("options", url, **kwargs)
-
-
-def head(url, **kwargs):
- r"""Sends a HEAD request.
-
- :param url: URL for the new :class:`Request` object.
- :param \*\*kwargs: Optional arguments that ``request`` takes. If
- `allow_redirects` is not provided, it will be set to `False` (as
- opposed to the default :meth:`request` behavior).
- :return: :class:`Response ` object
- :rtype: requests.Response
- """
-
- kwargs.setdefault("allow_redirects", False)
- return request("head", url, **kwargs)
-
-
-def post(url, data=None, json=None, **kwargs):
- r"""Sends a POST request.
-
- :param url: URL for the new :class:`Request` object.
- :param data: (optional) Dictionary, list of tuples, bytes, or file-like
- object to send in the body of the :class:`Request`.
- :param json: (optional) json data to send in the body of the :class:`Request`.
- :param \*\*kwargs: Optional arguments that ``request`` takes.
- :return: :class:`Response ` object
- :rtype: requests.Response
- """
-
- return request("post", url, data=data, json=json, **kwargs)
-
-
-def put(url, data=None, **kwargs):
- r"""Sends a PUT request.
-
- :param url: URL for the new :class:`Request` object.
- :param data: (optional) Dictionary, list of tuples, bytes, or file-like
- object to send in the body of the :class:`Request`.
- :param json: (optional) json data to send in the body of the :class:`Request`.
- :param \*\*kwargs: Optional arguments that ``request`` takes.
- :return: :class:`Response ` object
- :rtype: requests.Response
- """
-
- return request("put", url, data=data, **kwargs)
-
-
-def patch(url, data=None, **kwargs):
- r"""Sends a PATCH request.
-
- :param url: URL for the new :class:`Request` object.
- :param data: (optional) Dictionary, list of tuples, bytes, or file-like
- object to send in the body of the :class:`Request`.
- :param json: (optional) json data to send in the body of the :class:`Request`.
- :param \*\*kwargs: Optional arguments that ``request`` takes.
- :return: :class:`Response ` object
- :rtype: requests.Response
- """
-
- return request("patch", url, data=data, **kwargs)
-
-
-def delete(url, **kwargs):
- r"""Sends a DELETE request.
-
- :param url: URL for the new :class:`Request` object.
- :param \*\*kwargs: Optional arguments that ``request`` takes.
- :return: :class:`Response ` object
- :rtype: requests.Response
- """
-
- return request("delete", url, **kwargs)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py
deleted file mode 100644
index cbd6da9be4956ce8558304ed72ffbe88ccd22ba5..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from typing import Any
-
-
-def load_ipython_extension(ip: Any) -> None: # pragma: no cover
- # prevent circular import
- from pip._vendor.rich.pretty import install
- from pip._vendor.rich.traceback import install as tr_install
-
- install()
- tr_install()
diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/geometry_utils.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/geometry_utils.py
deleted file mode 100644
index 024430a07b9b094d2eca6e4e9e14edd5105ad1c5..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/geometry_utils.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import numpy as np
-import torch
-
-
-### Point-related utils
-
-# Warp a list of points using a homography
-def warp_points(points, homography):
- # Convert to homogeneous and in xy format
- new_points = np.concatenate(
- [points[..., [1, 0]], np.ones_like(points[..., :1])], axis=-1
- )
- # Warp
- new_points = (homography @ new_points.T).T
- # Convert back to inhomogeneous and hw format
- new_points = new_points[..., [1, 0]] / new_points[..., 2:]
- return new_points
-
-
-# Mask out the points that are outside of img_size
-def mask_points(points, img_size):
- mask = (
- (points[..., 0] >= 0)
- & (points[..., 0] < img_size[0])
- & (points[..., 1] >= 0)
- & (points[..., 1] < img_size[1])
- )
- return mask
-
-
-# Convert a tensor [N, 2] or batched tensor [B, N, 2] of N keypoints into
-# a grid in [-1, 1]² that can be used in torch.nn.functional.interpolate
-def keypoints_to_grid(keypoints, img_size):
- n_points = keypoints.size()[-2]
- device = keypoints.device
- grid_points = (
- keypoints.float()
- * 2.0
- / torch.tensor(img_size, dtype=torch.float, device=device)
- - 1.0
- )
- grid_points = grid_points[..., [1, 0]].view(-1, n_points, 1, 2)
- return grid_points
-
-
-# Return a 2D matrix indicating the local neighborhood of each point
-# for a given threshold and two lists of corresponding keypoints
-def get_dist_mask(kp0, kp1, valid_mask, dist_thresh):
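-    # Mark a pair of correspondences as local neighbours when their keypoints
-    # are within dist_thresh of each other in at least one of the two images.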
- b_size, n_points, _ = kp0.size()
- dist_mask0 = torch.norm(kp0.unsqueeze(2) - kp0.unsqueeze(1), dim=-1)
- dist_mask1 = torch.norm(kp1.unsqueeze(2) - kp1.unsqueeze(1), dim=-1)
- dist_mask = torch.min(dist_mask0, dist_mask1)
- dist_mask = dist_mask <= dist_thresh
- dist_mask = dist_mask.repeat(1, 1, b_size).reshape(
- b_size * n_points, b_size * n_points
- )
- dist_mask = dist_mask[valid_mask, :][:, valid_mask]
- return dist_mask
-
-
-### Line-related utils
-
-# Sample n points along lines of shape (num_lines, 2, 2)
-def sample_line_points(lines, n):
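-    # Linearly interpolate n evenly spaced points between the two endpoints
-    # of every line segment.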
- line_points_x = np.linspace(lines[:, 0, 0], lines[:, 1, 0], n, axis=-1)
- line_points_y = np.linspace(lines[:, 0, 1], lines[:, 1, 1], n, axis=-1)
- line_points = np.stack([line_points_x, line_points_y], axis=2)
- return line_points
-
-
-# Return a mask of the valid lines that are within a valid mask of an image
-def mask_lines(lines, valid_mask):
- h, w = valid_mask.shape
- int_lines = np.clip(np.round(lines).astype(int), 0, [h - 1, w - 1])
- h_valid = valid_mask[int_lines[:, 0, 0], int_lines[:, 0, 1]]
- w_valid = valid_mask[int_lines[:, 1, 0], int_lines[:, 1, 1]]
- valid = h_valid & w_valid
- return valid
-
-
-# Return a 2D matrix indicating for each pair of points
-# if they are on the same line or not
-def get_common_line_mask(line_indices, valid_mask):
- b_size, n_points = line_indices.shape
- common_mask = line_indices[:, :, None] == line_indices[:, None, :]
- common_mask = common_mask.repeat(1, 1, b_size).reshape(
- b_size * n_points, b_size * n_points
- )
- common_mask = common_mask[valid_mask, :][:, valid_mask]
- return common_mask
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/_functions.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/_functions.py
deleted file mode 100644
index 9b5a8a44483ab991411d07122b22a1d027e4be8e..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/_functions.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-from torch.nn.parallel._functions import _get_stream
-
-
-def scatter(input, devices, streams=None):
- """Scatters tensor across multiple GPUs."""
- if streams is None:
- streams = [None] * len(devices)
-
- if isinstance(input, list):
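-        # Split the list into contiguous per-device chunks and scatter each
-        # element onto its chunk's device and stream.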
- chunk_size = (len(input) - 1) // len(devices) + 1
- outputs = [
- scatter(input[i], [devices[i // chunk_size]],
- [streams[i // chunk_size]]) for i in range(len(input))
- ]
- return outputs
- elif isinstance(input, torch.Tensor):
- output = input.contiguous()
- # TODO: copy to a pinned buffer first (if copying from CPU)
- stream = streams[0] if output.numel() > 0 else None
- if devices != [-1]:
- with torch.cuda.device(devices[0]), torch.cuda.stream(stream):
- output = output.cuda(devices[0], non_blocking=True)
- else:
- # unsqueeze the first dimension thus the tensor's shape is the
- # same as those scattered with GPU.
- output = output.unsqueeze(0)
- return output
- else:
- raise Exception(f'Unknown type {type(input)}.')
-
-
-def synchronize_stream(output, devices, streams):
- if isinstance(output, list):
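-        # Recurse into the per-device chunks and sync each one against its
-        # copy stream.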
- chunk_size = len(output) // len(devices)
- for i in range(len(devices)):
- for j in range(chunk_size):
- synchronize_stream(output[i * chunk_size + j], [devices[i]],
- [streams[i]])
- elif isinstance(output, torch.Tensor):
- if output.numel() != 0:
- with torch.cuda.device(devices[0]):
- main_stream = torch.cuda.current_stream()
- main_stream.wait_stream(streams[0])
- output.record_stream(main_stream)
- else:
- raise Exception(f'Unknown type {type(output)}.')
-
-
-def get_input_device(input):
- if isinstance(input, list):
- for item in input:
- input_device = get_input_device(item)
- if input_device != -1:
- return input_device
- return -1
- elif isinstance(input, torch.Tensor):
- return input.get_device() if input.is_cuda else -1
- else:
- raise Exception(f'Unknown type {type(input)}.')
-
-
-class Scatter:
-
- @staticmethod
- def forward(target_gpus, input):
- input_device = get_input_device(input)
- streams = None
- if input_device == -1 and target_gpus != [-1]:
- # Perform CPU to GPU copies in a background stream
- streams = [_get_stream(device) for device in target_gpus]
-
- outputs = scatter(input, target_gpus, streams)
- # Synchronize with the copy stream
- if streams is not None:
- synchronize_stream(outputs, target_gpus, streams)
-
- return tuple(outputs)
diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/data/__init__.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/data/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/openai.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/openai.py
deleted file mode 100644
index 6a03adbb1cb151ea26d4033bc4087aab1d657ab7..0000000000000000000000000000000000000000
--- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/openai.py
+++ /dev/null
@@ -1,708 +0,0 @@
-"""Wrapper around OpenAI APIs."""
-from __future__ import annotations
-
-import logging
-import sys
-from typing import (
- Any,
- Callable,
- Dict,
- Generator,
- List,
- Mapping,
- Optional,
- Set,
- Tuple,
- Union,
-)
-
-from pydantic import BaseModel, Extra, Field, root_validator
-from tenacity import (
- before_sleep_log,
- retry,
- retry_if_exception_type,
- stop_after_attempt,
- wait_exponential,
-)
-
-from langchain.llms.base import BaseLLM
-from langchain.schema import Generation, LLMResult
-from langchain.utils import get_from_dict_or_env
-
-logger = logging.getLogger(__name__)
-
-
-def update_token_usage(
- keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]
-) -> None:
- """Update token usage."""
- _keys_to_use = keys.intersection(response["usage"])
- for _key in _keys_to_use:
- if _key not in token_usage:
- token_usage[_key] = response["usage"][_key]
- else:
- token_usage[_key] += response["usage"][_key]
-
-
-def _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:
- """Update response from the stream response."""
- response["choices"][0]["text"] += stream_response["choices"][0]["text"]
- response["choices"][0]["finish_reason"] = stream_response["choices"][0][
- "finish_reason"
- ]
- response["choices"][0]["logprobs"] = stream_response["choices"][0]["logprobs"]
-
-
-def _streaming_response_template() -> Dict[str, Any]:
- return {
- "choices": [
- {
- "text": "",
- "finish_reason": None,
- "logprobs": None,
- }
- ]
- }
-
-
-def _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]:
- import openai
-
- min_seconds = 4
- max_seconds = 10
- # Wait 2^x * 1 second between each retry starting with
- # 4 seconds, then up to 10 seconds, then 10 seconds afterwards
- return retry(
- reraise=True,
- stop=stop_after_attempt(llm.max_retries),
- wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
- retry=(
- retry_if_exception_type(openai.error.Timeout)
- | retry_if_exception_type(openai.error.APIError)
- | retry_if_exception_type(openai.error.APIConnectionError)
- | retry_if_exception_type(openai.error.RateLimitError)
- | retry_if_exception_type(openai.error.ServiceUnavailableError)
- ),
- before_sleep=before_sleep_log(logger, logging.WARNING),
- )
-
-
-def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any:
- """Use tenacity to retry the completion call."""
- retry_decorator = _create_retry_decorator(llm)
-
- @retry_decorator
- def _completion_with_retry(**kwargs: Any) -> Any:
- return llm.client.create(**kwargs)
-
- return _completion_with_retry(**kwargs)
-
-
-async def acompletion_with_retry(
- llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any
-) -> Any:
- """Use tenacity to retry the async completion call."""
- retry_decorator = _create_retry_decorator(llm)
-
- @retry_decorator
- async def _completion_with_retry(**kwargs: Any) -> Any:
- # Use OpenAI's async api https://github.com/openai/openai-python#async-api
- return await llm.client.acreate(**kwargs)
-
- return await _completion_with_retry(**kwargs)
-
-
-class BaseOpenAI(BaseLLM, BaseModel):
- """Wrapper around OpenAI large language models.
-
- To use, you should have the ``openai`` python package installed, and the
- environment variable ``OPENAI_API_KEY`` set with your API key.
-
- Any parameters that are valid to be passed to the openai.create call can be passed
- in, even if not explicitly saved on this class.
-
- Example:
- .. code-block:: python
-
- from langchain.llms import OpenAI
- openai = OpenAI(model_name="text-davinci-003")
- """
-
- client: Any #: :meta private:
- model_name: str = "text-davinci-003"
- """Model name to use."""
- temperature: float = 0.7
- """What sampling temperature to use."""
- max_tokens: int = 256
- """The maximum number of tokens to generate in the completion.
- -1 returns as many tokens as possible given the prompt and
- the models maximal context size."""
- top_p: float = 1
- """Total probability mass of tokens to consider at each step."""
- frequency_penalty: float = 0
- """Penalizes repeated tokens according to frequency."""
- presence_penalty: float = 0
- """Penalizes repeated tokens."""
- n: int = 1
- """How many completions to generate for each prompt."""
- best_of: int = 1
- """Generates best_of completions server-side and returns the "best"."""
- model_kwargs: Dict[str, Any] = Field(default_factory=dict)
- """Holds any model parameters valid for `create` call not explicitly specified."""
- openai_api_key: Optional[str] = None
- batch_size: int = 20
- """Batch size to use when passing multiple documents to generate."""
- request_timeout: Optional[Union[float, Tuple[float, float]]] = None
- """Timeout for requests to OpenAI completion API. Default is 600 seconds."""
- logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)
- """Adjust the probability of specific tokens being generated."""
- max_retries: int = 6
- """Maximum number of retries to make when generating."""
- streaming: bool = False
- """Whether to stream the results or not."""
-
- def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]: # type: ignore
- """Initialize the OpenAI object."""
- if data.get("model_name", "").startswith("gpt-3.5-turbo"):
- return OpenAIChat(**data)
- return super().__new__(cls)
-
- class Config:
- """Configuration for this pydantic object."""
-
- extra = Extra.ignore
-
- @root_validator(pre=True, allow_reuse=True)
- def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
- """Build extra kwargs from additional params that were passed in."""
- all_required_field_names = {field.alias for field in cls.__fields__.values()}
-
- extra = values.get("model_kwargs", {})
- for field_name in list(values):
- if field_name not in all_required_field_names:
- if field_name in extra:
- raise ValueError(f"Found {field_name} supplied twice.")
- logger.warning(
- f"""WARNING! {field_name} is not default parameter.
- {field_name} was transfered to model_kwargs.
- Please confirm that {field_name} is what you intended."""
- )
- extra[field_name] = values.pop(field_name)
- values["model_kwargs"] = extra
- return values
-
- @root_validator(allow_reuse=True)
- def validate_environment(cls, values: Dict) -> Dict:
- """Validate that api key and python package exists in environment."""
- openai_api_key = get_from_dict_or_env(
- values, "openai_api_key", "OPENAI_API_KEY"
- )
- try:
- import openai
-
- openai.api_key = openai_api_key
- values["client"] = openai.Completion
- except ImportError:
- raise ValueError(
- "Could not import openai python package. "
- "Please it install it with `pip install openai`."
- )
- if values["streaming"] and values["n"] > 1:
- raise ValueError("Cannot stream results when n > 1.")
- if values["streaming"] and values.get("best_of") and values["best_of"] > 1:
- raise ValueError("Cannot stream results when best_of > 1.")
- return values
-
- @property
- def _default_params(self) -> Dict[str, Any]:
- """Get the default parameters for calling OpenAI API."""
- normal_params = {
- "temperature": self.temperature,
- "max_tokens": self.max_tokens,
- "top_p": self.top_p,
- "frequency_penalty": self.frequency_penalty,
- "presence_penalty": self.presence_penalty,
- "n": self.n,
- # "best_of": self.best_of,
- "request_timeout": self.request_timeout,
- "logit_bias": self.logit_bias,
- }
- return {**normal_params, **self.model_kwargs}
-
- def _generate(
- self, prompts: List[str], stop: Optional[List[str]] = None
- ) -> LLMResult:
- """Call out to OpenAI's endpoint with k unique prompts.
-
- Args:
- prompts: The prompts to pass into the model.
- stop: Optional list of stop words to use when generating.
-
- Returns:
- The full LLM output.
-
- Example:
- .. code-block:: python
-
- response = openai.generate(["Tell me a joke."])
- """
- # TODO: write a unit test for this
- params = self._invocation_params
- sub_prompts = self.get_sub_prompts(params, prompts, stop)
- choices = []
- token_usage: Dict[str, int] = {}
- # Get the token usage from the response.
- # Includes prompt, completion, and total tokens used.
- _keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
- for _prompts in sub_prompts:
- if self.streaming:
- if len(_prompts) > 1:
- raise ValueError("Cannot stream results with multiple prompts.")
- params["stream"] = True
- response = _streaming_response_template()
- for stream_resp in completion_with_retry(
- self, prompt=_prompts, **params
- ):
- self.callback_manager.on_llm_new_token(
- stream_resp["choices"][0]["text"],
- verbose=self.verbose,
- logprobs=stream_resp["choices"][0]["logprobs"],
- )
- _update_response(response, stream_resp)
- choices.extend(response["choices"])
- else:
- response = completion_with_retry(self, prompt=_prompts, **params)
- choices.extend(response["choices"])
- if not self.streaming:
- # Can't update token usage if streaming
- update_token_usage(_keys, response, token_usage)
- return self.create_llm_result(choices, prompts, token_usage)
-
- async def _agenerate(
- self, prompts: List[str], stop: Optional[List[str]] = None
- ) -> LLMResult:
- """Call out to OpenAI's endpoint async with k unique prompts."""
- params = self._invocation_params
- sub_prompts = self.get_sub_prompts(params, prompts, stop)
- choices = []
- token_usage: Dict[str, int] = {}
- # Get the token usage from the response.
- # Includes prompt, completion, and total tokens used.
- _keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
- for _prompts in sub_prompts:
- if self.streaming:
- if len(_prompts) > 1:
- raise ValueError("Cannot stream results with multiple prompts.")
- params["stream"] = True
- response = _streaming_response_template()
- async for stream_resp in await acompletion_with_retry(
- self, prompt=_prompts, **params
- ):
- if self.callback_manager.is_async:
- await self.callback_manager.on_llm_new_token(
- stream_resp["choices"][0]["text"],
- verbose=self.verbose,
- logprobs=stream_resp["choices"][0]["logprobs"],
- )
- else:
- self.callback_manager.on_llm_new_token(
- stream_resp["choices"][0]["text"],
- verbose=self.verbose,
- logprobs=stream_resp["choices"][0]["logprobs"],
- )
- _update_response(response, stream_resp)
- choices.extend(response["choices"])
- else:
- response = await acompletion_with_retry(self, prompt=_prompts, **params)
- choices.extend(response["choices"])
- if not self.streaming:
- # Can't update token usage if streaming
- update_token_usage(_keys, response, token_usage)
- return self.create_llm_result(choices, prompts, token_usage)
-
- def get_sub_prompts(
- self,
- params: Dict[str, Any],
- prompts: List[str],
- stop: Optional[List[str]] = None,
- ) -> List[List[str]]:
- """Get the sub prompts for llm call."""
- if stop is not None:
- if "stop" in params:
- raise ValueError("`stop` found in both the input and default params.")
- params["stop"] = stop
- if params["max_tokens"] == -1:
- if len(prompts) != 1:
- raise ValueError(
- "max_tokens set to -1 not supported for multiple inputs."
- )
- params["max_tokens"] = self.max_tokens_for_prompt(prompts[0])
- sub_prompts = [
- prompts[i : i + self.batch_size]
- for i in range(0, len(prompts), self.batch_size)
- ]
- return sub_prompts
-
- def create_llm_result(
- self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
- ) -> LLMResult:
- """Create the LLMResult from the choices and prompts."""
- generations = []
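-        # The API returns self.n choices per prompt, concatenated; regroup them
-        # so each prompt gets its own list of generations.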
- for i, _ in enumerate(prompts):
- sub_choices = choices[i * self.n : (i + 1) * self.n]
- generations.append(
- [
- Generation(
- text=choice["text"],
- generation_info=dict(
- finish_reason=choice.get("finish_reason"),
- logprobs=choice.get("logprobs"),
- ),
- )
- for choice in sub_choices
- ]
- )
- return LLMResult(
- generations=generations, llm_output={"token_usage": token_usage}
- )
-
- def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:
- """Call OpenAI with streaming flag and return the resulting generator.
-
- BETA: this is a beta feature while we figure out the right abstraction.
- Once that happens, this interface could change.
-
- Args:
- prompt: The prompts to pass into the model.
- stop: Optional list of stop words to use when generating.
-
- Returns:
- A generator representing the stream of tokens from OpenAI.
-
- Example:
- .. code-block:: python
-
- generator = openai.stream("Tell me a joke.")
- for token in generator:
- yield token
- """
- params = self.prep_streaming_params(stop)
- generator = self.client.create(prompt=prompt, **params)
-
- return generator
-
- def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:
- """Prepare the params for streaming."""
- params = self._invocation_params
- if params.get('best_of') and params["best_of"] != 1:
- raise ValueError("OpenAI only supports best_of == 1 for streaming")
- if stop is not None:
- if "stop" in params:
- raise ValueError("`stop` found in both the input and default params.")
- params["stop"] = stop
- params["stream"] = True
- return params
-
- @property
- def _invocation_params(self) -> Dict[str, Any]:
- """Get the parameters used to invoke the model."""
- return self._default_params
-
- @property
- def _identifying_params(self) -> Mapping[str, Any]:
- """Get the identifying parameters."""
- return {**{"model_name": self.model_name}, **self._default_params}
-
- @property
- def _llm_type(self) -> str:
- """Return type of llm."""
- return "openai"
-
- def get_num_tokens(self, text: str) -> int:
- """Calculate num tokens with tiktoken package."""
- # tiktoken NOT supported for Python 3.8 or below
- if sys.version_info[1] <= 8:
- return super().get_num_tokens(text)
- try:
- import tiktoken
- except ImportError:
- raise ValueError(
- "Could not import tiktoken python package. "
- "This is needed in order to calculate get_num_tokens. "
- "Please it install it with `pip install tiktoken`."
- )
- encoder = "gpt2"
- if self.model_name in ("text-davinci-003", "text-davinci-002"):
- encoder = "p50k_base"
- if self.model_name.startswith("code"):
- encoder = "p50k_base"
- # create a GPT-3 encoder instance
- enc = tiktoken.get_encoding(encoder)
-
- # encode the text using the GPT-3 encoder
- tokenized_text = enc.encode(text)
-
- # calculate the number of tokens in the encoded text
- return len(tokenized_text)
-
- def modelname_to_contextsize(self, modelname: str) -> int:
- """Calculate the maximum number of tokens possible to generate for a model.
-
- text-davinci-003: 4,097 tokens
- text-curie-001: 2,048 tokens
- text-babbage-001: 2,048 tokens
- text-ada-001: 2,048 tokens
- code-davinci-002: 8,000 tokens
- code-cushman-001: 2,048 tokens
-
- Args:
- modelname: The modelname we want to know the context size for.
-
- Returns:
- The maximum context size
-
- Example:
- .. code-block:: python
-
- max_tokens = openai.modelname_to_contextsize("text-davinci-003")
- """
- if modelname == "text-davinci-003":
- return 4097
- elif modelname == "text-curie-001":
- return 2048
- elif modelname == "text-babbage-001":
- return 2048
- elif modelname == "text-ada-001":
- return 2048
- elif modelname == "code-davinci-002":
- return 8000
- elif modelname == "code-cushman-001":
- return 2048
- else:
- return 4097
-
- def max_tokens_for_prompt(self, prompt: str) -> int:
- """Calculate the maximum number of tokens possible to generate for a prompt.
-
- Args:
- prompt: The prompt to pass into the model.
-
- Returns:
- The maximum number of tokens to generate for a prompt.
-
- Example:
- .. code-block:: python
-
- max_tokens = openai.max_token_for_prompt("Tell me a joke.")
- """
- num_tokens = self.get_num_tokens(prompt)
-
- # get max context size for model by name
- max_size = self.modelname_to_contextsize(self.model_name)
- return max_size - num_tokens
-
-
-class OpenAI(BaseOpenAI):
- """Generic OpenAI class that uses model name."""
-
- @property
- def _invocation_params(self) -> Dict[str, Any]:
- return {**{"model": self.model_name}, **super()._invocation_params}
-
-
-class AzureOpenAI(BaseOpenAI):
- """Azure specific OpenAI class that uses deployment name."""
-
- deployment_name: str = ""
- """Deployment name to use."""
-
- @property
- def _identifying_params(self) -> Mapping[str, Any]:
- return {
- **{"deployment_name": self.deployment_name},
- **super()._identifying_params,
- }
-
- @property
- def _invocation_params(self) -> Dict[str, Any]:
- return {**{"engine": self.deployment_name}, **super()._invocation_params}
-
-
-class OpenAIChat(BaseLLM, BaseModel):
- """Wrapper around OpenAI Chat large language models.
-
- To use, you should have the ``openai`` python package installed, and the
- environment variable ``OPENAI_API_KEY`` set with your API key.
-
- Any parameters that are valid to be passed to the openai.create call can be passed
- in, even if not explicitly saved on this class.
-
- Example:
- .. code-block:: python
-
- from langchain.llms import OpenAIChat
- openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
- """
-
- client: Any #: :meta private:
- model_name: str = "gpt-3.5-turbo"
- """Model name to use."""
- model_kwargs: Dict[str, Any] = Field(default_factory=dict)
- """Holds any model parameters valid for `create` call not explicitly specified."""
- openai_api_key: Optional[str] = None
- max_retries: int = 6
- """Maximum number of retries to make when generating."""
- prefix_messages: List = Field(default_factory=list)
- """Series of messages for Chat input."""
- streaming: bool = False
- """Whether to stream the results or not."""
-
- class Config:
- """Configuration for this pydantic object."""
-
- extra = Extra.ignore
-
- @root_validator(pre=True, allow_reuse=True)
- def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
- """Build extra kwargs from additional params that were passed in."""
- all_required_field_names = {field.alias for field in cls.__fields__.values()}
-
- extra = values.get("model_kwargs", {})
- for field_name in list(values):
- if field_name not in all_required_field_names:
- if field_name in extra:
- raise ValueError(f"Found {field_name} supplied twice.")
- extra[field_name] = values.pop(field_name)
- values["model_kwargs"] = extra
- return values
-
- @root_validator(allow_reuse=True)
- def validate_environment(cls, values: Dict) -> Dict:
- """Validate that api key and python package exists in environment."""
- openai_api_key = get_from_dict_or_env(
- values, "openai_api_key", "OPENAI_API_KEY"
- )
- try:
- import openai
-
- openai.api_key = openai_api_key
- except ImportError:
- raise ValueError(
- "Could not import openai python package. "
- "Please it install it with `pip install openai`."
- )
- try:
- values["client"] = openai.ChatCompletion
- except AttributeError:
- raise ValueError(
- "`openai` has no `ChatCompletion` attribute, this is likely "
- "due to an old version of the openai package. Try upgrading it "
- "with `pip install --upgrade openai`."
- )
- return values
-
- @property
- def _default_params(self) -> Dict[str, Any]:
- """Get the default parameters for calling OpenAI API."""
- return self.model_kwargs
-
- def _get_chat_params(
- self, prompts: List[str], stop: Optional[List[str]] = None
- ) -> Tuple:
- if len(prompts) > 1:
- raise ValueError(
- f"OpenAIChat currently only supports single prompt, got {prompts}"
- )
- messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
- params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
- if stop is not None:
- if "stop" in params:
- raise ValueError("`stop` found in both the input and default params.")
- params["stop"] = stop
- return messages, params
-
- def _generate(
- self, prompts: List[str], stop: Optional[List[str]] = None
- ) -> LLMResult:
- messages, params = self._get_chat_params(prompts, stop)
- if self.streaming:
- response = ""
- params["stream"] = True
- for stream_resp in completion_with_retry(self, messages=messages, **params):
- token = stream_resp["choices"][0]["delta"].get("content", "")
- response += token
- self.callback_manager.on_llm_new_token(
- token,
- verbose=self.verbose,
- )
- return LLMResult(
- generations=[[Generation(text=response)]],
- )
- else:
- full_response = completion_with_retry(self, messages=messages, **params)
- return LLMResult(
- generations=[
- [Generation(text=full_response["choices"][0]["message"]["content"])]
- ],
- llm_output={"token_usage": full_response["usage"]},
- )
-
- async def _agenerate(
- self, prompts: List[str], stop: Optional[List[str]] = None
- ) -> LLMResult:
- messages, params = self._get_chat_params(prompts, stop)
- if self.streaming:
- response = ""
- params["stream"] = True
- async for stream_resp in await acompletion_with_retry(
- self, messages=messages, **params
- ):
- token = stream_resp["choices"][0]["delta"].get("content", "")
- response += token
- if self.callback_manager.is_async:
- await self.callback_manager.on_llm_new_token(
- token,
- verbose=self.verbose,
- )
- else:
- self.callback_manager.on_llm_new_token(
- token,
- verbose=self.verbose,
- )
- return LLMResult(
- generations=[[Generation(text=response)]],
- )
- else:
- full_response = await acompletion_with_retry(
- self, messages=messages, **params
- )
- return LLMResult(
- generations=[
- [Generation(text=full_response["choices"][0]["message"]["content"])]
- ],
- llm_output={"token_usage": full_response["usage"]},
- )
-
- @property
- def _identifying_params(self) -> Mapping[str, Any]:
- """Get the identifying parameters."""
- return {**{"model_name": self.model_name}, **self._default_params}
-
- @property
- def _llm_type(self) -> str:
- """Return type of llm."""
- return "openai-chat"
-
-
-class AzureOpenAIChat(OpenAIChat):
- """Azure specific OpenAI class that uses deployment name."""
-
- deployment_name: str = ""
- """Deployment name to use."""
-
- @property
- def _identifying_params(self) -> Mapping[str, Any]:
- return {
- **{"deployment_name": self.deployment_name},
- **super()._identifying_params,
- }
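For context, the deleted `OpenAIChat` wrapper above is a thin layer over the legacy `openai.ChatCompletion` streaming API. Below is a minimal sketch of that streaming pattern, assuming the pre-1.0 `openai` package and a `gpt-3.5-turbo` model name (both assumptions, not taken from the diff):

```python
# Minimal sketch of the streaming loop the wrapper above implements,
# assuming the legacy (pre-1.0) `openai` package and OPENAI_API_KEY being set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = [{"role": "user", "content": "Say hello in one short sentence."}]
response = ""
for chunk in openai.ChatCompletion.create(
    model="gpt-3.5-turbo", messages=messages, stream=True
):
    # Each streamed chunk carries an incremental "delta" that may or may not contain text.
    token = chunk["choices"][0]["delta"].get("content", "")
    response += token

print(response)
```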
diff --git a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/train_with_wav2vec.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/train_with_wav2vec.py
deleted file mode 100644
index 5d6ca4c5a378583fd297e1202522b9dc9c2368de..0000000000000000000000000000000000000000
--- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/train_with_wav2vec.py
+++ /dev/null
@@ -1,399 +0,0 @@
-#!/usr/bin/env python3
-import sys
-import torch
-import logging
-import speechbrain as sb
-from pathlib import Path
-import os
-import torchaudio
-from hyperpyyaml import load_hyperpyyaml
-from speechbrain.tokenizers.SentencePiece import SentencePiece
-from speechbrain.utils.data_utils import undo_padding
-from speechbrain.utils.distributed import run_on_main
-
-"""Recipe for training a sequence-to-sequence ASR system with CommonVoice.
-The system employs a wav2vec2 encoder and a CTC decoder.
-Decoding is performed with greedy decoding (will be extended to beam search).
-
-To run this recipe, do the following:
-> python train_with_wav2vec2.py hparams/train_with_wav2vec2.yaml
-
-With the default hyperparameters, the system employs a pretrained wav2vec2 encoder.
-The wav2vec2 model is pretrained following the model given in the hparams file.
-It may be dependent on the language.
-
-The neural network is trained with CTC on sub-word units estimated with
-Byte Pair Encoding (BPE).
-
-The experiment file is flexible enough to support a large variety of
-different systems. By properly changing the parameter files, you can try
-different encoders, decoders, tokens (e.g., characters instead of BPE),
-training languages (all CommonVoice languages), and many
-other possible variations.
-
-Authors
- * Titouan Parcollet 2021
-"""
-
-logger = logging.getLogger(__name__)
-
-
-# Define training procedure
-class ASR(sb.core.Brain):
- def compute_forward(self, batch, stage):
- """Forward computations from the waveform batches to the output probabilities."""
-
- batch = batch.to(self.device)
- wavs, wav_lens = batch.sig
- wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
- if stage == sb.Stage.TRAIN:
- if hasattr(self.hparams, "augmentation"):
- wavs = self.hparams.augmentation(wavs, wav_lens)
-
- # Forward pass
- feats = self.modules.wav2vec2(wavs, wav_lens)
- x = self.modules.enc(feats)
- logits = self.modules.ctc_lin(x)
- p_ctc = self.hparams.log_softmax(logits)
-
- return p_ctc, wav_lens
-
- def compute_objectives(self, predictions, batch, stage):
- """Computes the loss (CTC) given predictions and targets."""
-
- p_ctc, wav_lens = predictions
-
- ids = batch.id
- tokens, tokens_lens = batch.tokens
-
- loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens)
-
- if stage != sb.Stage.TRAIN:
- predicted_tokens = sb.decoders.ctc_greedy_decode(
- p_ctc, wav_lens, blank_id=self.hparams.blank_index
- )
- # Decode token terms to words
- if self.hparams.use_language_modelling:
- predicted_words = []
- for logs in p_ctc:
- text = decoder.decode(logs.detach().cpu().numpy())
- predicted_words.append(text.split(" "))
- else:
- predicted_words = [
- "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ")
- for utt_seq in predicted_tokens
- ]
- # Convert indices to words
- target_words = [wrd.split(" ") for wrd in batch.wrd]
-
- self.wer_metric.append(ids, predicted_words, target_words)
- self.cer_metric.append(ids, predicted_words, target_words)
-
- return loss
-
- def fit_batch(self, batch):
- """Train the parameters given a single batch in input"""
- should_step = self.step % self.grad_accumulation_factor == 0
- # Managing automatic mixed precision
- # TOFIX: CTC fine-tuning currently is unstable
- # This is certainly due to CTC being done in fp16 instead of fp32
- if self.auto_mix_prec:
- with torch.cuda.amp.autocast():
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
- with self.no_sync(not should_step):
- self.scaler.scale(
- loss / self.grad_accumulation_factor
- ).backward()
- if should_step:
-
- if not self.hparams.wav2vec2.freeze:
- self.scaler.unscale_(self.wav2vec_optimizer)
- self.scaler.unscale_(self.model_optimizer)
- if self.check_gradients(loss):
- if not self.hparams.wav2vec2.freeze:
- if self.optimizer_step >= self.hparams.warmup_steps:
- self.scaler.step(self.wav2vec_optimizer)
- self.scaler.step(self.model_optimizer)
- self.scaler.update()
- self.zero_grad()
- self.optimizer_step += 1
- else:
- # This is mandatory because HF models have a weird behavior with DDP
- # on the forward pass
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
-
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
-
- with self.no_sync(not should_step):
- (loss / self.grad_accumulation_factor).backward()
- if should_step:
- if self.check_gradients(loss):
- if not self.hparams.wav2vec2.freeze:
- if self.optimizer_step >= self.hparams.warmup_steps:
- self.wav2vec_optimizer.step()
- self.model_optimizer.step()
- self.zero_grad()
- self.optimizer_step += 1
-
- self.on_fit_batch_end(batch, outputs, loss, should_step)
- return loss.detach().cpu()
-
- def evaluate_batch(self, batch, stage):
- """Computations needed for validation/test batches"""
- predictions = self.compute_forward(batch, stage=stage)
- with torch.no_grad():
- loss = self.compute_objectives(predictions, batch, stage=stage)
- return loss.detach()
-
- def on_stage_start(self, stage, epoch):
- """Gets called at the beginning of each epoch"""
- if stage != sb.Stage.TRAIN:
- self.cer_metric = self.hparams.cer_computer()
- self.wer_metric = self.hparams.error_rate_computer()
-
- def on_stage_end(self, stage, stage_loss, epoch):
- """Gets called at the end of an epoch."""
- # Compute/store important stats
- stage_stats = {"loss": stage_loss}
- if stage == sb.Stage.TRAIN:
- self.train_stats = stage_stats
- else:
- stage_stats["CER"] = self.cer_metric.summarize("error_rate")
- stage_stats["WER"] = self.wer_metric.summarize("error_rate")
-
- # Perform end-of-iteration things, like annealing, logging, etc.
- if stage == sb.Stage.VALID:
- old_lr_model, new_lr_model = self.hparams.lr_annealing_model(
- stage_stats["loss"]
- )
- old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec(
- stage_stats["loss"]
- )
- sb.nnet.schedulers.update_learning_rate(
- self.model_optimizer, new_lr_model
- )
- if not self.hparams.wav2vec2.freeze:
- sb.nnet.schedulers.update_learning_rate(
- self.wav2vec_optimizer, new_lr_wav2vec
- )
- self.hparams.train_logger.log_stats(
- stats_meta={
- "epoch": epoch,
- "lr_model": old_lr_model,
- "lr_wav2vec": old_lr_wav2vec,
- },
- train_stats=self.train_stats,
- valid_stats=stage_stats,
- )
- self.checkpointer.save_and_keep_only(
- meta={"WER": stage_stats["WER"]}, min_keys=["WER"],
- )
- elif stage == sb.Stage.TEST:
- self.hparams.train_logger.log_stats(
- stats_meta={"Epoch loaded": self.hparams.epoch_counter.current},
- test_stats=stage_stats,
- )
- with open(self.hparams.wer_file, "w") as w:
- self.wer_metric.write_stats(w)
-
- def init_optimizers(self):
- "Initializes the wav2vec2 optimizer and model optimizer"
-
- # If the wav2vec encoder is unfrozen, we create the optimizer
- if not self.hparams.wav2vec2.freeze:
- self.wav2vec_optimizer = self.hparams.wav2vec_opt_class(
- self.modules.wav2vec2.parameters()
- )
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable(
- "wav2vec_opt", self.wav2vec_optimizer
- )
-
- self.model_optimizer = self.hparams.model_opt_class(
- self.hparams.model.parameters()
- )
-
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable("modelopt", self.model_optimizer)
-
- def zero_grad(self, set_to_none=False):
- if not self.hparams.wav2vec2.freeze:
- self.wav2vec_optimizer.zero_grad(set_to_none)
- self.model_optimizer.zero_grad(set_to_none)
-
-
-# Define custom data procedure
-def dataio_prepare(hparams):
- """This function prepares the datasets to be used in the brain class.
- It also defines the data processing pipeline through user-defined functions."""
-
- # 1. Define datasets
- data_folder = hparams["data_folder"]
-
- train_data = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=hparams["train_csv"], replacements={"data_root": data_folder},
- )
-
- if hparams["sorting"] == "ascending":
- # we sort training data to speed up training and get better results.
- train_data = train_data.filtered_sorted(
- sort_key="duration",
- key_max_value={"duration": hparams["avoid_if_longer_than"]},
- )
- # when sorting, do not shuffle in the dataloader; otherwise sorting is pointless
- hparams["dataloader_options"]["shuffle"] = False
-
- elif hparams["sorting"] == "descending":
- train_data = train_data.filtered_sorted(
- sort_key="duration",
- reverse=True,
- key_max_value={"duration": hparams["avoid_if_longer_than"]},
- )
- # when sorting, do not shuffle in the dataloader; otherwise sorting is pointless
- hparams["dataloader_options"]["shuffle"] = False
-
- elif hparams["sorting"] == "random":
- pass
-
- else:
- raise NotImplementedError(
- "sorting must be random, ascending or descending"
- )
-
- valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=hparams["valid_csv"], replacements={"data_root": data_folder},
- )
- # We also sort the validation data so it is faster to validate
- valid_data = valid_data.filtered_sorted(sort_key="duration")
- test_datasets = {}
- for csv_file in hparams["test_csv"]:
- name = Path(csv_file).stem
- test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=csv_file, replacements={"data_root": data_folder}
- )
- test_datasets[name] = test_datasets[name].filtered_sorted(
- sort_key="duration"
- )
-
- datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()]
-
-
- # 2. Define audio pipeline:
- @sb.utils.data_pipeline.takes("wav")
- @sb.utils.data_pipeline.provides("sig")
- def audio_pipeline(wav):
- info = torchaudio.info(wav)
- sig = sb.dataio.dataio.read_audio(wav)
- resampled = torchaudio.transforms.Resample(
- info.sample_rate, hparams["sample_rate"],
- )(sig)
- return resampled
-
- sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline)
- label_encoder = sb.dataio.encoder.CTCTextEncoder()
-
- # 3. Define text pipeline:
- @sb.utils.data_pipeline.takes("wrd")
- @sb.utils.data_pipeline.provides(
- "wrd", "char_list", "tokens_list", "tokens"
- )
- def text_pipeline(wrd):
- yield wrd
- char_list = list(wrd)
- yield char_list
- tokens_list = label_encoder.encode_sequence(char_list)
- yield tokens_list
- tokens = torch.LongTensor(tokens_list)
- yield tokens
-
- sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline)
- lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt")
- special_labels = {
- "blank_label": hparams["blank_index"],
- "unk_label": hparams["unk_index"]
- }
- label_encoder.load_or_create(
- path=lab_enc_file,
- from_didatasets=[train_data],
- output_key="char_list",
- special_labels=special_labels,
- sequence_input=True,
- )
-
- # 4. Set output:
- sb.dataio.dataset.set_output_keys(
- datasets, ["id", "sig", "wrd", "char_list", "tokens"],
- )
- return train_data, valid_data, test_datasets, label_encoder
-
-
-if __name__ == "__main__":
-
- # Load hyperparameters file with command-line overrides
- hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:])
- with open(hparams_file) as fin:
- hparams = load_hyperpyyaml(fin, overrides)
-
- # If --distributed_launch is set, then create the ddp_group
- # with the right communication protocol
- sb.utils.distributed.ddp_init_group(run_opts)
-
-
- # Create experiment directory
- sb.create_experiment_directory(
- experiment_directory=hparams["output_folder"],
- hyperparams_to_save=hparams_file,
- overrides=overrides,
- )
-
- # Due to DDP, we do the preparation ONLY on the main python process
- # Define the tokenizer and load it
- # Create the dataset objects as well as the tokenization and encoding
- train_data, valid_data, test_datasets, label_encoder = dataio_prepare(hparams)
- if hparams["use_language_modelling"]:
- print("using langauge_modeeling")
- from pyctcdecode import build_ctcdecoder
- ind2lab = label_encoder.ind2lab
- print(ind2lab)
- labels = [ind2lab[x] for x in range(len(ind2lab))]
- labels = [""] + labels[1:-1] + ["1"]
- # Replace the blank token with an empty string, as required by pyctcdecode
- print(labels)
- decoder = build_ctcdecoder(
- labels,
- kenlm_model_path=hparams["ngram_lm_path"], # .arpa or .bin
- alpha=0.5, # Default by KenLM
- beta=1.0, # Default by KenLM
- )
- # Trainer initialization
- asr_brain = ASR(
- modules=hparams["modules"],
- hparams=hparams,
- run_opts=run_opts,
- checkpointer=hparams["checkpointer"],
- )
-
- # Adding objects to trainer.
- asr_brain.tokenizer = label_encoder
-
- # Training
- asr_brain.fit(
- asr_brain.hparams.epoch_counter,
- train_data,
- valid_data,
- train_loader_kwargs=hparams["dataloader_options"],
- valid_loader_kwargs=hparams["test_dataloader_options"],
- )
-
- # Test
- for k in test_datasets.keys(): # keys are test_clean, test_other etc
- asr_brain.hparams.wer_file = os.path.join(
- hparams["output_folder"], "wer_{}.txt".format(k)
- )
- asr_brain.evaluate(
- test_datasets[k], test_loader_kwargs=hparams["test_dataloader_options"]
- )
-
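Since the deleted recipe above optionally rescores CTC outputs with a KenLM model through `pyctcdecode`, here is a self-contained sketch of that decoding step; the toy vocabulary and random log-probabilities are illustrative assumptions, not values from the recipe:

```python
# Sketch of the pyctcdecode usage in the recipe above, with a toy character
# vocabulary and random log-probabilities standing in for real CTC posteriors.
import numpy as np
from pyctcdecode import build_ctcdecoder

labels = ["", "a", "b", "c", " "]  # index 0 is the CTC blank, encoded as ""
decoder = build_ctcdecoder(
    labels,
    kenlm_model_path=None,  # pass an .arpa or .bin LM path to enable shallow fusion
    alpha=0.5,
    beta=1.0,
)

# (time_steps, vocab_size) log-probabilities, i.e. one utterance's p_ctc matrix
log_probs = np.log(np.random.dirichlet(np.ones(len(labels)), size=50)).astype(np.float32)
print(decoder.decode(log_probs))
```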
diff --git a/spaces/Salesforce/EDICT/my_diffusers/models/unet_blocks.py b/spaces/Salesforce/EDICT/my_diffusers/models/unet_blocks.py
deleted file mode 100644
index 9e062165357c33d9b2f0bec13a66204c2e7e7833..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_diffusers/models/unet_blocks.py
+++ /dev/null
@@ -1,1481 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import numpy as np
-import torch
-from torch import nn
-
-from .attention import AttentionBlock, SpatialTransformer
-from .resnet import Downsample2D, FirDownsample2D, FirUpsample2D, ResnetBlock2D, Upsample2D
-
-
-def get_down_block(
- down_block_type,
- num_layers,
- in_channels,
- out_channels,
- temb_channels,
- add_downsample,
- resnet_eps,
- resnet_act_fn,
- attn_num_head_channels,
- cross_attention_dim=None,
- downsample_padding=None,
-):
- down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type
- if down_block_type == "DownBlock2D":
- return DownBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- downsample_padding=downsample_padding,
- )
- elif down_block_type == "AttnDownBlock2D":
- return AttnDownBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- downsample_padding=downsample_padding,
- attn_num_head_channels=attn_num_head_channels,
- )
- elif down_block_type == "CrossAttnDownBlock2D":
- if cross_attention_dim is None:
- raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D")
- return CrossAttnDownBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- downsample_padding=downsample_padding,
- cross_attention_dim=cross_attention_dim,
- attn_num_head_channels=attn_num_head_channels,
- )
- elif down_block_type == "SkipDownBlock2D":
- return SkipDownBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- downsample_padding=downsample_padding,
- )
- elif down_block_type == "AttnSkipDownBlock2D":
- return AttnSkipDownBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- downsample_padding=downsample_padding,
- attn_num_head_channels=attn_num_head_channels,
- )
- elif down_block_type == "DownEncoderBlock2D":
- return DownEncoderBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- add_downsample=add_downsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- downsample_padding=downsample_padding,
- )
- raise ValueError(f"{down_block_type} does not exist.")
-
-
-def get_up_block(
- up_block_type,
- num_layers,
- in_channels,
- out_channels,
- prev_output_channel,
- temb_channels,
- add_upsample,
- resnet_eps,
- resnet_act_fn,
- attn_num_head_channels,
- cross_attention_dim=None,
-):
- up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type
- if up_block_type == "UpBlock2D":
- return UpBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- )
- elif up_block_type == "CrossAttnUpBlock2D":
- if cross_attention_dim is None:
- raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlock2D")
- return CrossAttnUpBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- cross_attention_dim=cross_attention_dim,
- attn_num_head_channels=attn_num_head_channels,
- )
- elif up_block_type == "AttnUpBlock2D":
- return AttnUpBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- attn_num_head_channels=attn_num_head_channels,
- )
- elif up_block_type == "SkipUpBlock2D":
- return SkipUpBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- )
- elif up_block_type == "AttnSkipUpBlock2D":
- return AttnSkipUpBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- prev_output_channel=prev_output_channel,
- temb_channels=temb_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- attn_num_head_channels=attn_num_head_channels,
- )
- elif up_block_type == "UpDecoderBlock2D":
- return UpDecoderBlock2D(
- num_layers=num_layers,
- in_channels=in_channels,
- out_channels=out_channels,
- add_upsample=add_upsample,
- resnet_eps=resnet_eps,
- resnet_act_fn=resnet_act_fn,
- )
- raise ValueError(f"{up_block_type} does not exist.")
-
-
-class UNetMidBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- attention_type="default",
- output_scale_factor=1.0,
- **kwargs,
- ):
- super().__init__()
-
- self.attention_type = attention_type
- resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
-
- # there is always at least one resnet
- resnets = [
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- ]
- attentions = []
-
- for _ in range(num_layers):
- attentions.append(
- AttentionBlock(
- in_channels,
- num_head_channels=attn_num_head_channels,
- rescale_output_factor=output_scale_factor,
- eps=resnet_eps,
- num_groups=resnet_groups,
- )
- )
- resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- def forward(self, hidden_states, temb=None, encoder_states=None):
- hidden_states = self.resnets[0](hidden_states, temb)
- for attn, resnet in zip(self.attentions, self.resnets[1:]):
- if self.attention_type == "default":
- hidden_states = attn(hidden_states)
- else:
- hidden_states = attn(hidden_states, encoder_states)
- hidden_states = resnet(hidden_states, temb)
-
- return hidden_states
-
-
-class UNetMidBlock2DCrossAttn(nn.Module):
- def __init__(
- self,
- in_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- attention_type="default",
- output_scale_factor=1.0,
- cross_attention_dim=1280,
- **kwargs,
- ):
- super().__init__()
-
- self.attention_type = attention_type
- self.attn_num_head_channels = attn_num_head_channels
- resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
-
- # there is always at least one resnet
- resnets = [
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- ]
- attentions = []
-
- for _ in range(num_layers):
- attentions.append(
- SpatialTransformer(
- in_channels,
- attn_num_head_channels,
- in_channels // attn_num_head_channels,
- depth=1,
- context_dim=cross_attention_dim,
- )
- )
- resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=in_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- def set_attention_slice(self, slice_size):
- if slice_size is not None and self.attn_num_head_channels % slice_size != 0:
- raise ValueError(
- f"Make sure slice_size {slice_size} is a divisor of "
- f"the number of heads used in cross_attention {self.attn_num_head_channels}"
- )
- if slice_size is not None and slice_size > self.attn_num_head_channels:
- raise ValueError(
- f"Chunk_size {slice_size} has to be smaller or equal to "
- f"the number of heads used in cross_attention {self.attn_num_head_channels}"
- )
-
- for attn in self.attentions:
- attn._set_attention_slice(slice_size)
-
- def forward(self, hidden_states, temb=None, encoder_hidden_states=None):
- hidden_states = self.resnets[0](hidden_states, temb)
- for attn, resnet in zip(self.attentions, self.resnets[1:]):
- hidden_states = attn(hidden_states, encoder_hidden_states)
- hidden_states = resnet(hidden_states, temb)
-
- return hidden_states
-
-
-class AttnDownBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- attention_type="default",
- output_scale_factor=1.0,
- downsample_padding=1,
- add_downsample=True,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.attention_type = attention_type
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- attentions.append(
- AttentionBlock(
- out_channels,
- num_head_channels=attn_num_head_channels,
- rescale_output_factor=output_scale_factor,
- eps=resnet_eps,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- Downsample2D(
- in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op"
- )
- ]
- )
- else:
- self.downsamplers = None
-
- def forward(self, hidden_states, temb=None):
- output_states = ()
-
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(hidden_states)
- output_states += (hidden_states,)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states += (hidden_states,)
-
- return hidden_states, output_states
-
-
-class CrossAttnDownBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- cross_attention_dim=1280,
- attention_type="default",
- output_scale_factor=1.0,
- downsample_padding=1,
- add_downsample=True,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.attention_type = attention_type
- self.attn_num_head_channels = attn_num_head_channels
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- attentions.append(
- SpatialTransformer(
- out_channels,
- attn_num_head_channels,
- out_channels // attn_num_head_channels,
- depth=1,
- context_dim=cross_attention_dim,
- )
- )
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- Downsample2D(
- in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op"
- )
- ]
- )
- else:
- self.downsamplers = None
-
- def set_attention_slice(self, slice_size):
- if slice_size is not None and self.attn_num_head_channels % slice_size != 0:
- raise ValueError(
- f"Make sure slice_size {slice_size} is a divisor of "
- f"the number of heads used in cross_attention {self.attn_num_head_channels}"
- )
- if slice_size is not None and slice_size > self.attn_num_head_channels:
- raise ValueError(
- f"Chunk_size {slice_size} has to be smaller or equal to "
- f"the number of heads used in cross_attention {self.attn_num_head_channels}"
- )
-
- for attn in self.attentions:
- attn._set_attention_slice(slice_size)
-
- def forward(self, hidden_states, temb=None, encoder_hidden_states=None):
- output_states = ()
-
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(hidden_states, context=encoder_hidden_states)
- output_states += (hidden_states,)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states += (hidden_states,)
-
- return hidden_states, output_states
-
-
-class DownBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_downsample=True,
- downsample_padding=1,
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- Downsample2D(
- in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op"
- )
- ]
- )
- else:
- self.downsamplers = None
-
- def forward(self, hidden_states, temb=None):
- output_states = ()
-
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states, temb)
- output_states += (hidden_states,)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- output_states += (hidden_states,)
-
- return hidden_states, output_states
-
-
-class DownEncoderBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_downsample=True,
- downsample_padding=1,
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=None,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- Downsample2D(
- in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op"
- )
- ]
- )
- else:
- self.downsamplers = None
-
- def forward(self, hidden_states):
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states, temb=None)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- return hidden_states
-
-
-class AttnDownEncoderBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- output_scale_factor=1.0,
- add_downsample=True,
- downsample_padding=1,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=None,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- attentions.append(
- AttentionBlock(
- out_channels,
- num_head_channels=attn_num_head_channels,
- rescale_output_factor=output_scale_factor,
- eps=resnet_eps,
- num_groups=resnet_groups,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_downsample:
- self.downsamplers = nn.ModuleList(
- [
- Downsample2D(
- in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op"
- )
- ]
- )
- else:
- self.downsamplers = None
-
- def forward(self, hidden_states):
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states, temb=None)
- hidden_states = attn(hidden_states)
-
- if self.downsamplers is not None:
- for downsampler in self.downsamplers:
- hidden_states = downsampler(hidden_states)
-
- return hidden_states
-
-
-class AttnSkipDownBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- attention_type="default",
- output_scale_factor=np.sqrt(2.0),
- downsample_padding=1,
- add_downsample=True,
- ):
- super().__init__()
- self.attentions = nn.ModuleList([])
- self.resnets = nn.ModuleList([])
-
- self.attention_type = attention_type
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- self.resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min(in_channels // 4, 32),
- groups_out=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- self.attentions.append(
- AttentionBlock(
- out_channels,
- num_head_channels=attn_num_head_channels,
- rescale_output_factor=output_scale_factor,
- eps=resnet_eps,
- )
- )
-
- if add_downsample:
- self.resnet_down = ResnetBlock2D(
- in_channels=out_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- use_nin_shortcut=True,
- down=True,
- kernel="fir",
- )
- self.downsamplers = nn.ModuleList([FirDownsample2D(in_channels, out_channels=out_channels)])
- self.skip_conv = nn.Conv2d(3, out_channels, kernel_size=(1, 1), stride=(1, 1))
- else:
- self.resnet_down = None
- self.downsamplers = None
- self.skip_conv = None
-
- def forward(self, hidden_states, temb=None, skip_sample=None):
- output_states = ()
-
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(hidden_states)
- output_states += (hidden_states,)
-
- if self.downsamplers is not None:
- hidden_states = self.resnet_down(hidden_states, temb)
- for downsampler in self.downsamplers:
- skip_sample = downsampler(skip_sample)
-
- hidden_states = self.skip_conv(skip_sample) + hidden_states
-
- output_states += (hidden_states,)
-
- return hidden_states, output_states, skip_sample
-
-
-class SkipDownBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_pre_norm: bool = True,
- output_scale_factor=np.sqrt(2.0),
- add_downsample=True,
- downsample_padding=1,
- ):
- super().__init__()
- self.resnets = nn.ModuleList([])
-
- for i in range(num_layers):
- in_channels = in_channels if i == 0 else out_channels
- self.resnets.append(
- ResnetBlock2D(
- in_channels=in_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min(in_channels // 4, 32),
- groups_out=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- if add_downsample:
- self.resnet_down = ResnetBlock2D(
- in_channels=out_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- use_nin_shortcut=True,
- down=True,
- kernel="fir",
- )
- self.downsamplers = nn.ModuleList([FirDownsample2D(in_channels, out_channels=out_channels)])
- self.skip_conv = nn.Conv2d(3, out_channels, kernel_size=(1, 1), stride=(1, 1))
- else:
- self.resnet_down = None
- self.downsamplers = None
- self.skip_conv = None
-
- def forward(self, hidden_states, temb=None, skip_sample=None):
- output_states = ()
-
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states, temb)
- output_states += (hidden_states,)
-
- if self.downsamplers is not None:
- hidden_states = self.resnet_down(hidden_states, temb)
- for downsampler in self.downsamplers:
- skip_sample = downsampler(skip_sample)
-
- hidden_states = self.skip_conv(skip_sample) + hidden_states
-
- output_states += (hidden_states,)
-
- return hidden_states, output_states, skip_sample
-
-
-class AttnUpBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- prev_output_channel: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attention_type="default",
- attn_num_head_channels=1,
- output_scale_factor=1.0,
- add_upsample=True,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.attention_type = attention_type
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- resnets.append(
- ResnetBlock2D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- attentions.append(
- AttentionBlock(
- out_channels,
- num_head_channels=attn_num_head_channels,
- rescale_output_factor=output_scale_factor,
- eps=resnet_eps,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)])
- else:
- self.upsamplers = None
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
- for resnet, attn in zip(self.resnets, self.attentions):
-
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(hidden_states)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states)
-
- return hidden_states
-
-
-class CrossAttnUpBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- prev_output_channel: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- cross_attention_dim=1280,
- attention_type="default",
- output_scale_factor=1.0,
- downsample_padding=1,
- add_upsample=True,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- self.attention_type = attention_type
- self.attn_num_head_channels = attn_num_head_channels
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- resnets.append(
- ResnetBlock2D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- attentions.append(
- SpatialTransformer(
- out_channels,
- attn_num_head_channels,
- out_channels // attn_num_head_channels,
- depth=1,
- context_dim=cross_attention_dim,
- )
- )
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)])
- else:
- self.upsamplers = None
-
- def set_attention_slice(self, slice_size):
- if slice_size is not None and self.attn_num_head_channels % slice_size != 0:
- raise ValueError(
- f"Make sure slice_size {slice_size} is a divisor of "
- f"the number of heads used in cross_attention {self.attn_num_head_channels}"
- )
- if slice_size is not None and slice_size > self.attn_num_head_channels:
- raise ValueError(
- f"Chunk_size {slice_size} has to be smaller or equal to "
- f"the number of heads used in cross_attention {self.attn_num_head_channels}"
- )
-
- for attn in self.attentions:
- attn._set_attention_slice(slice_size)
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None, encoder_hidden_states=None):
- for resnet, attn in zip(self.resnets, self.attentions):
-
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- hidden_states = resnet(hidden_states, temb)
- hidden_states = attn(hidden_states, context=encoder_hidden_states)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states)
-
- return hidden_states
-
-
-class UpBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- prev_output_channel: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_upsample=True,
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- resnets.append(
- ResnetBlock2D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)])
- else:
- self.upsamplers = None
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None):
- for resnet in self.resnets:
-
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- hidden_states = resnet(hidden_states, temb)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states)
-
- return hidden_states
-
-
-class UpDecoderBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- output_scale_factor=1.0,
- add_upsample=True,
- ):
- super().__init__()
- resnets = []
-
- for i in range(num_layers):
- input_channels = in_channels if i == 0 else out_channels
-
- resnets.append(
- ResnetBlock2D(
- in_channels=input_channels,
- out_channels=out_channels,
- temb_channels=None,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)])
- else:
- self.upsamplers = None
-
- def forward(self, hidden_states):
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states, temb=None)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states)
-
- return hidden_states
-
-
-class AttnUpDecoderBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_groups: int = 32,
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- output_scale_factor=1.0,
- add_upsample=True,
- ):
- super().__init__()
- resnets = []
- attentions = []
-
- for i in range(num_layers):
- input_channels = in_channels if i == 0 else out_channels
-
- resnets.append(
- ResnetBlock2D(
- in_channels=input_channels,
- out_channels=out_channels,
- temb_channels=None,
- eps=resnet_eps,
- groups=resnet_groups,
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
- attentions.append(
- AttentionBlock(
- out_channels,
- num_head_channels=attn_num_head_channels,
- rescale_output_factor=output_scale_factor,
- eps=resnet_eps,
- num_groups=resnet_groups,
- )
- )
-
- self.attentions = nn.ModuleList(attentions)
- self.resnets = nn.ModuleList(resnets)
-
- if add_upsample:
- self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)])
- else:
- self.upsamplers = None
-
- def forward(self, hidden_states):
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states, temb=None)
- hidden_states = attn(hidden_states)
-
- if self.upsamplers is not None:
- for upsampler in self.upsamplers:
- hidden_states = upsampler(hidden_states)
-
- return hidden_states
-
-
-class AttnSkipUpBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- prev_output_channel: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_pre_norm: bool = True,
- attn_num_head_channels=1,
- attention_type="default",
- output_scale_factor=np.sqrt(2.0),
- upsample_padding=1,
- add_upsample=True,
- ):
- super().__init__()
- self.attentions = nn.ModuleList([])
- self.resnets = nn.ModuleList([])
-
- self.attention_type = attention_type
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- self.resnets.append(
- ResnetBlock2D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min((resnet_in_channels + res_skip_channels) // 4, 32),
- groups_out=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.attentions.append(
- AttentionBlock(
- out_channels,
- num_head_channels=attn_num_head_channels,
- rescale_output_factor=output_scale_factor,
- eps=resnet_eps,
- )
- )
-
- self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels)
- if add_upsample:
- self.resnet_up = ResnetBlock2D(
- in_channels=out_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min(out_channels // 4, 32),
- groups_out=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- use_nin_shortcut=True,
- up=True,
- kernel="fir",
- )
- self.skip_conv = nn.Conv2d(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- self.skip_norm = torch.nn.GroupNorm(
- num_groups=min(out_channels // 4, 32), num_channels=out_channels, eps=resnet_eps, affine=True
- )
- self.act = nn.SiLU()
- else:
- self.resnet_up = None
- self.skip_conv = None
- self.skip_norm = None
- self.act = None
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None):
- for resnet in self.resnets:
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- hidden_states = resnet(hidden_states, temb)
-
- hidden_states = self.attentions[0](hidden_states)
-
- if skip_sample is not None:
- skip_sample = self.upsampler(skip_sample)
- else:
- skip_sample = 0
-
- if self.resnet_up is not None:
- skip_sample_states = self.skip_norm(hidden_states)
- skip_sample_states = self.act(skip_sample_states)
- skip_sample_states = self.skip_conv(skip_sample_states)
-
- skip_sample = skip_sample + skip_sample_states
-
- hidden_states = self.resnet_up(hidden_states, temb)
-
- return hidden_states, skip_sample
-
-
-class SkipUpBlock2D(nn.Module):
- def __init__(
- self,
- in_channels: int,
- prev_output_channel: int,
- out_channels: int,
- temb_channels: int,
- dropout: float = 0.0,
- num_layers: int = 1,
- resnet_eps: float = 1e-6,
- resnet_time_scale_shift: str = "default",
- resnet_act_fn: str = "swish",
- resnet_pre_norm: bool = True,
- output_scale_factor=np.sqrt(2.0),
- add_upsample=True,
- upsample_padding=1,
- ):
- super().__init__()
- self.resnets = nn.ModuleList([])
-
- for i in range(num_layers):
- res_skip_channels = in_channels if (i == num_layers - 1) else out_channels
- resnet_in_channels = prev_output_channel if i == 0 else out_channels
-
- self.resnets.append(
- ResnetBlock2D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min((resnet_in_channels + res_skip_channels) // 4, 32),
- groups_out=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- )
- )
-
- self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels)
- if add_upsample:
- self.resnet_up = ResnetBlock2D(
- in_channels=out_channels,
- out_channels=out_channels,
- temb_channels=temb_channels,
- eps=resnet_eps,
- groups=min(out_channels // 4, 32),
- groups_out=min(out_channels // 4, 32),
- dropout=dropout,
- time_embedding_norm=resnet_time_scale_shift,
- non_linearity=resnet_act_fn,
- output_scale_factor=output_scale_factor,
- pre_norm=resnet_pre_norm,
- use_nin_shortcut=True,
- up=True,
- kernel="fir",
- )
- self.skip_conv = nn.Conv2d(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- self.skip_norm = torch.nn.GroupNorm(
- num_groups=min(out_channels // 4, 32), num_channels=out_channels, eps=resnet_eps, affine=True
- )
- self.act = nn.SiLU()
- else:
- self.resnet_up = None
- self.skip_conv = None
- self.skip_norm = None
- self.act = None
-
- def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None):
- for resnet in self.resnets:
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
-
- hidden_states = resnet(hidden_states, temb)
-
- if skip_sample is not None:
- skip_sample = self.upsampler(skip_sample)
- else:
- skip_sample = 0
-
- if self.resnet_up is not None:
- skip_sample_states = self.skip_norm(hidden_states)
- skip_sample_states = self.act(skip_sample_states)
- skip_sample_states = self.skip_conv(skip_sample_states)
-
- skip_sample = skip_sample + skip_sample_states
-
- hidden_states = self.resnet_up(hidden_states, temb)
-
- return hidden_states, skip_sample
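To make the role of the factory functions above concrete, here is a hypothetical smoke test for `get_down_block`; it assumes the deleted module were still importable under its original path, and the tensor shapes are arbitrary examples:

```python
# Hypothetical smoke test for the down-block factory defined above; the import
# path and all shapes are assumptions for illustration only.
import torch
from my_diffusers.models.unet_blocks import get_down_block

down = get_down_block(
    "DownBlock2D",
    num_layers=2,
    in_channels=32,
    out_channels=64,
    temb_channels=128,
    add_downsample=True,
    resnet_eps=1e-6,
    resnet_act_fn="swish",
    attn_num_head_channels=None,  # unused by plain DownBlock2D
    downsample_padding=1,
)

sample = torch.randn(1, 32, 16, 16)  # (batch, channels, height, width)
temb = torch.randn(1, 128)           # time embedding with `temb_channels` features
hidden, skips = down(sample, temb)
print(hidden.shape)                  # spatially downsampled feature map
print([s.shape for s in skips])      # skip connections consumed later by the up blocks
```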
diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/hub_utils.py b/spaces/Salesforce/EDICT/my_half_diffusers/hub_utils.py
deleted file mode 100644
index c07329e36fe7a8826b0f1fb22396819b220e1b58..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_half_diffusers/hub_utils.py
+++ /dev/null
@@ -1,197 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import os
-import shutil
-from pathlib import Path
-from typing import Optional
-
-from huggingface_hub import HfFolder, Repository, whoami
-
-from .pipeline_utils import DiffusionPipeline
-from .utils import is_modelcards_available, logging
-
-
-if is_modelcards_available():
- from modelcards import CardData, ModelCard
-
-
-logger = logging.get_logger(__name__)
-
-
-MODEL_CARD_TEMPLATE_PATH = Path(__file__).parent / "utils" / "model_card_template.md"
-
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-
-def init_git_repo(args, at_init: bool = False):
- """
- Args:
- Initializes a git repo in `args.hub_model_id`.
- at_init (`bool`, *optional*, defaults to `False`):
- Whether this function is called before any training or not. If `self.args.overwrite_output_dir` is `True`
- and `at_init` is `True`, the path to the repo (which is `self.args.output_dir`) might be wiped out.
- """
- if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]:
- return
- hub_token = args.hub_token if hasattr(args, "hub_token") else None
- use_auth_token = True if hub_token is None else hub_token
- if not hasattr(args, "hub_model_id") or args.hub_model_id is None:
- repo_name = Path(args.output_dir).absolute().name
- else:
- repo_name = args.hub_model_id
- if "/" not in repo_name:
- repo_name = get_full_repo_name(repo_name, token=hub_token)
-
- try:
- repo = Repository(
- args.output_dir,
- clone_from=repo_name,
- use_auth_token=use_auth_token,
- private=args.hub_private_repo,
- )
- except EnvironmentError:
- if args.overwrite_output_dir and at_init:
- # Try again after wiping output_dir
- shutil.rmtree(args.output_dir)
- repo = Repository(
- args.output_dir,
- clone_from=repo_name,
- use_auth_token=use_auth_token,
- )
- else:
- raise
-
- repo.git_pull()
-
- # By default, ignore the checkpoint folders
- if not os.path.exists(os.path.join(args.output_dir, ".gitignore")):
- with open(os.path.join(args.output_dir, ".gitignore"), "w", encoding="utf-8") as writer:
- writer.writelines(["checkpoint-*/"])
-
- return repo
-
-
-def push_to_hub(
- args,
- pipeline: DiffusionPipeline,
- repo: Repository,
- commit_message: Optional[str] = "End of training",
- blocking: bool = True,
- **kwargs,
-) -> str:
- """
- Parameters:
- Upload *self.model* and *self.tokenizer* to the 🤗 model hub on the repo *self.args.hub_model_id*.
- commit_message (`str`, *optional*, defaults to `"End of training"`):
- Message to commit while pushing.
- blocking (`bool`, *optional*, defaults to `True`):
- Whether the function should return only when the `git push` has finished.
- kwargs:
- Additional keyword arguments passed along to [`create_model_card`].
- Returns:
- The url of the commit of your model in the given repository if `blocking=False`, a tuple with the url of the
- commit and an object to track the progress of the commit if `blocking=True`
- """
-
- if not hasattr(args, "hub_model_id") or args.hub_model_id is None:
- model_name = Path(args.output_dir).name
- else:
- model_name = args.hub_model_id.split("/")[-1]
-
- output_dir = args.output_dir
- os.makedirs(output_dir, exist_ok=True)
- logger.info(f"Saving pipeline checkpoint to {output_dir}")
- pipeline.save_pretrained(output_dir)
-
- # Only push from one node.
- if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]:
- return
-
- # Cancel any async push in progress if blocking=True. The commits will all be pushed together.
- if (
- blocking
- and len(repo.command_queue) > 0
- and repo.command_queue[-1] is not None
- and not repo.command_queue[-1].is_done
- ):
- repo.command_queue[-1]._process.kill()
-
- git_head_commit_url = repo.push_to_hub(commit_message=commit_message, blocking=blocking, auto_lfs_prune=True)
- # push separately the model card to be independent from the rest of the model
- create_model_card(args, model_name=model_name)
- try:
- repo.push_to_hub(commit_message="update model card README.md", blocking=blocking, auto_lfs_prune=True)
- except EnvironmentError as exc:
- logger.error(f"Error pushing update to the model card. Please read logs and retry.\n${exc}")
-
- return git_head_commit_url
-
-
-def create_model_card(args, model_name):
-    if not is_modelcards_available():
- raise ValueError(
- "Please make sure to have `modelcards` installed when using the `create_model_card` function. You can"
- " install the package with `pip install modelcards`."
- )
-
- if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]:
- return
-
- hub_token = args.hub_token if hasattr(args, "hub_token") else None
- repo_name = get_full_repo_name(model_name, token=hub_token)
-
- model_card = ModelCard.from_template(
- card_data=CardData( # Card metadata object that will be converted to YAML block
- language="en",
- license="apache-2.0",
- library_name="diffusers",
- tags=[],
- datasets=args.dataset_name,
- metrics=[],
- ),
- template_path=MODEL_CARD_TEMPLATE_PATH,
- model_name=model_name,
- repo_name=repo_name,
- dataset_name=args.dataset_name if hasattr(args, "dataset_name") else None,
- learning_rate=args.learning_rate,
- train_batch_size=args.train_batch_size,
- eval_batch_size=args.eval_batch_size,
- gradient_accumulation_steps=args.gradient_accumulation_steps
- if hasattr(args, "gradient_accumulation_steps")
- else None,
- adam_beta1=args.adam_beta1 if hasattr(args, "adam_beta1") else None,
- adam_beta2=args.adam_beta2 if hasattr(args, "adam_beta2") else None,
- adam_weight_decay=args.adam_weight_decay if hasattr(args, "adam_weight_decay") else None,
- adam_epsilon=args.adam_epsilon if hasattr(args, "adam_epsilon") else None,
- lr_scheduler=args.lr_scheduler if hasattr(args, "lr_scheduler") else None,
- lr_warmup_steps=args.lr_warmup_steps if hasattr(args, "lr_warmup_steps") else None,
- ema_inv_gamma=args.ema_inv_gamma if hasattr(args, "ema_inv_gamma") else None,
- ema_power=args.ema_power if hasattr(args, "ema_power") else None,
- ema_max_decay=args.ema_max_decay if hasattr(args, "ema_max_decay") else None,
- mixed_precision=args.mixed_precision,
- )
-
- card_path = os.path.join(args.output_dir, "README.md")
- model_card.save(card_path)
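
The helpers in this deleted `hub_utils.py` (`init_git_repo`, `push_to_hub`, `create_model_card`) are driven by a training script's argument namespace. The following is a minimal, hypothetical wiring sketch; the `args` field names are assumptions taken from the attributes the functions read above, and the training loop itself is elided.

```python
# Hypothetical wiring sketch, not part of the original file. The `args` fields
# mirror the attributes read by init_git_repo/push_to_hub; in practice `args`
# would also carry the training hyperparameters consumed by create_model_card.
from argparse import Namespace

args = Namespace(
    output_dir="sd-finetune",    # local clone / save directory (placeholder path)
    hub_model_id=None,           # None -> repo name falls back to the output_dir name
    hub_token=None,              # None -> the cached HfFolder token is used
    hub_private_repo=False,
    overwrite_output_dir=False,
)

# repo = init_git_repo(args, at_init=True)   # clone or create the Hub repo in output_dir
# ... training loop producing `pipeline`, a DiffusionPipeline ...
# push_to_hub(args, pipeline, repo, commit_message="End of training")
```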
diff --git a/spaces/SamerKharboush/chatGPT-Sam-Turbo/assets/custom.js b/spaces/SamerKharboush/chatGPT-Sam-Turbo/assets/custom.js
deleted file mode 100644
index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000
--- a/spaces/SamerKharboush/chatGPT-Sam-Turbo/assets/custom.js
+++ /dev/null
@@ -1 +0,0 @@
-// custom javascript here
\ No newline at end of file
diff --git a/spaces/ShiwenNi/ChatResponse/app.py b/spaces/ShiwenNi/ChatResponse/app.py
deleted file mode 100644
index 62684313a6fd7b6a492fa901d1ca9928b9c79d86..0000000000000000000000000000000000000000
--- a/spaces/ShiwenNi/ChatResponse/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import numpy as np
-import os
-import re
-import datetime
-import time
-import openai, tenacity
-import argparse
-import configparser
-import json
-import tiktoken
-from get_paper_from_pdf import Paper
-import gradio
-
-# Define the Response class
-class Response:
-    # Initialization method: set the attributes
- def __init__(self, api, comment, language):
- self.api = api
- self.comment = comment
- self.language = language
- self.max_token_num = 14096
- self.encoding = tiktoken.get_encoding("gpt2")
-
-
- @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
- stop=tenacity.stop_after_attempt(5),
- reraise=True)
- def chat_response(self, comment):
- openai.api_key = self.api
- response_prompt_token = 1000
- text_token = len(self.encoding.encode(comment))
- input_text_index = int(len(comment)*(self.max_token_num-response_prompt_token)/text_token)
- input_text = "This is the review comments:" + comment[:input_text_index]
- messages=[
- {"role": "system", "content": """You are the author, you submitted a paper, and the reviewers gave the review comments.
- Please reply with what we have done, not what we will do.
- You need to extract questions from the review comments one by one, and then respond point-to-point to the reviewers’ concerns.
- You need to determine for yourself how many reviewers there are and how many questions each reviewer has.
- Must be output in {}. Follow the format of the output later:
- - Response to reviewers
- #1 reviewer
- Concern #1: xxxx
- Author response: xxxxx
- Concern #2: xxxx
- Author response: xxxxx
- ...
- #2 reviewer
- Concern #1: xxxx
- Author response: xxxxx
- Concern #2: xxxx
- Author response: xxxxx
- ...
- #3 reviewer
- Concern #1: xxxx
- Author response: xxxxx
- Concern #2: xxxx
- Author response: xxxxx
- ...
-
- """.format(self.language)
-
- },
- {"role": "user", "content": input_text},
- ]
- try:
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo-16k",
- messages=messages,
- )
- result = ''
- for choice in response.choices:
- result += choice.message.content
- usage = response.usage.total_tokens
- except Exception as e:
-            # Handle any other exceptions
-            result = "非常抱歉>_<,发生了一个错误:" + str(e)
- usage = 'xxxxx'
- print("********"*10)
- print(result)
- print("********"*10)
- return result, usage
-
-
-
-def main(api, comment, language):
- start_time = time.time()
- if not api or not comment:
- return "请输入API-key以及审稿意见!"
- else:
- Response1 = Response(api, comment, language)
-        # Determine whether the input is a path or a file:
- response, total_token_used = Response1.chat_response(comment)
- time_used = time.time() - start_time
- output2 ="使用token数:"+ str(total_token_used)+"\n花费时间:"+ str(round(time_used, 2)) +"秒"
- return response, output2
-
-
-########################################################################################################
-# Title
-title = "🤖ChatResponse🤖"
-# Description
-
-description = '''
-'''
-
-# Create the Gradio interface
-inp = [gradio.inputs.Textbox(label="请输入你的API-key(sk开头的字符串)",
- default="",
- type='password'),
- gradio.inputs.Textbox(lines=5,
- label="请输入要回复的审稿意见",
- default=""
- ),
-       gradio.inputs.Radio(choices=["English", "Chinese", "French", "German", "Japanese"],
- default="English",
- label="选择输出语言"),
-]
-
-chat_Response_gui = gradio.Interface(fn=main,
- inputs=inp,
- outputs = [gradio.Textbox(lines=11, label="回复结果"), gradio.Textbox(lines=2, label="资源统计")],
- title=title,
- description=description)
-
-# Start server
-chat_Response_gui.launch(quiet=True, show_api=False)
\ No newline at end of file
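
The most technical step in the Space above is the token budgeting in `chat_response`: the review text is truncated proportionally so that its estimated token count fits the context window minus the space reserved for the reply. A standalone sketch of that truncation, using the same constants but a hypothetical helper name, might look like this:

```python
# Standalone sketch of the proportional truncation used in Response.chat_response.
# `truncate_for_budget` is a hypothetical helper name; the constants mirror the class above.
import tiktoken

def truncate_for_budget(comment: str, max_token_num: int = 14096,
                        response_prompt_token: int = 1000) -> str:
    encoding = tiktoken.get_encoding("gpt2")
    text_token = len(encoding.encode(comment))
    budget = max_token_num - response_prompt_token
    if text_token <= budget:
        return comment
    # scale the character length by the ratio of allowed tokens to actual tokens
    cut = int(len(comment) * budget / text_token)
    return comment[:cut]

print(len(truncate_for_budget("very long review comment " * 2000)))
```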
diff --git a/spaces/SpacesExamples/fastapi_t5/README.md b/spaces/SpacesExamples/fastapi_t5/README.md
deleted file mode 100644
index 3fc458213f777f48d7b806f18605225101a518b1..0000000000000000000000000000000000000000
--- a/spaces/SpacesExamples/fastapi_t5/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Fastapi T5
-emoji: 🐢
-colorFrom: purple
-colorTo: blue
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/data/audio_utils.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/data/audio_utils.py
deleted file mode 100644
index 565b63a4ef78dcd802dda932b42ebe518ffe7397..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/data/audio_utils.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Various utilities for audio convertion (pcm format, sample rate and channels),
-and volume normalization."""
-import sys
-import typing as tp
-
-import julius
-import torch
-import torchaudio
-
-
-def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor:
- """Convert audio to the given number of channels.
-
- Args:
- wav (torch.Tensor): Audio wave of shape [B, C, T].
- channels (int): Expected number of channels as output.
- Returns:
- torch.Tensor: Downmixed or unchanged audio wave [B, C, T].
- """
- *shape, src_channels, length = wav.shape
- if src_channels == channels:
- pass
- elif channels == 1:
- # Case 1:
- # The caller asked 1-channel audio, and the stream has multiple
- # channels, downmix all channels.
- wav = wav.mean(dim=-2, keepdim=True)
- elif src_channels == 1:
- # Case 2:
- # The caller asked for multiple channels, but the input file has
- # a single channel, replicate the audio over all channels.
- wav = wav.expand(*shape, channels, length)
- elif src_channels >= channels:
- # Case 3:
- # The caller asked for multiple channels, and the input file has
- # more channels than requested. In that case return the first channels.
- wav = wav[..., :channels, :]
- else:
- # Case 4: What is a reasonable choice here?
-        raise ValueError('The audio file has fewer channels than requested but is not mono.')
- return wav
-
-
-def convert_audio(wav: torch.Tensor, from_rate: float,
- to_rate: float, to_channels: int) -> torch.Tensor:
- """Convert audio to new sample rate and number of audio channels."""
- wav = julius.resample_frac(wav, int(from_rate), int(to_rate))
- wav = convert_audio_channels(wav, to_channels)
- return wav
-
-
-def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, energy_floor: float = 2e-3):
- """Normalize an input signal to a user loudness in dB LKFS.
- Audio loudness is defined according to the ITU-R BS.1770-4 recommendation.
-
- Args:
- wav (torch.Tensor): Input multichannel audio data.
- sample_rate (int): Sample rate.
- loudness_headroom_db (float): Target loudness of the output in dB LUFS.
- loudness_compressor (bool): Uses tanh for soft clipping.
- energy_floor (float): anything below that RMS level will not be rescaled.
- Returns:
- torch.Tensor: Loudness normalized output data.
- """
- energy = wav.pow(2).mean().sqrt().item()
- if energy < energy_floor:
- return wav
- transform = torchaudio.transforms.Loudness(sample_rate)
- input_loudness_db = transform(wav).item()
- # calculate the gain needed to scale to the desired loudness level
- delta_loudness = -loudness_headroom_db - input_loudness_db
- gain = 10.0 ** (delta_loudness / 20.0)
- output = gain * wav
- if loudness_compressor:
- output = torch.tanh(output)
- assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt())
- return output
-
-
-def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None:
- """Utility function to clip the audio with logging if specified."""
- max_scale = wav.abs().max()
- if log_clipping and max_scale > 1:
- clamp_prob = (wav.abs() > 1).float().mean().item()
- print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):",
- clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr)
-    # clamp in place so the caller's tensor is actually clipped
-    wav.clamp_(-1, 1)
-
-
-def normalize_audio(wav: torch.Tensor, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, log_clipping: bool = False,
- sample_rate: tp.Optional[int] = None,
- stem_name: tp.Optional[str] = None) -> torch.Tensor:
- """Normalize the audio according to the prescribed strategy (see after).
-
- Args:
- wav (torch.Tensor): Audio data.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): If True, uses tanh based soft clipping.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- sample_rate (int): Sample rate for the audio data (required for loudness).
- stem_name (str, optional): Stem name for clipping logging.
- Returns:
- torch.Tensor: Normalized audio.
- """
- scale_peak = 10 ** (-peak_clip_headroom_db / 20)
- scale_rms = 10 ** (-rms_headroom_db / 20)
- if strategy == 'peak':
- rescaling = (scale_peak / wav.abs().max())
- if normalize or rescaling < 1:
- wav = wav * rescaling
- elif strategy == 'clip':
- wav = wav.clamp(-scale_peak, scale_peak)
- elif strategy == 'rms':
- mono = wav.mean(dim=0)
- rescaling = scale_rms / mono.pow(2).mean().sqrt()
- if normalize or rescaling < 1:
- wav = wav * rescaling
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- elif strategy == 'loudness':
- assert sample_rate is not None, "Loudness normalization requires sample rate."
- wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor)
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- else:
- assert wav.abs().max() < 1
- assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'"
- return wav
-
-
-def f32_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to float 32 bits PCM format.
- """
- if wav.dtype.is_floating_point:
- return wav
- elif wav.dtype == torch.int16:
- return wav.float() / 2**15
- elif wav.dtype == torch.int32:
- return wav.float() / 2**31
- raise ValueError(f"Unsupported wav dtype: {wav.dtype}")
-
-
-def i16_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to int 16 bits PCM format.
-
-    ..Warning:: There exist many formulas for doing this conversion. None are perfect
-    due to the asymmetry of the int16 range. One either has possible clipping, a DC offset,
-    or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom,
-    it is possible that `i16_pcm(f32_pcm(wav)) != wav`.
- """
- if wav.dtype.is_floating_point:
- assert wav.abs().max() <= 1
- candidate = (wav * 2 ** 15).round()
- if candidate.max() >= 2 ** 15: # clipping would occur
- candidate = (wav * (2 ** 15 - 1)).round()
- return candidate.short()
- else:
- assert wav.dtype == torch.int16
- return wav
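
As a quick orientation for the module above, the headroom arguments are plain decibel values converted to linear gain with 10^(-dB/20). The sketch below shows a typical resample-then-peak-normalize call; the import path is taken from the diff header, and the calls are left commented out since the module is not importable from this diff on its own.

```python
# Illustrative usage sketch for the module above (not part of the original file).
import torch

# from audiocraft.data.audio_utils import convert_audio, normalize_audio  # path from the diff header

wav = torch.randn(2, 44100)   # [C, T]: 1 second of stereo noise at 44.1 kHz
# wav = convert_audio(wav, from_rate=44100, to_rate=32000, to_channels=1)
# wav = normalize_audio(wav, strategy='peak', peak_clip_headroom_db=1)

# The 'peak' strategy rescales by (10 ** (-peak_clip_headroom_db / 20)) / wav.abs().max(),
# so 1 dB of headroom targets a peak of roughly 0.891:
print(10 ** (-1 / 20))
```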
diff --git a/spaces/Sudhir87/Intervupro.ai/README.md b/spaces/Sudhir87/Intervupro.ai/README.md
deleted file mode 100644
index 6ebe095215a96395842c49e57d91463444412468..0000000000000000000000000000000000000000
--- a/spaces/Sudhir87/Intervupro.ai/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: IntervuPro.ai
-
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
----
-IntervuPro.Ai is an innovative tool designed to assist individuals in preparing for job interviews using the power of GPT-3.5. With its intuitive interface, users can choose from three different modes to cater to their specific needs.
-
-Prepare for a Specific Interview: Users can simulate a job interviewer for a particular company, position, and round. IntervuPro.Ai provides detailed characteristics for both the job interview and the specific company's interview. It offers valuable insights into what to expect and how to approach the interview process.
-
-Understand the Requirements of a Specific Position: For those seeking to understand the job requirements better, IntervuPro.Ai acts as a talent recruiter. Users can input the position they are interested in, and the tool provides comprehensive behavioral and technical requirements for the position.
-
-Analyze Resume: To gain a competitive edge, users can submit their resume, and IntervuPro.Ai again acts as a talent recruiter. It assesses the resume against a given position, highlighting its strengths and weaknesses, and offers improvement advice to make the resume more relevant to the position's requirements.
-
-Powered by OpenAI's GPT-3.5 model, IntervuPro.Ai leverages natural language processing to generate prompt-based responses tailored to the users' specific inquiries. It provides valuable and personalized feedback, ensuring individuals are better prepared and confident for their upcoming interviews.
diff --git a/spaces/SumDimDimSum/yulet1de-hentaidiffusion/README.md b/spaces/SumDimDimSum/yulet1de-hentaidiffusion/README.md
deleted file mode 100644
index 1b4ac8beb84c507542cd115ebe41c5b5c0bdac3f..0000000000000000000000000000000000000000
--- a/spaces/SumDimDimSum/yulet1de-hentaidiffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Yulet1de Hentaidiffusion
-emoji: 🔥
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/WalImageFile.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/WalImageFile.py
deleted file mode 100644
index e4f47aa04bc148f3ff151bec5595f8626833b938..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/WalImageFile.py
+++ /dev/null
@@ -1,123 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# WAL file handling
-#
-# History:
-# 2003-04-23 fl created
-#
-# Copyright (c) 2003 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-"""
-This reader is based on the specification available from:
-https://www.flipcode.com/archives/Quake_2_BSP_File_Format.shtml
-and has been tested with a few sample files found using google.
-
-.. note::
- This format cannot be automatically recognized, so the reader
- is not registered for use with :py:func:`PIL.Image.open()`.
- To open a WAL file, use the :py:func:`PIL.WalImageFile.open()` function instead.
-"""
-
-from . import Image, ImageFile
-from ._binary import i32le as i32
-
-
-class WalImageFile(ImageFile.ImageFile):
- format = "WAL"
- format_description = "Quake2 Texture"
-
- def _open(self):
- self.mode = "P"
-
- # read header fields
- header = self.fp.read(32 + 24 + 32 + 12)
- self._size = i32(header, 32), i32(header, 36)
- Image._decompression_bomb_check(self.size)
-
- # load pixel data
- offset = i32(header, 40)
- self.fp.seek(offset)
-
- # strings are null-terminated
- self.info["name"] = header[:32].split(b"\0", 1)[0]
- next_name = header[56 : 56 + 32].split(b"\0", 1)[0]
- if next_name:
- self.info["next_name"] = next_name
-
- def load(self):
- if not self.im:
- self.im = Image.core.new(self.mode, self.size)
- self.frombytes(self.fp.read(self.size[0] * self.size[1]))
- self.putpalette(quake2palette)
- return Image.Image.load(self)
-
-
-def open(filename):
- """
- Load texture from a Quake2 WAL texture file.
-
- By default, a Quake2 standard palette is attached to the texture.
- To override the palette, use the :py:func:`PIL.Image.Image.putpalette()` method.
-
- :param filename: WAL file name, or an opened file handle.
- :returns: An image instance.
- """
- return WalImageFile(filename)
-
-
-quake2palette = (
- # default palette taken from piffo 0.93 by Hans Häggström
- b"\x01\x01\x01\x0b\x0b\x0b\x12\x12\x12\x17\x17\x17\x1b\x1b\x1b\x1e"
- b"\x1e\x1e\x22\x22\x22\x26\x26\x26\x29\x29\x29\x2c\x2c\x2c\x2f\x2f"
- b"\x2f\x32\x32\x32\x35\x35\x35\x37\x37\x37\x3a\x3a\x3a\x3c\x3c\x3c"
- b"\x24\x1e\x13\x22\x1c\x12\x20\x1b\x12\x1f\x1a\x10\x1d\x19\x10\x1b"
- b"\x17\x0f\x1a\x16\x0f\x18\x14\x0d\x17\x13\x0d\x16\x12\x0d\x14\x10"
- b"\x0b\x13\x0f\x0b\x10\x0d\x0a\x0f\x0b\x0a\x0d\x0b\x07\x0b\x0a\x07"
- b"\x23\x23\x26\x22\x22\x25\x22\x20\x23\x21\x1f\x22\x20\x1e\x20\x1f"
- b"\x1d\x1e\x1d\x1b\x1c\x1b\x1a\x1a\x1a\x19\x19\x18\x17\x17\x17\x16"
- b"\x16\x14\x14\x14\x13\x13\x13\x10\x10\x10\x0f\x0f\x0f\x0d\x0d\x0d"
- b"\x2d\x28\x20\x29\x24\x1c\x27\x22\x1a\x25\x1f\x17\x38\x2e\x1e\x31"
- b"\x29\x1a\x2c\x25\x17\x26\x20\x14\x3c\x30\x14\x37\x2c\x13\x33\x28"
- b"\x12\x2d\x24\x10\x28\x1f\x0f\x22\x1a\x0b\x1b\x14\x0a\x13\x0f\x07"
- b"\x31\x1a\x16\x30\x17\x13\x2e\x16\x10\x2c\x14\x0d\x2a\x12\x0b\x27"
- b"\x0f\x0a\x25\x0f\x07\x21\x0d\x01\x1e\x0b\x01\x1c\x0b\x01\x1a\x0b"
- b"\x01\x18\x0a\x01\x16\x0a\x01\x13\x0a\x01\x10\x07\x01\x0d\x07\x01"
- b"\x29\x23\x1e\x27\x21\x1c\x26\x20\x1b\x25\x1f\x1a\x23\x1d\x19\x21"
- b"\x1c\x18\x20\x1b\x17\x1e\x19\x16\x1c\x18\x14\x1b\x17\x13\x19\x14"
- b"\x10\x17\x13\x0f\x14\x10\x0d\x12\x0f\x0b\x0f\x0b\x0a\x0b\x0a\x07"
- b"\x26\x1a\x0f\x23\x19\x0f\x20\x17\x0f\x1c\x16\x0f\x19\x13\x0d\x14"
- b"\x10\x0b\x10\x0d\x0a\x0b\x0a\x07\x33\x22\x1f\x35\x29\x26\x37\x2f"
- b"\x2d\x39\x35\x34\x37\x39\x3a\x33\x37\x39\x30\x34\x36\x2b\x31\x34"
- b"\x27\x2e\x31\x22\x2b\x2f\x1d\x28\x2c\x17\x25\x2a\x0f\x20\x26\x0d"
- b"\x1e\x25\x0b\x1c\x22\x0a\x1b\x20\x07\x19\x1e\x07\x17\x1b\x07\x14"
- b"\x18\x01\x12\x16\x01\x0f\x12\x01\x0b\x0d\x01\x07\x0a\x01\x01\x01"
- b"\x2c\x21\x21\x2a\x1f\x1f\x29\x1d\x1d\x27\x1c\x1c\x26\x1a\x1a\x24"
- b"\x18\x18\x22\x17\x17\x21\x16\x16\x1e\x13\x13\x1b\x12\x12\x18\x10"
- b"\x10\x16\x0d\x0d\x12\x0b\x0b\x0d\x0a\x0a\x0a\x07\x07\x01\x01\x01"
- b"\x2e\x30\x29\x2d\x2e\x27\x2b\x2c\x26\x2a\x2a\x24\x28\x29\x23\x27"
- b"\x27\x21\x26\x26\x1f\x24\x24\x1d\x22\x22\x1c\x1f\x1f\x1a\x1c\x1c"
- b"\x18\x19\x19\x16\x17\x17\x13\x13\x13\x10\x0f\x0f\x0d\x0b\x0b\x0a"
- b"\x30\x1e\x1b\x2d\x1c\x19\x2c\x1a\x17\x2a\x19\x14\x28\x17\x13\x26"
- b"\x16\x10\x24\x13\x0f\x21\x12\x0d\x1f\x10\x0b\x1c\x0f\x0a\x19\x0d"
- b"\x0a\x16\x0b\x07\x12\x0a\x07\x0f\x07\x01\x0a\x01\x01\x01\x01\x01"
- b"\x28\x29\x38\x26\x27\x36\x25\x26\x34\x24\x24\x31\x22\x22\x2f\x20"
- b"\x21\x2d\x1e\x1f\x2a\x1d\x1d\x27\x1b\x1b\x25\x19\x19\x21\x17\x17"
- b"\x1e\x14\x14\x1b\x13\x12\x17\x10\x0f\x13\x0d\x0b\x0f\x0a\x07\x07"
- b"\x2f\x32\x29\x2d\x30\x26\x2b\x2e\x24\x29\x2c\x21\x27\x2a\x1e\x25"
- b"\x28\x1c\x23\x26\x1a\x21\x25\x18\x1e\x22\x14\x1b\x1f\x10\x19\x1c"
- b"\x0d\x17\x1a\x0a\x13\x17\x07\x10\x13\x01\x0d\x0f\x01\x0a\x0b\x01"
- b"\x01\x3f\x01\x13\x3c\x0b\x1b\x39\x10\x20\x35\x14\x23\x31\x17\x23"
- b"\x2d\x18\x23\x29\x18\x3f\x3f\x3f\x3f\x3f\x39\x3f\x3f\x31\x3f\x3f"
- b"\x2a\x3f\x3f\x20\x3f\x3f\x14\x3f\x3c\x12\x3f\x39\x0f\x3f\x35\x0b"
- b"\x3f\x32\x07\x3f\x2d\x01\x3d\x2a\x01\x3b\x26\x01\x39\x21\x01\x37"
- b"\x1d\x01\x34\x1a\x01\x32\x16\x01\x2f\x12\x01\x2d\x0f\x01\x2a\x0b"
- b"\x01\x27\x07\x01\x23\x01\x01\x1d\x01\x01\x17\x01\x01\x10\x01\x01"
- b"\x3d\x01\x01\x19\x19\x3f\x3f\x01\x01\x01\x01\x3f\x16\x16\x13\x10"
- b"\x10\x0f\x0d\x0d\x0b\x3c\x2e\x2a\x36\x27\x20\x30\x21\x18\x29\x1b"
- b"\x10\x3c\x39\x37\x37\x32\x2f\x31\x2c\x28\x2b\x26\x21\x30\x22\x20"
-)
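
Because WAL textures cannot be auto-detected, the module's own `open()` is the entry point rather than `PIL.Image.open()`. A brief usage sketch with a placeholder file name:

```python
# Usage sketch for the module above; "texture.wal" is a placeholder path.
from PIL import WalImageFile

img = WalImageFile.open("texture.wal")        # "P"-mode image with the Quake2 palette attached
print(img.size, img.info.get("name"))         # dimensions and the embedded texture name
img.convert("RGB").save("texture.png")        # convert through the palette and save
```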
diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/lm.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/lm.py
deleted file mode 100644
index c8aad8f06797eef3293605056e1de14d07c56c2a..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/lm.py
+++ /dev/null
@@ -1,527 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-from functools import partial
-import logging
-import math
-import typing as tp
-
-import torch
-from torch import nn
-
-from ..utils import utils
-from ..modules.streaming import StreamingModule, State
-from ..modules.transformer import StreamingTransformer, create_norm_fn
-from ..modules.conditioners import (
- ConditionFuser,
- ClassifierFreeGuidanceDropout,
- AttributeDropout,
- ConditioningProvider,
- ConditioningAttributes,
- ConditionType,
-)
-from ..modules.codebooks_patterns import CodebooksPatternProvider
-from ..modules.activations import get_activation_fn
-
-
-logger = logging.getLogger(__name__)
-ConditionTensors = tp.Dict[str, ConditionType]
-CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]]
-
-
-def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None):
- """LM layer initialization.
- Inspired from xlformers: https://github.com/fairinternal/xlformers
-
- Args:
- method (str): Method name for init function. Valid options are:
- 'gaussian', 'uniform'.
- input_dim (int): Input dimension of the initialized module.
- init_depth (Optional[int]): Optional init depth value used to rescale
- the standard deviation if defined.
- """
- # Compute std
- std = 1 / math.sqrt(input_dim)
- # Rescale with depth
- if init_depth is not None:
- std = std / math.sqrt(2 * init_depth)
-
- if method == 'gaussian':
- return partial(
- torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std
- )
- elif method == 'uniform':
- bound = math.sqrt(3) * std # ensure the standard deviation is `std`
- return partial(torch.nn.init.uniform_, a=-bound, b=bound)
- else:
- raise ValueError("Unsupported layer initialization method")
-
-
-def init_layer(m: nn.Module,
- method: str,
- init_depth: tp.Optional[int] = None,
- zero_bias_init: bool = False):
- """Wrapper around ``get_init_fn`` for proper initialization of LM modules.
-
- Args:
- m (nn.Module): Module to initialize.
- method (str): Method name for the init function.
- init_depth (Optional[int]): Optional init depth value used to rescale
- the standard deviation if defined.
- zero_bias_init (bool): Whether to initialize the bias to 0 or not.
- """
- if isinstance(m, nn.Linear):
- init_fn = get_init_fn(method, m.in_features, init_depth=init_depth)
- if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16:
- weight = m.weight.float()
- init_fn(weight)
- m.weight.data[:] = weight.half()
- else:
- init_fn(m.weight)
- if zero_bias_init and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.Embedding):
- init_fn = get_init_fn(method, m.embedding_dim, init_depth=None)
- if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16:
- weight = m.weight.float()
- init_fn(weight)
- m.weight.data[:] = weight.half()
- else:
- init_fn(m.weight)
-
-
-class ScaledEmbedding(nn.Embedding):
- """Boost learning rate for embeddings (with `scale`).
- """
- def __init__(self, *args, lr=None, **kwargs):
- super().__init__(*args, **kwargs)
- self.lr = lr
-
- def make_optim_group(self):
- group = {"params": list(self.parameters())}
- if self.lr is not None:
- group["lr"] = self.lr
- return group
-
-
-@dataclass
-class LMOutput:
- # The logits are already re-aligned with the input codes
- # hence no extra shift is required, e.g. when computing CE
- logits: torch.Tensor # [B, K, T, card]
- mask: torch.Tensor # [B, K, T]
-
-
-class LMModel(StreamingModule):
- """Transformer-based language model on multiple streams of codes.
-
- Args:
- pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving.
-        condition_provider (ConditioningProvider): Conditioning provider from metadata.
- fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input.
- n_q (int): Number of parallel streams to model.
- card (int): Cardinality, vocabulary size.
- dim (int): Dimension of the transformer encoder.
- num_heads (int): Number of heads for the transformer encoder.
- hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder.
- norm (str): Normalization method.
- norm_first (bool): Use pre-norm instead of post-norm.
- emb_lr (Optional[float]): Embedding-specific learning rate.
- bias_proj (bool): Use bias for output projections.
- weight_init (Optional[str]): Method for weight initialization.
- depthwise_init (Optional[str]): Method for depthwise weight initialization.
- zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros.
- cfg_dropout (float): Classifier-free guidance dropout.
- cfg_coef (float): Classifier-free guidance coefficient.
- attribute_dropout (dict): Attribute dropout probabilities.
- two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps.
- **kwargs: Additional parameters for the transformer encoder.
- """
- def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider,
- fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8,
- hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False,
- emb_lr: tp.Optional[float] = None, bias_proj: bool = True,
- weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None,
- zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0,
- attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False,
- **kwargs):
- super().__init__()
- self.cfg_coef = cfg_coef
- self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout)
- self.att_dropout = AttributeDropout(p=attribute_dropout)
- self.condition_provider = condition_provider
- self.fuser = fuser
- self.card = card
- embed_dim = self.card + 1
- self.n_q = n_q
- self.dim = dim
- self.pattern_provider = pattern_provider
- self.two_step_cfg = two_step_cfg
- self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)])
- if 'activation' in kwargs:
- kwargs['activation'] = get_activation_fn(kwargs['activation'])
- self.transformer = StreamingTransformer(
- d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim),
- norm=norm, norm_first=norm_first, **kwargs)
- self.out_norm: tp.Optional[nn.Module] = None
- if norm_first:
- self.out_norm = create_norm_fn(norm, dim)
- self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)])
- self._init_weights(weight_init, depthwise_init, zero_bias_init)
- self._fsdp: tp.Optional[nn.Module]
- self.__dict__['_fsdp'] = None
-
- def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool):
- """Initialization of the transformer module weights.
-
- Args:
- weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options.
-            depthwise_init (Optional[str]): Depthwise initialization strategy. The following options are valid:
-                'current' where the depth corresponds to the current layer index or 'global' where the total number
-                of layers is used as depth. If not set, no depthwise initialization strategy is used.
-            zero_bias_init (bool): Whether to initialize bias to zero or not.
- """
- assert depthwise_init is None or depthwise_init in ['current', 'global']
- assert depthwise_init is None or weight_init is not None, \
- "If 'depthwise_init' is defined, a 'weight_init' method should be provided."
- assert not zero_bias_init or weight_init is not None, \
- "If 'zero_bias_init', a 'weight_init' method should be provided"
-
- if weight_init is None:
- return
-
- for emb_layer in self.emb:
- init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init)
-
- for layer_idx, tr_layer in enumerate(self.transformer.layers):
- depth = None
- if depthwise_init == 'current':
- depth = layer_idx + 1
- elif depthwise_init == 'global':
- depth = len(self.transformer.layers)
- init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init)
- tr_layer.apply(init_fn)
-
- for linear in self.linears:
- init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init)
-
- @property
- def special_token_id(self) -> int:
- return self.card
-
- @property
- def num_codebooks(self) -> int:
- return self.n_q
-
- def forward(self, sequence: torch.Tensor,
- conditions: tp.List[ConditioningAttributes],
- condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor:
- """Apply language model on sequence and conditions.
- Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and
-        S the sequence steps, return the logits with shape [B, K, S, card].
-
- Args:
-            sequence (torch.Tensor): Indices of the codes to model, of shape [B, K, S].
-            conditions (list[ConditioningAttributes]): conditionings to use when modeling
-                the given codes. Note that when evaluating multiple times with the same conditioning
- you should pre-compute those and pass them as `condition_tensors`.
- condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning
- tensors, see `conditions`.
- Returns:
- torch.Tensor: Logits.
- """
- B, K, S = sequence.shape
- assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks'
- input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)])
- if condition_tensors is None:
- assert not self._is_streaming, "Conditions tensors should be precomputed when streaming."
- # apply dropout modules
- conditions = self.cfg_dropout(conditions)
- conditions = self.att_dropout(conditions)
- tokenized = self.condition_provider.tokenize(conditions)
- # encode conditions and fuse, both have a streaming cache to not recompute when generating.
- condition_tensors = self.condition_provider(tokenized)
- else:
- assert not conditions, "Shouldn't pass both conditions and condition_tensors."
-
- input_, cross_attention_input = self.fuser(input_, condition_tensors)
-
- out = self.transformer(input_, cross_attention_src=cross_attention_input)
- if self.out_norm:
- out = self.out_norm(out)
- logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card]
-
- # remove the prefix from the model outputs
- if len(self.fuser.fuse2cond['prepend']) > 0:
- logits = logits[:, :, -S:]
-
- return logits # [B, K, S, card]
-
- def compute_predictions(
- self, codes: torch.Tensor,
- conditions: tp.List[ConditioningAttributes],
- condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput:
- """Given an input tensor of codes [B, K, T] and list of conditions, runs the model
- forward using the specified codes interleaving pattern.
-
- Args:
- codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size,
- K the number of codebooks and T the number of timesteps.
- conditions (list[ConditioningAttributes]): conditionings to use when modeling
-                the given codes. Note that when evaluating multiple times with the same conditioning
- you should pre-compute those and pass them as `condition_tensors`.
- condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning
- tensors, see `conditions`.
- Returns:
- LMOutput: Language model outputs
- logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes,
- i.e. the first item corresponds to logits to predict the first code, meaning that
- no additional shifting of codes and logits is required.
- mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions.
- Given the specified interleaving strategies, parts of the logits and codes should
- not be considered as valid predictions because of invalid context.
- """
- B, K, T = codes.shape
- codes = codes.contiguous()
- # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens
- pattern = self.pattern_provider.get_pattern(T)
- sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence(
- codes, self.special_token_id, keep_only_valid_steps=True
- )
- # apply model on pattern sequence
- model = self if self._fsdp is None else self._fsdp
- logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card]
- # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card]
- # and provide the corresponding mask over invalid positions of tokens
- logits = logits.permute(0, 3, 1, 2) # [B, card, K, S]
- # note: we use nans as special token to make it obvious if we feed unexpected logits
- logits, logits_indexes, logits_mask = pattern.revert_pattern_logits(
- logits, float('nan'), keep_only_valid_steps=True
- )
- logits = logits.permute(0, 2, 3, 1) # [B, K, T, card]
- logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T]
- return LMOutput(logits, logits_mask)
-
- def _sample_next_token(self,
- sequence: torch.Tensor,
- cfg_conditions: CFGConditions,
- unconditional_state: State,
- use_sampling: bool = False,
- temp: float = 1.0,
- top_k: int = 0,
- top_p: float = 0.0,
- cfg_coef: tp.Optional[float] = None) -> torch.Tensor:
- """Sample next token from the model given a sequence and a set of conditions. The model supports
- multiple sampling strategies (greedy sampling, softmax, top-k, top-p...).
-
- Args:
- sequence (torch.Tensor): Current sequence of shape [B, K, S]
- with K corresponding to the number of codebooks and S the number of sequence steps.
- S = 1 in streaming mode, except for the first step that contains a bigger prompt.
-            cfg_conditions (CFGConditions): Set of conditions. If CFG is used,
- should be twice the batch size, being the concatenation of the conditions + null conditions.
- use_sampling (bool): Whether to use a sampling strategy or not.
- temp (float): Sampling temperature.
- top_k (int): K for "top-k" sampling.
- top_p (float): P for "top-p" sampling.
- cfg_coef (float): classifier free guidance coefficient
- Returns:
- next_token (torch.Tensor): Next token tensor of shape [B, K, 1].
- """
- B = sequence.shape[0]
- cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef
- model = self if self._fsdp is None else self._fsdp
- if self.two_step_cfg and cfg_conditions != {}:
- assert isinstance(cfg_conditions, tuple)
- condition_tensors, null_condition_tensors = cfg_conditions
- cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors)
- state = self.get_streaming_state()
- self.set_streaming_state(unconditional_state)
- uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors)
- unconditional_state.update(self.get_streaming_state())
- self.set_streaming_state(state)
-            logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef
- else:
- assert isinstance(cfg_conditions, dict)
- condition_tensors = cfg_conditions
- if condition_tensors:
- # Preparing for CFG, predicting both conditional and unconditional logits.
- sequence = torch.cat([sequence, sequence], dim=0)
- all_logits = model(
- sequence,
- conditions=[], condition_tensors=condition_tensors)
- if condition_tensors:
- cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card]
- logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef
- else:
- logits = all_logits
-
- logits = logits.permute(0, 1, 3, 2) # [B, K, card, T]
- logits = logits[..., -1] # [B x K x card]
-
- # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error.
- if use_sampling and temp > 0.0:
- probs = torch.softmax(logits / temp, dim=-1)
- if top_p > 0.0:
- next_token = utils.sample_top_p(probs, p=top_p)
- elif top_k > 0:
- next_token = utils.sample_top_k(probs, k=top_k)
- else:
- next_token = utils.multinomial(probs, num_samples=1)
- else:
- next_token = torch.argmax(logits, dim=-1, keepdim=True)
-
- return next_token
-
- @torch.no_grad()
- def generate(self,
- prompt: tp.Optional[torch.Tensor] = None,
- conditions: tp.List[ConditioningAttributes] = [],
- num_samples: tp.Optional[int] = None,
- max_gen_len: int = 256,
- use_sampling: bool = True,
- temp: float = 1.0,
- top_k: int = 250,
- top_p: float = 0.0,
- cfg_coef: tp.Optional[float] = None,
-                 two_step_cfg: tp.Optional[bool] = None,
- remove_prompts: bool = False,
- check: bool = False,
- callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor:
- """Generate tokens sampling from the model given a prompt or unconditionally. Generation can
- be perform in a greedy fashion or using sampling with top K and top P strategies.
-
- Args:
- prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T].
- conditions_tensors (Dict[str, torch.Tensor]): Set of conditions or None.
- num_samples (int or None): Number of samples to generate when no prompt and no conditions are given.
- max_gen_len (int): Maximum generation length.
- use_sampling (bool): Whether to use a sampling strategy or not.
- temp (float): Sampling temperature.
- top_k (int): K for "top-k" sampling.
- top_p (float): P for "top-p" sampling.
- remove_prompts (bool): Whether to remove prompts from generation or not.
- Returns:
- torch.Tensor: Generated tokens.
- """
- assert not self.training, "generation shouldn't be used in training mode."
- first_param = next(iter(self.parameters()))
- device = first_param.device
-
-        # Check that all input shapes are consistent.
- possible_num_samples = []
- if num_samples is not None:
- possible_num_samples.append(num_samples)
- elif prompt is not None:
- possible_num_samples.append(prompt.shape[0])
- elif conditions:
- possible_num_samples.append(len(conditions))
- else:
- possible_num_samples.append(1)
-        assert all(x == possible_num_samples[0] for x in possible_num_samples), "Inconsistent input shapes"
- num_samples = possible_num_samples[0]
-
- # below we create set of conditions: one conditional and one unconditional
- # to do that we merge the regular condition together with the null condition
- # we then do 1 forward pass instead of 2.
- # the reason for that is two-fold:
- # 1. it is about x2 faster than doing 2 forward passes
- # 2. avoid the streaming API treating the 2 passes as part of different time steps
- # We also support doing two different passes, in particular to ensure that
-        # the padding structure is exactly the same between train and test.
- # With a batch size of 1, this can be slower though.
- cfg_conditions: CFGConditions
- two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg
- if conditions:
- null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions)
- if two_step_cfg:
- cfg_conditions = (
- self.condition_provider(self.condition_provider.tokenize(conditions)),
- self.condition_provider(self.condition_provider.tokenize(null_conditions)),
- )
- else:
- conditions = conditions + null_conditions
- tokenized = self.condition_provider.tokenize(conditions)
- cfg_conditions = self.condition_provider(tokenized)
- else:
- cfg_conditions = {}
-
- if prompt is None:
- assert num_samples > 0
- prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device)
-
- B, K, T = prompt.shape
- start_offset = T
- assert start_offset < max_gen_len
-
- pattern = self.pattern_provider.get_pattern(max_gen_len)
- # this token is used as default value for codes that are not generated yet
- unknown_token = -1
-
- # we generate codes up to the max_gen_len that will be mapped to the pattern sequence
- gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device)
- # filling the gen_codes with the prompt if needed
- gen_codes[..., :start_offset] = prompt
- # create the gen_sequence with proper interleaving from the pattern: [B, K, S]
- gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id)
- # retrieve the start_offset in the sequence:
- # it is the first sequence step that contains the `start_offset` timestep
- start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset)
- assert start_offset_sequence is not None
-
- with self.streaming():
- unconditional_state = self.get_streaming_state()
- prev_offset = 0
- gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S]
- for offset in range(start_offset_sequence, gen_sequence_len):
- # get current sequence (note that the streaming API is providing the caching over previous offsets)
- curr_sequence = gen_sequence[..., prev_offset:offset]
- curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1)
- if check:
- # check coherence between mask and sequence
- assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all()
- # should never happen as gen_sequence is filled progressively
- assert not (curr_sequence == unknown_token).any()
- # sample next token from the model, next token shape is [B, K, 1]
- next_token = self._sample_next_token(
- curr_sequence, cfg_conditions, unconditional_state, use_sampling, temp, top_k, top_p,
- cfg_coef=cfg_coef)
- # ensure the tokens that should be masked are properly set to special_token_id
- # as the model never output special_token_id
- valid_mask = mask[..., offset:offset+1].expand(B, -1, -1)
- next_token[~valid_mask] = self.special_token_id
- # ensure we don't overwrite prompt tokens, we only write over unknown tokens
- # (then mask tokens should be left as is as well, which is correct)
- gen_sequence[..., offset:offset+1] = torch.where(
- gen_sequence[..., offset:offset+1] == unknown_token,
- next_token, gen_sequence[..., offset:offset+1]
- )
- prev_offset = offset
- if callback is not None:
- callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence)
- unconditional_state.clear()
-
- # ensure sequence has been entirely filled
- assert not (gen_sequence == unknown_token).any()
- # ensure gen_sequence pattern and mask are matching
- # which means the gen_sequence is valid according to the pattern
- assert (
- gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id)
- ).all()
- # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps
- out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token)
-
- # sanity checks over the returned codes and corresponding masks
- assert (out_codes[..., :max_gen_len] != unknown_token).all()
- assert (out_mask[..., :max_gen_len] == 1).all()
-
- out_start_offset = start_offset if remove_prompts else 0
- out_codes = out_codes[..., out_start_offset:max_gen_len]
-
- # ensure the returned codes are all valid
- assert (out_codes >= 0).all() and (out_codes <= self.card).all()
- return out_codes
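
The classifier-free guidance step in `_sample_next_token` reduces to a single linear mix of conditional and unconditional logits. A self-contained sketch of that mix and the greedy fallback, on dummy tensors only:

```python
# Self-contained sketch of the CFG mix used in _sample_next_token (dummy tensors only).
import torch

B, K, card = 2, 4, 1024
cond_logits = torch.randn(B, K, card)
uncond_logits = torch.randn(B, K, card)
cfg_coef = 3.0   # 1.0 recovers the conditional logits; larger values push away from the unconditional ones

logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef
probs = torch.softmax(logits / 1.0, dim=-1)              # temperature 1.0
next_token = torch.argmax(probs, dim=-1, keepdim=True)   # greedy sampling, shape [B, K, 1]
print(next_token.shape)
```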
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/cmdoptions.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/cmdoptions.py
deleted file mode 100644
index 02ba60827933d6623cdf6b1417762fee47c1ab6f..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/cmdoptions.py
+++ /dev/null
@@ -1,1074 +0,0 @@
-"""
-shared options and groups
-
-The principle here is to define options once, but *not* instantiate them
-globally. One reason being that options with action='append' can carry state
-between parses. pip parses general options twice internally, and shouldn't
-pass on state. To be consistent, all options will follow this design.
-"""
-
-# The following comment should be removed at some point in the future.
-# mypy: strict-optional=False
-
-import importlib.util
-import logging
-import os
-import textwrap
-from functools import partial
-from optparse import SUPPRESS_HELP, Option, OptionGroup, OptionParser, Values
-from textwrap import dedent
-from typing import Any, Callable, Dict, Optional, Tuple
-
-from pip._vendor.packaging.utils import canonicalize_name
-
-from pip._internal.cli.parser import ConfigOptionParser
-from pip._internal.exceptions import CommandError
-from pip._internal.locations import USER_CACHE_DIR, get_src_prefix
-from pip._internal.models.format_control import FormatControl
-from pip._internal.models.index import PyPI
-from pip._internal.models.target_python import TargetPython
-from pip._internal.utils.hashes import STRONG_HASHES
-from pip._internal.utils.misc import strtobool
-
-logger = logging.getLogger(__name__)
-
-
-def raise_option_error(parser: OptionParser, option: Option, msg: str) -> None:
- """
- Raise an option parsing error using parser.error().
-
- Args:
- parser: an OptionParser instance.
- option: an Option instance.
- msg: the error text.
- """
- msg = f"{option} error: {msg}"
- msg = textwrap.fill(" ".join(msg.split()))
- parser.error(msg)
-
-
-def make_option_group(group: Dict[str, Any], parser: ConfigOptionParser) -> OptionGroup:
- """
- Return an OptionGroup object
- group -- assumed to be dict with 'name' and 'options' keys
- parser -- an optparse Parser
- """
- option_group = OptionGroup(parser, group["name"])
- for option in group["options"]:
- option_group.add_option(option())
- return option_group
-
-
-def check_dist_restriction(options: Values, check_target: bool = False) -> None:
- """Function for determining if custom platform options are allowed.
-
- :param options: The OptionParser options.
- :param check_target: Whether or not to check if --target is being used.
- """
- dist_restriction_set = any(
- [
- options.python_version,
- options.platforms,
- options.abis,
- options.implementation,
- ]
- )
-
- binary_only = FormatControl(set(), {":all:"})
- sdist_dependencies_allowed = (
- options.format_control != binary_only and not options.ignore_dependencies
- )
-
- # Installations or downloads using dist restrictions must not combine
- # source distributions and dist-specific wheels, as they are not
- # guaranteed to be locally compatible.
- if dist_restriction_set and sdist_dependencies_allowed:
- raise CommandError(
- "When restricting platform and interpreter constraints using "
- "--python-version, --platform, --abi, or --implementation, "
- "either --no-deps must be set, or --only-binary=:all: must be "
- "set and --no-binary must not be set (or must be set to "
- ":none:)."
- )
-
- if check_target:
- if dist_restriction_set and not options.target_dir:
- raise CommandError(
- "Can not use any platform or abi specific options unless "
- "installing via '--target'"
- )
-
-
-def _path_option_check(option: Option, opt: str, value: str) -> str:
- return os.path.expanduser(value)
-
-
-def _package_name_option_check(option: Option, opt: str, value: str) -> str:
- return canonicalize_name(value)
-
-
-class PipOption(Option):
- TYPES = Option.TYPES + ("path", "package_name")
- TYPE_CHECKER = Option.TYPE_CHECKER.copy()
- TYPE_CHECKER["package_name"] = _package_name_option_check
- TYPE_CHECKER["path"] = _path_option_check
-
-
-###########
-# options #
-###########
-
-help_: Callable[..., Option] = partial(
- Option,
- "-h",
- "--help",
- dest="help",
- action="help",
- help="Show help.",
-)
-
-debug_mode: Callable[..., Option] = partial(
- Option,
- "--debug",
- dest="debug_mode",
- action="store_true",
- default=False,
- help=(
- "Let unhandled exceptions propagate outside the main subroutine, "
- "instead of logging them to stderr."
- ),
-)
-
-isolated_mode: Callable[..., Option] = partial(
- Option,
- "--isolated",
- dest="isolated_mode",
- action="store_true",
- default=False,
- help=(
- "Run pip in an isolated mode, ignoring environment variables and user "
- "configuration."
- ),
-)
-
-require_virtualenv: Callable[..., Option] = partial(
- Option,
- "--require-virtualenv",
- "--require-venv",
- dest="require_venv",
- action="store_true",
- default=False,
- help=(
- "Allow pip to only run in a virtual environment; "
- "exit with an error otherwise."
- ),
-)
-
-override_externally_managed: Callable[..., Option] = partial(
- Option,
- "--break-system-packages",
- dest="override_externally_managed",
- action="store_true",
- help="Allow pip to modify an EXTERNALLY-MANAGED Python installation",
-)
-
-python: Callable[..., Option] = partial(
- Option,
- "--python",
- dest="python",
- help="Run pip with the specified Python interpreter.",
-)
-
-verbose: Callable[..., Option] = partial(
- Option,
- "-v",
- "--verbose",
- dest="verbose",
- action="count",
- default=0,
- help="Give more output. Option is additive, and can be used up to 3 times.",
-)
-
-no_color: Callable[..., Option] = partial(
- Option,
- "--no-color",
- dest="no_color",
- action="store_true",
- default=False,
- help="Suppress colored output.",
-)
-
-version: Callable[..., Option] = partial(
- Option,
- "-V",
- "--version",
- dest="version",
- action="store_true",
- help="Show version and exit.",
-)
-
-quiet: Callable[..., Option] = partial(
- Option,
- "-q",
- "--quiet",
- dest="quiet",
- action="count",
- default=0,
- help=(
- "Give less output. Option is additive, and can be used up to 3"
- " times (corresponding to WARNING, ERROR, and CRITICAL logging"
- " levels)."
- ),
-)
-
-progress_bar: Callable[..., Option] = partial(
- Option,
- "--progress-bar",
- dest="progress_bar",
- type="choice",
- choices=["on", "off"],
- default="on",
- help="Specify whether the progress bar should be used [on, off] (default: on)",
-)
-
-log: Callable[..., Option] = partial(
- PipOption,
- "--log",
- "--log-file",
- "--local-log",
- dest="log",
- metavar="path",
- type="path",
- help="Path to a verbose appending log.",
-)
-
-no_input: Callable[..., Option] = partial(
- Option,
- # Don't ask for input
- "--no-input",
- dest="no_input",
- action="store_true",
- default=False,
- help="Disable prompting for input.",
-)
-
-keyring_provider: Callable[..., Option] = partial(
- Option,
- "--keyring-provider",
- dest="keyring_provider",
- choices=["auto", "disabled", "import", "subprocess"],
- default="auto",
- help=(
- "Enable the credential lookup via the keyring library if user input is allowed."
- " Specify which mechanism to use [disabled, import, subprocess]."
- " (default: disabled)"
- ),
-)
-
-proxy: Callable[..., Option] = partial(
- Option,
- "--proxy",
- dest="proxy",
- type="str",
- default="",
- help="Specify a proxy in the form scheme://[user:passwd@]proxy.server:port.",
-)
-
-retries: Callable[..., Option] = partial(
- Option,
- "--retries",
- dest="retries",
- type="int",
- default=5,
- help="Maximum number of retries each connection should attempt "
- "(default %default times).",
-)
-
-timeout: Callable[..., Option] = partial(
- Option,
- "--timeout",
- "--default-timeout",
- metavar="sec",
- dest="timeout",
- type="float",
- default=15,
- help="Set the socket timeout (default %default seconds).",
-)
-
-
-def exists_action() -> Option:
- return Option(
- # Option when path already exist
- "--exists-action",
- dest="exists_action",
- type="choice",
- choices=["s", "i", "w", "b", "a"],
- default=[],
- action="append",
- metavar="action",
- help="Default action when a path already exists: "
- "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.",
- )
-
-
-cert: Callable[..., Option] = partial(
- PipOption,
- "--cert",
- dest="cert",
- type="path",
- metavar="path",
- help=(
- "Path to PEM-encoded CA certificate bundle. "
- "If provided, overrides the default. "
- "See 'SSL Certificate Verification' in pip documentation "
- "for more information."
- ),
-)
-
-client_cert: Callable[..., Option] = partial(
- PipOption,
- "--client-cert",
- dest="client_cert",
- type="path",
- default=None,
- metavar="path",
- help="Path to SSL client certificate, a single file containing the "
- "private key and the certificate in PEM format.",
-)
-
-index_url: Callable[..., Option] = partial(
- Option,
- "-i",
- "--index-url",
- "--pypi-url",
- dest="index_url",
- metavar="URL",
- default=PyPI.simple_url,
- help="Base URL of the Python Package Index (default %default). "
- "This should point to a repository compliant with PEP 503 "
- "(the simple repository API) or a local directory laid out "
- "in the same format.",
-)
-
-
-def extra_index_url() -> Option:
- return Option(
- "--extra-index-url",
- dest="extra_index_urls",
- metavar="URL",
- action="append",
- default=[],
- help="Extra URLs of package indexes to use in addition to "
- "--index-url. Should follow the same rules as "
- "--index-url.",
- )
-
-
-no_index: Callable[..., Option] = partial(
- Option,
- "--no-index",
- dest="no_index",
- action="store_true",
- default=False,
- help="Ignore package index (only looking at --find-links URLs instead).",
-)
-
-
-def find_links() -> Option:
- return Option(
- "-f",
- "--find-links",
- dest="find_links",
- action="append",
- default=[],
- metavar="url",
- help="If a URL or path to an html file, then parse for links to "
- "archives such as sdist (.tar.gz) or wheel (.whl) files. "
- "If a local path or file:// URL that's a directory, "
- "then look for archives in the directory listing. "
- "Links to VCS project URLs are not supported.",
- )
-
-
-def trusted_host() -> Option:
- return Option(
- "--trusted-host",
- dest="trusted_hosts",
- action="append",
- metavar="HOSTNAME",
- default=[],
- help="Mark this host or host:port pair as trusted, even though it "
- "does not have valid or any HTTPS.",
- )
-
-
-def constraints() -> Option:
- return Option(
- "-c",
- "--constraint",
- dest="constraints",
- action="append",
- default=[],
- metavar="file",
- help="Constrain versions using the given constraints file. "
- "This option can be used multiple times.",
- )
-
-
-def requirements() -> Option:
- return Option(
- "-r",
- "--requirement",
- dest="requirements",
- action="append",
- default=[],
- metavar="file",
- help="Install from the given requirements file. "
- "This option can be used multiple times.",
- )
-
-
-def editable() -> Option:
- return Option(
- "-e",
- "--editable",
- dest="editables",
- action="append",
- default=[],
- metavar="path/url",
- help=(
- "Install a project in editable mode (i.e. setuptools "
- '"develop mode") from a local project path or a VCS url.'
- ),
- )
-
-
-def _handle_src(option: Option, opt_str: str, value: str, parser: OptionParser) -> None:
- value = os.path.abspath(value)
- setattr(parser.values, option.dest, value)
-
-
-src: Callable[..., Option] = partial(
- PipOption,
- "--src",
- "--source",
- "--source-dir",
- "--source-directory",
- dest="src_dir",
- type="path",
- metavar="dir",
- default=get_src_prefix(),
- action="callback",
- callback=_handle_src,
- help="Directory to check out editable projects into. "
-    'The default in a virtualenv is "<venv path>/src". '
-    'The default for global installs is "<current dir>/src".',
-)
-
-
-def _get_format_control(values: Values, option: Option) -> Any:
- """Get a format_control object."""
- return getattr(values, option.dest)
-
-
-def _handle_no_binary(
- option: Option, opt_str: str, value: str, parser: OptionParser
-) -> None:
- existing = _get_format_control(parser.values, option)
- FormatControl.handle_mutual_excludes(
- value,
- existing.no_binary,
- existing.only_binary,
- )
-
-
-def _handle_only_binary(
- option: Option, opt_str: str, value: str, parser: OptionParser
-) -> None:
- existing = _get_format_control(parser.values, option)
- FormatControl.handle_mutual_excludes(
- value,
- existing.only_binary,
- existing.no_binary,
- )
-
-
-def no_binary() -> Option:
- format_control = FormatControl(set(), set())
- return Option(
- "--no-binary",
- dest="format_control",
- action="callback",
- callback=_handle_no_binary,
- type="str",
- default=format_control,
- help="Do not use binary packages. Can be supplied multiple times, and "
- 'each time adds to the existing value. Accepts either ":all:" to '
- 'disable all binary packages, ":none:" to empty the set (notice '
- "the colons), or one or more package names with commas between "
- "them (no colons). Note that some packages are tricky to compile "
- "and may fail to install when this option is used on them.",
- )
-
-
-def only_binary() -> Option:
- format_control = FormatControl(set(), set())
- return Option(
- "--only-binary",
- dest="format_control",
- action="callback",
- callback=_handle_only_binary,
- type="str",
- default=format_control,
- help="Do not use source packages. Can be supplied multiple times, and "
- 'each time adds to the existing value. Accepts either ":all:" to '
- 'disable all source packages, ":none:" to empty the set, or one '
- "or more package names with commas between them. Packages "
- "without binary distributions will fail to install when this "
- "option is used on them.",
- )
-
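-# Illustrative sketch (not taken from pip's docs) of how the two callbacks
-# above cooperate on the shared FormatControl value:
-#
-#   pip install --no-binary :all: --only-binary numpy ...
-#     -> format_control.no_binary  == {":all:"}
-#        format_control.only_binary == {"numpy"}
-#   i.e. numpy is restricted to wheels while everything else is built
-#   from source.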
-
-platforms: Callable[..., Option] = partial(
- Option,
- "--platform",
- dest="platforms",
- metavar="platform",
- action="append",
- default=None,
- help=(
- "Only use wheels compatible with . Defaults to the "
- "platform of the running system. Use this option multiple times to "
- "specify multiple platforms supported by the target interpreter."
- ),
-)
-
-
-# This was made a separate function for unit-testing purposes.
-def _convert_python_version(value: str) -> Tuple[Tuple[int, ...], Optional[str]]:
- """
- Convert a version string like "3", "37", or "3.7.3" into a tuple of ints.
-
- :return: A 2-tuple (version_info, error_msg), where `error_msg` is
- non-None if and only if there was a parsing error.
- """
- if not value:
- # The empty string is the same as not providing a value.
- return (None, None)
-
- parts = value.split(".")
- if len(parts) > 3:
- return ((), "at most three version parts are allowed")
-
- if len(parts) == 1:
- # Then we are in the case of "3" or "37".
- value = parts[0]
- if len(value) > 1:
- parts = [value[0], value[1:]]
-
- try:
- version_info = tuple(int(part) for part in parts)
- except ValueError:
- return ((), "each version part must be an integer")
-
- return (version_info, None)
-
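-# A doctest-style sketch of the expected conversions (illustrative cases, not
-# pip's actual test suite):
-#
-#   >>> _convert_python_version("37")
-#   ((3, 7), None)
-#   >>> _convert_python_version("3.7.3")
-#   ((3, 7, 3), None)
-#   >>> _convert_python_version("")
-#   (None, None)
-#   >>> _convert_python_version("3.7.3.1")
-#   ((), 'at most three version parts are allowed')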
-
-def _handle_python_version(
- option: Option, opt_str: str, value: str, parser: OptionParser
-) -> None:
- """
- Handle a provided --python-version value.
- """
- version_info, error_msg = _convert_python_version(value)
- if error_msg is not None:
- msg = "invalid --python-version value: {!r}: {}".format(
- value,
- error_msg,
- )
- raise_option_error(parser, option=option, msg=msg)
-
- parser.values.python_version = version_info
-
-
-python_version: Callable[..., Option] = partial(
- Option,
- "--python-version",
- dest="python_version",
- metavar="python_version",
- action="callback",
- callback=_handle_python_version,
- type="str",
- default=None,
- help=dedent(
- """\
- The Python interpreter version to use for wheel and "Requires-Python"
- compatibility checks. Defaults to a version derived from the running
- interpreter. The version can be specified using up to three dot-separated
- integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor
- version can also be given as a string without dots (e.g. "37" for 3.7.0).
- """
- ),
-)
-
-
-implementation: Callable[..., Option] = partial(
- Option,
- "--implementation",
- dest="implementation",
- metavar="implementation",
- default=None,
- help=(
- "Only use wheels compatible with Python "
- "implementation , e.g. 'pp', 'jy', 'cp', "
- " or 'ip'. If not specified, then the current "
- "interpreter implementation is used. Use 'py' to force "
- "implementation-agnostic wheels."
- ),
-)
-
-
-abis: Callable[..., Option] = partial(
- Option,
- "--abi",
- dest="abis",
- metavar="abi",
- action="append",
- default=None,
- help=(
- "Only use wheels compatible with Python abi , e.g. 'pypy_41'. "
- "If not specified, then the current interpreter abi tag is used. "
- "Use this option multiple times to specify multiple abis supported "
- "by the target interpreter. Generally you will need to specify "
- "--implementation, --platform, and --python-version when using this "
- "option."
- ),
-)
-
-
-def add_target_python_options(cmd_opts: OptionGroup) -> None:
- cmd_opts.add_option(platforms())
- cmd_opts.add_option(python_version())
- cmd_opts.add_option(implementation())
- cmd_opts.add_option(abis())
-
-
-def make_target_python(options: Values) -> TargetPython:
- target_python = TargetPython(
- platforms=options.platforms,
- py_version_info=options.python_version,
- abis=options.abis,
- implementation=options.implementation,
- )
-
- return target_python
-
-
-def prefer_binary() -> Option:
- return Option(
- "--prefer-binary",
- dest="prefer_binary",
- action="store_true",
- default=False,
- help="Prefer older binary packages over newer source packages.",
- )
-
-
-cache_dir: Callable[..., Option] = partial(
- PipOption,
- "--cache-dir",
- dest="cache_dir",
- default=USER_CACHE_DIR,
- metavar="dir",
- type="path",
- help="Store the cache data in .",
-)
-
-
-def _handle_no_cache_dir(
- option: Option, opt: str, value: str, parser: OptionParser
-) -> None:
- """
- Process a value provided for the --no-cache-dir option.
-
- This is an optparse.Option callback for the --no-cache-dir option.
- """
- # The value argument will be None if --no-cache-dir is passed via the
- # command-line, since the option doesn't accept arguments. However,
- # the value can be non-None if the option is triggered e.g. by an
- # environment variable, like PIP_NO_CACHE_DIR=true.
- if value is not None:
- # Then parse the string value to get argument error-checking.
- try:
- strtobool(value)
- except ValueError as exc:
- raise_option_error(parser, option=option, msg=str(exc))
-
- # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool()
- # converted to 0 (like "false" or "no") caused cache_dir to be disabled
- # rather than enabled (logic would say the latter). Thus, we disable
- # the cache directory not just on values that parse to True, but (for
- # backwards compatibility reasons) also on values that parse to False.
- # In other words, always set it to False if the option is provided in
- # some (valid) form.
- parser.values.cache_dir = False
-
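-# Sketch of the resulting behaviour when the option arrives via the
-# environment (values are illustrative):
-#
-#   PIP_NO_CACHE_DIR=true   -> cache disabled
-#   PIP_NO_CACHE_DIR=false  -> cache also disabled (the backwards-compat
-#                              quirk described above)
-#   PIP_NO_CACHE_DIR=maybe  -> option error ("invalid truth value")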
-
-no_cache: Callable[..., Option] = partial(
- Option,
- "--no-cache-dir",
- dest="cache_dir",
- action="callback",
- callback=_handle_no_cache_dir,
- help="Disable the cache.",
-)
-
-no_deps: Callable[..., Option] = partial(
- Option,
- "--no-deps",
- "--no-dependencies",
- dest="ignore_dependencies",
- action="store_true",
- default=False,
- help="Don't install package dependencies.",
-)
-
-ignore_requires_python: Callable[..., Option] = partial(
- Option,
- "--ignore-requires-python",
- dest="ignore_requires_python",
- action="store_true",
- help="Ignore the Requires-Python information.",
-)
-
-no_build_isolation: Callable[..., Option] = partial(
- Option,
- "--no-build-isolation",
- dest="build_isolation",
- action="store_false",
- default=True,
- help="Disable isolation when building a modern source distribution. "
- "Build dependencies specified by PEP 518 must be already installed "
- "if this option is used.",
-)
-
-check_build_deps: Callable[..., Option] = partial(
- Option,
- "--check-build-dependencies",
- dest="check_build_deps",
- action="store_true",
- default=False,
- help="Check the build dependencies when PEP517 is used.",
-)
-
-
-def _handle_no_use_pep517(
- option: Option, opt: str, value: str, parser: OptionParser
-) -> None:
- """
- Process a value provided for the --no-use-pep517 option.
-
- This is an optparse.Option callback for the no_use_pep517 option.
- """
- # Since --no-use-pep517 doesn't accept arguments, the value argument
- # will be None if --no-use-pep517 is passed via the command-line.
- # However, the value can be non-None if the option is triggered e.g.
- # by an environment variable, for example "PIP_NO_USE_PEP517=true".
- if value is not None:
- msg = """A value was passed for --no-use-pep517,
- probably using either the PIP_NO_USE_PEP517 environment variable
- or the "no-use-pep517" config file option. Use an appropriate value
- of the PIP_USE_PEP517 environment variable or the "use-pep517"
- config file option instead.
- """
- raise_option_error(parser, option=option, msg=msg)
-
- # If user doesn't wish to use pep517, we check if setuptools and wheel are installed
- # and raise error if it is not.
- packages = ("setuptools", "wheel")
- if not all(importlib.util.find_spec(package) for package in packages):
- msg = (
- f"It is not possible to use --no-use-pep517 "
- f"without {' and '.join(packages)} installed."
- )
- raise_option_error(parser, option=option, msg=msg)
-
- # Otherwise, --no-use-pep517 was passed via the command-line.
- parser.values.use_pep517 = False
-
-
-use_pep517: Any = partial(
- Option,
- "--use-pep517",
- dest="use_pep517",
- action="store_true",
- default=None,
- help="Use PEP 517 for building source distributions "
- "(use --no-use-pep517 to force legacy behaviour).",
-)
-
-no_use_pep517: Any = partial(
- Option,
- "--no-use-pep517",
- dest="use_pep517",
- action="callback",
- callback=_handle_no_use_pep517,
- default=None,
- help=SUPPRESS_HELP,
-)
-
-
-def _handle_config_settings(
- option: Option, opt_str: str, value: str, parser: OptionParser
-) -> None:
- key, sep, val = value.partition("=")
- if sep != "=":
- parser.error(f"Arguments to {opt_str} must be of the form KEY=VAL") # noqa
- dest = getattr(parser.values, option.dest)
- if dest is None:
- dest = {}
- setattr(parser.values, option.dest, dest)
- if key in dest:
- if isinstance(dest[key], list):
- dest[key].append(val)
- else:
- dest[key] = [dest[key], val]
- else:
- dest[key] = val
-
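-# How repeated -C/--config-settings values accumulate (illustrative keys):
-#
-#   -C key=a                  -> {"key": "a"}
-#   -C key=a -C key=b         -> {"key": ["a", "b"]}
-#   -C key=a -C key=b -C key=c
-#                             -> {"key": ["a", "b", "c"]}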
-
-config_settings: Callable[..., Option] = partial(
- Option,
- "-C",
- "--config-settings",
- dest="config_settings",
- type=str,
- action="callback",
- callback=_handle_config_settings,
- metavar="settings",
- help="Configuration settings to be passed to the PEP 517 build backend. "
- "Settings take the form KEY=VALUE. Use multiple --config-settings options "
- "to pass multiple keys to the backend.",
-)
-
-build_options: Callable[..., Option] = partial(
- Option,
- "--build-option",
- dest="build_options",
- metavar="options",
- action="append",
- help="Extra arguments to be supplied to 'setup.py bdist_wheel'.",
-)
-
-global_options: Callable[..., Option] = partial(
- Option,
- "--global-option",
- dest="global_options",
- action="append",
- metavar="options",
- help="Extra global options to be supplied to the setup.py "
- "call before the install or bdist_wheel command.",
-)
-
-no_clean: Callable[..., Option] = partial(
- Option,
- "--no-clean",
- action="store_true",
- default=False,
- help="Don't clean up build directories.",
-)
-
-pre: Callable[..., Option] = partial(
- Option,
- "--pre",
- action="store_true",
- default=False,
- help="Include pre-release and development versions. By default, "
- "pip only finds stable versions.",
-)
-
-disable_pip_version_check: Callable[..., Option] = partial(
- Option,
- "--disable-pip-version-check",
- dest="disable_pip_version_check",
- action="store_true",
- default=False,
- help="Don't periodically check PyPI to determine whether a new version "
- "of pip is available for download. Implied with --no-index.",
-)
-
-root_user_action: Callable[..., Option] = partial(
- Option,
- "--root-user-action",
- dest="root_user_action",
- default="warn",
- choices=["warn", "ignore"],
- help="Action if pip is run as a root user. By default, a warning message is shown.",
-)
-
-
-def _handle_merge_hash(
- option: Option, opt_str: str, value: str, parser: OptionParser
-) -> None:
- """Given a value spelled "algo:digest", append the digest to a list
- pointed to in a dict by the algo name."""
- if not parser.values.hashes:
- parser.values.hashes = {}
- try:
- algo, digest = value.split(":", 1)
- except ValueError:
- parser.error(
- "Arguments to {} must be a hash name " # noqa
- "followed by a value, like --hash=sha256:"
- "abcde...".format(opt_str)
- )
- if algo not in STRONG_HASHES:
- parser.error(
- "Allowed hash algorithms for {} are {}.".format( # noqa
- opt_str, ", ".join(STRONG_HASHES)
- )
- )
- parser.values.hashes.setdefault(algo, []).append(digest)
-
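-# Sketch of the merge performed above (hypothetical digests):
-#
-#   --hash=sha256:aaaa --hash=sha256:bbbb --hash=sha512:cccc
-#     -> parser.values.hashes == {"sha256": ["aaaa", "bbbb"],
-#                                 "sha512": ["cccc"]}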
-
-hash: Callable[..., Option] = partial(
- Option,
- "--hash",
- # Hash values eventually end up in InstallRequirement.hashes due to
- # __dict__ copying in process_line().
- dest="hashes",
- action="callback",
- callback=_handle_merge_hash,
- type="string",
- help="Verify that the package's archive matches this "
- "hash before installing. Example: --hash=sha256:abcdef...",
-)
-
-
-require_hashes: Callable[..., Option] = partial(
- Option,
- "--require-hashes",
- dest="require_hashes",
- action="store_true",
- default=False,
- help="Require a hash to check each requirement against, for "
- "repeatable installs. This option is implied when any package in a "
- "requirements file has a --hash option.",
-)
-
-
-list_path: Callable[..., Option] = partial(
- PipOption,
- "--path",
- dest="path",
- type="path",
- action="append",
- help="Restrict to the specified installation path for listing "
- "packages (can be used multiple times).",
-)
-
-
-def check_list_path_option(options: Values) -> None:
- if options.path and (options.user or options.local):
- raise CommandError("Cannot combine '--path' with '--user' or '--local'")
-
-
-list_exclude: Callable[..., Option] = partial(
- PipOption,
- "--exclude",
- dest="excludes",
- action="append",
- metavar="package",
- type="package_name",
- help="Exclude specified package from the output",
-)
-
-
-no_python_version_warning: Callable[..., Option] = partial(
- Option,
- "--no-python-version-warning",
- dest="no_python_version_warning",
- action="store_true",
- default=False,
- help="Silence deprecation warnings for upcoming unsupported Pythons.",
-)
-
-
-# Features that are now always on. A warning is printed if they are used.
-ALWAYS_ENABLED_FEATURES = [
- "no-binary-enable-wheel-cache", # always on since 23.1
-]
-
-use_new_feature: Callable[..., Option] = partial(
- Option,
- "--use-feature",
- dest="features_enabled",
- metavar="feature",
- action="append",
- default=[],
- choices=[
- "fast-deps",
- "truststore",
- ]
- + ALWAYS_ENABLED_FEATURES,
- help="Enable new functionality, that may be backward incompatible.",
-)
-
-use_deprecated_feature: Callable[..., Option] = partial(
- Option,
- "--use-deprecated",
- dest="deprecated_features_enabled",
- metavar="feature",
- action="append",
- default=[],
- choices=[
- "legacy-resolver",
- ],
- help=("Enable deprecated functionality, that will be removed in the future."),
-)
-
-
-##########
-# groups #
-##########
-
-general_group: Dict[str, Any] = {
- "name": "General Options",
- "options": [
- help_,
- debug_mode,
- isolated_mode,
- require_virtualenv,
- python,
- verbose,
- version,
- quiet,
- log,
- no_input,
- keyring_provider,
- proxy,
- retries,
- timeout,
- exists_action,
- trusted_host,
- cert,
- client_cert,
- cache_dir,
- no_cache,
- disable_pip_version_check,
- no_color,
- no_python_version_warning,
- use_new_feature,
- use_deprecated_feature,
- ],
-}
-
-index_group: Dict[str, Any] = {
- "name": "Package Index Options",
- "options": [
- index_url,
- extra_index_url,
- no_index,
- find_links,
- ],
-}
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py
deleted file mode 100644
index c10e1f4ced6bcc799799b62666695998e095bbaf..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py
+++ /dev/null
@@ -1,348 +0,0 @@
-import contextlib
-import errno
-import logging
-import logging.config
-import logging.handlers
-import os
-import sys
-import threading
-from dataclasses import dataclass
-from io import TextIOWrapper
-from logging import Filter
-from typing import Any, ClassVar, Generator, List, Optional, TextIO, Type
-
-from pip._vendor.rich.console import (
- Console,
- ConsoleOptions,
- ConsoleRenderable,
- RenderableType,
- RenderResult,
- RichCast,
-)
-from pip._vendor.rich.highlighter import NullHighlighter
-from pip._vendor.rich.logging import RichHandler
-from pip._vendor.rich.segment import Segment
-from pip._vendor.rich.style import Style
-
-from pip._internal.utils._log import VERBOSE, getLogger
-from pip._internal.utils.compat import WINDOWS
-from pip._internal.utils.deprecation import DEPRECATION_MSG_PREFIX
-from pip._internal.utils.misc import ensure_dir
-
-_log_state = threading.local()
-subprocess_logger = getLogger("pip.subprocessor")
-
-
-class BrokenStdoutLoggingError(Exception):
- """
- Raised if BrokenPipeError occurs for the stdout stream while logging.
- """
-
-
-def _is_broken_pipe_error(exc_class: Type[BaseException], exc: BaseException) -> bool:
- if exc_class is BrokenPipeError:
- return True
-
- # On Windows, a broken pipe can show up as EINVAL rather than EPIPE:
- # https://bugs.python.org/issue19612
- # https://bugs.python.org/issue30418
- if not WINDOWS:
- return False
-
- return isinstance(exc, OSError) and exc.errno in (errno.EINVAL, errno.EPIPE)
-
-
-@contextlib.contextmanager
-def indent_log(num: int = 2) -> Generator[None, None, None]:
- """
- A context manager which will cause the log output to be indented for any
- log messages emitted inside it.
- """
- # For thread-safety
- _log_state.indentation = get_indentation()
- _log_state.indentation += num
- try:
- yield
- finally:
- _log_state.indentation -= num
-
-
-def get_indentation() -> int:
- return getattr(_log_state, "indentation", 0)
-
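-# Rough usage sketch (assumes a handler whose formatter is the
-# IndentingFormatter defined below):
-#
-#   with indent_log():
-#       logger.info("collecting foo")        # rendered as "  collecting foo"
-#       with indent_log():
-#           logger.info("building wheel")    # rendered as "    building wheel"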
-
-class IndentingFormatter(logging.Formatter):
- default_time_format = "%Y-%m-%dT%H:%M:%S"
-
- def __init__(
- self,
- *args: Any,
- add_timestamp: bool = False,
- **kwargs: Any,
- ) -> None:
- """
- A logging.Formatter that obeys the indent_log() context manager.
-
- :param add_timestamp: A bool indicating output lines should be prefixed
- with their record's timestamp.
- """
- self.add_timestamp = add_timestamp
- super().__init__(*args, **kwargs)
-
- def get_message_start(self, formatted: str, levelno: int) -> str:
- """
- Return the start of the formatted log message (not counting the
- prefix to add to each line).
- """
- if levelno < logging.WARNING:
- return ""
- if formatted.startswith(DEPRECATION_MSG_PREFIX):
- # Then the message already has a prefix. We don't want it to
- # look like "WARNING: DEPRECATION: ...."
- return ""
- if levelno < logging.ERROR:
- return "WARNING: "
-
- return "ERROR: "
-
- def format(self, record: logging.LogRecord) -> str:
- """
- Calls the standard formatter, but will indent all of the log message
- lines by our current indentation level.
- """
- formatted = super().format(record)
- message_start = self.get_message_start(formatted, record.levelno)
- formatted = message_start + formatted
-
- prefix = ""
- if self.add_timestamp:
- prefix = f"{self.formatTime(record)} "
- prefix += " " * get_indentation()
- formatted = "".join([prefix + line for line in formatted.splitlines(True)])
- return formatted
-
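-# Illustrative outputs of the IndentingFormatter above, assuming the format
-# string "%(message)s" and a current indentation of 2:
-#
-#   INFO    "done"              -> "  done"
-#   WARNING "disk quota low"    -> "  WARNING: disk quota low"
-#   WARNING "DEPRECATION: ..."  -> "  DEPRECATION: ..."   (prefix already present)
-#   ERROR   "it broke"          -> "  ERROR: it broke"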
-
-@dataclass
-class IndentedRenderable:
- renderable: RenderableType
- indent: int
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
- segments = console.render(self.renderable, options)
- lines = Segment.split_lines(segments)
- for line in lines:
- yield Segment(" " * self.indent)
- yield from line
- yield Segment("\n")
-
-
-class RichPipStreamHandler(RichHandler):
- KEYWORDS: ClassVar[Optional[List[str]]] = []
-
- def __init__(self, stream: Optional[TextIO], no_color: bool) -> None:
- super().__init__(
- console=Console(file=stream, no_color=no_color, soft_wrap=True),
- show_time=False,
- show_level=False,
- show_path=False,
- highlighter=NullHighlighter(),
- )
-
- # Our custom override on Rich's logger, to make things work as we need them to.
- def emit(self, record: logging.LogRecord) -> None:
- style: Optional[Style] = None
-
- # If we are given a diagnostic error to present, present it with indentation.
- assert isinstance(record.args, tuple)
- if record.msg == "[present-rich] %s" and len(record.args) == 1:
- rich_renderable = record.args[0]
- assert isinstance(
- rich_renderable, (ConsoleRenderable, RichCast, str)
- ), f"{rich_renderable} is not rich-console-renderable"
-
- renderable: RenderableType = IndentedRenderable(
- rich_renderable, indent=get_indentation()
- )
- else:
- message = self.format(record)
- renderable = self.render_message(record, message)
- if record.levelno is not None:
- if record.levelno >= logging.ERROR:
- style = Style(color="red")
- elif record.levelno >= logging.WARNING:
- style = Style(color="yellow")
-
- try:
- self.console.print(renderable, overflow="ignore", crop=False, style=style)
- except Exception:
- self.handleError(record)
-
- def handleError(self, record: logging.LogRecord) -> None:
- """Called when logging is unable to log some output."""
-
- exc_class, exc = sys.exc_info()[:2]
- # If a broken pipe occurred while calling write() or flush() on the
- # stdout stream in logging's Handler.emit(), then raise our special
- # exception so we can handle it in main() instead of logging the
- # broken pipe error and continuing.
- if (
- exc_class
- and exc
- and self.console.file is sys.stdout
- and _is_broken_pipe_error(exc_class, exc)
- ):
- raise BrokenStdoutLoggingError()
-
- return super().handleError(record)
-
-
-class BetterRotatingFileHandler(logging.handlers.RotatingFileHandler):
- def _open(self) -> TextIOWrapper:
- ensure_dir(os.path.dirname(self.baseFilename))
- return super()._open()
-
-
-class MaxLevelFilter(Filter):
- def __init__(self, level: int) -> None:
- self.level = level
-
- def filter(self, record: logging.LogRecord) -> bool:
- return record.levelno < self.level
-
-
-class ExcludeLoggerFilter(Filter):
-
- """
- A logging Filter that excludes records from a logger (or its children).
- """
-
- def filter(self, record: logging.LogRecord) -> bool:
- # The base Filter class allows only records from a logger (or its
- # children).
- return not super().filter(record)
-
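-# Quick sketch: MaxLevelFilter(logging.WARNING) keeps DEBUG/INFO records and
-# drops WARNING and above, which is how the stdout handler configured below
-# avoids duplicating what the stderr handler prints. ExcludeLoggerFilter is
-# the complement of logging.Filter: it drops records from the named logger
-# (and its children) instead of keeping only those.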
-
-def setup_logging(verbosity: int, no_color: bool, user_log_file: Optional[str]) -> int:
- """Configures and sets up all of the logging
-
- Returns the requested logging level, as its integer value.
- """
-
- # Determine the level to be logging at.
- if verbosity >= 2:
- level_number = logging.DEBUG
- elif verbosity == 1:
- level_number = VERBOSE
- elif verbosity == -1:
- level_number = logging.WARNING
- elif verbosity == -2:
- level_number = logging.ERROR
- elif verbosity <= -3:
- level_number = logging.CRITICAL
- else:
- level_number = logging.INFO
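-    # In CLI terms (sketch): -vv or more -> DEBUG, -v -> VERBOSE, no flag ->
-    # INFO, -q -> WARNING, -qq -> ERROR, -qqq or more -> CRITICAL.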
-
- level = logging.getLevelName(level_number)
-
- # The "root" logger should match the "console" level *unless* we also need
- # to log to a user log file.
- include_user_log = user_log_file is not None
- if include_user_log:
- additional_log_file = user_log_file
- root_level = "DEBUG"
- else:
- additional_log_file = "/dev/null"
- root_level = level
-
- # Disable any logging besides WARNING unless we have DEBUG level logging
- # enabled for vendored libraries.
- vendored_log_level = "WARNING" if level in ["INFO", "ERROR"] else "DEBUG"
-
- # Shorthands for clarity
- log_streams = {
- "stdout": "ext://sys.stdout",
- "stderr": "ext://sys.stderr",
- }
- handler_classes = {
- "stream": "pip._internal.utils.logging.RichPipStreamHandler",
- "file": "pip._internal.utils.logging.BetterRotatingFileHandler",
- }
- handlers = ["console", "console_errors", "console_subprocess"] + (
- ["user_log"] if include_user_log else []
- )
-
- logging.config.dictConfig(
- {
- "version": 1,
- "disable_existing_loggers": False,
- "filters": {
- "exclude_warnings": {
- "()": "pip._internal.utils.logging.MaxLevelFilter",
- "level": logging.WARNING,
- },
- "restrict_to_subprocess": {
- "()": "logging.Filter",
- "name": subprocess_logger.name,
- },
- "exclude_subprocess": {
- "()": "pip._internal.utils.logging.ExcludeLoggerFilter",
- "name": subprocess_logger.name,
- },
- },
- "formatters": {
- "indent": {
- "()": IndentingFormatter,
- "format": "%(message)s",
- },
- "indent_with_timestamp": {
- "()": IndentingFormatter,
- "format": "%(message)s",
- "add_timestamp": True,
- },
- },
- "handlers": {
- "console": {
- "level": level,
- "class": handler_classes["stream"],
- "no_color": no_color,
- "stream": log_streams["stdout"],
- "filters": ["exclude_subprocess", "exclude_warnings"],
- "formatter": "indent",
- },
- "console_errors": {
- "level": "WARNING",
- "class": handler_classes["stream"],
- "no_color": no_color,
- "stream": log_streams["stderr"],
- "filters": ["exclude_subprocess"],
- "formatter": "indent",
- },
- # A handler responsible for logging to the console messages
- # from the "subprocessor" logger.
- "console_subprocess": {
- "level": level,
- "class": handler_classes["stream"],
- "stream": log_streams["stderr"],
- "no_color": no_color,
- "filters": ["restrict_to_subprocess"],
- "formatter": "indent",
- },
- "user_log": {
- "level": "DEBUG",
- "class": handler_classes["file"],
- "filename": additional_log_file,
- "encoding": "utf-8",
- "delay": True,
- "formatter": "indent_with_timestamp",
- },
- },
- "root": {
- "level": root_level,
- "handlers": handlers,
- },
- "loggers": {"pip._vendor": {"level": vendored_log_level}},
- }
- )
-
- return level_number
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/unicode.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/unicode.py
deleted file mode 100644
index ec0b3a4fe6055b276d5515a4e81d60d921c6f381..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/unicode.py
+++ /dev/null
@@ -1,361 +0,0 @@
-# unicode.py
-
-import sys
-from itertools import filterfalse
-from typing import List, Tuple, Union
-
-
-class _lazyclassproperty:
- def __init__(self, fn):
- self.fn = fn
- self.__doc__ = fn.__doc__
- self.__name__ = fn.__name__
-
- def __get__(self, obj, cls):
- if cls is None:
- cls = type(obj)
- if not hasattr(cls, "_intern") or any(
- cls._intern is getattr(superclass, "_intern", [])
- for superclass in cls.__mro__[1:]
- ):
- cls._intern = {}
- attrname = self.fn.__name__
- if attrname not in cls._intern:
- cls._intern[attrname] = self.fn(cls)
- return cls._intern[attrname]
-
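-# Minimal sketch of the caching behaviour (hypothetical class):
-#
-#   >>> class Demo:
-#   ...     @_lazyclassproperty
-#   ...     def expensive(cls):
-#   ...         print("computed")
-#   ...         return 42
-#   >>> Demo.expensive
-#   computed
-#   42
-#   >>> Demo.expensive        # second access is served from Demo._intern
-#   42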
-
-UnicodeRangeList = List[Union[Tuple[int, int], Tuple[int]]]
-
-
-class unicode_set:
- """
- A set of Unicode characters, for language-specific strings for
- ``alphas``, ``nums``, ``alphanums``, and ``printables``.
- A unicode_set is defined by a list of ranges in the Unicode character
- set, in a class attribute ``_ranges``. Ranges can be specified using
- 2-tuples or a 1-tuple, such as::
-
- _ranges = [
- (0x0020, 0x007e),
- (0x00a0, 0x00ff),
- (0x0100,),
- ]
-
- Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x).
-
- A unicode set can also be defined using multiple inheritance of other unicode sets::
-
- class CJK(Chinese, Japanese, Korean):
- pass
- """
-
- _ranges: UnicodeRangeList = []
-
- @_lazyclassproperty
- def _chars_for_ranges(cls):
- ret = []
- for cc in cls.__mro__:
- if cc is unicode_set:
- break
- for rr in getattr(cc, "_ranges", ()):
- ret.extend(range(rr[0], rr[-1] + 1))
- return [chr(c) for c in sorted(set(ret))]
-
- @_lazyclassproperty
- def printables(cls):
- """all non-whitespace characters in this range"""
- return "".join(filterfalse(str.isspace, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def alphas(cls):
- """all alphabetic characters in this range"""
- return "".join(filter(str.isalpha, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def nums(cls):
- """all numeric digit characters in this range"""
- return "".join(filter(str.isdigit, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def alphanums(cls):
- """all alphanumeric characters in this range"""
- return cls.alphas + cls.nums
-
- @_lazyclassproperty
- def identchars(cls):
- """all characters in this range that are valid identifier characters, plus underscore '_'"""
- return "".join(
- sorted(
- set(
- "".join(filter(str.isidentifier, cls._chars_for_ranges))
- + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµº"
- + "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ"
- + "_"
- )
- )
- )
-
- @_lazyclassproperty
- def identbodychars(cls):
- """
- all characters in this range that are valid identifier body characters,
- plus the digits 0-9, and · (Unicode MIDDLE DOT)
- """
- return "".join(
- sorted(
- set(
- cls.identchars
- + "0123456789·"
- + "".join(
- [c for c in cls._chars_for_ranges if ("_" + c).isidentifier()]
- )
- )
- )
- )
-
- @_lazyclassproperty
- def identifier(cls):
- """
- a pyparsing Word expression for an identifier using this range's definitions for
- identchars and identbodychars
- """
- from pip._vendor.pyparsing import Word
-
- return Word(cls.identchars, cls.identbodychars)
-
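-# Minimal usage sketch (a hypothetical set; exact membership depends on the
-# interpreter's Unicode tables):
-#
-#   >>> class GreekLetters(unicode_set):
-#   ...     _ranges = [(0x0391, 0x03A9), (0x03B1, 0x03C9)]
-#   >>> "α" in GreekLetters.alphas
-#   True
-#   >>> GreekLetters.nums
-#   ''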
-
-class pyparsing_unicode(unicode_set):
- """
- A namespace class for defining common language unicode_sets.
- """
-
- # fmt: off
-
- # define ranges in language character sets
- _ranges: UnicodeRangeList = [
- (0x0020, sys.maxunicode),
- ]
-
- class BasicMultilingualPlane(unicode_set):
- """Unicode set for the Basic Multilingual Plane"""
- _ranges: UnicodeRangeList = [
- (0x0020, 0xFFFF),
- ]
-
- class Latin1(unicode_set):
- """Unicode set for Latin-1 Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0020, 0x007E),
- (0x00A0, 0x00FF),
- ]
-
- class LatinA(unicode_set):
- """Unicode set for Latin-A Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0100, 0x017F),
- ]
-
- class LatinB(unicode_set):
- """Unicode set for Latin-B Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0180, 0x024F),
- ]
-
- class Greek(unicode_set):
- """Unicode set for Greek Unicode Character Ranges"""
- _ranges: UnicodeRangeList = [
- (0x0342, 0x0345),
- (0x0370, 0x0377),
- (0x037A, 0x037F),
- (0x0384, 0x038A),
- (0x038C,),
- (0x038E, 0x03A1),
- (0x03A3, 0x03E1),
- (0x03F0, 0x03FF),
- (0x1D26, 0x1D2A),
- (0x1D5E,),
- (0x1D60,),
- (0x1D66, 0x1D6A),
- (0x1F00, 0x1F15),
- (0x1F18, 0x1F1D),
- (0x1F20, 0x1F45),
- (0x1F48, 0x1F4D),
- (0x1F50, 0x1F57),
- (0x1F59,),
- (0x1F5B,),
- (0x1F5D,),
- (0x1F5F, 0x1F7D),
- (0x1F80, 0x1FB4),
- (0x1FB6, 0x1FC4),
- (0x1FC6, 0x1FD3),
- (0x1FD6, 0x1FDB),
- (0x1FDD, 0x1FEF),
- (0x1FF2, 0x1FF4),
- (0x1FF6, 0x1FFE),
- (0x2129,),
- (0x2719, 0x271A),
- (0xAB65,),
- (0x10140, 0x1018D),
- (0x101A0,),
- (0x1D200, 0x1D245),
- (0x1F7A1, 0x1F7A7),
- ]
-
- class Cyrillic(unicode_set):
- """Unicode set for Cyrillic Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0400, 0x052F),
- (0x1C80, 0x1C88),
- (0x1D2B,),
- (0x1D78,),
- (0x2DE0, 0x2DFF),
- (0xA640, 0xA672),
- (0xA674, 0xA69F),
- (0xFE2E, 0xFE2F),
- ]
-
- class Chinese(unicode_set):
- """Unicode set for Chinese Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x2E80, 0x2E99),
- (0x2E9B, 0x2EF3),
- (0x31C0, 0x31E3),
- (0x3400, 0x4DB5),
- (0x4E00, 0x9FEF),
- (0xA700, 0xA707),
- (0xF900, 0xFA6D),
- (0xFA70, 0xFAD9),
- (0x16FE2, 0x16FE3),
- (0x1F210, 0x1F212),
- (0x1F214, 0x1F23B),
- (0x1F240, 0x1F248),
- (0x20000, 0x2A6D6),
- (0x2A700, 0x2B734),
- (0x2B740, 0x2B81D),
- (0x2B820, 0x2CEA1),
- (0x2CEB0, 0x2EBE0),
- (0x2F800, 0x2FA1D),
- ]
-
- class Japanese(unicode_set):
- """Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges"""
-
- class Kanji(unicode_set):
- "Unicode set for Kanji Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x4E00, 0x9FBF),
- (0x3000, 0x303F),
- ]
-
- class Hiragana(unicode_set):
- """Unicode set for Hiragana Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x3041, 0x3096),
- (0x3099, 0x30A0),
- (0x30FC,),
- (0xFF70,),
- (0x1B001,),
- (0x1B150, 0x1B152),
- (0x1F200,),
- ]
-
- class Katakana(unicode_set):
- """Unicode set for Katakana Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x3099, 0x309C),
- (0x30A0, 0x30FF),
- (0x31F0, 0x31FF),
- (0x32D0, 0x32FE),
- (0xFF65, 0xFF9F),
- (0x1B000,),
- (0x1B164, 0x1B167),
- (0x1F201, 0x1F202),
- (0x1F213,),
- ]
-
- 漢字 = Kanji
- カタカナ = Katakana
- ひらがな = Hiragana
-
- _ranges = (
- Kanji._ranges
- + Hiragana._ranges
- + Katakana._ranges
- )
-
- class Hangul(unicode_set):
- """Unicode set for Hangul (Korean) Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x1100, 0x11FF),
- (0x302E, 0x302F),
- (0x3131, 0x318E),
- (0x3200, 0x321C),
- (0x3260, 0x327B),
- (0x327E,),
- (0xA960, 0xA97C),
- (0xAC00, 0xD7A3),
- (0xD7B0, 0xD7C6),
- (0xD7CB, 0xD7FB),
- (0xFFA0, 0xFFBE),
- (0xFFC2, 0xFFC7),
- (0xFFCA, 0xFFCF),
- (0xFFD2, 0xFFD7),
- (0xFFDA, 0xFFDC),
- ]
-
- Korean = Hangul
-
- class CJK(Chinese, Japanese, Hangul):
- """Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range"""
-
- class Thai(unicode_set):
- """Unicode set for Thai Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0E01, 0x0E3A),
- (0x0E3F, 0x0E5B)
- ]
-
- class Arabic(unicode_set):
- """Unicode set for Arabic Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0600, 0x061B),
- (0x061E, 0x06FF),
- (0x0700, 0x077F),
- ]
-
- class Hebrew(unicode_set):
- """Unicode set for Hebrew Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0591, 0x05C7),
- (0x05D0, 0x05EA),
- (0x05EF, 0x05F4),
- (0xFB1D, 0xFB36),
- (0xFB38, 0xFB3C),
- (0xFB3E,),
- (0xFB40, 0xFB41),
- (0xFB43, 0xFB44),
- (0xFB46, 0xFB4F),
- ]
-
- class Devanagari(unicode_set):
- """Unicode set for Devanagari Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0900, 0x097F),
- (0xA8E0, 0xA8FF)
- ]
-
- BMP = BasicMultilingualPlane
-
- # add language identifiers using language Unicode
- العربية = Arabic
- 中文 = Chinese
- кириллица = Cyrillic
- Ελληνικά = Greek
- עִברִית = Hebrew
- 日本語 = Japanese
- 한국어 = Korean
- ไทย = Thai
- देवनागरी = Devanagari
-
- # fmt: on
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/unixccompiler.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/unixccompiler.py
deleted file mode 100644
index 6ca2332ae16a575a850fe97e5bc1e42d33b7b2f2..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/unixccompiler.py
+++ /dev/null
@@ -1,400 +0,0 @@
-"""distutils.unixccompiler
-
-Contains the UnixCCompiler class, a subclass of CCompiler that handles
-the "typical" Unix-style command-line C compiler:
- * macros defined with -Dname[=value]
- * macros undefined with -Uname
- * include search directories specified with -Idir
- * libraries specified with -llib
- * library search directories specified with -Ldir
- * compile handled by 'cc' (or similar) executable with -c option:
- compiles .c to .o
- * link static library handled by 'ar' command (possibly with 'ranlib')
- * link shared library handled by 'cc -shared'
-"""
-
-import os
-import sys
-import re
-import shlex
-import itertools
-
-from . import sysconfig
-from .dep_util import newer
-from .ccompiler import CCompiler, gen_preprocess_options, gen_lib_options
-from .errors import DistutilsExecError, CompileError, LibError, LinkError
-from ._log import log
-from ._macos_compat import compiler_fixup
-
-# XXX Things not currently handled:
-# * optimization/debug/warning flags; we just use whatever's in Python's
-# Makefile and live with it. Is this adequate? If not, we might
-# have to have a bunch of subclasses GNUCCompiler, SGICCompiler,
-# SunCCompiler, and I suspect down that road lies madness.
-# * even if we don't know a warning flag from an optimization flag,
-# we need some way for outsiders to feed preprocessor/compiler/linker
-# flags in to us -- eg. a sysadmin might want to mandate certain flags
-# via a site config file, or a user might want to set something for
-# compiling this module distribution only via the setup.py command
-# line, whatever. As long as these options come from something on the
-# current system, they can be as system-dependent as they like, and we
-# should just happily stuff them into the preprocessor/compiler/linker
-# options and carry on.
-
-
-def _split_env(cmd):
- """
- For macOS, split command into 'env' portion (if any)
- and the rest of the linker command.
-
- >>> _split_env(['a', 'b', 'c'])
- ([], ['a', 'b', 'c'])
- >>> _split_env(['/usr/bin/env', 'A=3', 'gcc'])
- (['/usr/bin/env', 'A=3'], ['gcc'])
- """
- pivot = 0
- if os.path.basename(cmd[0]) == "env":
- pivot = 1
- while '=' in cmd[pivot]:
- pivot += 1
- return cmd[:pivot], cmd[pivot:]
-
-
-def _split_aix(cmd):
- """
- AIX platforms prefix the compiler with the ld_so_aix
- script, so split that from the linker command.
-
- >>> _split_aix(['a', 'b', 'c'])
- ([], ['a', 'b', 'c'])
- >>> _split_aix(['/bin/foo/ld_so_aix', 'gcc'])
- (['/bin/foo/ld_so_aix'], ['gcc'])
- """
- pivot = os.path.basename(cmd[0]) == 'ld_so_aix'
- return cmd[:pivot], cmd[pivot:]
-
-
-def _linker_params(linker_cmd, compiler_cmd):
- """
- The linker command usually begins with the compiler
- command (possibly multiple elements), followed by zero or more
- params for shared library building.
-
- If the LDSHARED env variable overrides the linker command,
- however, the commands may not match.
-
- Return the best guess of the linker parameters by stripping
- the linker command. If the compiler command does not
- match the linker command, assume the linker command is
- just the first element.
-
- >>> _linker_params('gcc foo bar'.split(), ['gcc'])
- ['foo', 'bar']
- >>> _linker_params('gcc foo bar'.split(), ['other'])
- ['foo', 'bar']
- >>> _linker_params('ccache gcc foo bar'.split(), 'ccache gcc'.split())
- ['foo', 'bar']
- >>> _linker_params(['gcc'], ['gcc'])
- []
- """
- c_len = len(compiler_cmd)
- pivot = c_len if linker_cmd[:c_len] == compiler_cmd else 1
- return linker_cmd[pivot:]
-
-
-class UnixCCompiler(CCompiler):
- compiler_type = 'unix'
-
- # These are used by CCompiler in two places: the constructor sets
- # instance attributes 'preprocessor', 'compiler', etc. from them, and
- # 'set_executable()' allows any of these to be set. The defaults here
- # are pretty generic; they will probably have to be set by an outsider
- # (eg. using information discovered by the sysconfig about building
- # Python extensions).
- executables = {
- 'preprocessor': None,
- 'compiler': ["cc"],
- 'compiler_so': ["cc"],
- 'compiler_cxx': ["cc"],
- 'linker_so': ["cc", "-shared"],
- 'linker_exe': ["cc"],
- 'archiver': ["ar", "-cr"],
- 'ranlib': None,
- }
-
- if sys.platform[:6] == "darwin":
- executables['ranlib'] = ["ranlib"]
-
- # Needed for the filename generation methods provided by the base
- # class, CCompiler. NB. whoever instantiates/uses a particular
- # UnixCCompiler instance should set 'shared_lib_ext' -- we set a
- # reasonable common default here, but it's not necessarily used on all
- # Unices!
-
- src_extensions = [".c", ".C", ".cc", ".cxx", ".cpp", ".m"]
- obj_extension = ".o"
- static_lib_extension = ".a"
- shared_lib_extension = ".so"
- dylib_lib_extension = ".dylib"
- xcode_stub_lib_extension = ".tbd"
- static_lib_format = shared_lib_format = dylib_lib_format = "lib%s%s"
- xcode_stub_lib_format = dylib_lib_format
- if sys.platform == "cygwin":
- exe_extension = ".exe"
-
- def preprocess(
- self,
- source,
- output_file=None,
- macros=None,
- include_dirs=None,
- extra_preargs=None,
- extra_postargs=None,
- ):
- fixed_args = self._fix_compile_args(None, macros, include_dirs)
- ignore, macros, include_dirs = fixed_args
- pp_opts = gen_preprocess_options(macros, include_dirs)
- pp_args = self.preprocessor + pp_opts
- if output_file:
- pp_args.extend(['-o', output_file])
- if extra_preargs:
- pp_args[:0] = extra_preargs
- if extra_postargs:
- pp_args.extend(extra_postargs)
- pp_args.append(source)
-
- # reasons to preprocess:
- # - force is indicated
- # - output is directed to stdout
- # - source file is newer than the target
- preprocess = self.force or output_file is None or newer(source, output_file)
- if not preprocess:
- return
-
- if output_file:
- self.mkpath(os.path.dirname(output_file))
-
- try:
- self.spawn(pp_args)
- except DistutilsExecError as msg:
- raise CompileError(msg)
-
- def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
- compiler_so = compiler_fixup(self.compiler_so, cc_args + extra_postargs)
- try:
- self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs)
- except DistutilsExecError as msg:
- raise CompileError(msg)
-
- def create_static_lib(
- self, objects, output_libname, output_dir=None, debug=0, target_lang=None
- ):
- objects, output_dir = self._fix_object_args(objects, output_dir)
-
- output_filename = self.library_filename(output_libname, output_dir=output_dir)
-
- if self._need_link(objects, output_filename):
- self.mkpath(os.path.dirname(output_filename))
- self.spawn(self.archiver + [output_filename] + objects + self.objects)
-
- # Not many Unices required ranlib anymore -- SunOS 4.x is, I
- # think the only major Unix that does. Maybe we need some
- # platform intelligence here to skip ranlib if it's not
- # needed -- or maybe Python's configure script took care of
- # it for us, hence the check for leading colon.
- if self.ranlib:
- try:
- self.spawn(self.ranlib + [output_filename])
- except DistutilsExecError as msg:
- raise LibError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- def link(
- self,
- target_desc,
- objects,
- output_filename,
- output_dir=None,
- libraries=None,
- library_dirs=None,
- runtime_library_dirs=None,
- export_symbols=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- build_temp=None,
- target_lang=None,
- ):
- objects, output_dir = self._fix_object_args(objects, output_dir)
- fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs)
- libraries, library_dirs, runtime_library_dirs = fixed_args
-
- lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries)
- if not isinstance(output_dir, (str, type(None))):
- raise TypeError("'output_dir' must be a string or None")
- if output_dir is not None:
- output_filename = os.path.join(output_dir, output_filename)
-
- if self._need_link(objects, output_filename):
- ld_args = objects + self.objects + lib_opts + ['-o', output_filename]
- if debug:
- ld_args[:0] = ['-g']
- if extra_preargs:
- ld_args[:0] = extra_preargs
- if extra_postargs:
- ld_args.extend(extra_postargs)
- self.mkpath(os.path.dirname(output_filename))
- try:
- # Select a linker based on context: linker_exe when
- # building an executable or linker_so (with shared options)
- # when building a shared library.
- building_exe = target_desc == CCompiler.EXECUTABLE
- linker = (self.linker_exe if building_exe else self.linker_so)[:]
-
- if target_lang == "c++" and self.compiler_cxx:
- env, linker_ne = _split_env(linker)
- aix, linker_na = _split_aix(linker_ne)
- _, compiler_cxx_ne = _split_env(self.compiler_cxx)
- _, linker_exe_ne = _split_env(self.linker_exe)
-
- params = _linker_params(linker_na, linker_exe_ne)
- linker = env + aix + compiler_cxx_ne + params
-
- linker = compiler_fixup(linker, ld_args)
-
- self.spawn(linker + ld_args)
- except DistutilsExecError as msg:
- raise LinkError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- # -- Miscellaneous methods -----------------------------------------
- # These are all used by the 'gen_lib_options() function, in
- # ccompiler.py.
-
- def library_dir_option(self, dir):
- return "-L" + dir
-
- def _is_gcc(self):
- cc_var = sysconfig.get_config_var("CC")
- compiler = os.path.basename(shlex.split(cc_var)[0])
- return "gcc" in compiler or "g++" in compiler
-
- def runtime_library_dir_option(self, dir):
- # XXX Hackish, at the very least. See Python bug #445902:
- # http://sourceforge.net/tracker/index.php
- # ?func=detail&aid=445902&group_id=5470&atid=105470
- # Linkers on different platforms need different options to
- # specify that directories need to be added to the list of
- # directories searched for dependencies when a dynamic library
- # is sought. GCC on GNU systems (Linux, FreeBSD, ...) has to
- # be told to pass the -R option through to the linker, whereas
- # other compilers and gcc on other systems just know this.
- # Other compilers may need something slightly different. At
- # this time, there's no way to determine this information from
- # the configuration data stored in the Python installation, so
- # we use this hack.
- if sys.platform[:6] == "darwin":
- from distutils.util import get_macosx_target_ver, split_version
-
- macosx_target_ver = get_macosx_target_ver()
- if macosx_target_ver and split_version(macosx_target_ver) >= [10, 5]:
- return "-Wl,-rpath," + dir
- else: # no support for -rpath on earlier macOS versions
- return "-L" + dir
- elif sys.platform[:7] == "freebsd":
- return "-Wl,-rpath=" + dir
- elif sys.platform[:5] == "hp-ux":
- return [
- "-Wl,+s" if self._is_gcc() else "+s",
- "-L" + dir,
- ]
-
- # For all compilers, `-Wl` is the presumed way to
- # pass a compiler option to the linker and `-R` is
- # the way to pass an RPATH.
- if sysconfig.get_config_var("GNULD") == "yes":
- # GNU ld needs an extra option to get a RUNPATH
- # instead of just an RPATH.
- return "-Wl,--enable-new-dtags,-R" + dir
- else:
- return "-Wl,-R" + dir
-
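-    # Rough per-platform summary of runtime_library_dir_option above
-    # (illustrative, not exhaustive):
-    #
-    #   macOS targeting >= 10.5  ->  "-Wl,-rpath,<dir>"
-    #   older macOS targets      ->  "-L<dir>"
-    #   FreeBSD                  ->  "-Wl,-rpath=<dir>"
-    #   HP-UX                    ->  ["-Wl,+s" (gcc) or "+s", "-L<dir>"]
-    #   GNU ld                   ->  "-Wl,--enable-new-dtags,-R<dir>"
-    #   other linkers            ->  "-Wl,-R<dir>"
-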
- def library_option(self, lib):
- return "-l" + lib
-
- @staticmethod
- def _library_root(dir):
- """
-        macOS users can specify an alternate SDK using '-isysroot'.
- Calculate the SDK root if it is specified.
-
- Note that, as of Xcode 7, Apple SDKs may contain textual stub
- libraries with .tbd extensions rather than the normal .dylib
- shared libraries installed in /. The Apple compiler tool
- chain handles this transparently but it can cause problems
- for programs that are being built with an SDK and searching
- for specific libraries. Callers of find_library_file need to
- keep in mind that the base filename of the returned SDK library
- file might have a different extension from that of the library
- file installed on the running system, for example:
- /Applications/Xcode.app/Contents/Developer/Platforms/
- MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/
- usr/lib/libedit.tbd
- vs
- /usr/lib/libedit.dylib
- """
- cflags = sysconfig.get_config_var('CFLAGS')
- match = re.search(r'-isysroot\s*(\S+)', cflags)
-
- apply_root = (
- sys.platform == 'darwin'
- and match
- and (
- dir.startswith('/System/')
- or (dir.startswith('/usr/') and not dir.startswith('/usr/local/'))
- )
- )
-
- return os.path.join(match.group(1), dir[1:]) if apply_root else dir
-
- def find_library_file(self, dirs, lib, debug=0):
- r"""
- Second-guess the linker with not much hard
- data to go on: GCC seems to prefer the shared library, so
- assume that *all* Unix C compilers do,
- ignoring even GCC's "-static" option.
-
- >>> compiler = UnixCCompiler()
- >>> compiler._library_root = lambda dir: dir
- >>> monkeypatch = getfixture('monkeypatch')
- >>> monkeypatch.setattr(os.path, 'exists', lambda d: 'existing' in d)
- >>> dirs = ('/foo/bar/missing', '/foo/bar/existing')
- >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.dylib'
- >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.dylib'
- >>> monkeypatch.setattr(os.path, 'exists',
- ... lambda d: 'existing' in d and '.a' in d)
- >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.a'
- >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.a'
- """
- lib_names = (
- self.library_filename(lib, lib_type=type)
- for type in 'dylib xcode_stub shared static'.split()
- )
-
- roots = map(self._library_root, dirs)
-
- searched = (
- os.path.join(root, lib_name)
- for root, lib_name in itertools.product(roots, lib_names)
- )
-
- found = filter(os.path.exists, searched)
-
- # Return None if it could not be found in any dir.
- return next(found, None)
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py
deleted file mode 100644
index ea6d1b381dcf106339a03f08577df673ad439c46..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import json
-import numpy as np
-import os
-import torch
-from pycocotools.cocoeval import COCOeval, maskUtils
-
-from detectron2.structures import BoxMode, RotatedBoxes, pairwise_iou_rotated
-from detectron2.utils.file_io import PathManager
-
-from .coco_evaluation import COCOEvaluator
-
-
-class RotatedCOCOeval(COCOeval):
- @staticmethod
- def is_rotated(box_list):
- if type(box_list) == np.ndarray:
- return box_list.shape[1] == 5
- elif type(box_list) == list:
- if box_list == []: # cannot decide the box_dim
- return False
- return np.all(
- np.array(
- [
- (len(obj) == 5) and ((type(obj) == list) or (type(obj) == np.ndarray))
- for obj in box_list
- ]
- )
- )
- return False
-
- @staticmethod
- def boxlist_to_tensor(boxlist, output_box_dim):
- if type(boxlist) == np.ndarray:
- box_tensor = torch.from_numpy(boxlist)
- elif type(boxlist) == list:
- if boxlist == []:
- return torch.zeros((0, output_box_dim), dtype=torch.float32)
- else:
- box_tensor = torch.FloatTensor(boxlist)
- else:
- raise Exception("Unrecognized boxlist type")
-
- input_box_dim = box_tensor.shape[1]
- if input_box_dim != output_box_dim:
- if input_box_dim == 4 and output_box_dim == 5:
- box_tensor = BoxMode.convert(box_tensor, BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS)
- else:
- raise Exception(
- "Unable to convert from {}-dim box to {}-dim box".format(
- input_box_dim, output_box_dim
- )
- )
- return box_tensor
-
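-    # Sketch of the 4- to 5-dim conversion done in boxlist_to_tensor above:
-    # an axis-aligned XYWH box gains a zero angle and is re-expressed around
-    # its center (illustrative numbers):
-    #
-    #   [x0=10, y0=20, w=30, h=40]  ->  [cx=25, cy=40, w=30, h=40, angle=0]
-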
- def compute_iou_dt_gt(self, dt, gt, is_crowd):
- if self.is_rotated(dt) or self.is_rotated(gt):
- # TODO: take is_crowd into consideration
- assert all(c == 0 for c in is_crowd)
- dt = RotatedBoxes(self.boxlist_to_tensor(dt, output_box_dim=5))
- gt = RotatedBoxes(self.boxlist_to_tensor(gt, output_box_dim=5))
- return pairwise_iou_rotated(dt, gt)
- else:
- # This is the same as the classical COCO evaluation
- return maskUtils.iou(dt, gt, is_crowd)
-
- def computeIoU(self, imgId, catId):
- p = self.params
- if p.useCats:
- gt = self._gts[imgId, catId]
- dt = self._dts[imgId, catId]
- else:
- gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]]
- dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]]
- if len(gt) == 0 and len(dt) == 0:
- return []
- inds = np.argsort([-d["score"] for d in dt], kind="mergesort")
- dt = [dt[i] for i in inds]
- if len(dt) > p.maxDets[-1]:
- dt = dt[0 : p.maxDets[-1]]
-
- assert p.iouType == "bbox", "unsupported iouType for iou computation"
-
- g = [g["bbox"] for g in gt]
- d = [d["bbox"] for d in dt]
-
- # compute iou between each dt and gt region
- iscrowd = [int(o["iscrowd"]) for o in gt]
-
- # Note: this function is copied from cocoeval.py in cocoapi
- # and the major difference is here.
- ious = self.compute_iou_dt_gt(d, g, iscrowd)
- return ious
-
-
-class RotatedCOCOEvaluator(COCOEvaluator):
- """
- Evaluate object proposal/instance detection outputs using COCO-like metrics and APIs,
- with rotated boxes support.
- Note: this uses IOU only and does not consider angle differences.
- """
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
- It is a list of dict. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name", "image_id".
- outputs: the outputs of a COCO model. It is a list of dicts with key
- "instances" that contains :class:`Instances`.
- """
- for input, output in zip(inputs, outputs):
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
-
- prediction["instances"] = self.instances_to_json(instances, input["image_id"])
- if "proposals" in output:
- prediction["proposals"] = output["proposals"].to(self._cpu_device)
- self._predictions.append(prediction)
-
- def instances_to_json(self, instances, img_id):
- num_instance = len(instances)
- if num_instance == 0:
- return []
-
- boxes = instances.pred_boxes.tensor.numpy()
- if boxes.shape[1] == 4:
- boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- boxes = boxes.tolist()
- scores = instances.scores.tolist()
- classes = instances.pred_classes.tolist()
-
- results = []
- for k in range(num_instance):
- result = {
- "image_id": img_id,
- "category_id": classes[k],
- "bbox": boxes[k],
- "score": scores[k],
- }
-
- results.append(result)
- return results
-
- def _eval_predictions(self, predictions, img_ids=None): # img_ids: unused
- """
- Evaluate predictions on the given tasks.
- Fill self._results with the metrics of the tasks.
- """
- self._logger.info("Preparing results for COCO format ...")
- coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
-
- # unmap the category ids for COCO
- if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
- reverse_id_mapping = {
- v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
- }
- for result in coco_results:
- result["category_id"] = reverse_id_mapping[result["category_id"]]
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "coco_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(coco_results))
- f.flush()
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info("Evaluating predictions ...")
-
- assert self._tasks is None or set(self._tasks) == {
- "bbox"
- }, "[RotatedCOCOEvaluator] Only bbox evaluation is supported"
- coco_eval = (
- self._evaluate_predictions_on_coco(self._coco_api, coco_results)
- if len(coco_results) > 0
- else None # cocoapi does not handle empty results very well
- )
-
- task = "bbox"
- res = self._derive_coco_results(
- coco_eval, task, class_names=self._metadata.get("thing_classes")
- )
- self._results[task] = res
-
- def _evaluate_predictions_on_coco(self, coco_gt, coco_results):
- """
- Evaluate the coco results using COCOEval API.
- """
- assert len(coco_results) > 0
-
- coco_dt = coco_gt.loadRes(coco_results)
-
- # Only bbox is supported for now
- coco_eval = RotatedCOCOeval(coco_gt, coco_dt, iouType="bbox")
-
- coco_eval.evaluate()
- coco_eval.accumulate()
- coco_eval.summarize()
-
- return coco_eval
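The evaluator above promotes plain 4-dim XYWH_ABS boxes to the 5-dim XYWHA_ABS layout that RotatedBoxes expects before computing rotated IoU. Below is a minimal sketch of that conversion, assuming only PyTorch; the helper name xywh_to_xywha is illustrative and is not part of detectron2, it only mirrors what BoxMode.convert(..., BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS) produces.

import torch

def xywh_to_xywha(boxes_xywh: torch.Tensor) -> torch.Tensor:
    # (x_min, y_min, w, h) -> (center_x, center_y, w, h, angle=0), i.e. an
    # axis-aligned box expressed in the rotated-box format with zero rotation.
    x, y, w, h = boxes_xywh.unbind(dim=1)
    angle = torch.zeros_like(x)
    return torch.stack([x + w / 2.0, y + h / 2.0, w, h, angle], dim=1)

print(xywh_to_xywha(torch.tensor([[10.0, 20.0, 30.0, 40.0]])))
# tensor([[25., 40., 30., 40.,  0.]])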
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py
deleted file mode 100644
index 4b01e9007c2578a7b5ae555c926cc06c8a3010f9..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from typing import Any
-import pydoc
-from fvcore.common.registry import Registry # for backward compatibility.
-
-"""
-``Registry`` and `locate` provide ways to map a string (typically found
-in config files) to callable objects.
-"""
-
-__all__ = ["Registry", "locate"]
-
-
-def _convert_target_to_string(t: Any) -> str:
- """
- Inverse of ``locate()``.
-
- Args:
- t: any object with ``__module__`` and ``__qualname__``
- """
- module, qualname = t.__module__, t.__qualname__
-
- # Compress the path to this object, e.g. ``module.submodule._impl.class``
- # may become ``module.submodule.class``, if the latter also resolves to the same
- # object. This simplifies the string, and also is less affected by moving the
- # class implementation.
- module_parts = module.split(".")
- for k in range(1, len(module_parts)):
- prefix = ".".join(module_parts[:k])
- candidate = f"{prefix}.{qualname}"
- try:
- if locate(candidate) is t:
- return candidate
- except ImportError:
- pass
- return f"{module}.{qualname}"
-
-
-def locate(name: str) -> Any:
- """
- Locate and return an object ``x`` using an input string ``{x.__module__}.{x.__qualname__}``,
- such as "module.submodule.class_name".
-
- Raise Exception if it cannot be found.
- """
- obj = pydoc.locate(name)
-
- # Some cases (e.g. torch.optim.sgd.SGD) not handled correctly
- # by pydoc.locate. Try a private function from hydra.
- if obj is None:
- try:
- # from hydra.utils import get_method - will print many errors
- from hydra.utils import _locate
- except ImportError as e:
- raise ImportError(f"Cannot dynamically locate object {name}!") from e
- else:
- obj = _locate(name) # it raises if fails
-
- return obj
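The deleted registry.py treats locate and _convert_target_to_string as rough inverses: one resolves a dotted path to an object, the other rebuilds a dotted path from the object. Below is a minimal standard-library sketch of that round trip, using pydoc.locate directly; it does not exercise the module-prefix compression that _convert_target_to_string adds on top.

import pydoc
from collections import OrderedDict

cls = pydoc.locate("collections.OrderedDict")    # string -> object
assert cls is OrderedDict

dotted = f"{cls.__module__}.{cls.__qualname__}"  # object -> string, the starting point of _convert_target_to_string
print(dotted)                                    # collections.OrderedDict
assert pydoc.locate(dotted) is cls               # the two directions agree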
diff --git a/spaces/TheKitten/Fast-Images-Creature/README.md b/spaces/TheKitten/Fast-Images-Creature/README.md
deleted file mode 100644
index e86e1a8d30bd80a0bd7d87fa092c0f05457969a8..0000000000000000000000000000000000000000
--- a/spaces/TheKitten/Fast-Images-Creature/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Fast Images Creature (400 Models)
-emoji: ⭐️
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/UndueTarget/youtube-whisper/README.md b/spaces/UndueTarget/youtube-whisper/README.md
deleted file mode 100644
index c3180680339155aaf1d27f629129b68d12cac021..0000000000000000000000000000000000000000
--- a/spaces/UndueTarget/youtube-whisper/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Youtube Whisper
-emoji: ⚡
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: unknown
-duplicated_from: kazuk/youtube-whisper
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Voicelab/vlT5-rfc-generation/README.md b/spaces/Voicelab/vlT5-rfc-generation/README.md
deleted file mode 100644
index 2757b96b6c4272da09f74a736a852cb198faa0f6..0000000000000000000000000000000000000000
--- a/spaces/Voicelab/vlT5-rfc-generation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: VlT5 Reason for contact generation
-emoji: 📱
-colorFrom: blue
-colorTo: green
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/WUXIAOMO/stabilityai-stable-diffusion-2-1-test-space/README.md b/spaces/WUXIAOMO/stabilityai-stable-diffusion-2-1-test-space/README.md
deleted file mode 100644
index 13237c24f947a7918ea3c2fc039c9e7d4e573849..0000000000000000000000000000000000000000
--- a/spaces/WUXIAOMO/stabilityai-stable-diffusion-2-1-test-space/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2 1 Test Space
-emoji: 🚀
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Wauplin/bloomz.cpp-converter/app.py b/spaces/Wauplin/bloomz.cpp-converter/app.py
deleted file mode 100644
index 2ae53d9b897412bc89e6e76d3ef5b70a9ab35da1..0000000000000000000000000000000000000000
--- a/spaces/Wauplin/bloomz.cpp-converter/app.py
+++ /dev/null
@@ -1,293 +0,0 @@
-import csv
-import os
-import shutil
-from datetime import datetime
-from pathlib import Path
-from tempfile import TemporaryDirectory
-from typing import Optional
-
-import gradio as gr
-from huggingface_hub import HfApi, ModelCard, Repository, scan_cache_dir
-from huggingface_hub.utils import EntryNotFoundError, RepositoryNotFoundError
-
-from convert import convert
-
-# Repos with files totalling more than 24GB are not converted, to avoid memory issues.
-try:
- MAX_REPO_SIZE = int(os.environ.get("MAX_REPO_SIZE"))
-except (TypeError, ValueError):  # MAX_REPO_SIZE unset or not an integer
- MAX_REPO_SIZE = 24 * 1000 * 1000 * 1000
-
-# Used to log Space usage
-# Taken from https://huggingface.co/spaces/onnx/export
-DATASET_REPO_ID = "Wauplin/bloom.cpp-converters"
-DATASET_LOCAL_DIR = "usage_data"
-DATASET_LOCAL_FILE = Path(DATASET_LOCAL_DIR) / "data.csv"
-HF_TOKEN = os.environ.get("HF_TOKEN")
-
-repo: Optional[Repository] = None
-if HF_TOKEN:
- repo = Repository(
- local_dir=DATASET_LOCAL_DIR,
- clone_from=DATASET_REPO_ID,
- repo_type="dataset",
- token=HF_TOKEN,
- )
-
-
-class Generator:
- # Taken from https://stackoverflow.com/a/34073559
- # Allows logging progress in Gradio while keeping the generator's return value
- def __init__(self, gen):
- self.gen = gen
-
- def __iter__(self):
- self.value = yield from self.gen
-
-
-def run(
- token: str, model_id: str, precision: str, quantization: bool, destination: str
-):
- _log_usage(
- status="start",
- model_id=model_id,
- precision=precision,
- quantization=quantization,
- destination=destination,
- pr_url=None,
- )
- _all_logs = []
-
- def _log(msg: str):
- print(msg) # for container logs
- _all_logs.append(msg)
- return "\n\n".join(_all_logs) # for Gradio output
-
- if token == "" or model_id == "":
- yield _log("### Invalid input 🐞\n\nPlease fill a token and model_id.")
- _log_usage(
- status="invalid input",
- model_id=model_id,
- precision=precision,
- quantization=quantization,
- destination=destination,
- pr_url=None,
- )
- return
- if destination == "":
- _log("Destination not provided. Will default to the initial repo.")
- destination = model_id
-
- api = HfApi(token=token)
- try:
- # TODO: make a PR to bloomz.cpp to be able to pass a token
- model_info = api.model_info(repo_id=model_id, files_metadata=True, token=False)
- _log(f"Model {model_id} exists.")
- except RepositoryNotFoundError:
- yield _log(
- f"\n### Error 😢😢😢\n\nRepository {model_id} not found. Only public models are convertible at the moment."
- )
- _log_usage(
- status="model not found",
- model_id=model_id,
- precision=precision,
- quantization=quantization,
- destination=destination,
- pr_url=None,
- )
- return
-
- try:
- total_size = sum(
- file.size
- for file in model_info.siblings
- if file.rfilename.endswith(".pt") or file.rfilename.endswith(".bin")
- )
- if total_size > MAX_REPO_SIZE:
- yield _log(
- f"### Unprocessable 😢😢😢\n\nModel {model_id} is too big and cannot be processed in this Space. This Space needs to be able to load the model in memory before converting it. To avoid a memory issue, we do not process models bigger than {MAX_REPO_SIZE}b.\n\nYou have 2 options:\n- [Duplicate this Space](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter?duplicate=true) and assign a bigger machine. You will need to set 'MAX_REPO_SIZE' as a secret to overwrite the default value. Once you are done, remove the upgraded hardware and/or delete the Space.\n- Manually convert the weights by following [this guide](https://github.com/NouamaneTazi/bloomz.cpp#usage)."
- )
- _log_usage(
- status="unprocessable",
- model_id=model_id,
- precision=precision,
- quantization=quantization,
- destination=destination,
- pr_url=None,
- )
- return
-
- with TemporaryDirectory() as cache_folder:
- convert_progress = Generator(
- convert(
- cache_folder=Path(cache_folder),
- model_id=model_id,
- precision=precision,
- quantization=quantization,
- )
- )
- for msg in convert_progress:
- yield _log(msg)
- model_path = convert_progress.value
- yield _log(f"Model converted: {model_path}")
-
- destination_url = api.create_repo(repo_id=destination, exist_ok=True)
- destination = destination_url.repo_id
- yield _log(f"Destination model: {destination_url}")
- pr = api.create_pull_request(
- repo_id=destination_url.repo_id,
- title=f"Add {model_path.name} from bloomz.cpp converter.",
- description="This PR has been created using the [bloomz.cpp converter Space](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter). It adds weights compatible with the [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp#usage) project.",
- )
- pr_url = f"https://huggingface.co/{destination}/discussions/{pr.num}"
- yield _log(f"Created PR: {pr_url} (empty)")
-
- yield _log(f"Uploading model to PR")
- api.upload_file(
- repo_id=destination,
- path_or_fileobj=model_path,
- path_in_repo=model_path.name,
- revision=pr.git_reference,
- )
- yield _log(f"Model uploaded to PR")
-
- yield _log(f"Modifying model card in PR (add `bloom` and `ggml` tags)")
- try:
- card = ModelCard.load(repo_id_or_path=destination)
- except EntryNotFoundError: # new repo => no model card yet
- card = ModelCard(
- "This model contains a model based on the Bloom architecture with weights compatible with [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp). This model card has been automatically generated [by the bloomz.cpp converter Space](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter) and must be completed."
- )
- if card.data.tags is None:
- card.data.tags = []
- tags = card.data.tags
- if "ggml" not in tags:
- tags.append("ggml")
- if "bloom" not in tags:
- tags.append("bloom")
- card.push_to_hub(
- repo_id=destination, token=token, revision=pr.git_reference
- )
- yield _log(f"Model card modified in PR.")
-
- api.change_discussion_status(
- repo_id=destination,
- discussion_num=pr.num,
- new_status="open",
- comment="PR is now complete and ready to be reviewed.",
- )
- yield _log(f"[PR]({pr_url}) is complete and ready to be reviewed.")
-
- yield _log(
- f"### Success 🔥\n\nYay! This model was successfully converted! Make sure to let the repo owner know about it and review your PR. You might need to complete the PR manually, especially to add information in the model card."
- )
- _log_usage(
- status="success",
- model_id=model_id,
- precision=precision,
- quantization=quantization,
- destination=destination,
- pr_url=pr_url,
- )
- shutil.rmtree(model_path.parent)
- _delete_cache()
- return
- except Exception as e:
- _log_usage(
- status="error",
- model_id=model_id,
- precision=precision,
- quantization=quantization,
- destination=destination,
- pr_url=None,
- )
- yield _log(f"### Error 😢😢😢\n\n{e}")
- _delete_cache()
- return
-
-
-def _delete_cache():
- """Delete cache dir between each run to avoid filling up the Space disk."""
- scan = scan_cache_dir()
- scan.delete_revisions(
- *[rev.commit_hash for repo in scan.repos for rev in repo.revisions]
- )
-
-
-def _log_usage(**kwargs):
- # save in a private dataset
- # Taken from https://huggingface.co/spaces/onnx/export
- if repo is not None:
- repo.git_pull(rebase=True)
- with DATASET_LOCAL_FILE.open("a") as csv_file:
- writer = csv.DictWriter(csv_file, fieldnames=["time"] + list(kwargs.keys()))
- writer.writerow({"time": str(datetime.now()), **kwargs})
- commit_url = repo.push_to_hub()
- print("[dataset]", commit_url)
-
-
-TITLE = """
-
- Make any BLOOM-like model compatible with bloomz.cpp
-
-"""
-
-DESCRIPTION = """
-This Space allows you to automatically export any Bloom-like model hosted on the 🤗 Hub to be compatible with [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp). Converted weights are either exported to a repo you own (or that we create for you) or to the original repo by opening a PR on the target model. Once exported, the model can run with bloomz.cpp. Check out [this guide](https://github.com/NouamaneTazi/bloomz.cpp#usage) to see how!
-
-Don't know which Bloom models are available on the 🤗 Hub? Find a complete list at https://huggingface.co/models?other=bloom.
-
-To use this Space, please follow these steps:
-
-1. Paste your HF token. You can create one in your [settings page](https://huggingface.co/settings/tokens). The token must have write access in order to create a PR and upload the weights.
-1. Input a model id from the Hub. This model must be public.
-1. Choose which precision you want to use (default to FP16).
-1. (optional) Opt-in for 4-bit quantization.
-1. (optional) By default a PR to the initial repo will be created. You can choose a different destination repo if you want. The destination repo will be created if it doesn't exist.
-1. Click "Convert!"
-
-That's it! You'll get feedback if it works or not, and if it worked, you'll get the URL of the opened PR 🔥
-If you encounter any issues please let us know [by opening a Discussion](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter/discussions/new).
-"""
-
-
-with gr.Blocks() as demo:
- gr.HTML(TITLE)
-
- with gr.Row():
- with gr.Column(scale=50):
- gr.Markdown(DESCRIPTION)
-
- with gr.Column(scale=50):
- input_token = gr.Text(
- max_lines=1, label="Hugging Face token", type="password"
- )
- input_model = gr.Text(
- max_lines=1, label="Model id (e.g.: bigscience/bloomz-7b1)"
- )
- input_precision = gr.Radio(
- choices=["FP16", "FP32"], label="Precision", value="FP16"
- )
- input_quantization = gr.Checkbox(value=False, label="4-bits quantization")
- input_destination = gr.Text(
- max_lines=1,
- label="Destination (e.g.: bloomz-7b1.cpp) - optional",
- )
- btn = gr.Button("Convert!")
-
- output = gr.Markdown(label="Output")
-
- btn.click(
- fn=run,
- inputs=[
- input_token,
- input_model,
- input_precision,
- input_quantization,
- input_destination,
- ],
- outputs=output,
- )
-
-
-demo.queue().launch()
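The app above wraps the convert() generator in the small Generator helper so it can stream each yielded log message to Gradio and still read the generator's return value afterwards. Below is a minimal self-contained sketch of that pattern; convert_stub and its return value are placeholders, not the real converter.

class Generator:
    """Wrap a generator so its yields can be streamed while its return value is kept."""

    def __init__(self, gen):
        self.gen = gen

    def __iter__(self):
        # "yield from" forwards every yielded item and captures the return value.
        self.value = yield from self.gen


def convert_stub(steps):
    for i in range(steps):
        yield f"step {i + 1}/{steps}"
    return "model.ggml"  # placeholder for the converted file path


progress = Generator(convert_stub(3))
for message in progress:
    print(message)     # in the real app, each message is streamed to the Gradio output
print(progress.value)  # model.ggml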
diff --git a/spaces/Xenos14/XenoEngine-SD-webui/header_patch.py b/spaces/Xenos14/XenoEngine-SD-webui/header_patch.py
deleted file mode 100644
index 464447c8cfb431f96098a1cbd95835596a5457bb..0000000000000000000000000000000000000000
--- a/spaces/Xenos14/XenoEngine-SD-webui/header_patch.py
+++ /dev/null
@@ -1,37 +0,0 @@
- with gr.Box(visible=os.environ.get("SPACE_ID")):
- if os.environ.get("SPACE_ID") and str(os.environ.get("IS_SHARED_UI", "") or "") not in ("", "0"):
- import torch
- if not torch.cuda.is_available():
- gr.HTML(f"""
-
-
-▲ Automatic1111's Stable Diffusion WebUI + Mikubill's ControlNet WebUI extension | Running on Hugging Face | Loaded checkpoint: AtoZovyaRPGArtistTools15_sd15V1
-▲ Load additional checkpoints, VAE, LoRA models, etc. Read more on the README at the GitHub link above.
-
-▲ This Space has GPU enabled - remember to remove the GPU from the space in the Settings tab when you're done.
-
- """)
diff --git a/spaces/Xenova/semantic-image-search-client/_next/static/oNz1VZyHBbKzD04b9fhBW/_ssgManifest.js b/spaces/Xenova/semantic-image-search-client/_next/static/oNz1VZyHBbKzD04b9fhBW/_ssgManifest.js
deleted file mode 100644
index 5b3ff592fd46c8736892a12864fdf3fed8775202..0000000000000000000000000000000000000000
--- a/spaces/Xenova/semantic-image-search-client/_next/static/oNz1VZyHBbKzD04b9fhBW/_ssgManifest.js
+++ /dev/null
@@ -1 +0,0 @@
-self.__SSG_MANIFEST=new Set([]);self.__SSG_MANIFEST_CB&&self.__SSG_MANIFEST_CB()
\ No newline at end of file
diff --git a/spaces/XzJosh/Ava-Bert-VITS2/text/symbols.py b/spaces/XzJosh/Ava-Bert-VITS2/text/symbols.py
deleted file mode 100644
index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Ava-Bert-VITS2/text/symbols.py
+++ /dev/null
@@ -1,51 +0,0 @@
-punctuation = ['!', '?', '…', ",", ".", "'", '-']
-pu_symbols = punctuation + ["SP", "UNK"]
-pad = '_'
-
-# chinese
-zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h',
- 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o',
- 'ong',
- 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn',
- 'w', 'x', 'y', 'z', 'zh',
- "AA", "EE", "OO"]
-num_zh_tones = 6
-
-# japanese
-ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky',
- 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z']
-num_ja_tones = 1
-
-# English
-en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy',
- 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's',
- 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh']
-num_en_tones = 4
-
-# combine all symbols
-normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols))
-symbols = [pad] + normal_symbols + pu_symbols
-sil_phonemes_ids = [symbols.index(i) for i in pu_symbols]
-
-# combine all tones
-num_tones = num_zh_tones + num_ja_tones + num_en_tones
-
-# language maps
-language_id_map = {
- 'ZH': 0,
- "JA": 1,
- "EN": 2
-}
-num_languages = len(language_id_map.keys())
-
-language_tone_start_map = {
- 'ZH': 0,
- "JA": num_zh_tones,
- "EN": num_zh_tones + num_ja_tones
-}
-
-if __name__ == '__main__':
- a = set(zh_symbols)
- b = set(en_symbols)
- print(sorted(a&b))
-
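symbols.py builds one flat symbol table plus per-language tone offsets, and downstream code maps phones and tones to integers against them. Below is a minimal sketch of that mapping with a toy symbol list; it is not the project's cleaned_text_to_sequence, only an illustration of the id scheme.

pad = "_"
pu_symbols = ["!", "?", "…", ",", ".", "'", "-", "SP", "UNK"]
toy_symbols = [pad, "a", "b", "ch", "sh"] + pu_symbols   # toy stand-in for `symbols`
language_tone_start_map = {"ZH": 0, "JA": 6, "EN": 7}    # offsets matching num_zh/ja_tones above

def to_ids(phones, tones, language):
    phone_ids = [toy_symbols.index(p) for p in phones]   # symbol -> integer id
    tone_ids = [t + language_tone_start_map[language] for t in tones]
    return phone_ids, tone_ids

print(to_ids(["sh", "a", "."], [2, 0, 0], "ZH"))  # ([4, 1, 9], [2, 0, 0])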
diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/app.py b/spaces/XzJosh/yoyo-Bert-VITS2/app.py
deleted file mode 100644
index e55eddc0c6b411f3a0f0b6bc1da9269be4f5b087..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/yoyo-Bert-VITS2/app.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import sys, os
-
-if sys.platform == "darwin":
- os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
-
-import logging
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-logging.getLogger("markdown_it").setLevel(logging.WARNING)
-logging.getLogger("urllib3").setLevel(logging.WARNING)
-logging.getLogger("matplotlib").setLevel(logging.WARNING)
-
-logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s")
-
-logger = logging.getLogger(__name__)
-
-import torch
-import argparse
-import commons
-import utils
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import cleaned_text_to_sequence, get_bert
-from text.cleaner import clean_text
-import gradio as gr
-import webbrowser
-
-
-net_g = None
-
-
-def get_text(text, language_str, hps):
- norm_text, phone, tone, word2ph = clean_text(text, language_str)
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
- if hps.data.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- bert = get_bert(norm_text, word2ph, language_str)
- del word2ph
-
- assert bert.shape[-1] == len(phone)
-
- phone = torch.LongTensor(phone)
- tone = torch.LongTensor(tone)
- language = torch.LongTensor(language)
-
- return bert, phone, tone, language
-
-def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid):
- global net_g
- bert, phones, tones, lang_ids = get_text(text, "ZH", hps)
- with torch.no_grad():
- x_tst=phones.to(device).unsqueeze(0)
- tones=tones.to(device).unsqueeze(0)
- lang_ids=lang_ids.to(device).unsqueeze(0)
- bert = bert.to(device).unsqueeze(0)
- x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
- del phones
- speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
- audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio
- , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy()
- del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers
- return audio
-
-def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale):
- with torch.no_grad():
- audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker)
- return "Success", (hps.data.sampling_rate, audio)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_dir", default="./logs/Lumi/G_2500.pth", help="path of your model")
- parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file")
- parser.add_argument("--share", default=False, help="make link public")
- parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log")
-
- args = parser.parse_args()
- if args.debug:
- logger.info("Enable DEBUG-LEVEL log")
- logging.basicConfig(level=logging.DEBUG)
- hps = utils.get_hparams_from_file(args.config_dir)
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- '''
- device = (
- "cuda:0"
- if torch.cuda.is_available()
- else (
- "mps"
- if sys.platform == "darwin" and torch.backends.mps.is_available()
- else "cpu"
- )
- )
- '''
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).to(device)
- _ = net_g.eval()
-
- _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True)
-
- speaker_ids = hps.data.spk2id
- speakers = list(speaker_ids.keys())
- with gr.Blocks() as app:
- with gr.Row():
- with gr.Column():
- gr.Markdown(value="""
- 【AI鹿鸣② (AI Lumi 2)】Online speech synthesis (Bert-VITS2)\n
- Model author: Xz乔希 https://space.bilibili.com/5859321\n
- Voice owner: yoyo鹿鸣_Lumi https://space.bilibili.com/488836173\n
- 【AI鹿鸣① (AI Lumi 1)】https://huggingface.co/spaces/XzJosh/Lumi-Bert-VITS2\n
- Bert-VITS2 project: https://github.com/Stardust-minus/Bert-VITS2\n
- Please strictly comply with applicable laws and regulations when using this model!\n
- When publishing derivative works, please credit this project's author and link, and state that the work was generated with Bert-VITS2 AI!\n
- """)
- text = gr.TextArea(label="Text", placeholder="Input Text Here",
- value="大家好呀,嘿嘿,我是鹿鸣")
- speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker')
- sdp_ratio = gr.Slider(minimum=0.1, maximum=1, value=0.2, step=0.01, label='SDP/DP混合比')
- noise_scale = gr.Slider(minimum=0.1, maximum=1, value=0.5, step=0.01, label='感情调节')
- noise_scale_w = gr.Slider(minimum=0.1, maximum=1, value=0.9, step=0.01, label='音素长度')
- length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='生成长度')
- btn = gr.Button("点击生成", variant="primary")
- with gr.Column():
- text_output = gr.Textbox(label="Message")
- audio_output = gr.Audio(label="Output Audio")
- gr.Markdown(value="""
- 【AI塔菲】https://huggingface.co/spaces/XzJosh/Taffy-Bert-VITS2\n
- 【AI东雪莲】https://huggingface.co/spaces/XzJosh/Azuma-Bert-VITS2\n
- 【AI奶绿】https://huggingface.co/spaces/XzJosh/LAPLACE-Bert-VITS2\n
- 【AI七海】https://huggingface.co/spaces/XzJosh/Nana7mi-Bert-VITS2\n
- 【AI星瞳】https://huggingface.co/spaces/XzJosh/XingTong-Bert-VITS2\n
- 【AI阿梓】https://huggingface.co/spaces/XzJosh/Azusa-Bert-VITS2\n
- 【AI嘉然】https://huggingface.co/spaces/XzJosh/Diana-Bert-VITS2\n
- 【AI向晚】https://huggingface.co/spaces/XzJosh/Ava-Bert-VITS2\n
- 【AI乃琳】https://huggingface.co/spaces/XzJosh/Eileen-Bert-VITS2\n
- 【AI贝拉】https://huggingface.co/spaces/XzJosh/Bella-Bert-VITS2\n
- 【AI珈乐】https://huggingface.co/spaces/XzJosh/Carol-Bert-VITS2\n
- 【AI恬豆】https://huggingface.co/spaces/XzJosh/Bekki-Bert-VITS2\n
- 【AI尼奈】https://huggingface.co/spaces/XzJosh/nine1-Bert-VITS2\n
- 【AI扇宝】https://huggingface.co/spaces/XzJosh/ShanBao-Bert-VITS2\n
- 【AI剑魔】https://huggingface.co/spaces/XzJosh/Aatrox-Bert-VITS2\n
- 【AI电棍】https://huggingface.co/spaces/XzJosh/otto-Bert-VITS2\n
- """)
- btn.click(tts_fn,
- inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale],
- outputs=[text_output, audio_output])
-
-# webbrowser.open("http://127.0.0.1:6006")
-# app.launch(server_port=6006, show_error=True)
-
- app.launch(show_error=True)
diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/text/cleaner.py b/spaces/XzJosh/yoyo-Bert-VITS2/text/cleaner.py
deleted file mode 100644
index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/yoyo-Bert-VITS2/text/cleaner.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from text import chinese, cleaned_text_to_sequence
-
-
-language_module_map = {
- 'ZH': chinese
-}
-
-
-def clean_text(text, language):
- language_module = language_module_map[language]
- norm_text = language_module.text_normalize(text)
- phones, tones, word2ph = language_module.g2p(norm_text)
- return norm_text, phones, tones, word2ph
-
-def clean_text_bert(text, language):
- language_module = language_module_map[language]
- norm_text = language_module.text_normalize(text)
- phones, tones, word2ph = language_module.g2p(norm_text)
- bert = language_module.get_bert_feature(norm_text, word2ph)
- return phones, tones, bert
-
-def text_to_sequence(text, language):
- norm_text, phones, tones, word2ph = clean_text(text, language)
- return cleaned_text_to_sequence(phones, tones, language)
-
-if __name__ == '__main__':
- pass
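cleaner.py dispatches normalization and g2p to a per-language module looked up in language_module_map. Below is a minimal sketch of that dispatch with a stub standing in for text.chinese; the stub's outputs are placeholders, not real phonemes.

from types import SimpleNamespace

# stub standing in for text.chinese: normalize, then grapheme-to-phoneme
chinese_stub = SimpleNamespace(
    text_normalize=lambda text: text.strip(),
    g2p=lambda text: (list(text), [0] * len(text), [1] * len(text)),
)
language_module_map = {"ZH": chinese_stub}

def clean_text(text, language):
    language_module = language_module_map[language]
    norm_text = language_module.text_normalize(text)
    phones, tones, word2ph = language_module.g2p(norm_text)
    return norm_text, phones, tones, word2ph

print(clean_text(" 你好 ", "ZH"))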
diff --git a/spaces/Yabo/ControlVideo/models/util.py b/spaces/Yabo/ControlVideo/models/util.py
deleted file mode 100644
index faba28d79fc80c2786872e2d9fa7edb267b18949..0000000000000000000000000000000000000000
--- a/spaces/Yabo/ControlVideo/models/util.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import os
-import imageio
-import numpy as np
-from typing import Union
-import decord
-decord.bridge.set_bridge('torch')
-import torch
-import torchvision
-import PIL
-from typing import List
-from tqdm import tqdm
-from einops import rearrange
-
-from controlnet_aux import CannyDetector
-
-def save_videos_grid(videos: torch.Tensor, path: str, rescale=False, n_rows=4, fps=8):
- videos = rearrange(videos, "b c t h w -> t b c h w")
- outputs = []
- for x in videos:
- x = torchvision.utils.make_grid(x, nrow=n_rows)
- x = x.transpose(0, 1).transpose(1, 2).squeeze(-1)
- if rescale:
- x = (x + 1.0) / 2.0 # -1,1 -> 0,1
- x = (x * 255).numpy().astype(np.uint8)
- outputs.append(x)
-
- os.makedirs(os.path.dirname(path), exist_ok=True)
- imageio.mimsave(path, outputs, fps=fps)
-
-def save_videos_grid_pil(videos: List[PIL.Image.Image], path: str, rescale=False, n_rows=4, fps=8):
- videos = rearrange(videos, "b c t h w -> t b c h w")
- outputs = []
- for x in videos:
- x = torchvision.utils.make_grid(x, nrow=n_rows)
- x = x.transpose(0, 1).transpose(1, 2).squeeze(-1)
- if rescale:
- x = (x + 1.0) / 2.0 # -1,1 -> 0,1
- x = (x * 255).numpy().astype(np.uint8)
- outputs.append(x)
-
- os.makedirs(os.path.dirname(path), exist_ok=True)
- imageio.mimsave(path, outputs, fps=fps)
-
-def read_video(video_path, video_length, width=512, height=512, frame_rate=None):
- vr = decord.VideoReader(video_path, width=width, height=height)
- if frame_rate is None:
- frame_rate = max(1, len(vr) // video_length)
- sample_index = list(range(0, len(vr), frame_rate))[:video_length]
- video = vr.get_batch(sample_index)
- video = rearrange(video, "f h w c -> f c h w")
- video = (video / 127.5 - 1.0)
- return video
-
-
-def get_annotation(video, annotator):
- t2i_transform = torchvision.transforms.ToPILImage()
- annotation = []
- for frame in video:
- pil_frame = t2i_transform(frame)
- if isinstance(annotator, CannyDetector):
- annotation.append(annotator(pil_frame, low_threshold=100, high_threshold=200))
- else:
- annotation.append(annotator(pil_frame))
- return annotation
-
-# DDIM Inversion
-@torch.no_grad()
-def init_prompt(prompt, pipeline):
- uncond_input = pipeline.tokenizer(
- [""], padding="max_length", max_length=pipeline.tokenizer.model_max_length,
- return_tensors="pt"
- )
- uncond_embeddings = pipeline.text_encoder(uncond_input.input_ids.to(pipeline.device))[0]
- text_input = pipeline.tokenizer(
- [prompt],
- padding="max_length",
- max_length=pipeline.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_embeddings = pipeline.text_encoder(text_input.input_ids.to(pipeline.device))[0]
- context = torch.cat([uncond_embeddings, text_embeddings])
-
- return context
-
-
-def next_step(model_output: Union[torch.FloatTensor, np.ndarray], timestep: int,
- sample: Union[torch.FloatTensor, np.ndarray], ddim_scheduler):
- timestep, next_timestep = min(
- timestep - ddim_scheduler.config.num_train_timesteps // ddim_scheduler.num_inference_steps, 999), timestep
- alpha_prod_t = ddim_scheduler.alphas_cumprod[timestep] if timestep >= 0 else ddim_scheduler.final_alpha_cumprod
- alpha_prod_t_next = ddim_scheduler.alphas_cumprod[next_timestep]
- beta_prod_t = 1 - alpha_prod_t
- next_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5
- next_sample_direction = (1 - alpha_prod_t_next) ** 0.5 * model_output
- next_sample = alpha_prod_t_next ** 0.5 * next_original_sample + next_sample_direction
- return next_sample
-
-
-def get_noise_pred_single(latents, t, context, unet):
- noise_pred = unet(latents, t, encoder_hidden_states=context)["sample"]
- return noise_pred
-
-
-@torch.no_grad()
-def ddim_loop(pipeline, ddim_scheduler, latent, num_inv_steps, prompt):
- context = init_prompt(prompt, pipeline)
- uncond_embeddings, cond_embeddings = context.chunk(2)
- all_latent = [latent]
- latent = latent.clone().detach()
- for i in tqdm(range(num_inv_steps)):
- t = ddim_scheduler.timesteps[len(ddim_scheduler.timesteps) - i - 1]
- noise_pred = get_noise_pred_single(latent, t, cond_embeddings, pipeline.unet)
- latent = next_step(noise_pred, t, latent, ddim_scheduler)
- all_latent.append(latent)
- return all_latent
-
-
-@torch.no_grad()
-def ddim_inversion(pipeline, ddim_scheduler, video_latent, num_inv_steps, prompt=""):
- ddim_latents = ddim_loop(pipeline, ddim_scheduler, video_latent, num_inv_steps, prompt)
- return ddim_latents
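A minimal usage sketch for the save_videos_grid helper above, assuming the file is importable as models.util and that its dependencies (decord, controlnet_aux, torchvision, einops, and imageio with an ffmpeg backend) are installed; the output path is illustrative.

import torch
from models.util import save_videos_grid

# one fake clip: (batch, channels, frames, height, width), values in [-1, 1]
videos = torch.rand(1, 3, 8, 64, 64) * 2.0 - 1.0
save_videos_grid(videos, "samples/demo.mp4", rescale=True, n_rows=1, fps=8)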
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/modeling_utils.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/modeling_utils.py
deleted file mode 100644
index e270f75e056e9130ae9a7df590a1e7547efceee8..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/modeling_utils.py
+++ /dev/null
@@ -1,764 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-from functools import partial
-from typing import Callable, List, Optional, Tuple, Union
-
-import torch
-from torch import Tensor, device
-
-from huggingface_hub import hf_hub_download
-from huggingface_hub.utils import EntryNotFoundError, RepositoryNotFoundError, RevisionNotFoundError
-from requests import HTTPError
-
-from . import __version__
-from .utils import (
- CONFIG_NAME,
- DIFFUSERS_CACHE,
- HUGGINGFACE_CO_RESOLVE_ENDPOINT,
- SAFETENSORS_WEIGHTS_NAME,
- WEIGHTS_NAME,
- is_accelerate_available,
- is_safetensors_available,
- is_torch_version,
- logging,
-)
-
-
-logger = logging.get_logger(__name__)
-
-
-if is_torch_version(">=", "1.9.0"):
- _LOW_CPU_MEM_USAGE_DEFAULT = True
-else:
- _LOW_CPU_MEM_USAGE_DEFAULT = False
-
-
-if is_accelerate_available():
- import accelerate
- from accelerate.utils import set_module_tensor_to_device
- from accelerate.utils.versions import is_torch_version
-
-if is_safetensors_available():
- import safetensors
-
-
-def get_parameter_device(parameter: torch.nn.Module):
- try:
- return next(parameter.parameters()).device
- except StopIteration:
- # For torch.nn.DataParallel compatibility in PyTorch 1.5
-
- def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]:
- tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
- return tuples
-
- gen = parameter._named_members(get_members_fn=find_tensor_attributes)
- first_tuple = next(gen)
- return first_tuple[1].device
-
-
-def get_parameter_dtype(parameter: torch.nn.Module):
- try:
- return next(parameter.parameters()).dtype
- except StopIteration:
- # For torch.nn.DataParallel compatibility in PyTorch 1.5
-
- def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]:
- tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
- return tuples
-
- gen = parameter._named_members(get_members_fn=find_tensor_attributes)
- first_tuple = next(gen)
- return first_tuple[1].dtype
-
-
-def load_state_dict(checkpoint_file: Union[str, os.PathLike]):
- """
- Reads a checkpoint file, returning properly formatted errors if they arise.
- """
- try:
- if os.path.basename(checkpoint_file) == WEIGHTS_NAME:
- return torch.load(checkpoint_file, map_location="cpu")
- else:
- return safetensors.torch.load_file(checkpoint_file, device="cpu")
- except Exception as e:
- try:
- with open(checkpoint_file) as f:
- if f.read().startswith("version"):
- raise OSError(
- "You seem to have cloned a repository without having git-lfs installed. Please install "
- "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder "
- "you cloned."
- )
- else:
- raise ValueError(
- f"Unable to locate the file {checkpoint_file} which is necessary to load this pretrained "
- "model. Make sure you have saved the model properly."
- ) from e
- except (UnicodeDecodeError, ValueError):
- raise OSError(
- f"Unable to load weights from checkpoint file for '{checkpoint_file}' "
- f"at '{checkpoint_file}'. "
- "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True."
- )
-
-
-def _load_state_dict_into_model(model_to_load, state_dict):
- # Convert old format to new format if needed from a PyTorch state_dict
- # copy state_dict so _load_from_state_dict can modify it
- state_dict = state_dict.copy()
- error_msgs = []
-
- # PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants
- # so we need to apply the function recursively.
- def load(module: torch.nn.Module, prefix=""):
- args = (state_dict, prefix, {}, True, [], [], error_msgs)
- module._load_from_state_dict(*args)
-
- for name, child in module._modules.items():
- if child is not None:
- load(child, prefix + name + ".")
-
- load(model_to_load)
-
- return error_msgs
-
-
-class ModelMixin(torch.nn.Module):
- r"""
- Base class for all models.
-
- [`ModelMixin`] takes care of storing the configuration of the models and handles methods for loading, downloading
- and saving models.
-
- - **config_name** ([`str`]) -- A filename under which the model should be stored when calling
- [`~modeling_utils.ModelMixin.save_pretrained`].
- """
- config_name = CONFIG_NAME
- _automatically_saved_args = ["_diffusers_version", "_class_name", "_name_or_path"]
- _supports_gradient_checkpointing = False
-
- def __init__(self):
- super().__init__()
-
- @property
- def is_gradient_checkpointing(self) -> bool:
- """
- Whether gradient checkpointing is activated for this model or not.
-
- Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint
- activations".
- """
- return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules())
-
- def enable_gradient_checkpointing(self):
- """
- Activates gradient checkpointing for the current model.
-
- Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint
- activations".
- """
- if not self._supports_gradient_checkpointing:
- raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.")
- self.apply(partial(self._set_gradient_checkpointing, value=True))
-
- def disable_gradient_checkpointing(self):
- """
- Deactivates gradient checkpointing for the current model.
-
- Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint
- activations".
- """
- if self._supports_gradient_checkpointing:
- self.apply(partial(self._set_gradient_checkpointing, value=False))
-
- def save_pretrained(
- self,
- save_directory: Union[str, os.PathLike],
- is_main_process: bool = True,
- save_function: Callable = None,
- safe_serialization: bool = False,
- ):
- """
- Save a model and its configuration file to a directory, so that it can be re-loaded using the
- [`~modeling_utils.ModelMixin.from_pretrained`] class method.
-
- Arguments:
- save_directory (`str` or `os.PathLike`):
- Directory to which to save. Will be created if it doesn't exist.
- is_main_process (`bool`, *optional*, defaults to `True`):
- Whether the process calling this is the main process or not. Useful when in distributed training like
- TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on
- the main process to avoid race conditions.
- save_function (`Callable`):
- The function to use to save the state dictionary. Useful on distributed training like TPUs when one
- need to replace `torch.save` by another method. Can be configured with the environment variable
- `DIFFUSERS_SAVE_MODE`.
- safe_serialization (`bool`, *optional*, defaults to `False`):
- Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
- """
- if safe_serialization and not is_safetensors_available():
- raise ImportError("`safe_serialization` requires the `safetensors library: `pip install safetensors`.")
-
- if os.path.isfile(save_directory):
- logger.error(f"Provided path ({save_directory}) should be a directory, not a file")
- return
-
- if save_function is None:
- save_function = safetensors.torch.save_file if safe_serialization else torch.save
-
- os.makedirs(save_directory, exist_ok=True)
-
- model_to_save = self
-
- # Attach architecture to the config
- # Save the config
- if is_main_process:
- model_to_save.save_config(save_directory)
-
- # Save the model
- state_dict = model_to_save.state_dict()
-
- weights_name = SAFETENSORS_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME
-
- # Clean the folder from a previous save
- for filename in os.listdir(save_directory):
- full_filename = os.path.join(save_directory, filename)
- # If we have a shard file that is not going to be replaced, we delete it, but only from the main process
- # in distributed settings to avoid race conditions.
- weights_no_suffix = weights_name.replace(".bin", "").replace(".safetensors", "")
- if filename.startswith(weights_no_suffix) and os.path.isfile(full_filename) and is_main_process:
- os.remove(full_filename)
-
- # Save the model
- save_function(state_dict, os.path.join(save_directory, weights_name))
-
- logger.info(f"Model weights saved in {os.path.join(save_directory, weights_name)}")
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
- r"""
- Instantiate a pretrained pytorch model from a pre-trained model configuration.
-
- The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train
- the model, you should first set it back in training mode with `model.train()`.
-
- The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come
- pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning
- task.
-
- The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those
- weights are discarded.
-
- Parameters:
- pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*):
- Can be either:
-
- - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- Valid model ids should have an organization name, like `google/ddpm-celebahq-256`.
- - A path to a *directory* containing model weights saved using [`~ModelMixin.save_config`], e.g.,
- `./my_model_directory/`.
-
- cache_dir (`Union[str, os.PathLike]`, *optional*):
- Path to a directory in which a downloaded pretrained model configuration should be cached if the
- standard cache should not be used.
- torch_dtype (`str` or `torch.dtype`, *optional*):
- Override the default `torch.dtype` and load the model under this dtype. If `"auto"` is passed the dtype
- will be automatically derived from the model's weights.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether or not to force the (re-)download of the model weights and configuration files, overriding the
- cached versions if they exist.
- resume_download (`bool`, *optional*, defaults to `False`):
- Whether or not to delete incompletely received files. Will attempt to resume the download if such a
- file exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info(`bool`, *optional*, defaults to `False`):
- Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only(`bool`, *optional*, defaults to `False`):
- Whether or not to only look at local files (i.e., do not try to download the model).
- use_auth_token (`str` or *bool*, *optional*):
- The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `diffusers-cli login` (stored in `~/.huggingface`).
- revision (`str`, *optional*, defaults to `"main"`):
- The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
- git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
- identifier allowed by git.
- subfolder (`str`, *optional*, defaults to `""`):
- In case the relevant files are located inside a subfolder of the model repo (either remote in
- huggingface.co or downloaded locally), you can specify the folder name here.
-
- mirror (`str`, *optional*):
- Mirror source to accelerate downloads in China. If you are from China and have an accessibility
- problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety.
- Please refer to the mirror site for more information.
- device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):
- A map that specifies where each submodule should go. It doesn't need to be refined to each
- parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the
- same device.
-
- To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For
- more information about each option see [designing a device
- map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):
- Speed up model loading by not initializing the weights and only loading the pre-trained weights. This
- also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the
- model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch,
- setting this argument to `True` will raise an error.
-
-
-
- It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated
- models](https://huggingface.co/docs/hub/models-gated#gated-models).
-
-
-
-
-
- Activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use
- this method in a firewalled environment.
-
-
-
- """
- cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
- ignore_mismatched_sizes = kwargs.pop("ignore_mismatched_sizes", False)
- force_download = kwargs.pop("force_download", False)
- resume_download = kwargs.pop("resume_download", False)
- proxies = kwargs.pop("proxies", None)
- output_loading_info = kwargs.pop("output_loading_info", False)
- local_files_only = kwargs.pop("local_files_only", False)
- use_auth_token = kwargs.pop("use_auth_token", None)
- revision = kwargs.pop("revision", None)
- torch_dtype = kwargs.pop("torch_dtype", None)
- subfolder = kwargs.pop("subfolder", None)
- device_map = kwargs.pop("device_map", None)
- low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT)
-
- if low_cpu_mem_usage and not is_accelerate_available():
- low_cpu_mem_usage = False
- logger.warning(
- "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the"
- " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install"
- " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip"
- " install accelerate\n```\n."
- )
-
- if device_map is not None and not is_accelerate_available():
- raise NotImplementedError(
- "Loading and dispatching requires `accelerate`. Please make sure to install accelerate or set"
- " `device_map=None`. You can install accelerate with `pip install accelerate`."
- )
-
- # Check if we can handle device_map and dispatching the weights
- if device_map is not None and not is_torch_version(">=", "1.9.0"):
- raise NotImplementedError(
- "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set"
- " `device_map=None`."
- )
-
- if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"):
- raise NotImplementedError(
- "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set"
- " `low_cpu_mem_usage=False`."
- )
-
- if low_cpu_mem_usage is False and device_map is not None:
- raise ValueError(
- f"You cannot set `low_cpu_mem_usage` to `False` while using device_map={device_map} for loading and"
- " dispatching. Please make sure to set `low_cpu_mem_usage=True`."
- )
-
- user_agent = {
- "diffusers": __version__,
- "file_type": "model",
- "framework": "pytorch",
- }
-
- # Load config if we don't provide a configuration
- config_path = pretrained_model_name_or_path
-
- # This variable will flag if we're loading a sharded checkpoint. In this case the archive file is just the index of the weight shards.
- # Load model
-
- model_file = None
- if is_safetensors_available():
- try:
- model_file = _get_model_file(
- pretrained_model_name_or_path,
- weights_name=SAFETENSORS_WEIGHTS_NAME,
- cache_dir=cache_dir,
- force_download=force_download,
- resume_download=resume_download,
- proxies=proxies,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- revision=revision,
- subfolder=subfolder,
- user_agent=user_agent,
- )
- except Exception:
- pass  # no safetensors weights found; fall back to the .bin weights below
- if model_file is None:
- model_file = _get_model_file(
- pretrained_model_name_or_path,
- weights_name=WEIGHTS_NAME,
- cache_dir=cache_dir,
- force_download=force_download,
- resume_download=resume_download,
- proxies=proxies,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- revision=revision,
- subfolder=subfolder,
- user_agent=user_agent,
- )
-
- if low_cpu_mem_usage:
- # Instantiate model with empty weights
- with accelerate.init_empty_weights():
- config, unused_kwargs = cls.load_config(
- config_path,
- cache_dir=cache_dir,
- return_unused_kwargs=True,
- force_download=force_download,
- resume_download=resume_download,
- proxies=proxies,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- revision=revision,
- subfolder=subfolder,
- device_map=device_map,
- **kwargs,
- )
- model = cls.from_config(config, **unused_kwargs)
-
- # if device_map is None, load the state dict and move the params from meta device to the cpu
- if device_map is None:
- param_device = "cpu"
- state_dict = load_state_dict(model_file)
- # move the params from meta device to cpu
- for param_name, param in state_dict.items():
- set_module_tensor_to_device(model, param_name, param_device, value=param)
- else: # else let accelerate handle loading and dispatching.
- # Load weights and dispatch according to the device_map
- # by default the device_map is None and the weights are loaded on the CPU
- accelerate.load_checkpoint_and_dispatch(model, model_file, device_map)
-
- loading_info = {
- "missing_keys": [],
- "unexpected_keys": [],
- "mismatched_keys": [],
- "error_msgs": [],
- }
- else:
- config, unused_kwargs = cls.load_config(
- config_path,
- cache_dir=cache_dir,
- return_unused_kwargs=True,
- force_download=force_download,
- resume_download=resume_download,
- proxies=proxies,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- revision=revision,
- subfolder=subfolder,
- device_map=device_map,
- **kwargs,
- )
- model = cls.from_config(config, **unused_kwargs)
-
- state_dict = load_state_dict(model_file)
- dtype = set(v.dtype for v in state_dict.values())
-
- if len(dtype) > 1 and torch.float32 not in dtype:
- raise ValueError(
- f"The weights of the model file {model_file} have a mixture of incompatible dtypes {dtype}. Please"
- f" make sure that {model_file} weights have only one dtype."
- )
- elif len(dtype) > 1 and torch.float32 in dtype:
- dtype = torch.float32
- else:
- dtype = dtype.pop()
-
- # move model to correct dtype
- model = model.to(dtype)
-
- model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
- model,
- state_dict,
- model_file,
- pretrained_model_name_or_path,
- ignore_mismatched_sizes=ignore_mismatched_sizes,
- )
-
- loading_info = {
- "missing_keys": missing_keys,
- "unexpected_keys": unexpected_keys,
- "mismatched_keys": mismatched_keys,
- "error_msgs": error_msgs,
- }
-
- if torch_dtype is not None and not isinstance(torch_dtype, torch.dtype):
- raise ValueError(
- f"{torch_dtype} needs to be of type `torch.dtype`, e.g. `torch.float16`, but is {type(torch_dtype)}."
- )
- elif torch_dtype is not None:
- model = model.to(torch_dtype)
-
- model.register_to_config(_name_or_path=pretrained_model_name_or_path)
-
- # Set model in evaluation mode to deactivate DropOut modules by default
- model.eval()
- if output_loading_info:
- return model, loading_info
-
- return model
-
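A minimal usage sketch for from_pretrained, using the model id already cited in the docstring above; it assumes the diffusers package is installed and that the checkpoint is reachable from the Hub.

import torch
from diffusers import UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")  # model id cited in the docstring above
noisy_sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)
with torch.no_grad():
    noise_pred = model(noisy_sample, timestep=10).sample  # predicted noise, same shape as the input
print(noise_pred.shape)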
- @classmethod
- def _load_pretrained_model(
- cls,
- model,
- state_dict,
- resolved_archive_file,
- pretrained_model_name_or_path,
- ignore_mismatched_sizes=False,
- ):
- # Retrieve missing & unexpected_keys
- model_state_dict = model.state_dict()
- loaded_keys = [k for k in state_dict.keys()]
-
- expected_keys = list(model_state_dict.keys())
-
- original_loaded_keys = loaded_keys
-
- missing_keys = list(set(expected_keys) - set(loaded_keys))
- unexpected_keys = list(set(loaded_keys) - set(expected_keys))
-
- # Make sure we are able to load base models as well as derived models (with heads)
- model_to_load = model
-
- def _find_mismatched_keys(
- state_dict,
- model_state_dict,
- loaded_keys,
- ignore_mismatched_sizes,
- ):
- mismatched_keys = []
- if ignore_mismatched_sizes:
- for checkpoint_key in loaded_keys:
- model_key = checkpoint_key
-
- if (
- model_key in model_state_dict
- and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
- ):
- mismatched_keys.append(
- (checkpoint_key, state_dict[checkpoint_key].shape, model_state_dict[model_key].shape)
- )
- del state_dict[checkpoint_key]
- return mismatched_keys
-
- if state_dict is not None:
- # Whole checkpoint
- mismatched_keys = _find_mismatched_keys(
- state_dict,
- model_state_dict,
- original_loaded_keys,
- ignore_mismatched_sizes,
- )
- error_msgs = _load_state_dict_into_model(model_to_load, state_dict)
-
- if len(error_msgs) > 0:
- error_msg = "\n\t".join(error_msgs)
- if "size mismatch" in error_msg:
- error_msg += (
- "\n\tYou may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method."
- )
- raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
-
- if len(unexpected_keys) > 0:
- logger.warning(
- f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when"
- f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are"
- f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task"
- " or with another architecture (e.g. initializing a BertForSequenceClassification model from a"
- " BertForPreTraining model).\n- This IS NOT expected if you are initializing"
- f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly"
- " identical (initializing a BertForSequenceClassification model from a"
- " BertForSequenceClassification model)."
- )
- else:
- logger.info(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n")
- if len(missing_keys) > 0:
- logger.warning(
- f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at"
- f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably"
- " TRAIN this model on a down-stream task to be able to use it for predictions and inference."
- )
- elif len(mismatched_keys) == 0:
- logger.info(
- f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at"
- f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the"
- f" checkpoint was trained on, you can already use {model.__class__.__name__} for predictions"
- " without further training."
- )
- if len(mismatched_keys) > 0:
- mismatched_warning = "\n".join(
- [
- f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated"
- for key, shape1, shape2 in mismatched_keys
- ]
- )
- logger.warning(
- f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at"
- f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not"
- f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be"
- " able to use it for predictions and inference."
- )
-
- return model, missing_keys, unexpected_keys, mismatched_keys, error_msgs
-
- @property
- def device(self) -> device:
- """
- `torch.device`: The device on which the module is (assuming that all the module parameters are on the same
- device).
- """
- return get_parameter_device(self)
-
- @property
- def dtype(self) -> torch.dtype:
- """
- `torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype).
- """
- return get_parameter_dtype(self)
-
- def num_parameters(self, only_trainable: bool = False, exclude_embeddings: bool = False) -> int:
- """
- Get number of (optionally, trainable or non-embeddings) parameters in the module.
-
- Args:
- only_trainable (`bool`, *optional*, defaults to `False`):
- Whether or not to return only the number of trainable parameters
-
- exclude_embeddings (`bool`, *optional*, defaults to `False`):
- Whether or not to return only the number of non-embeddings parameters
-
- Returns:
- `int`: The number of parameters.
- """
-
- if exclude_embeddings:
- embedding_param_names = [
- f"{name}.weight"
- for name, module_type in self.named_modules()
- if isinstance(module_type, torch.nn.Embedding)
- ]
- non_embedding_parameters = [
- parameter for name, parameter in self.named_parameters() if name not in embedding_param_names
- ]
- return sum(p.numel() for p in non_embedding_parameters if p.requires_grad or not only_trainable)
- else:
- return sum(p.numel() for p in self.parameters() if p.requires_grad or not only_trainable)
-
-
-def _get_model_file(
- pretrained_model_name_or_path,
- *,
- weights_name,
- subfolder,
- cache_dir,
- force_download,
- proxies,
- resume_download,
- local_files_only,
- use_auth_token,
- user_agent,
- revision,
-):
- pretrained_model_name_or_path = str(pretrained_model_name_or_path)
- if os.path.isdir(pretrained_model_name_or_path):
- if os.path.isfile(os.path.join(pretrained_model_name_or_path, weights_name)):
- # Load from a PyTorch checkpoint
- model_file = os.path.join(pretrained_model_name_or_path, weights_name)
- return model_file
- elif subfolder is not None and os.path.isfile(
- os.path.join(pretrained_model_name_or_path, subfolder, weights_name)
- ):
- model_file = os.path.join(pretrained_model_name_or_path, subfolder, weights_name)
- return model_file
- else:
- raise EnvironmentError(
- f"Error no file named {weights_name} found in directory {pretrained_model_name_or_path}."
- )
- else:
- try:
- # Load from URL or cache if already cached
- model_file = hf_hub_download(
- pretrained_model_name_or_path,
- filename=weights_name,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- user_agent=user_agent,
- subfolder=subfolder,
- revision=revision,
- )
- return model_file
-
- except RepositoryNotFoundError:
- raise EnvironmentError(
- f"{pretrained_model_name_or_path} is not a local folder and is not a valid model identifier "
- "listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a "
- "token having permission to this repo with `use_auth_token` or log in with `huggingface-cli "
- "login`."
- )
- except RevisionNotFoundError:
- raise EnvironmentError(
- f"{revision} is not a valid git identifier (branch name, tag name or commit id) that exists for "
- "this model name. Check the model page at "
- f"'https://huggingface.co/{pretrained_model_name_or_path}' for available revisions."
- )
- except EntryNotFoundError:
- raise EnvironmentError(
- f"{pretrained_model_name_or_path} does not appear to have a file named {weights_name}."
- )
- except HTTPError as err:
- raise EnvironmentError(
- f"There was a specific connection error when trying to load {pretrained_model_name_or_path}:\n{err}"
- )
- except ValueError:
- raise EnvironmentError(
- f"We couldn't connect to '{HUGGINGFACE_CO_RESOLVE_ENDPOINT}' to load this model, couldn't find it"
- f" in the cached files and it looks like {pretrained_model_name_or_path} is not the path to a"
- f" directory containing a file named {weights_name} or"
- " \nCheckout your internet connection or see how to run the library in"
- " offline mode at 'https://huggingface.co/docs/diffusers/installation#offline-mode'."
- )
- except EnvironmentError:
- raise EnvironmentError(
- f"Can't load the model for '{pretrained_model_name_or_path}'. If you were trying to load it from "
- "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
- f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory "
- f"containing a file named {weights_name}"
- )
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py
deleted file mode 100644
index a44070d1d2aa1b5964884f17f1cbf335b9433f8e..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py
+++ /dev/null
@@ -1,625 +0,0 @@
-# Copyright 2022 TSAIL Team and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver
-
-import math
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import flax
-import jax
-import jax.numpy as jnp
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import deprecate
-from .scheduling_utils_flax import (
- _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS,
- FlaxSchedulerMixin,
- FlaxSchedulerOutput,
- broadcast_to_shape_from_left,
-)
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps: int, max_beta=0.999) -> jnp.ndarray:
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
-
- Returns:
- betas (`jnp.ndarray`): the betas used by the scheduler to step the model outputs
- """
-
- def alpha_bar(time_step):
- return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return jnp.array(betas, dtype=jnp.float32)
-
-
-@flax.struct.dataclass
-class DPMSolverMultistepSchedulerState:
- # setable values
- num_inference_steps: Optional[int] = None
- timesteps: Optional[jnp.ndarray] = None
-
- # running values
- model_outputs: Optional[jnp.ndarray] = None
- lower_order_nums: Optional[int] = None
- step_index: Optional[int] = None
- prev_timestep: Optional[int] = None
- cur_sample: Optional[jnp.ndarray] = None
-
- @classmethod
- def create(cls, num_train_timesteps: int):
- return cls(timesteps=jnp.arange(0, num_train_timesteps)[::-1])
-
-
-@dataclass
-class FlaxDPMSolverMultistepSchedulerOutput(FlaxSchedulerOutput):
- state: DPMSolverMultistepSchedulerState
-
-
-class FlaxDPMSolverMultistepScheduler(FlaxSchedulerMixin, ConfigMixin):
- """
- DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with
- the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality
- samples, and it can generate quite good samples even in only 10 steps.
-
- For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095
-
- Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We
-    recommend using `solver_order=2` for guided sampling, and `solver_order=3` for unconditional sampling.
-
- We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space
- diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic
- thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as
- stable-diffusion).
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- solver_order (`int`, default `2`):
-            the order of DPM-Solver; can be `1`, `2`, or `3`. We recommend using `solver_order=2` for guided
- sampling, and `solver_order=3` for unconditional sampling.
- prediction_type (`str`, default `epsilon`):
- indicates whether the model predicts the noise (epsilon), or the data / `x0`. One of `epsilon`, `sample`,
- or `v-prediction`.
- thresholding (`bool`, default `False`):
- whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487).
- For pixel-space diffusion models, you can set both `algorithm_type=dpmsolver++` and `thresholding=True` to
- use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion
- models (such as stable-diffusion).
- dynamic_thresholding_ratio (`float`, default `0.995`):
- the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen
- (https://arxiv.org/abs/2205.11487).
- sample_max_value (`float`, default `1.0`):
- the threshold value for dynamic thresholding. Valid only when `thresholding=True` and
- `algorithm_type="dpmsolver++`.
- algorithm_type (`str`, default `dpmsolver++`):
- the algorithm type for the solver. Either `dpmsolver` or `dpmsolver++`. The `dpmsolver` type implements the
- algorithms in https://arxiv.org/abs/2206.00927, and the `dpmsolver++` type implements the algorithms in
-            https://arxiv.org/abs/2211.01095. We recommend using `dpmsolver++` with `solver_order=2` for guided
- sampling (e.g. stable-diffusion).
- solver_type (`str`, default `midpoint`):
- the solver type for the second-order solver. Either `midpoint` or `heun`. The solver type slightly affects
- the sample quality, especially for small number of steps. We empirically find that `midpoint` solvers are
-            slightly better, so we recommend using the `midpoint` type.
- lower_order_final (`bool`, default `True`):
- whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically
- find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10.
-
- """
-
- _compatibles = _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- _deprecated_kwargs = ["predict_epsilon"]
-
- @property
- def has_state(self):
- return True
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[jnp.ndarray] = None,
- solver_order: int = 2,
- prediction_type: str = "epsilon",
- thresholding: bool = False,
- dynamic_thresholding_ratio: float = 0.995,
- sample_max_value: float = 1.0,
- algorithm_type: str = "dpmsolver++",
- solver_type: str = "midpoint",
- lower_order_final: bool = True,
- **kwargs,
- ):
- message = (
- "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler ="
- " FlaxDPMSolverMultistepScheduler.from_pretrained(, prediction_type='epsilon')`."
- )
- predict_epsilon = deprecate("predict_epsilon", "0.11.0", message, take_from=kwargs)
- if predict_epsilon is not None:
- self.register_to_config(prediction_type="epsilon" if predict_epsilon else "sample")
-
- if trained_betas is not None:
- self.betas = jnp.asarray(trained_betas)
- elif beta_schedule == "linear":
- self.betas = jnp.linspace(beta_start, beta_end, num_train_timesteps, dtype=jnp.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = jnp.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=jnp.float32) ** 2
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- else:
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = jnp.cumprod(self.alphas, axis=0)
- # Currently we only support VP-type noise schedule
- self.alpha_t = jnp.sqrt(self.alphas_cumprod)
- self.sigma_t = jnp.sqrt(1 - self.alphas_cumprod)
- self.lambda_t = jnp.log(self.alpha_t) - jnp.log(self.sigma_t)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # settings for DPM-Solver
- if algorithm_type not in ["dpmsolver", "dpmsolver++"]:
- raise NotImplementedError(f"{algorithm_type} does is not implemented for {self.__class__}")
- if solver_type not in ["midpoint", "heun"]:
- raise NotImplementedError(f"{solver_type} does is not implemented for {self.__class__}")
-
- def create_state(self):
- return DPMSolverMultistepSchedulerState.create(num_train_timesteps=self.config.num_train_timesteps)
-
- def set_timesteps(
- self, state: DPMSolverMultistepSchedulerState, num_inference_steps: int, shape: Tuple
- ) -> DPMSolverMultistepSchedulerState:
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- state (`DPMSolverMultistepSchedulerState`):
- the `FlaxDPMSolverMultistepScheduler` state data class instance.
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- shape (`Tuple`):
- the shape of the samples to be generated.
- """
- timesteps = (
- jnp.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps + 1)
- .round()[::-1][:-1]
- .astype(jnp.int32)
- )
-
- return state.replace(
- num_inference_steps=num_inference_steps,
- timesteps=timesteps,
- model_outputs=jnp.zeros((self.config.solver_order,) + shape),
- lower_order_nums=0,
- step_index=0,
- prev_timestep=-1,
- cur_sample=jnp.zeros(shape),
- )
-
- def convert_model_output(
- self,
- model_output: jnp.ndarray,
- timestep: int,
- sample: jnp.ndarray,
- ) -> jnp.ndarray:
- """
- Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs.
-
- DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to
- discretize an integral of the data prediction model. So we need to first convert the model output to the
- corresponding type to match the algorithm.
-
-        Note that the algorithm type and the model type are decoupled. That is to say, we can use either DPM-Solver or
-        DPM-Solver++ for both noise prediction and data prediction models.
-
- Args:
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the converted model output.
- """
- # DPM-Solver++ needs to solve an integral of the data prediction model.
- if self.config.algorithm_type == "dpmsolver++":
- if self.config.prediction_type == "epsilon":
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
- x0_pred = (sample - sigma_t * model_output) / alpha_t
- elif self.config.prediction_type == "sample":
- x0_pred = model_output
- elif self.config.prediction_type == "v_prediction":
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
- x0_pred = alpha_t * sample - sigma_t * model_output
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, "
- " or `v_prediction` for the FlaxDPMSolverMultistepScheduler."
- )
-
- if self.config.thresholding:
- # Dynamic thresholding in https://arxiv.org/abs/2205.11487
- dynamic_max_val = jnp.percentile(
- jnp.abs(x0_pred), self.config.dynamic_thresholding_ratio, axis=tuple(range(1, x0_pred.ndim))
- )
- dynamic_max_val = jnp.maximum(
- dynamic_max_val, self.config.sample_max_value * jnp.ones_like(dynamic_max_val)
- )
- x0_pred = jnp.clip(x0_pred, -dynamic_max_val, dynamic_max_val) / dynamic_max_val
- return x0_pred
- # DPM-Solver needs to solve an integral of the noise prediction model.
- elif self.config.algorithm_type == "dpmsolver":
- if self.config.prediction_type == "epsilon":
- return model_output
- elif self.config.prediction_type == "sample":
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
- epsilon = (sample - alpha_t * model_output) / sigma_t
- return epsilon
- elif self.config.prediction_type == "v_prediction":
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
- epsilon = alpha_t * model_output + sigma_t * sample
- return epsilon
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, "
- " or `v_prediction` for the FlaxDPMSolverMultistepScheduler."
- )
-
- def dpm_solver_first_order_update(
- self, model_output: jnp.ndarray, timestep: int, prev_timestep: int, sample: jnp.ndarray
- ) -> jnp.ndarray:
- """
- One step for the first-order DPM-Solver (equivalent to DDIM).
-
- See https://arxiv.org/abs/2206.00927 for the detailed derivation.
-
- Args:
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the sample tensor at the previous timestep.
- """
- t, s0 = prev_timestep, timestep
- m0 = model_output
- lambda_t, lambda_s = self.lambda_t[t], self.lambda_t[s0]
- alpha_t, alpha_s = self.alpha_t[t], self.alpha_t[s0]
- sigma_t, sigma_s = self.sigma_t[t], self.sigma_t[s0]
- h = lambda_t - lambda_s
- if self.config.algorithm_type == "dpmsolver++":
- x_t = (sigma_t / sigma_s) * sample - (alpha_t * (jnp.exp(-h) - 1.0)) * m0
- elif self.config.algorithm_type == "dpmsolver":
- x_t = (alpha_t / alpha_s) * sample - (sigma_t * (jnp.exp(h) - 1.0)) * m0
- return x_t
-
- def multistep_dpm_solver_second_order_update(
- self,
- model_output_list: jnp.ndarray,
- timestep_list: List[int],
- prev_timestep: int,
- sample: jnp.ndarray,
- ) -> jnp.ndarray:
- """
- One step for the second-order multistep DPM-Solver.
-
- Args:
- model_output_list (`List[jnp.ndarray]`):
- direct outputs from learned diffusion model at current and latter timesteps.
-            timestep_list (`List[int]`): current and latter discrete timesteps in the diffusion chain.
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the sample tensor at the previous timestep.
- """
- t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2]
- m0, m1 = model_output_list[-1], model_output_list[-2]
- lambda_t, lambda_s0, lambda_s1 = self.lambda_t[t], self.lambda_t[s0], self.lambda_t[s1]
- alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0]
- sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0]
- h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1
- r0 = h_0 / h
- D0, D1 = m0, (1.0 / r0) * (m0 - m1)
- if self.config.algorithm_type == "dpmsolver++":
- # See https://arxiv.org/abs/2211.01095 for detailed derivations
- if self.config.solver_type == "midpoint":
- x_t = (
- (sigma_t / sigma_s0) * sample
- - (alpha_t * (jnp.exp(-h) - 1.0)) * D0
- - 0.5 * (alpha_t * (jnp.exp(-h) - 1.0)) * D1
- )
- elif self.config.solver_type == "heun":
- x_t = (
- (sigma_t / sigma_s0) * sample
- - (alpha_t * (jnp.exp(-h) - 1.0)) * D0
- + (alpha_t * ((jnp.exp(-h) - 1.0) / h + 1.0)) * D1
- )
- elif self.config.algorithm_type == "dpmsolver":
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
- if self.config.solver_type == "midpoint":
- x_t = (
- (alpha_t / alpha_s0) * sample
- - (sigma_t * (jnp.exp(h) - 1.0)) * D0
- - 0.5 * (sigma_t * (jnp.exp(h) - 1.0)) * D1
- )
- elif self.config.solver_type == "heun":
- x_t = (
- (alpha_t / alpha_s0) * sample
- - (sigma_t * (jnp.exp(h) - 1.0)) * D0
- - (sigma_t * ((jnp.exp(h) - 1.0) / h - 1.0)) * D1
- )
- return x_t
-
- def multistep_dpm_solver_third_order_update(
- self,
- model_output_list: jnp.ndarray,
- timestep_list: List[int],
- prev_timestep: int,
- sample: jnp.ndarray,
- ) -> jnp.ndarray:
- """
- One step for the third-order multistep DPM-Solver.
-
- Args:
- model_output_list (`List[jnp.ndarray]`):
- direct outputs from learned diffusion model at current and latter timesteps.
- timestep (`int`): current and latter discrete timestep in the diffusion chain.
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `jnp.ndarray`: the sample tensor at the previous timestep.
- """
- t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3]
- m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3]
- lambda_t, lambda_s0, lambda_s1, lambda_s2 = (
- self.lambda_t[t],
- self.lambda_t[s0],
- self.lambda_t[s1],
- self.lambda_t[s2],
- )
- alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0]
- sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0]
- h, h_0, h_1 = lambda_t - lambda_s0, lambda_s0 - lambda_s1, lambda_s1 - lambda_s2
- r0, r1 = h_0 / h, h_1 / h
- D0 = m0
- D1_0, D1_1 = (1.0 / r0) * (m0 - m1), (1.0 / r1) * (m1 - m2)
- D1 = D1_0 + (r0 / (r0 + r1)) * (D1_0 - D1_1)
- D2 = (1.0 / (r0 + r1)) * (D1_0 - D1_1)
- if self.config.algorithm_type == "dpmsolver++":
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
- x_t = (
- (sigma_t / sigma_s0) * sample
- - (alpha_t * (jnp.exp(-h) - 1.0)) * D0
- + (alpha_t * ((jnp.exp(-h) - 1.0) / h + 1.0)) * D1
- - (alpha_t * ((jnp.exp(-h) - 1.0 + h) / h**2 - 0.5)) * D2
- )
- elif self.config.algorithm_type == "dpmsolver":
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
- x_t = (
- (alpha_t / alpha_s0) * sample
- - (sigma_t * (jnp.exp(h) - 1.0)) * D0
- - (sigma_t * ((jnp.exp(h) - 1.0) / h - 1.0)) * D1
- - (sigma_t * ((jnp.exp(h) - 1.0 - h) / h**2 - 0.5)) * D2
- )
- return x_t
-
- def step(
- self,
- state: DPMSolverMultistepSchedulerState,
- model_output: jnp.ndarray,
- timestep: int,
- sample: jnp.ndarray,
- return_dict: bool = True,
- ) -> Union[FlaxDPMSolverMultistepSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by DPM-Solver. Core function to propagate the diffusion process
- from the learned model outputs (most often the predicted noise).
-
- Args:
- state (`DPMSolverMultistepSchedulerState`):
- the `FlaxDPMSolverMultistepScheduler` state data class instance.
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
- return_dict (`bool`): option for returning tuple rather than FlaxDPMSolverMultistepSchedulerOutput class
-
- Returns:
- [`FlaxDPMSolverMultistepSchedulerOutput`] or `tuple`: [`FlaxDPMSolverMultistepSchedulerOutput`] if
- `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- prev_timestep = jax.lax.cond(
- state.step_index == len(state.timesteps) - 1,
- lambda _: 0,
- lambda _: state.timesteps[state.step_index + 1],
- (),
- )
-
- model_output = self.convert_model_output(model_output, timestep, sample)
-
- model_outputs_new = jnp.roll(state.model_outputs, -1, axis=0)
- model_outputs_new = model_outputs_new.at[-1].set(model_output)
- state = state.replace(
- model_outputs=model_outputs_new,
- prev_timestep=prev_timestep,
- cur_sample=sample,
- )
-
- def step_1(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- return self.dpm_solver_first_order_update(
- state.model_outputs[-1],
- state.timesteps[state.step_index],
- state.prev_timestep,
- state.cur_sample,
- )
-
- def step_23(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- def step_2(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- timestep_list = jnp.array([state.timesteps[state.step_index - 1], state.timesteps[state.step_index]])
- return self.multistep_dpm_solver_second_order_update(
- state.model_outputs,
- timestep_list,
- state.prev_timestep,
- state.cur_sample,
- )
-
- def step_3(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray:
- timestep_list = jnp.array(
- [
- state.timesteps[state.step_index - 2],
- state.timesteps[state.step_index - 1],
- state.timesteps[state.step_index],
- ]
- )
- return self.multistep_dpm_solver_third_order_update(
- state.model_outputs,
- timestep_list,
- state.prev_timestep,
- state.cur_sample,
- )
-
- if self.config.solver_order == 2:
- return step_2(state)
- elif self.config.lower_order_final and len(state.timesteps) < 15:
- return jax.lax.cond(
- state.lower_order_nums < 2,
- step_2,
- lambda state: jax.lax.cond(
- state.step_index == len(state.timesteps) - 2,
- step_2,
- step_3,
- state,
- ),
- state,
- )
- else:
- return jax.lax.cond(
- state.lower_order_nums < 2,
- step_2,
- step_3,
- state,
- )
-
- if self.config.solver_order == 1:
- prev_sample = step_1(state)
- elif self.config.lower_order_final and len(state.timesteps) < 15:
- prev_sample = jax.lax.cond(
- state.lower_order_nums < 1,
- step_1,
- lambda state: jax.lax.cond(
- state.step_index == len(state.timesteps) - 1,
- step_1,
- step_23,
- state,
- ),
- state,
- )
- else:
- prev_sample = jax.lax.cond(
- state.lower_order_nums < 1,
- step_1,
- step_23,
- state,
- )
-
- state = state.replace(
- lower_order_nums=jnp.minimum(state.lower_order_nums + 1, self.config.solver_order),
- step_index=(state.step_index + 1),
- )
-
- if not return_dict:
- return (prev_sample, state)
-
- return FlaxDPMSolverMultistepSchedulerOutput(prev_sample=prev_sample, state=state)
-
- def scale_model_input(
- self, state: DPMSolverMultistepSchedulerState, sample: jnp.ndarray, timestep: Optional[int] = None
- ) -> jnp.ndarray:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- state (`DPMSolverMultistepSchedulerState`):
- the `FlaxDPMSolverMultistepScheduler` state data class instance.
- sample (`jnp.ndarray`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `jnp.ndarray`: scaled input sample
- """
- return sample
-
- def add_noise(
- self,
- original_samples: jnp.ndarray,
- noise: jnp.ndarray,
- timesteps: jnp.ndarray,
- ) -> jnp.ndarray:
- sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- sqrt_alpha_prod = broadcast_to_shape_from_left(sqrt_alpha_prod, original_samples.shape)
-
-        sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- sqrt_one_minus_alpha_prod = broadcast_to_shape_from_left(sqrt_one_minus_alpha_prod, original_samples.shape)
-
- noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/YlcldKlns/bing/cloudflare/worker.js b/spaces/YlcldKlns/bing/cloudflare/worker.js
deleted file mode 100644
index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000
--- a/spaces/YlcldKlns/bing/cloudflare/worker.js
+++ /dev/null
@@ -1,18 +0,0 @@
-const TRAGET_HOST='hf4all-bingo.hf.space' // Please change this domain to your own; you can find the domain info under Settings > Site domain.
-
-export default {
- async fetch(request) {
- const uri = new URL(request.url);
- if (uri.protocol === 'http:') {
- uri.protocol = 'https:';
- return new Response('', {
- status: 301,
- headers: {
- location: uri.toString(),
- },
- })
- }
- uri.host = TRAGET_HOST
- return fetch(new Request(uri.toString(), request));
- },
-};
diff --git a/spaces/YlcldKlns/bing/src/components/chat-message.tsx b/spaces/YlcldKlns/bing/src/components/chat-message.tsx
deleted file mode 100644
index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000
--- a/spaces/YlcldKlns/bing/src/components/chat-message.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-import remarkGfm from 'remark-gfm'
-import remarkMath from 'remark-math'
-import supersub from 'remark-supersub'
-import remarkBreaks from 'remark-breaks'
-import { cn } from '@/lib/utils'
-import { CodeBlock } from '@/components/ui/codeblock'
-import { MemoizedReactMarkdown } from '@/components/markdown'
-import { LearnMore } from './learn-more'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { useEffect } from 'react'
-import { TurnCounter } from './turn-counter'
-
-export interface ChatMessageProps {
- message: ChatMessageModel
-}
-
-export function ChatMessage({ message, ...props }: ChatMessageProps) {
- useEffect(() => {
- if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) {
- window.scrollBy(0, 200)
- }
- }, [message.text])
-
- return message.text ? (
-
- ) : null
-}
diff --git a/spaces/Yudha515/Rvc-Models/Makefile b/spaces/Yudha515/Rvc-Models/Makefile
deleted file mode 100644
index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/Makefile
+++ /dev/null
@@ -1,21 +0,0 @@
-default: linter tests
-
-install:
- pip install -U pip
- pip install -U -e '.[dev]'
-
-linter:
- flake8 audiocraft && mypy audiocraft
- flake8 tests && mypy tests
-
-tests:
- coverage run -m pytest tests
- coverage report --include 'audiocraft/*'
-
-docs:
- pdoc3 --html -o docs -f audiocraft
-
-dist:
- python setup.py sdist
-
-.PHONY: linter tests docs dist
diff --git "a/spaces/a-v-bely/spanish-task-generator/pages/3_\360\237\223\245_\320\241\320\272\320\260\321\207\320\260\321\202\321\214.py" "b/spaces/a-v-bely/spanish-task-generator/pages/3_\360\237\223\245_\320\241\320\272\320\260\321\207\320\260\321\202\321\214.py"
deleted file mode 100644
index 23b0728d148d95c5232e4bdde5aa325de25aab1a..0000000000000000000000000000000000000000
--- "a/spaces/a-v-bely/spanish-task-generator/pages/3_\360\237\223\245_\320\241\320\272\320\260\321\207\320\260\321\202\321\214.py"
+++ /dev/null
@@ -1,44 +0,0 @@
-import streamlit as st
-from utilities_ui.custom_download_button import download_button as d_button
-
-st.set_page_config(page_title='Скачать', layout="wide", page_icon=':es:', initial_sidebar_state='collapsed')
-if st.session_state.get('-LOGGED_IN_BOOL-') and (st.session_state.get('-DISPLAY_READY-')
- or st.session_state.get('-DOWNLOAD_VERSION-')):
- result = st.session_state.get('RESULT')
- if result is None:
- st.error('Не можем ничего загрузить! Вы ничего не просили!')
- st.stop()
- # Download buttons
- if st.session_state.get('-DOWNLOAD_VERSION-'):
- invite, tasks_col, tasks_with_answers_col, keys_only_col, full_coll, rest = st.columns([1, 1, 2, 1, 3, 1])
- invite.write('Скачать:')
- with tasks_col:
- d_button(
- label='Задания',
- data=result['STUDENT_OUT'],
- file_name=f'{result["name"]}_tasks.txt')
- with tasks_with_answers_col:
- d_button(
- label='Задания+Ключи',
- data=result['TEACHER_OUT'],
- file_name=f'{result["name"]}_tasks_and_keys.txt')
- with keys_only_col:
- d_button(
- label='Ключи',
- data=result['KEYS_ONLY'],
- file_name=f'{result["name"]}_keys.txt')
- with full_coll:
- d_button(
- label='Исходник+Задания+Ключи',
- data=result['TOTAL_OUT'],
- file_name=f'{result["name"]}_all.txt')
-
- if st.session_state.get('-DISPLAY_VERSION-'):
- display_tasks_with_answers, display_tasks_only = st.tabs(['Задания+Ответы', 'Задания'])
-        display_tasks_with_answers.write(str(result['TEACHER_OUT'].replace('_', r'\_')))
-        display_tasks_only.write(str(result['STUDENT_OUT'].replace('_', r'\_')))
-
-elif st.session_state.get('-LOGGED_IN_BOOL-'):
- st.warning('**Сначала введите текст**')
-else:
- st.warning('**Войдите или зарегистрируйтесь**')
diff --git a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/creating-terms.md b/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/creating-terms.md
deleted file mode 100644
index 713f59cb1ff7494293f3b0965c8de69d3f490a60..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/creating-terms.md
+++ /dev/null
@@ -1,111 +0,0 @@
-# Creating Terms
-
-## Why Would You Create Terms?
-The Business Glossary (Term) feature in DataHub helps you use a shared vocabulary within the organization by providing a framework for defining a standardized set of data concepts and then associating them with the physical assets that exist within your data ecosystem.
-
-For more information about terms, refer to [About DataHub Business Glossary](/docs/glossary/business-glossary.md).
-
-### Goal Of This Guide
-This guide will show you how to create a term named `Rate of Return`.
-
-## Prerequisites
-For this tutorial, you need to deploy DataHub Quickstart and ingest sample data.
-For detailed steps, please refer to [Prepare Local DataHub Environment](/docs/api/tutorials/references/prepare-datahub.md).
-
-## Create Terms With GraphQL
-
-:::note
-Please note that there are two available endpoints (`:8000`, `:9002`) to access GraphQL.
-For more information about the differences between these endpoints, please refer to [DataHub Metadata Service](../../../metadata-service/README.md#graphql-api)
-:::
-
-### GraphQL Explorer
-GraphQL Explorer is the fastest way to experiment with GraphQL without any dependencies.
-Navigate to GraphQL Explorer (`http://localhost:9002/api/graphiql`) and run the following query.
-
-```graphql
-mutation createGlossaryTerm {
- createGlossaryTerm(input:
- {
- name: "Rate of Return",
- description: "A rate of return (RoR) is the net gain or loss of an investment over a specified time period."
- })
-}
-```
-If you see the following response, the operation was successful:
-```json
-{
- "data": {
- "createGlossaryTerm": ""
- },
- "extensions": {}
-}
-```
-
-### CURL
-
-With CURL, you need to provide tokens. To generate a token, please refer to [Generate Access Token](/docs/api/tutorials/references/generate-access-token.md).
-With `accessToken`, you can run the following command.
-
-```shell
-curl --location --request POST 'http://localhost:8080/api/graphql' \
---header 'Authorization: Bearer <access-token>' \
---header 'Content-Type: application/json' \
---data-raw '{ "query": "mutation createGlossaryTerm { createGlossaryTerm(input: { name: \"Rate of Return\", description: \"A rate of return (RoR) is the net gain or loss of an investment over a specified time period.\" }) }", "variables":{}}'
-```
-Expected Response:
-```json
-{"data":{"createGlossaryTerm":""},"extensions":{}}
-```
-
-
-## Create Terms With Python SDK
-
-The following code creates a term named `Rate of Return`.
-You can refer to the full code in [create_term.py](https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/examples/library/create_term.py).
-```python
-import logging
-
-from datahub.emitter.mce_builder import make_term_urn
-from datahub.emitter.mcp import MetadataChangeProposalWrapper
-from datahub.emitter.rest_emitter import DatahubRestEmitter
-
-# Imports for metadata model classes
-from datahub.metadata.schema_classes import GlossaryTermInfoClass
-
-log = logging.getLogger(__name__)
-logging.basicConfig(level=logging.INFO)
-
-term_urn = make_term_urn("rateofreturn")
-term_properties_aspect = GlossaryTermInfoClass(
- definition="A rate of return (RoR) is the net gain or loss of an investment over a specified time period.",
- name="Rate of Return",
- termSource="",
-)
-
-event: MetadataChangeProposalWrapper = MetadataChangeProposalWrapper(
- entityUrn=term_urn,
- aspect=term_properties_aspect,
-)
-
-# Create rest emitter
-rest_emitter = DatahubRestEmitter(gms_server="http://localhost:8080")
-rest_emitter.emit(event)
-log.info(f"Created term {term_urn}")
-```
-
-We're using the `MetadataChangeProposalWrapper` to change entities in this example.
-For more information about the `MetadataChangeProposal`, please refer to [MetadataChangeProposal & MetadataChangeLog Events](/docs/advanced/mcp-mcl.md).
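-
-As a further illustration of the same `MetadataChangeProposalWrapper` + emitter pattern, the sketch below attaches the term created above to a dataset. This is only a rough sketch: the dataset urn points at one of the quickstart sample datasets (adjust it to your own data), and emitting the `glossaryTerms` aspect this way replaces any terms already attached to the dataset; the guide linked under "What's Next" shows the merge-aware approach.
-
-```python
-from datahub.emitter.mce_builder import make_dataset_urn
-from datahub.emitter.mcp import MetadataChangeProposalWrapper
-from datahub.emitter.rest_emitter import DatahubRestEmitter
-from datahub.metadata.schema_classes import (
-    AuditStampClass,
-    GlossaryTermAssociationClass,
-    GlossaryTermsClass,
-)
-
-# Dataset to attach the term to (here a quickstart sample dataset; change as needed)
-dataset_urn = make_dataset_urn(platform="hive", name="fct_users_created", env="PROD")
-
-# Associate the term created above; note that emitting this aspect overwrites any
-# glossary terms already attached to the dataset
-terms_aspect = GlossaryTermsClass(
-    terms=[GlossaryTermAssociationClass(urn="urn:li:glossaryTerm:rateofreturn")],
-    auditStamp=AuditStampClass(time=0, actor="urn:li:corpuser:ingestion"),
-)
-
-rest_emitter = DatahubRestEmitter(gms_server="http://localhost:8080")
-rest_emitter.emit(
-    MetadataChangeProposalWrapper(entityUrn=dataset_urn, aspect=terms_aspect)
-)
-```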
-
-
-## Expected Outcomes
-You can now see that the `Rate of Return` term has been created.
-To view the definition, you can either click on 'Govern > Glossary' at the top right of the page or simply search for the term by name.
-
-
-
-## What's Next?
-
-Now that you created a term, how about adding it to a dataset? Here's a guide on [how to add a term on a dataset](/docs/api/tutorials/adding-terms.md).
-
-
diff --git a/spaces/abdvl/datahub_qa_bot/docs/deploy/kubernetes.md b/spaces/abdvl/datahub_qa_bot/docs/deploy/kubernetes.md
deleted file mode 100644
index 5d6a61b98b7545e7a88a0ca1a564374f75525b51..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/deploy/kubernetes.md
+++ /dev/null
@@ -1,154 +0,0 @@
----
-title: "Deploying with Kubernetes"
----
-
-# Deploying DataHub with Kubernetes
-
-## Introduction
-
-Helm charts for deploying DataHub on a Kubernetes cluster are located in
-this [repository](https://github.com/acryldata/datahub-helm). We provide charts for
-deploying [Datahub](https://github.com/acryldata/datahub-helm/tree/master/charts/datahub) and
-its [dependencies](https://github.com/acryldata/datahub-helm/tree/master/charts/prerequisites)
-(Elasticsearch, optionally Neo4j, MySQL, and Kafka) on a Kubernetes cluster.
-
-This doc is a guide to deploying an instance of DataHub on a Kubernetes cluster from scratch using the above charts.
-
-## Setup
-
-1. Set up a kubernetes cluster
- - In a cloud platform of choice like [Amazon EKS](https://aws.amazon.com/eks),
- [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine),
- and [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/) OR
-   - In a local environment using [Minikube](https://minikube.sigs.k8s.io/docs/). Note, more than 7GB of RAM is required
-     to run Datahub and its dependencies
-2. Install the following tools:
- - [kubectl](https://kubernetes.io/docs/tasks/tools/) to manage kubernetes resources
- - [helm](https://helm.sh/docs/intro/install/) to deploy the resources based on helm charts. Note, we only support
- Helm 3.
-
-## Components
-
-Datahub consists of 4 main components: [GMS](https://datahubproject.io/docs/metadata-service),
-[MAE Consumer](https://datahubproject.io/docs/metadata-jobs/mae-consumer-job) (optional),
-[MCE Consumer](https://datahubproject.io/docs/metadata-jobs/mce-consumer-job) (optional), and
-[Frontend](https://datahubproject.io/docs/datahub-frontend). Kubernetes deployments for each of the components are
-defined as subcharts under the main
-[Datahub](https://github.com/acryldata/datahub-helm/tree/master/charts/datahub)
-helm chart.
-
-The main components are powered by 4 external dependencies:
-
-- Kafka
-- Local DB (MySQL, Postgres, MariaDB)
-- Search Index (Elasticsearch)
-- Graph Index (Supports either Neo4j or Elasticsearch)
-
-The dependencies must be deployed before deploying Datahub. We created a separate
-[chart](https://github.com/acryldata/datahub-helm/tree/master/charts/prerequisites)
-for deploying the dependencies with example configuration. They could also be deployed separately on-prem or leveraged
-as managed services. To remove your dependency on Neo4j, set enabled to false in
-the [values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/prerequisites/values.yaml#L54) for
-prerequisites. Then, override the `graph_service_impl` field in
-the [values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/values.yaml#L63) of datahub
-instead of `neo4j`.
-
-## Quickstart
-
-Assuming kubectl context points to the correct kubernetes cluster, first create kubernetes secrets that contain MySQL
-and Neo4j passwords.
-
-```(shell)
-kubectl create secret generic mysql-secrets --from-literal=mysql-root-password=datahub
-kubectl create secret generic neo4j-secrets --from-literal=neo4j-password=datahub
-```
-
-The above commands set the passwords to "datahub" as an example. Change them to any password of your choice.
-
-Add datahub helm repo by running the following
-
-```(shell)
-helm repo add datahub https://helm.datahubproject.io/
-```
-
-Then, deploy the dependencies by running the following
-
-```(shell)
-helm install prerequisites datahub/datahub-prerequisites
-```
-
-Note, the above uses the default configuration
-defined [here](https://github.com/acryldata/datahub-helm/blob/master/charts/prerequisites/values.yaml). You can change
-any of the configuration and deploy by running the following command.
-
-```(shell)
-helm install prerequisites datahub/datahub-prerequisites --values <path-to-values-file>
-```
-
-Run `kubectl get pods` to check whether all the pods for the dependencies are running. You should get a result similar
-to below.
-
-```
-NAME READY STATUS RESTARTS AGE
-elasticsearch-master-0 1/1 Running 0 62m
-elasticsearch-master-1 1/1 Running 0 62m
-elasticsearch-master-2 1/1 Running 0 62m
-prerequisites-cp-schema-registry-cf79bfccf-kvjtv 2/2 Running 1 63m
-prerequisites-kafka-0 1/1 Running 2 62m
-prerequisites-mysql-0 1/1 Running 1 62m
-prerequisites-neo4j-community-0 1/1 Running 0 52m
-prerequisites-zookeeper-0 1/1 Running 0 62m
-```
-
-Deploy Datahub by running the following
-
-```(shell)
-helm install datahub datahub/datahub
-```
-
-Values in [values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/values.yaml)
-have been preset to point to the dependencies deployed using
-the [prerequisites](https://github.com/acryldata/datahub-helm/tree/master/charts/prerequisites)
-chart with release name "prerequisites". If you deployed the helm chart using a different release name, update the
-quickstart-values.yaml file accordingly before installing.
-
-Run `kubectl get pods` to check whether all the datahub pods are running. You should get a result similar to below.
-
-```
-NAME READY STATUS RESTARTS AGE
-datahub-datahub-frontend-84c58df9f7-5bgwx 1/1 Running 0 4m2s
-datahub-datahub-gms-58b676f77c-c6pfx 1/1 Running 0 4m2s
-datahub-datahub-mae-consumer-7b98bf65d-tjbwx 1/1 Running 0 4m3s
-datahub-datahub-mce-consumer-8c57d8587-vjv9m 1/1 Running 0 4m2s
-datahub-elasticsearch-setup-job-8dz6b 0/1 Completed 0 4m50s
-datahub-kafka-setup-job-6blcj 0/1 Completed 0 4m40s
-datahub-mysql-setup-job-b57kc 0/1 Completed 0 4m7s
-elasticsearch-master-0 1/1 Running 0 97m
-elasticsearch-master-1 1/1 Running 0 97m
-elasticsearch-master-2 1/1 Running 0 97m
-prerequisites-cp-schema-registry-cf79bfccf-kvjtv 2/2 Running 1 99m
-prerequisites-kafka-0 1/1 Running 2 97m
-prerequisites-mysql-0 1/1 Running 1 97m
-prerequisites-neo4j-community-0 1/1 Running 0 88m
-prerequisites-zookeeper-0 1/1 Running 0 97m
-```
-
-You can run the following to expose the frontend locally. Note, you can find the pod name using the command above. In
-this case, the datahub-frontend pod name was `datahub-datahub-frontend-84c58df9f7-5bgwx`.
-
-```(shell)
-kubectl port-forward <datahub-frontend pod name> 9002:9002
-```
-
-You should be able to access the frontend via http://localhost:9002.
-
-Once you confirm that the pods are running well, you can set up ingress for datahub-frontend to expose the 9002 port to
-the public.
-
-## Other useful commands
-
-| Command | Description |
-|-----|------|
-| helm uninstall datahub | Remove DataHub |
-| helm ls | List of Helm charts |
-| helm history | Fetch a release history |
diff --git a/spaces/abdvl/datahub_qa_bot/docs/posts.md b/spaces/abdvl/datahub_qa_bot/docs/posts.md
deleted file mode 100644
index 9647ee4ca9da9f18f9b9a36c11dea0cf433c5dd5..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/posts.md
+++ /dev/null
@@ -1,53 +0,0 @@
-import FeatureAvailability from '@site/src/components/FeatureAvailability';
-
-# About DataHub Posts
-
-
-DataHub allows users to make Posts that can be displayed on the app. Currently, Posts are only supported on the Home Page, but may be extended to other surfaces of the app in the future. Posts can be used to accomplish the following:
-
-* Allowing Admins to post announcements on the home page
-* Pinning important DataHub assets or pages
-* Pinning important external links
-
-## Posts Setup, Prerequisites, and Permissions
-
-Anyone can view Posts on the home page. To create Posts, a user must either have the **Create Global Announcements** Privilege, or possess the **Admin** DataHub Role.
-
-## Using Posts
-
-To create a post, users must use the [createPost](../graphql/mutations.md#createPost) GraphQL mutation. There is currently no way to create posts using the UI, though this will come in the future.
-
-There is only one type of Post that can be currently made, and that is a **Home Page Announcement**. This may be extended in the future to other surfaces.
-
-DataHub currently supports two types of Post content. Posts can either contain **TEXT** or can be a **LINK**. When creating a post through GraphQL, users will have to supply the post content.
-
-For **TEXT** posts, the following pieces of information are required in the `content` object (of type [UpdatePostContentInput](../graphql/inputObjects.md#updatepostcontentinput)) of the GraphQL `input` (of type [CreatePostInput](../graphql/inputObjects.md#createpostinput)). **TEXT** posts cannot be clicked.
-* `contentType: TEXT`
-* `title`
-* `description`
-
-The `link` and `media` attributes are currently unused for **TEXT** posts.
-
-For **LINK** posts, the following pieces of information are required in the `content` object (of type [UpdatePostContentInput](../graphql/inputObjects.md#updatepostcontentinput)) of the GraphQL `input` (of type [CreatePostInput](../graphql/inputObjects.md#createpostinput)). **LINK** posts redirect to the provided link when clicked.
-* `contentType: LINK`
-* `title`
-* `link`
-* `media`. Currently only the **IMAGE** type is supported, and the URL of the image must be provided
-
-The `description` attribute is currently unused for **LINK** posts.
-
-Here are some examples of Posts displayed on the home page, with one **TEXT** post and two **LINK** posts.
-
-
-
-
-
-### GraphQL
-
-* [createPost](../graphql/mutations.md#createpost)
-* [listPosts](../graphql/queries.md#listposts)
-
-
-## FAQ and Troubleshooting
-
-*Need more help with Posts? Join the conversation in [Slack](http://slack.datahubproject.io)! Please post in the **#ui** channel!*
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/utils/contextmanagers.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/utils/contextmanagers.py
deleted file mode 100644
index 38a639262d949b5754dedf12f33fa814b030ea38..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/utils/contextmanagers.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import asyncio
-import contextlib
-import logging
-import os
-import time
-from typing import List
-
-import torch
-
-logger = logging.getLogger(__name__)
-
-DEBUG_COMPLETED_TIME = bool(os.environ.get('DEBUG_COMPLETED_TIME', False))
-
-
-@contextlib.asynccontextmanager
-async def completed(trace_name='',
- name='',
- sleep_interval=0.05,
- streams: List[torch.cuda.Stream] = None):
- """Async context manager that waits for work to complete on given CUDA
- streams."""
- if not torch.cuda.is_available():
- yield
- return
-
- stream_before_context_switch = torch.cuda.current_stream()
- if not streams:
- streams = [stream_before_context_switch]
- else:
- streams = [s if s else stream_before_context_switch for s in streams]
-
- end_events = [
- torch.cuda.Event(enable_timing=DEBUG_COMPLETED_TIME) for _ in streams
- ]
-
- if DEBUG_COMPLETED_TIME:
- start = torch.cuda.Event(enable_timing=True)
- stream_before_context_switch.record_event(start)
-
- cpu_start = time.monotonic()
- logger.debug('%s %s starting, streams: %s', trace_name, name, streams)
- grad_enabled_before = torch.is_grad_enabled()
- try:
- yield
- finally:
- current_stream = torch.cuda.current_stream()
- assert current_stream == stream_before_context_switch
-
- if DEBUG_COMPLETED_TIME:
- cpu_end = time.monotonic()
- for i, stream in enumerate(streams):
- event = end_events[i]
- stream.record_event(event)
-
- grad_enabled_after = torch.is_grad_enabled()
-
- # observed change of torch.is_grad_enabled() during concurrent run of
- # async_test_bboxes code
- assert (grad_enabled_before == grad_enabled_after
- ), 'Unexpected is_grad_enabled() value change'
-
- are_done = [e.query() for e in end_events]
- logger.debug('%s %s completed: %s streams: %s', trace_name, name,
- are_done, streams)
- with torch.cuda.stream(stream_before_context_switch):
- while not all(are_done):
- await asyncio.sleep(sleep_interval)
- are_done = [e.query() for e in end_events]
- logger.debug(
- '%s %s completed: %s streams: %s',
- trace_name,
- name,
- are_done,
- streams,
- )
-
- current_stream = torch.cuda.current_stream()
- assert current_stream == stream_before_context_switch
-
- if DEBUG_COMPLETED_TIME:
- cpu_time = (cpu_end - cpu_start) * 1000
- stream_times_ms = ''
- for i, stream in enumerate(streams):
- elapsed_time = start.elapsed_time(end_events[i])
- stream_times_ms += f' {stream} {elapsed_time:.2f} ms'
- logger.info('%s %s %.2f ms %s', trace_name, name, cpu_time,
- stream_times_ms)
-
-
-@contextlib.asynccontextmanager
-async def concurrent(streamqueue: asyncio.Queue,
- trace_name='concurrent',
- name='stream'):
- """Run code concurrently in different streams.
-
- :param streamqueue: asyncio.Queue instance.
-
- Queue tasks define the pool of streams used for concurrent execution.
- """
- if not torch.cuda.is_available():
- yield
- return
-
- initial_stream = torch.cuda.current_stream()
-
- with torch.cuda.stream(initial_stream):
- stream = await streamqueue.get()
- assert isinstance(stream, torch.cuda.Stream)
-
- try:
- with torch.cuda.stream(stream):
- logger.debug('%s %s is starting, stream: %s', trace_name, name,
- stream)
- yield
- current = torch.cuda.current_stream()
- assert current == stream
- logger.debug('%s %s has finished, stream: %s', trace_name,
- name, stream)
- finally:
- streamqueue.task_done()
- streamqueue.put_nowait(stream)
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/cc_attention.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/cc_attention.py
deleted file mode 100644
index 9207aa95e6730bd9b3362dee612059a5f0ce1c5e..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/cc_attention.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from annotator.uniformer.mmcv.cnn import PLUGIN_LAYERS, Scale
-
-
-def NEG_INF_DIAG(n, device):
- """Returns a diagonal matrix of size [n, n].
-
-    The diagonal entries are all "-inf". This avoids calculating the
-    overlapped elements in the Criss-Cross attention twice.
- """
- return torch.diag(torch.tensor(float('-inf')).to(device).repeat(n), 0)
-
-
-@PLUGIN_LAYERS.register_module()
-class CrissCrossAttention(nn.Module):
- """Criss-Cross Attention Module.
-
- .. note::
- Before v1.3.13, we use a CUDA op. Since v1.3.13, we switch
- to a pure PyTorch and equivalent implementation. For more
- details, please refer to https://github.com/open-mmlab/mmcv/pull/1201.
-
- Speed comparison for one forward pass
-
- - Input size: [2,512,97,97]
- - Device: 1 NVIDIA GeForce RTX 2080 Ti
-
- +-----------------------+---------------+------------+---------------+
- | |PyTorch version|CUDA version|Relative speed |
- +=======================+===============+============+===============+
- |with torch.no_grad() |0.00554402 s |0.0299619 s |5.4x |
- +-----------------------+---------------+------------+---------------+
- |no with torch.no_grad()|0.00562803 s |0.0301349 s |5.4x |
- +-----------------------+---------------+------------+---------------+
-
- Args:
- in_channels (int): Channels of the input feature map.
- """
-
- def __init__(self, in_channels):
- super().__init__()
- self.query_conv = nn.Conv2d(in_channels, in_channels // 8, 1)
- self.key_conv = nn.Conv2d(in_channels, in_channels // 8, 1)
- self.value_conv = nn.Conv2d(in_channels, in_channels, 1)
- self.gamma = Scale(0.)
- self.in_channels = in_channels
-
- def forward(self, x):
- """forward function of Criss-Cross Attention.
-
- Args:
- x (Tensor): Input feature. \
- shape (batch_size, in_channels, height, width)
- Returns:
- Tensor: Output of the layer, with shape of \
- (batch_size, in_channels, height, width)
- """
- B, C, H, W = x.size()
- query = self.query_conv(x)
- key = self.key_conv(x)
- value = self.value_conv(x)
- energy_H = torch.einsum('bchw,bciw->bwhi', query, key) + NEG_INF_DIAG(
- H, query.device)
- energy_H = energy_H.transpose(1, 2)
- energy_W = torch.einsum('bchw,bchj->bhwj', query, key)
- attn = F.softmax(
- torch.cat([energy_H, energy_W], dim=-1), dim=-1) # [B,H,W,(H+W)]
- out = torch.einsum('bciw,bhwi->bchw', value, attn[..., :H])
- out += torch.einsum('bchj,bhwj->bchw', value, attn[..., H:])
-
- out = self.gamma(out) + x
- out = out.contiguous()
-
- return out
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(in_channels={self.in_channels})'
- return s
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/pipelines/loading.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/pipelines/loading.py
deleted file mode 100644
index 5213aa3409f476e564970e85fd2bd973cb012fa0..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/pipelines/loading.py
+++ /dev/null
@@ -1,165 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
- * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv
- * Copyright (c) OpenMMLab. All rights reserved.
-'''
-
-import os.path as osp
-
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-
-from ..builder import PIPELINES
-
-
-@PIPELINES.register_module()
-class LoadImageFromFile(object):
- """Load an image from file.
-
- Required keys are "img_prefix" and "img_info" (a dict that must contain the
- key "filename"). Added or updated keys are "filename", "img", "img_shape",
- "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
- "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
- Args:
- to_float32 (bool): Whether to convert the loaded image to a float32
- numpy array. If set to False, the loaded image is an uint8 array.
- Defaults to False.
- color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
- Defaults to 'color'.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default:
- 'cv2'
- """
-
- def __init__(self,
- to_float32=False,
- color_type='color',
- file_client_args=dict(backend='disk'),
- imdecode_backend='cv2'):
- self.to_float32 = to_float32
- self.color_type = color_type
- self.file_client_args = file_client_args.copy()
- self.file_client = None
- self.imdecode_backend = imdecode_backend
-
- def __call__(self, results):
- """Call functions to load image and get image meta information.
-
- Args:
- results (dict): Result dict from :obj:`mmseg.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded image and meta information.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results.get('img_prefix') is not None:
- filename = osp.join(results['img_prefix'],
- results['img_info']['filename'])
- else:
- filename = results['img_info']['filename']
- img_bytes = self.file_client.get(filename)
- img = mmcv.imfrombytes(
- img_bytes, flag=self.color_type, backend=self.imdecode_backend)
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = filename
- results['ori_filename'] = results['img_info']['filename']
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- # Set initial values for default meta_keys
- results['pad_shape'] = img.shape
- results['scale_factor'] = 1.0
- num_channels = 1 if len(img.shape) < 3 else img.shape[2]
- results['img_norm_cfg'] = dict(
- mean=np.zeros(num_channels, dtype=np.float32),
- std=np.ones(num_channels, dtype=np.float32),
- to_rgb=False)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(to_float32={self.to_float32},'
- repr_str += f"color_type='{self.color_type}',"
- repr_str += f"imdecode_backend='{self.imdecode_backend}')"
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadAnnotations(object):
- """Load annotations for semantic segmentation.
-
- Args:
-        reduce_zero_label (bool): Whether to reduce all label values by 1.
-            Usually used for datasets where 0 is the background label.
- Default: False.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default:
- 'pillow'
- """
-
- def __init__(self,
- reduce_zero_label=False,
- file_client_args=dict(backend='disk'),
- imdecode_backend='pillow'):
- self.reduce_zero_label = reduce_zero_label
- self.file_client_args = file_client_args.copy()
- self.file_client = None
- self.imdecode_backend = imdecode_backend
-
- def __call__(self, results):
- """Call function to load multiple types annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmseg.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded semantic segmentation annotations.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results.get('seg_prefix', None) is not None:
- filename = osp.join(results['seg_prefix'],
- results['ann_info']['seg_map'])
- else:
- filename = results['ann_info']['seg_map']
- img_bytes = self.file_client.get(filename)
- gt_semantic_seg = mmcv.imfrombytes(
- img_bytes, flag='unchanged',
- backend=self.imdecode_backend).squeeze().astype(np.uint8)
- # modify if custom classes
- if results.get('label_map', None) is not None:
- for old_id, new_id in results['label_map'].items():
- gt_semantic_seg[gt_semantic_seg == old_id] = new_id
- # reduce zero_label
- if self.reduce_zero_label:
- # avoid using underflow conversion
- gt_semantic_seg[gt_semantic_seg == 0] = 255
- gt_semantic_seg = gt_semantic_seg - 1
- gt_semantic_seg[gt_semantic_seg == 254] = 255
- results['gt_semantic_seg'] = gt_semantic_seg
- results['seg_fields'].append('gt_semantic_seg')
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(reduce_zero_label={self.reduce_zero_label},'
- repr_str += f"imdecode_backend='{self.imdecode_backend}')"
- return repr_str
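
For context, a minimal sketch of how the two transforms above are usually chained at the front of an mmseg-style data pipeline; the config keys mirror the docstrings, and any further augmentation steps are omitted.

```python
# Both classes are registered via @PIPELINES.register_module(), so the pipeline
# references them by type name; keyword arguments map to their __init__ parameters.
train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=False, color_type='color'),
    dict(type='LoadAnnotations', reduce_zero_label=True),
    # ...resize / flip / normalize / formatting steps would follow here
]
```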
diff --git a/spaces/ai-maker-space/ChatWithYourPDF/app.py b/spaces/ai-maker-space/ChatWithYourPDF/app.py
deleted file mode 100644
index 6ae6dc4f00a3b401305e15b8a66869498fd50a08..0000000000000000000000000000000000000000
--- a/spaces/ai-maker-space/ChatWithYourPDF/app.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import os
-from typing import List
-
-from langchain.embeddings.openai import OpenAIEmbeddings
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.vectorstores import Chroma
-from langchain.chains import (
- ConversationalRetrievalChain,
-)
-from langchain.document_loaders import PyPDFLoader
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.chat import (
- ChatPromptTemplate,
- SystemMessagePromptTemplate,
- HumanMessagePromptTemplate,
-)
-from langchain.docstore.document import Document
-from langchain.memory import ChatMessageHistory, ConversationBufferMemory
-from chainlit.types import AskFileResponse
-
-import chainlit as cl
-
-text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
-
-system_template = """Use the following pieces of context to answer the users question.
-If you don't know the answer, just say that you don't know, don't try to make up an answer.
-ALWAYS return a "SOURCES" part in your answer.
-The "SOURCES" part should be a reference to the source of the document from which you got your answer.
-
-And if the user greets with greetings like Hi, hello, How are you, etc reply accordingly as well.
-
-Example of your response should be:
-
-The answer is foo
-SOURCES: xyz
-
-
-Begin!
-----------------
-{summaries}"""
-messages = [
- SystemMessagePromptTemplate.from_template(system_template),
- HumanMessagePromptTemplate.from_template("{question}"),
-]
-prompt = ChatPromptTemplate.from_messages(messages)
-chain_type_kwargs = {"prompt": prompt}
-
-
-def process_file(file: AskFileResponse):
-    import tempfile
-
-    # Write the uploaded PDF to a named temporary file so PyPDFLoader can read it
-    # from disk (the variable is renamed so it no longer shadows the tempfile module).
-    with tempfile.NamedTemporaryFile(mode="w", delete=False) as temp_file:
-        with open(temp_file.name, "wb") as f:
-            f.write(file.content)
-
-    pypdf_loader = PyPDFLoader(temp_file.name)
-    texts = pypdf_loader.load_and_split()
-    texts = [text.page_content for text in texts]
-    return texts
-
-
-@cl.on_chat_start
-async def on_chat_start():
- files = None
-
- # Wait for the user to upload a file
-    while files is None:
- files = await cl.AskFileMessage(
- content="Please upload a PDF file to begin!",
- accept=["application/pdf"],
- max_size_mb=20,
- timeout=180,
- ).send()
-
- file = files[0]
-
- msg = cl.Message(
- content=f"Processing `{file.name}`...", disable_human_feedback=True
- )
- await msg.send()
-
- # load the file
- texts = process_file(file)
-
- print(texts[0])
-
- # Create a metadata for each chunk
- metadatas = [{"source": f"{i}-pl"} for i in range(len(texts))]
-
- # Create a Chroma vector store
- embeddings = OpenAIEmbeddings()
- docsearch = await cl.make_async(Chroma.from_texts)(
- texts, embeddings, metadatas=metadatas
- )
-
- message_history = ChatMessageHistory()
-
- memory = ConversationBufferMemory(
- memory_key="chat_history",
- output_key="answer",
- chat_memory=message_history,
- return_messages=True,
- )
-
- # Create a chain that uses the Chroma vector store
- chain = ConversationalRetrievalChain.from_llm(
- ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, streaming=True),
- chain_type="stuff",
- retriever=docsearch.as_retriever(),
- memory=memory,
- return_source_documents=True,
- )
-
- # Let the user know that the system is ready
- msg.content = f"Processing `{file.name}` done. You can now ask questions!"
- await msg.update()
-
- cl.user_session.set("chain", chain)
-
-
-@cl.on_message
-async def main(message):
- chain = cl.user_session.get("chain") # type: ConversationalRetrievalChain
- cb = cl.AsyncLangchainCallbackHandler()
-
- res = await chain.acall(message.content, callbacks=[cb])
- answer = res["answer"]
- source_documents = res["source_documents"] # type: List[Document]
-
- text_elements = [] # type: List[cl.Text]
-
- if source_documents:
- for source_idx, source_doc in enumerate(source_documents):
- source_name = f"source_{source_idx}"
- # Create the text element referenced in the message
- text_elements.append(
- cl.Text(content=source_doc.page_content, name=source_name)
- )
- source_names = [text_el.name for text_el in text_elements]
-
- if source_names:
- answer += f"\nSources: {', '.join(source_names)}"
- else:
- answer += "\nNo sources found"
-
- await cl.Message(content=answer, elements=text_elements).send()
diff --git a/spaces/akhaliq/SwinIR/download-weights.sh b/spaces/akhaliq/SwinIR/download-weights.sh
deleted file mode 100644
index 1232611b4d81d15413ced7535d8ef1ca89d323a3..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SwinIR/download-weights.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/sh
-
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth -P experiments/pretrained_models
-wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth -P experiments/pretrained_models
\ No newline at end of file
diff --git a/spaces/akhaliq/deeplab2/evaluation/panoptic_quality_test.py b/spaces/akhaliq/deeplab2/evaluation/panoptic_quality_test.py
deleted file mode 100644
index ecef73fd8d93dbcac295f9f5431c1ba4cc08398b..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/evaluation/panoptic_quality_test.py
+++ /dev/null
@@ -1,214 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Tests for panoptic_quality metrics."""
-import collections
-
-from absl import logging
-import numpy as np
-import tensorflow as tf
-
-from deeplab2.evaluation import panoptic_quality
-from deeplab2.evaluation import test_utils
-
-# See the definition of the color names at:
-# https://en.wikipedia.org/wiki/Web_colors.
-_CLASS_COLOR_MAP = {
- (0, 0, 0): 0,
- (0, 0, 255): 1, # Person (blue).
- (255, 0, 0): 2, # Bear (red).
- (0, 255, 0): 3, # Tree (lime).
- (255, 0, 255): 4, # Bird (fuchsia).
- (0, 255, 255): 5, # Sky (aqua).
- (255, 255, 0): 6, # Cat (yellow).
-}
-
-
-def combine_maps(semantic_map, instance_map, label_divisor):
- combined_map = instance_map + semantic_map * label_divisor
- return tf.cast(combined_map, tf.int32)
-
-
-class PanopticQualityMetricTest(tf.test.TestCase):
-
- def test_streaming_metric_on_single_image(self):
- max_instances_per_category = 1000
- instance_class_map = {
- 0: 0,
- 47: 1,
- 97: 1,
- 133: 1,
- 150: 1,
- 174: 1,
- 198: 2,
- 215: 1,
- 244: 1,
- 255: 1,
- }
- gt_instances, gt_classes = test_utils.panoptic_segmentation_with_class_map(
- 'team_gt_instance.png', instance_class_map)
-
- pred_classes = test_utils.read_segmentation_with_rgb_color_map(
- 'team_pred_class.png', _CLASS_COLOR_MAP)
- pred_instances = test_utils.read_test_image(
- 'team_pred_instance.png', image_format='L')
-
- pq_obj = panoptic_quality.PanopticQuality(
- num_classes=3,
- max_instances_per_category=max_instances_per_category,
- ignored_label=0, offset=256*256)
-
- y_true = combine_maps(gt_classes, gt_instances, max_instances_per_category)
- y_pred = combine_maps(pred_classes, pred_instances,
- max_instances_per_category)
- pq_obj.update_state(y_true, y_pred)
- result = pq_obj.result().numpy()
- self.assertAlmostEqual(result[0], 0.62156284, places=4)
- self.assertAlmostEqual(result[1], 0.64664984, places=4)
- self.assertAlmostEqual(result[2], 0.9666667, places=4)
- self.assertEqual(result[3], 4.)
- self.assertAlmostEqual(result[4], 0.5)
- self.assertEqual(result[5], 0.)
-
- def test_streaming_metric_on_multiple_images(self):
- num_classes = 7
-
- bird_gt_instance_class_map = {
- 92: 5,
- 176: 3,
- 255: 4,
- }
- cat_gt_instance_class_map = {
- 0: 0,
- 255: 6,
- }
- team_gt_instance_class_map = {
- 0: 0,
- 47: 1,
- 97: 1,
- 133: 1,
- 150: 1,
- 174: 1,
- 198: 2,
- 215: 1,
- 244: 1,
- 255: 1,
- }
- max_instances_per_category = 256
- test_image = collections.namedtuple(
- 'TestImage',
- ['gt_class_map', 'gt_path', 'pred_inst_path', 'pred_class_path'])
- test_images = [
- test_image(bird_gt_instance_class_map, 'bird_gt.png',
- 'bird_pred_instance.png', 'bird_pred_class.png'),
- test_image(cat_gt_instance_class_map, 'cat_gt.png',
- 'cat_pred_instance.png', 'cat_pred_class.png'),
- test_image(team_gt_instance_class_map, 'team_gt_instance.png',
- 'team_pred_instance.png', 'team_pred_class.png'),
- ]
-
- gt_classes = []
- gt_instances = []
- pred_classes = []
- pred_instances = []
- for test_image in test_images:
- (image_gt_instances,
- image_gt_classes) = test_utils.panoptic_segmentation_with_class_map(
- test_image.gt_path, test_image.gt_class_map)
- gt_classes.append(image_gt_classes)
- gt_instances.append(image_gt_instances)
-
- pred_classes.append(
- test_utils.read_segmentation_with_rgb_color_map(
- test_image.pred_class_path, _CLASS_COLOR_MAP))
- pred_instances.append(
- test_utils.read_test_image(test_image.pred_inst_path,
- image_format='L'))
-
- pq_obj = panoptic_quality.PanopticQuality(
- num_classes=num_classes,
- max_instances_per_category=max_instances_per_category,
- ignored_label=0, offset=256*256)
- for pred_class, pred_instance, gt_class, gt_instance in zip(
- pred_classes, pred_instances, gt_classes, gt_instances):
- y_true = combine_maps(gt_class, gt_instance, max_instances_per_category)
- y_pred = combine_maps(pred_class, pred_instance,
- max_instances_per_category)
- pq_obj.update_state(y_true, y_pred)
- result = pq_obj.result().numpy()
-
- self.assertAlmostEqual(result[0], 0.76855499, places=4)
- self.assertAlmostEqual(result[1], 0.7769174, places=4)
- self.assertAlmostEqual(result[2], 0.98888892, places=4)
- self.assertEqual(result[3], 2.)
- self.assertAlmostEqual(result[4], 1. / 6, places=4)
- self.assertEqual(result[5], 0.)
-
- def test_predicted_non_contiguous_ignore_label(self):
- max_instances_per_category = 256
- pq_obj = panoptic_quality.PanopticQuality(
- num_classes=3,
- max_instances_per_category=max_instances_per_category,
- ignored_label=9,
- offset=256 * 256)
-
- gt_class = [
- [0, 9, 9],
- [1, 2, 2],
- [1, 9, 9],
- ]
- gt_instance = [
- [0, 2, 2],
- [1, 0, 0],
- [1, 0, 0],
- ]
- y_true = combine_maps(
- np.array(gt_class), np.array(gt_instance), max_instances_per_category)
- logging.info('y_true=\n%s', y_true)
-
- pred_class = [
- [0, 0, 9],
- [1, 1, 1],
- [1, 9, 9],
- ]
- pred_instance = [
- [0, 0, 0],
- [0, 1, 1],
- [0, 1, 1],
- ]
- y_pred = combine_maps(
- np.array(pred_class), np.array(pred_instance),
- max_instances_per_category)
- logging.info('y_pred=\n%s', y_pred)
-
- pq_obj.update_state(y_true, y_pred)
- result = pq_obj.result().numpy()
-
- # pq
- self.assertAlmostEqual(result[0], 2. / 9, places=4)
- # sq
- self.assertAlmostEqual(result[1], 1. / 3, places=4)
- # rq
- self.assertAlmostEqual(result[2], 2. / 9, places=4)
- # tp
- self.assertAlmostEqual(result[3], 1. / 3, places=4)
- # fn
- self.assertAlmostEqual(result[4], 2. / 3, places=4)
- # fp
- self.assertAlmostEqual(result[5], 2. / 3, places=4)
-
-
-if __name__ == '__main__':
- tf.test.main()
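
All three tests above rely on the same panoptic encoding implemented by `combine_maps`, namely `panoptic_id = semantic * label_divisor + instance`. A tiny self-contained sketch of encoding and decoding that id with NumPy (toy arrays, not taken from the test data):

```python
import numpy as np

label_divisor = 256                     # max_instances_per_category in the tests above
semantic = np.array([[1, 2], [0, 1]])   # per-pixel class ids
instance = np.array([[3, 0], [0, 7]])   # per-pixel instance ids

panoptic = semantic * label_divisor + instance              # same encoding as combine_maps
assert np.array_equal(panoptic // label_divisor, semantic)  # recover the class map
assert np.array_equal(panoptic % label_divisor, instance)   # recover the instance map
```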
diff --git a/spaces/aliceoq/vozes-da-loirinha/vc_infer_pipeline.py b/spaces/aliceoq/vozes-da-loirinha/vc_infer_pipeline.py
deleted file mode 100644
index 81d163305f9f8c158f83690bd631de3433c2adf1..0000000000000000000000000000000000000000
--- a/spaces/aliceoq/vozes-da-loirinha/vc_infer_pipeline.py
+++ /dev/null
@@ -1,650 +0,0 @@
-import numpy as np, parselmouth, torch, pdb, sys, os
-from time import time as ttime
-import torch.nn.functional as F
-import torchcrepe # Fork feature. Use the crepe f0 algorithm. New dependency (pip install torchcrepe)
-from torch import Tensor
-import scipy.signal as signal
-import pyworld, os, traceback, faiss, librosa, torchcrepe
-from scipy import signal
-from functools import lru_cache
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav = {}
-
-
-@lru_cache
-def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
- audio = input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-
-def change_rms(data1, sr1, data2, sr2, rate):  # data1 is the input audio, data2 is the output audio, rate is the weight given to data2
- # print(data1.max(),data2.max())
- rms1 = librosa.feature.rms(
- y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
-    )  # one RMS point every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
- rms1 = torch.from_numpy(rms1)
- rms1 = F.interpolate(
- rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.from_numpy(rms2)
- rms2 = F.interpolate(
- rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
- data2 *= (
- torch.pow(rms1, torch.tensor(1 - rate))
- * torch.pow(rms2, torch.tensor(rate - 1))
- ).numpy()
- return data2
-
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
-        self.sr = 16000  # HuBERT input sampling rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * self.x_pad  # padding samples added before and after each chunk
-        self.t_pad_tgt = tgt_sr * self.x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * self.x_query  # number of samples searched around each candidate cut point
-        self.t_center = self.sr * self.x_center  # spacing (in samples) between candidate cut points
-        self.t_max = self.sr * self.x_max  # if the audio is shorter than this, skip the cut-point search
- self.device = config.device
-
- # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device)
- def get_optimal_torch_device(self, index: int = 0) -> torch.device:
- # Get cuda device
- if torch.cuda.is_available():
- return torch.device(
- f"cuda:{index % torch.cuda.device_count()}"
- ) # Very fast
- elif torch.backends.mps.is_available():
- return torch.device("mps")
- # Insert an else here to grab "xla" devices if available. TO DO later. Requires the torch_xla.core.xla_model library
- # Else wise return the "cpu" as a torch device,
- return torch.device("cpu")
-
- # Fork Feature: Compute f0 with the crepe method
- def get_f0_crepe_computation(
- self,
- x,
- f0_min,
- f0_max,
- p_len,
- hop_length=160, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time.
- model="full", # Either use crepe-tiny "tiny" or crepe "full". Default is full
- ):
- x = x.astype(
- np.float32
- ) # fixes the F.conv2D exception. We needed to convert double to float.
- x /= np.quantile(np.abs(x), 0.999)
- torch_device = self.get_optimal_torch_device()
- audio = torch.from_numpy(x).to(torch_device, copy=True)
- audio = torch.unsqueeze(audio, dim=0)
- if audio.ndim == 2 and audio.shape[0] > 1:
- audio = torch.mean(audio, dim=0, keepdim=True).detach()
- audio = audio.detach()
- print("Initiating prediction with a crepe_hop_length of: " + str(hop_length))
- pitch: Tensor = torchcrepe.predict(
- audio,
- self.sr,
- hop_length,
- f0_min,
- f0_max,
- model,
- batch_size=hop_length * 2,
- device=torch_device,
- pad=True,
- )
- p_len = p_len or x.shape[0] // hop_length
- # Resize the pitch for final f0
- source = np.array(pitch.squeeze(0).cpu().float().numpy())
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * p_len, len(source)) / p_len,
- np.arange(0, len(source)),
- source,
- )
- f0 = np.nan_to_num(target)
- return f0 # Resized f0
-
- def get_f0_official_crepe_computation(
- self,
- x,
- f0_min,
- f0_max,
- model="full",
- ):
- # Pick a batch size that doesn't cause memory errors on your gpu
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- return f0
-
- # Fork Feature: Compute pYIN f0 method
- def get_f0_pyin_computation(self, x, f0_min, f0_max):
- y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True)
- f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max)
- f0 = f0[1:] # Get rid of extra first frame
- return f0
-
- # Fork Feature: Acquire median hybrid f0 estimation calculation
- def get_f0_hybrid_computation(
- self,
- methods_str,
- input_audio_path,
- x,
- f0_min,
- f0_max,
- p_len,
- filter_radius,
- crepe_hop_length,
- time_step,
- ):
- # Get various f0 methods from input to use in the computation stack
- s = methods_str
- s = s.split("hybrid")[1]
- s = s.replace("[", "").replace("]", "")
- methods = s.split("+")
- f0_computation_stack = []
-
- print("Calculating f0 pitch estimations for methods: %s" % str(methods))
- x = x.astype(np.float32)
- x /= np.quantile(np.abs(x), 0.999)
- # Get f0 calculations for all methods specified
- for method in methods:
- f0 = None
- if method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif method == "crepe":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max)
- f0 = f0[1:] # Get rid of extra first frame
- elif method == "crepe-tiny":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny")
- f0 = f0[1:] # Get rid of extra first frame
- elif method == "mangio-crepe":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length
- )
- elif method == "mangio-crepe-tiny":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length, "tiny"
- )
- elif method == "harvest":
- f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
- if filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- f0 = f0[1:] # Get rid of first frame.
- elif method == "dio": # Potentially buggy?
- f0, t = pyworld.dio(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 = f0[1:]
- # elif method == "pyin": Not Working just yet
- # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max)
- # Push method to the stack
- f0_computation_stack.append(f0)
-
- for fc in f0_computation_stack:
- print(len(fc))
-
- print("Calculating hybrid median f0 from the stack of: %s" % str(methods))
- f0_median_hybrid = None
- if len(f0_computation_stack) == 1:
- f0_median_hybrid = f0_computation_stack[0]
- else:
- f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0)
- return f0_median_hybrid
-
- def get_f0(
- self,
- input_audio_path,
- x,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- crepe_hop_length,
- inp_f0=None,
- ):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
- if filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "dio": # Potentially Buggy?
- f0, t = pyworld.dio(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "crepe":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max)
- elif f0_method == "crepe-tiny":
- f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny")
- elif f0_method == "mangio-crepe":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length
- )
- elif f0_method == "mangio-crepe-tiny":
- f0 = self.get_f0_crepe_computation(
- x, f0_min, f0_max, p_len, crepe_hop_length, "tiny"
- )
- elif f0_method == "rmvpe":
- if hasattr(self, "model_rmvpe") == False:
- from rmvpe import RMVPE
-
- print("loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "rmvpe.pt", is_half=self.is_half, device=self.device
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
-
- elif "hybrid" in f0_method:
- # Perform hybrid median pitch estimation
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = self.get_f0_hybrid_computation(
- f0_method,
- input_audio_path,
- x,
- f0_min,
- f0_max,
- p_len,
- filter_radius,
- crepe_hop_length,
- time_step,
- )
-
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(int)  # np.int was removed in NumPy >= 1.24; the builtin int is equivalent here
-
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0]) if version == "v1" else logits[0]
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = feats.clone()
- if (
- isinstance(index, type(None)) == False
- and isinstance(big_npy, type(None)) == False
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute(
- 0, 2, 1
- )
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch != None and pitchf != None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
-
- if protect < 0.5 and pitch != None and pitchf != None:
- pitchff = pitchf.clone()
- pitchff[pitchf > 0] = 1
- pitchff[pitchf < 1] = protect
- pitchff = pitchff.unsqueeze(-1)
- feats = feats * pitchff + feats0 * (1 - pitchff)
- feats = feats.to(feats0.dtype)
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch != None and pitchf != None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy()
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- crepe_hop_length,
- progress,
- f0_file=None,
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index) == True
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- progress(0.4, desc="Gerando áudio...")
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name") == True:
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(
- input_audio_path,
- audio_pad,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- crepe_hop_length,
- inp_f0,
- )
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps":
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- progress(0.6, desc="Gerando áudio...")
- audio_opt = np.concatenate(audio_opt)
- if rms_mix_rate != 1:
- audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- audio_opt = librosa.resample(
- audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
- )
- progress(0.8, desc="Gerando áudio...")
- audio_max = np.abs(audio_opt).max() / 0.99
- max_int16 = 32768
- if audio_max > 1:
- max_int16 /= audio_max
- audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
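
As a side note, the last step of `get_f0` above quantizes f0 in Hz into 255 mel-scaled coarse bins. A standalone sketch of that mapping, with the constants copied from the method (the sample f0 values are made up):

```python
import numpy as np

f0_min, f0_max = 50.0, 1100.0
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)

f0 = np.array([0.0, 110.0, 220.0, 440.0])   # Hz; 0 marks unvoiced frames
f0_mel = 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
f0_mel = np.clip(f0_mel, 1, 255)            # same clamping as the method above
f0_coarse = np.rint(f0_mel).astype(int)     # integer pitch codes in [1, 255]
print(f0_coarse)                            # unvoiced frames map to code 1
```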
diff --git a/spaces/allknowingroger/Image-Models-Test146/README.md b/spaces/allknowingroger/Image-Models-Test146/README.md
deleted file mode 100644
index a3a43bf672ca727d8113068aed4ea790c9de9309..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test146/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-duplicated_from: allknowingroger/Image-Models-Test142
----
-
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test177/app.py b/spaces/allknowingroger/Image-Models-Test177/app.py
deleted file mode 100644
index 827b380f766ab55ff8d7d888e2d6a2fae752ca34..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test177/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "Yntec/nuipenimix2",
- "melaris/nilooai",
- "salma-remyx/lora-trained-xl-colab",
- "joachimsallstrom/aether-glitch-lora-for-sdxl",
- "alessandroaere/lora-trained-xl-colab",
- "LinoyTsaban/huggy_v23",
- "milaidy/jardepoz",
- "shikari2917/mypic4",
- "Yntec/SCMix",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        # model_functions is keyed by int, so look the index up as an int (str keys would never match)
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
-        #     gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; adding commas improves results; click the Improve button to refine)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test85/README.md b/spaces/allknowingroger/Image-Models-Test85/README.md
deleted file mode 100644
index 30c3cf4d496c2fcf11e1659264655c971c669ad5..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test85/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test84
----
-
-
\ No newline at end of file
diff --git a/spaces/amankishore/sjc/sd1/ldm/modules/ema.py b/spaces/amankishore/sjc/sd1/ldm/modules/ema.py
deleted file mode 100644
index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000
--- a/spaces/amankishore/sjc/sd1/ldm/modules/ema.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
-    def __init__(self, model, decay=0.9999, use_num_updates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
-        self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int) if use_num_updates
-                             else torch.tensor(-1, dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
-                # remove '.' since it is not allowed in buffer names
- s_name = name.replace('.','')
- self.m_name2s_name.update({name:s_name})
- self.register_buffer(s_name,p.clone().detach().data)
-
- self.collected_params = []
-
- def forward(self,model):
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
-                assert key not in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
-                assert key not in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
diff --git a/spaces/andresgtn/bean-leaf-health-classifier/README.md b/spaces/andresgtn/bean-leaf-health-classifier/README.md
deleted file mode 100644
index 2c7e1ebe3fba4edaf72306cbb75173e75247c8b9..0000000000000000000000000000000000000000
--- a/spaces/andresgtn/bean-leaf-health-classifier/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Bean Leaf Health Classifier
-emoji: 🐢
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/deepspeed_parameters.py b/spaces/antonovmaxim/text-generation-webui-space/modules/deepspeed_parameters.py
deleted file mode 100644
index 9116f5792fea4edf4b536b6605ee40e254109a98..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/modules/deepspeed_parameters.py
+++ /dev/null
@@ -1,74 +0,0 @@
-def generate_ds_config(ds_bf16, train_batch_size, nvme_offload_dir):
- '''
-    DeepSpeed configuration
- https://huggingface.co/docs/transformers/main_classes/deepspeed
- '''
-
- if nvme_offload_dir:
- ds_config = {
- "fp16": {
- "enabled": not ds_bf16,
- },
- "bf16": {
- "enabled": ds_bf16,
- },
- "zero_optimization": {
- "stage": 3,
- "offload_param": {
- "device": "nvme",
- "nvme_path": nvme_offload_dir,
- "pin_memory": True,
- "buffer_count": 5,
- "buffer_size": 1e9,
- "max_in_cpu": 1e9
- },
- "overlap_comm": True,
- "reduce_bucket_size": "auto",
- "contiguous_gradients": True,
- "sub_group_size": 1e8,
- "stage3_prefetch_bucket_size": "auto",
- "stage3_param_persistence_threshold": "auto",
- "stage3_max_live_parameters": "auto",
- "stage3_max_reuse_distance": "auto",
- },
- "aio": {
- "block_size": 262144,
- "queue_depth": 32,
- "thread_count": 1,
- "single_submit": False,
- "overlap_events": True
- },
- "steps_per_print": 2000,
- "train_batch_size": train_batch_size,
- "train_micro_batch_size_per_gpu": 1,
- "wall_clock_breakdown": False
- }
- else:
- ds_config = {
- "fp16": {
- "enabled": not ds_bf16,
- },
- "bf16": {
- "enabled": ds_bf16,
- },
- "zero_optimization": {
- "stage": 3,
- "offload_param": {
- "device": "cpu",
- "pin_memory": True
- },
- "overlap_comm": True,
- "contiguous_gradients": True,
- "reduce_bucket_size": "auto",
- "stage3_prefetch_bucket_size": "auto",
- "stage3_param_persistence_threshold": "auto",
- "stage3_max_live_parameters": "auto",
- "stage3_max_reuse_distance": "auto",
- },
- "steps_per_print": 2000,
- "train_batch_size": train_batch_size,
- "train_micro_batch_size_per_gpu": 1,
- "wall_clock_breakdown": False
- }
-
- return ds_config
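
A hedged usage sketch of the helper above: it only builds a dict, so it can be exercised in isolation (the argument values here are arbitrary, and passing `None` for the NVMe path selects the CPU-offload branch).

```python
# Assumes generate_ds_config from the module above is importable.
ds_config = generate_ds_config(ds_bf16=True, train_batch_size=8, nvme_offload_dir=None)
assert ds_config["bf16"]["enabled"] is True
assert ds_config["zero_optimization"]["offload_param"]["device"] == "cpu"
```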
diff --git a/spaces/arbml/Ashaar/README.md b/spaces/arbml/Ashaar/README.md
deleted file mode 100644
index 2c2c8d33881b2b8a9b9f786b6279beee761df0e4..0000000000000000000000000000000000000000
--- a/spaces/arbml/Ashaar/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Ashaar
-emoji: 🧑🎤
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/arsalagrey/speech-recognition-vue/README.md b/spaces/arsalagrey/speech-recognition-vue/README.md
deleted file mode 100644
index a4d3148e533329c1f43e3b597015ba3689b85d63..0000000000000000000000000000000000000000
--- a/spaces/arsalagrey/speech-recognition-vue/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Speech Recognition Vue
-emoji: 👀
-colorFrom: indigo
-colorTo: blue
-sdk: static
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/autoregressive.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/autoregressive.py
deleted file mode 100644
index 14d881bc1029ef577f24ae28f9414e431661142a..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/autoregressive.py
+++ /dev/null
@@ -1,631 +0,0 @@
-# AGPL: a notification must be added stating that changes have been made to that file.
-import functools
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers import GPT2Config, GPT2PreTrainedModel, LogitsProcessorList
-from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions
-
-from TTS.tts.layers.tortoise.arch_utils import AttentionBlock, TypicalLogitsWarper
-
-
-def null_position_embeddings(range, dim):
- return torch.zeros((range.shape[0], range.shape[1], dim), device=range.device)
-
-
-def _p(t):
- return t and (len(t), len(t[0]), t[0][0].shape) # kv_cache debug
-
-
-class ResBlock(nn.Module):
- """
- Basic residual convolutional block that uses GroupNorm.
- """
-
- def __init__(self, chan):
- super().__init__()
- self.net = nn.Sequential(
- nn.Conv1d(chan, chan, kernel_size=3, padding=1),
- nn.GroupNorm(chan // 8, chan),
- nn.ReLU(),
- nn.Conv1d(chan, chan, kernel_size=3, padding=1),
- nn.GroupNorm(chan // 8, chan),
- )
-
- def forward(self, x):
- return F.relu(self.net(x) + x)
-
-
-class GPT2InferenceModel(GPT2PreTrainedModel):
- def __init__(self, config, gpt, text_pos_emb, embeddings, norm, linear, kv_cache):
- super().__init__(config)
- self.transformer = gpt
- self.text_pos_embedding = text_pos_emb
- self.embeddings = embeddings
- self.lm_head = nn.Sequential(norm, linear)
- self.kv_cache = kv_cache
-
- def store_mel_emb(self, mel_emb):
- self.cached_mel_emb = mel_emb
-
- def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
- token_type_ids = kwargs.get("token_type_ids", None) # usually None
- if not self.kv_cache:
- past_key_values = None
- # only last token for inputs_ids if past is defined in kwargs
- if past_key_values:
- input_ids = input_ids[:, -1].unsqueeze(-1)
- if token_type_ids is not None:
- token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
-
- attention_mask = kwargs.get("attention_mask", None)
- position_ids = kwargs.get("position_ids", None)
-
- if attention_mask is not None and position_ids is None:
- # create position_ids on the fly for batch generation
- position_ids = attention_mask.long().cumsum(-1) - 1
- position_ids.masked_fill_(attention_mask == 0, 1)
- if past_key_values:
- position_ids = position_ids[:, -1].unsqueeze(-1)
- else:
- position_ids = None
- return {
- "input_ids": input_ids,
- "past_key_values": past_key_values,
- "use_cache": kwargs.get("use_cache"),
- "position_ids": position_ids,
- "attention_mask": attention_mask,
- "token_type_ids": token_type_ids,
- }
-
- def forward(
- self,
- input_ids=None,
- past_key_values=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- labels=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- assert self.cached_mel_emb is not None
- assert inputs_embeds is None # Not supported by this inference model.
- assert labels is None # Training not supported by this inference model.
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # Create embedding
- mel_len = self.cached_mel_emb.shape[1]
- if input_ids.shape[1] != 1:
- text_inputs = input_ids[:, mel_len:]
- text_emb = self.embeddings(text_inputs)
- text_emb = text_emb + self.text_pos_embedding(text_emb)
- if self.cached_mel_emb.shape[0] != text_emb.shape[0]:
- mel_emb = self.cached_mel_emb.repeat_interleave(text_emb.shape[0] // self.cached_mel_emb.shape[0], 0)
- else: # this outcome only occurs once per loop in most cases
- mel_emb = self.cached_mel_emb
- emb = torch.cat([mel_emb, text_emb], dim=1)
- else:
- emb = self.embeddings(input_ids)
- emb = emb + self.text_pos_embedding.get_fixed_embedding(
- attention_mask.shape[1] - mel_len, attention_mask.device
- )
-
- transformer_outputs = self.transformer(
- inputs_embeds=emb,
- past_key_values=past_key_values,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- hidden_states = transformer_outputs[0]
- lm_logits = self.lm_head(hidden_states)
-
- if not return_dict:
- return (lm_logits,) + transformer_outputs[1:]
-
- return CausalLMOutputWithCrossAttentions(
- loss=None,
- logits=lm_logits,
- past_key_values=transformer_outputs.past_key_values,
- hidden_states=transformer_outputs.hidden_states,
- attentions=transformer_outputs.attentions,
- cross_attentions=transformer_outputs.cross_attentions,
- )
-
- @staticmethod
- def _reorder_cache(past, beam_idx):
- """
- This function is used to re-order the :obj:`past_key_values` cache if
- :meth:`~transformers.PreTrainedModel.beam_search` or :meth:`~transformers.PreTrainedModel.beam_sample` is
- called. This is required to match :obj:`past_key_values` with the correct beam_idx at every generation step.
- """
- return tuple(
- tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
- for layer_past in past
- )
-
-
-class ConditioningEncoder(nn.Module):
- def __init__(
- self,
- spec_dim,
- embedding_dim,
- attn_blocks=6,
- num_attn_heads=4,
- do_checkpointing=False,
- mean=False,
- ):
- super().__init__()
- attn = []
- self.init = nn.Conv1d(spec_dim, embedding_dim, kernel_size=1)
- for a in range(attn_blocks):
- attn.append(AttentionBlock(embedding_dim, num_attn_heads))
- self.attn = nn.Sequential(*attn)
- self.dim = embedding_dim
- self.do_checkpointing = do_checkpointing
- self.mean = mean
-
- def forward(self, x):
- h = self.init(x)
- h = self.attn(h)
- if self.mean:
- return h.mean(dim=2)
- else:
- return h[:, :, 0]
-
-
-class LearnedPositionEmbeddings(nn.Module):
- def __init__(self, seq_len, model_dim, init=0.02):
- super().__init__()
- self.emb = nn.Embedding(seq_len, model_dim)
- # Initializing this way is standard for GPT-2
- self.emb.weight.data.normal_(mean=0.0, std=init)
-
- def forward(self, x):
- sl = x.shape[1]
- return self.emb(torch.arange(0, sl, device=x.device))
-
- def get_fixed_embedding(self, ind, dev):
- return self.emb(torch.arange(0, ind, device=dev))[ind - 1 : ind]
-
-
-def build_hf_gpt_transformer(layers, model_dim, heads, max_mel_seq_len, max_text_seq_len, checkpointing):
- """
- GPT-2 implemented by the HuggingFace library.
- """
- from transformers import GPT2Config, GPT2Model
-
- gpt_config = GPT2Config(
- vocab_size=256, # Unused.
- n_positions=max_mel_seq_len + max_text_seq_len,
- n_ctx=max_mel_seq_len + max_text_seq_len,
- n_embd=model_dim,
- n_layer=layers,
- n_head=heads,
- gradient_checkpointing=checkpointing,
- use_cache=not checkpointing,
- )
- gpt = GPT2Model(gpt_config)
- # Override the built in positional embeddings
- del gpt.wpe # TODO: figure out relevance in fixing exported model definition: Embedding(1012, 1024)
- gpt.wpe = functools.partial(null_position_embeddings, dim=model_dim)
- # Built-in token embeddings are unused.
- del gpt.wte
- return (
- gpt,
- LearnedPositionEmbeddings(max_mel_seq_len, model_dim),
- LearnedPositionEmbeddings(max_text_seq_len, model_dim),
- None,
- None,
- )
-
-
-class MelEncoder(nn.Module):
- def __init__(self, channels, mel_channels=80, resblocks_per_reduction=2):
- super().__init__()
- self.channels = channels
- self.encoder = nn.Sequential(
- nn.Conv1d(mel_channels, channels // 4, kernel_size=3, padding=1),
- nn.Sequential(*[ResBlock(channels // 4) for _ in range(resblocks_per_reduction)]),
- nn.Conv1d(channels // 4, channels // 2, kernel_size=3, stride=2, padding=1),
- nn.GroupNorm(channels // 16, channels // 2),
- nn.ReLU(),
- nn.Sequential(*[ResBlock(channels // 2) for _ in range(resblocks_per_reduction)]),
- nn.Conv1d(channels // 2, channels, kernel_size=3, stride=2, padding=1),
- nn.GroupNorm(channels // 8, channels),
- nn.ReLU(),
- nn.Sequential(*[ResBlock(channels) for _ in range(resblocks_per_reduction)]),
- )
- self.reduction = 4
-
- def forward(self, x):
- for e in self.encoder:
- x = e(x)
- return x.permute(0, 2, 1)
-
-
-class UnifiedVoice(nn.Module):
- def __init__(
- self,
- layers=8,
- model_dim=512,
- heads=8,
- max_text_tokens=120,
- max_mel_tokens=250,
- max_conditioning_inputs=1,
- mel_length_compression=1024,
- number_text_tokens=256,
- start_text_token=None,
- number_mel_codes=8194,
- start_mel_token=8192,
- stop_mel_token=8193,
- train_solo_embeddings=False,
- use_mel_codes_as_input=True,
- checkpointing=True,
- types=1,
- ):
- """
- Args:
- layers: Number of layers in transformer stack.
- model_dim: Operating dimensions of the transformer
-            heads: Number of transformer heads. model_dim must be divisible by heads. Recommend model_dim//64
- max_text_tokens: Maximum number of text tokens that will be encountered by model.
- max_mel_tokens: Maximum number of MEL tokens that will be encountered by model.
- max_conditioning_inputs: Maximum number of conditioning inputs provided to the model. If (1), conditioning input can be of format (b,80,s), otherwise (b,n,80,s).
-            mel_length_compression: The factor between the number of input audio samples and the number of MEL tokens. Used to compute MEL code padding given wav input length.
- number_text_tokens:
- start_text_token:
- stop_text_token:
- number_mel_codes:
- start_mel_token:
- stop_mel_token:
- train_solo_embeddings:
- use_mel_codes_as_input:
- checkpointing:
- """
- super().__init__()
-
- self.number_text_tokens = number_text_tokens
- self.start_text_token = number_text_tokens * types if start_text_token is None else start_text_token
- self.stop_text_token = 0
- self.number_mel_codes = number_mel_codes
- self.start_mel_token = start_mel_token
- self.stop_mel_token = stop_mel_token
- self.layers = layers
- self.heads = heads
- self.max_mel_tokens = max_mel_tokens
- self.max_text_tokens = max_text_tokens
- self.model_dim = model_dim
- self.max_conditioning_inputs = max_conditioning_inputs
- self.mel_length_compression = mel_length_compression
- self.conditioning_encoder = ConditioningEncoder(80, model_dim, num_attn_heads=heads)
- self.text_embedding = nn.Embedding(self.number_text_tokens * types + 1, model_dim)
- if use_mel_codes_as_input:
- self.mel_embedding = nn.Embedding(self.number_mel_codes, model_dim)
- else:
- self.mel_embedding = MelEncoder(model_dim, resblocks_per_reduction=1)
- (
- self.gpt,
- self.mel_pos_embedding,
- self.text_pos_embedding,
- self.mel_layer_pos_embedding,
- self.text_layer_pos_embedding,
- ) = build_hf_gpt_transformer(
- layers,
- model_dim,
- heads,
- self.max_mel_tokens + 2 + self.max_conditioning_inputs,
- self.max_text_tokens + 2,
- checkpointing,
- )
- if train_solo_embeddings:
- self.mel_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * 0.02, requires_grad=True)
- self.text_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * 0.02, requires_grad=True)
- else:
- self.mel_solo_embedding = 0
- self.text_solo_embedding = 0
-
- self.final_norm = nn.LayerNorm(model_dim)
- self.text_head = nn.Linear(model_dim, self.number_text_tokens * types + 1)
- self.mel_head = nn.Linear(model_dim, self.number_mel_codes)
-
- # Initialize the embeddings per the GPT-2 scheme
- embeddings = [self.text_embedding]
- if use_mel_codes_as_input:
- embeddings.append(self.mel_embedding)
- for module in embeddings:
- module.weight.data.normal_(mean=0.0, std=0.02)
-
- def post_init_gpt2_config(self, kv_cache=True):
- seq_length = self.max_mel_tokens + self.max_text_tokens + 2
- gpt_config = GPT2Config(
- vocab_size=self.max_mel_tokens,
- n_positions=seq_length,
- n_ctx=seq_length,
- n_embd=self.model_dim,
- n_layer=self.layers,
- n_head=self.heads,
- gradient_checkpointing=False,
- use_cache=True,
- )
- self.inference_model = GPT2InferenceModel(
- gpt_config,
- self.gpt,
- self.mel_pos_embedding,
- self.mel_embedding,
- self.final_norm,
- self.mel_head,
- kv_cache=kv_cache,
- )
- # self.inference_model = PrunedGPT2InferenceModel(gpt_config, self.gpt, self.mel_pos_embedding, self.mel_embedding, self.final_norm, self.mel_head)
- self.gpt.wte = self.mel_embedding
- # self.inference_model.save_pretrained("")
-
- def build_aligned_inputs_and_targets(self, input, start_token, stop_token):
- inp = F.pad(input, (1, 0), value=start_token)
- tar = F.pad(input, (0, 1), value=stop_token)
- return inp, tar
-
- def set_mel_padding(self, mel_input_tokens, wav_lengths):
- """
- Given mel tokens that are derived from a padded audio clip and the actual lengths of each batch element in
- that audio clip, reformats the tokens with STOP_MEL_TOKEN in place of the zero padding. This is required
- preformatting to create a working TTS model.
- """
- # Set padding areas within MEL (currently they are coded with the MEL code for zero).
- mel_lengths = torch.div(wav_lengths, self.mel_length_compression, rounding_mode="trunc")
- for b in range(len(mel_lengths)):
- actual_end = (
- mel_lengths[b] + 1
- ) # Due to the convolutional nature of how these tokens are generated, it would be best if the model predicts a token past the actual last token.
- if actual_end < mel_input_tokens.shape[-1]:
- mel_input_tokens[b, actual_end:] = self.stop_mel_token
- return mel_input_tokens
-
- def get_logits(
- self,
- speech_conditioning_inputs,
- first_inputs,
- first_head,
- second_inputs=None,
- second_head=None,
- get_attns=False,
- return_latent=False,
- ):
- if second_inputs is not None:
- emb = torch.cat([speech_conditioning_inputs, first_inputs, second_inputs], dim=1)
- else:
- emb = torch.cat([speech_conditioning_inputs, first_inputs], dim=1)
-
- gpt_out = self.gpt(inputs_embeds=emb, return_dict=True, output_attentions=get_attns)
- if get_attns:
- return gpt_out.attentions
-
- enc = gpt_out.last_hidden_state[:, 1:] # The first logit is tied to the speech_conditioning_input
- enc = self.final_norm(enc)
-
- if return_latent:
- return (
- enc[
- :,
- speech_conditioning_inputs.shape[1] : speech_conditioning_inputs.shape[1] + first_inputs.shape[1],
- ],
- enc[:, -second_inputs.shape[1] :],
- )
-
- first_logits = enc[:, : first_inputs.shape[1]]
- first_logits = first_head(first_logits)
- first_logits = first_logits.permute(0, 2, 1)
- if second_inputs is not None:
- second_logits = enc[:, -second_inputs.shape[1] :]
- second_logits = second_head(second_logits)
- second_logits = second_logits.permute(0, 2, 1)
- return first_logits, second_logits
- else:
- return first_logits
-
- def get_conditioning(self, speech_conditioning_input):
- speech_conditioning_input = (
- speech_conditioning_input.unsqueeze(1)
- if len(speech_conditioning_input.shape) == 3
- else speech_conditioning_input
- )
- conds = []
- for j in range(speech_conditioning_input.shape[1]):
- conds.append(self.conditioning_encoder(speech_conditioning_input[:, j]))
- conds = torch.stack(conds, dim=1)
- conds = conds.mean(dim=1)
- return conds
-
- def forward(
- self,
- speech_conditioning_latent,
- text_inputs,
- text_lengths,
- mel_codes,
- wav_lengths,
- types=None,
- text_first=True,
- raw_mels=None,
- return_attentions=False,
- return_latent=False,
- clip_inputs=True,
- ):
- """
- Forward pass that uses both text and voice in either text conditioning mode or voice conditioning mode
- (actuated by `text_first`).
-
- speech_conditioning_latent: conditioning latent float tensor, (b, model_dim)
- text_inputs: long tensor, (b,t)
- text_lengths: long tensor, (b,)
- mel_codes: long tensor, (b,m)
- wav_lengths: long tensor, (b,)
- raw_mels: MEL float tensor (b,80,s)
-
- If return_attentions is specified, only logits are returned.
- If return_latent is specified, loss & logits are not computed or returned. Only the predicted latents are returned.
- If clip_inputs is True, the inputs will be clipped to the smallest input size across each input modality.
- """
- # Types are expressed by expanding the text embedding space.
- if types is not None:
- text_inputs = text_inputs * (1 + types).unsqueeze(-1)
-
- if clip_inputs:
- # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by
- # chopping the inputs by the maximum actual length.
- max_text_len = text_lengths.max()
- text_inputs = text_inputs[:, :max_text_len]
- max_mel_len = wav_lengths.max() // self.mel_length_compression
- mel_codes = mel_codes[:, :max_mel_len]
- if raw_mels is not None:
- raw_mels = raw_mels[:, :, : max_mel_len * 4]
- mel_codes = self.set_mel_padding(mel_codes, wav_lengths)
- text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token)
- mel_codes = F.pad(mel_codes, (0, 1), value=self.stop_mel_token)
-
- conds = speech_conditioning_latent.unsqueeze(1)
- text_inputs, text_targets = self.build_aligned_inputs_and_targets(
- text_inputs, self.start_text_token, self.stop_text_token
- )
- text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs)
- mel_codes, mel_targets = self.build_aligned_inputs_and_targets(
- mel_codes, self.start_mel_token, self.stop_mel_token
- )
- if raw_mels is not None:
- mel_inp = F.pad(raw_mels, (0, 8))
- else:
- mel_inp = mel_codes
- mel_emb = self.mel_embedding(mel_inp)
- mel_emb = mel_emb + self.mel_pos_embedding(mel_codes)
-
- if text_first:
- text_logits, mel_logits = self.get_logits(
- conds,
- text_emb,
- self.text_head,
- mel_emb,
- self.mel_head,
- get_attns=return_attentions,
- return_latent=return_latent,
- )
- if return_latent:
- return mel_logits[
- :, :-2
- ] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass.
- else:
- mel_logits, text_logits = self.get_logits(
- conds,
- mel_emb,
- self.mel_head,
- text_emb,
- self.text_head,
- get_attns=return_attentions,
- return_latent=return_latent,
- )
- if return_latent:
- return text_logits[
- :, :-2
- ] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass.
-
- if return_attentions:
- return mel_logits
- loss_text = F.cross_entropy(text_logits, text_targets.long())
- loss_mel = F.cross_entropy(mel_logits, mel_targets.long())
- return loss_text.mean(), loss_mel.mean(), mel_logits
-
- def inference_speech(
- self,
- speech_conditioning_latent,
- text_inputs,
- input_tokens=None,
- num_return_sequences=1,
- max_generate_length=None,
- typical_sampling=False,
- typical_mass=0.9,
- **hf_generate_kwargs,
- ):
- text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token)
- text_inputs, text_targets = self.build_aligned_inputs_and_targets(
- text_inputs, self.start_text_token, self.stop_text_token
- )
- text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs)
-
- conds = speech_conditioning_latent.unsqueeze(1)
- emb = torch.cat([conds, text_emb], dim=1)
- self.inference_model.store_mel_emb(emb)
-
- fake_inputs = torch.full(
- (
- emb.shape[0],
- conds.shape[1] + emb.shape[1],
- ),
- fill_value=1,
- dtype=torch.long,
- device=text_inputs.device,
- )
- fake_inputs[:, -1] = self.start_mel_token
- trunc_index = fake_inputs.shape[1]
- if input_tokens is None:
- inputs = fake_inputs
- else:
- assert (
- num_return_sequences % input_tokens.shape[0] == 0
- ), "The number of return sequences must be divisible by the number of input sequences"
- fake_inputs = fake_inputs.repeat(num_return_sequences, 1)
- input_tokens = input_tokens.repeat(num_return_sequences // input_tokens.shape[0], 1)
- inputs = torch.cat([fake_inputs, input_tokens], dim=1)
-
- logits_processor = (
- LogitsProcessorList([TypicalLogitsWarper(mass=typical_mass)]) if typical_sampling else LogitsProcessorList()
- ) # TODO disable this
- max_length = (
- trunc_index + self.max_mel_tokens - 1 if max_generate_length is None else trunc_index + max_generate_length
- )
- gen = self.inference_model.generate(
- inputs,
- bos_token_id=self.start_mel_token,
- pad_token_id=self.stop_mel_token,
- eos_token_id=self.stop_mel_token,
- max_length=max_length,
- logits_processor=logits_processor,
- num_return_sequences=num_return_sequences,
- **hf_generate_kwargs,
- )
- return gen[:, trunc_index:]
-
-
-if __name__ == "__main__":
- gpt = UnifiedVoice(
- model_dim=256,
- heads=4,
- train_solo_embeddings=True,
- use_mel_codes_as_input=True,
- max_conditioning_inputs=4,
- )
- l = gpt(
- torch.randn(2, 3, 80, 800),
- torch.randint(high=120, size=(2, 120)),
- torch.tensor([32, 120]),
- torch.randint(high=8192, size=(2, 250)),
- torch.tensor([250 * 256, 195 * 256]),
- )
- gpt.text_forward(
- torch.randn(2, 80, 800),
- torch.randint(high=50, size=(2, 80)),
- torch.tensor([32, 80]),
- )
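For orientation, here is a minimal, untested sketch of how the UnifiedVoice class above is typically driven at inference time. It assumes pretrained weights are already loaded, that GPT2InferenceModel and the tokenizer utilities defined earlier in this file are importable, and that a conditioning latent of shape (batch, model_dim) was produced by get_conditioning(); the configuration and sampling parameters are illustrative placeholders, not the upstream defaults.

import torch

model = UnifiedVoice(layers=8, model_dim=512, heads=8)   # hypothetical config
model.post_init_gpt2_config(kv_cache=True)               # builds model.inference_model
model.eval()

cond_latent = torch.randn(1, 512)                         # (b, model_dim) conditioning latent
text_tokens = torch.randint(high=256, size=(1, 40), dtype=torch.long)

with torch.no_grad():
    mel_codes = model.inference_speech(
        cond_latent,
        text_tokens,
        do_sample=True,          # standard HF generate kwargs pass straight through
        top_p=0.8,
        temperature=0.8,
        max_generate_length=250,
    )
print(mel_codes.shape)           # (1, n) discrete MEL codes, terminated by stop_mel_token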
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/dvae.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/dvae.py
deleted file mode 100644
index bdd7a9d09f44cc8dae102a053c365462dc416b6d..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/dvae.py
+++ /dev/null
@@ -1,393 +0,0 @@
-import functools
-from math import sqrt
-
-import torch
-import torch.distributed as distributed
-import torch.nn as nn
-import torch.nn.functional as F
-import torchaudio
-from einops import rearrange
-
-
-def default(val, d):
- return val if val is not None else d
-
-
-def eval_decorator(fn):
- def inner(model, *args, **kwargs):
- was_training = model.training
- model.eval()
- out = fn(model, *args, **kwargs)
- model.train(was_training)
- return out
-
- return inner
-
-
-def dvae_wav_to_mel(
- wav, mel_norms_file="../experiments/clips_mel_norms.pth", mel_norms=None, device=torch.device("cpu")
-):
- mel_stft = torchaudio.transforms.MelSpectrogram(
- n_fft=1024,
- hop_length=256,
- win_length=1024,
- power=2,
- normalized=False,
- sample_rate=22050,
- f_min=0,
- f_max=8000,
- n_mels=80,
- norm="slaney",
- ).to(device)
- wav = wav.to(device)
- mel = mel_stft(wav)
- mel = torch.log(torch.clamp(mel, min=1e-5))
- if mel_norms is None:
- mel_norms = torch.load(mel_norms_file, map_location=device)
- mel = mel / mel_norms.unsqueeze(0).unsqueeze(-1)
- return mel
-
-
-class Quantize(nn.Module):
- def __init__(self, dim, n_embed, decay=0.99, eps=1e-5, balancing_heuristic=False, new_return_order=False):
- super().__init__()
-
- self.dim = dim
- self.n_embed = n_embed
- self.decay = decay
- self.eps = eps
-
- self.balancing_heuristic = balancing_heuristic
- self.codes = None
- self.max_codes = 64000
- self.codes_full = False
- self.new_return_order = new_return_order
-
- embed = torch.randn(dim, n_embed)
- self.register_buffer("embed", embed)
- self.register_buffer("cluster_size", torch.zeros(n_embed))
- self.register_buffer("embed_avg", embed.clone())
-
- def forward(self, input, return_soft_codes=False):
- if self.balancing_heuristic and self.codes_full:
- h = torch.histc(self.codes, bins=self.n_embed, min=0, max=self.n_embed) / len(self.codes)
- mask = torch.logical_or(h > 0.9, h < 0.01).unsqueeze(1)
- ep = self.embed.permute(1, 0)
- ea = self.embed_avg.permute(1, 0)
- rand_embed = torch.randn_like(ep) * mask
- self.embed = (ep * ~mask + rand_embed).permute(1, 0)
- self.embed_avg = (ea * ~mask + rand_embed).permute(1, 0)
- self.cluster_size = self.cluster_size * ~mask.squeeze()
- if torch.any(mask):
- print(f"Reset {torch.sum(mask)} embedding codes.")
- self.codes = None
- self.codes_full = False
-
- flatten = input.reshape(-1, self.dim)
- dist = flatten.pow(2).sum(1, keepdim=True) - 2 * flatten @ self.embed + self.embed.pow(2).sum(0, keepdim=True)
- soft_codes = -dist
- _, embed_ind = soft_codes.max(1)
- embed_onehot = F.one_hot(embed_ind, self.n_embed).type(flatten.dtype)
- embed_ind = embed_ind.view(*input.shape[:-1])
- quantize = self.embed_code(embed_ind)
-
- if self.balancing_heuristic:
- if self.codes is None:
- self.codes = embed_ind.flatten()
- else:
- self.codes = torch.cat([self.codes, embed_ind.flatten()])
- if len(self.codes) > self.max_codes:
- self.codes = self.codes[-self.max_codes :]
- self.codes_full = True
-
- if self.training:
- embed_onehot_sum = embed_onehot.sum(0)
- embed_sum = flatten.transpose(0, 1) @ embed_onehot
-
- if distributed.is_initialized() and distributed.get_world_size() > 1:
- distributed.all_reduce(embed_onehot_sum)
- distributed.all_reduce(embed_sum)
-
- self.cluster_size.data.mul_(self.decay).add_(embed_onehot_sum, alpha=1 - self.decay)
- self.embed_avg.data.mul_(self.decay).add_(embed_sum, alpha=1 - self.decay)
- n = self.cluster_size.sum()
- cluster_size = (self.cluster_size + self.eps) / (n + self.n_embed * self.eps) * n
- embed_normalized = self.embed_avg / cluster_size.unsqueeze(0)
- self.embed.data.copy_(embed_normalized)
-
- diff = (quantize.detach() - input).pow(2).mean()
- quantize = input + (quantize - input).detach()
-
- if return_soft_codes:
- return quantize, diff, embed_ind, soft_codes.view(input.shape[:-1] + (-1,))
- elif self.new_return_order:
- return quantize, embed_ind, diff
- else:
- return quantize, diff, embed_ind
-
- def embed_code(self, embed_id):
- return F.embedding(embed_id, self.embed.transpose(0, 1))
-
-
-# Fits a soft-discretized input to a normal-PDF across the specified dimension.
-# In other words, attempts to force the discretization function to have a mean equal utilization across all discrete
-# values with the specified expected variance.
-class DiscretizationLoss(nn.Module):
- def __init__(self, discrete_bins, dim, expected_variance, store_past=0):
- super().__init__()
- self.discrete_bins = discrete_bins
- self.dim = dim
- self.dist = torch.distributions.Normal(0, scale=expected_variance)
- if store_past > 0:
- self.record_past = True
- self.register_buffer("accumulator_index", torch.zeros(1, dtype=torch.long, device="cpu"))
- self.register_buffer("accumulator_filled", torch.zeros(1, dtype=torch.long, device="cpu"))
- self.register_buffer("accumulator", torch.zeros(store_past, discrete_bins))
- else:
- self.record_past = False
-
- def forward(self, x):
- other_dims = set(range(len(x.shape))) - set([self.dim])
- averaged = x.sum(dim=tuple(other_dims)) / x.sum()
- averaged = averaged - averaged.mean()
-
- if self.record_past:
- acc_count = self.accumulator.shape[0]
- avg = averaged.detach().clone()
- if self.accumulator_filled > 0:
- averaged = torch.mean(self.accumulator, dim=0) * (acc_count - 1) / acc_count + averaged / acc_count
-
- # Also push averaged into the accumulator.
- self.accumulator[self.accumulator_index] = avg
- self.accumulator_index += 1
- if self.accumulator_index >= acc_count:
- self.accumulator_index *= 0
- if self.accumulator_filled <= 0:
- self.accumulator_filled += 1
-
- return torch.sum(-self.dist.log_prob(averaged))
-
-
-class ResBlock(nn.Module):
- def __init__(self, chan, conv, activation):
- super().__init__()
- self.net = nn.Sequential(
- conv(chan, chan, 3, padding=1),
- activation(),
- conv(chan, chan, 3, padding=1),
- activation(),
- conv(chan, chan, 1),
- )
-
- def forward(self, x):
- return self.net(x) + x
-
-
-class UpsampledConv(nn.Module):
- def __init__(self, conv, *args, **kwargs):
- super().__init__()
- assert "stride" in kwargs.keys()
- self.stride = kwargs["stride"]
- del kwargs["stride"]
- self.conv = conv(*args, **kwargs)
-
- def forward(self, x):
- up = nn.functional.interpolate(x, scale_factor=self.stride, mode="nearest")
- return self.conv(up)
-
-
-# DiscreteVAE partially derived from lucidrains DALLE implementation
-# Credit: https://github.com/lucidrains/DALLE-pytorch
-class DiscreteVAE(nn.Module):
- def __init__(
- self,
- positional_dims=2,
- num_tokens=512,
- codebook_dim=512,
- num_layers=3,
- num_resnet_blocks=0,
- hidden_dim=64,
- channels=3,
- stride=2,
- kernel_size=4,
- use_transposed_convs=True,
- encoder_norm=False,
- activation="relu",
- smooth_l1_loss=False,
- straight_through=False,
- normalization=None, # ((0.5,) * 3, (0.5,) * 3),
- record_codes=False,
- discretization_loss_averaging_steps=100,
- lr_quantizer_args={},
- ):
- super().__init__()
- has_resblocks = num_resnet_blocks > 0
-
- self.num_tokens = num_tokens
- self.num_layers = num_layers
- self.straight_through = straight_through
- self.positional_dims = positional_dims
- self.discrete_loss = DiscretizationLoss(
- num_tokens, 2, 1 / (num_tokens * 2), discretization_loss_averaging_steps
- )
-
- assert positional_dims > 0 and positional_dims < 3 # This VAE only supports 1d and 2d inputs for now.
- if positional_dims == 2:
- conv = nn.Conv2d
- conv_transpose = nn.ConvTranspose2d
- else:
- conv = nn.Conv1d
- conv_transpose = nn.ConvTranspose1d
- if not use_transposed_convs:
- conv_transpose = functools.partial(UpsampledConv, conv)
-
- if activation == "relu":
- act = nn.ReLU
- elif activation == "silu":
- act = nn.SiLU
- else:
- raise NotImplementedError(f"Unsupported activation: {activation}")
-
- enc_layers = []
- dec_layers = []
-
- if num_layers > 0:
- enc_chans = [hidden_dim * 2**i for i in range(num_layers)]
- dec_chans = list(reversed(enc_chans))
-
- enc_chans = [channels, *enc_chans]
-
- dec_init_chan = codebook_dim if not has_resblocks else dec_chans[0]
- dec_chans = [dec_init_chan, *dec_chans]
-
- enc_chans_io, dec_chans_io = map(lambda t: list(zip(t[:-1], t[1:])), (enc_chans, dec_chans))
-
- pad = (kernel_size - 1) // 2
- for (enc_in, enc_out), (dec_in, dec_out) in zip(enc_chans_io, dec_chans_io):
- enc_layers.append(nn.Sequential(conv(enc_in, enc_out, kernel_size, stride=stride, padding=pad), act()))
- if encoder_norm:
- enc_layers.append(nn.GroupNorm(8, enc_out))
- dec_layers.append(
- nn.Sequential(conv_transpose(dec_in, dec_out, kernel_size, stride=stride, padding=pad), act())
- )
- dec_out_chans = dec_chans[-1]
- innermost_dim = dec_chans[0]
- else:
- enc_layers.append(nn.Sequential(conv(channels, hidden_dim, 1), act()))
- dec_out_chans = hidden_dim
- innermost_dim = hidden_dim
-
- for _ in range(num_resnet_blocks):
- dec_layers.insert(0, ResBlock(innermost_dim, conv, act))
- enc_layers.append(ResBlock(innermost_dim, conv, act))
-
- if num_resnet_blocks > 0:
- dec_layers.insert(0, conv(codebook_dim, innermost_dim, 1))
-
- enc_layers.append(conv(innermost_dim, codebook_dim, 1))
- dec_layers.append(conv(dec_out_chans, channels, 1))
-
- self.encoder = nn.Sequential(*enc_layers)
- self.decoder = nn.Sequential(*dec_layers)
-
- self.loss_fn = F.smooth_l1_loss if smooth_l1_loss else F.mse_loss
- self.codebook = Quantize(codebook_dim, num_tokens, new_return_order=True)
-
- # take care of normalization within class
- self.normalization = normalization
- self.record_codes = record_codes
- if record_codes:
- self.codes = torch.zeros((1228800,), dtype=torch.long)
- self.code_ind = 0
- self.total_codes = 0
- self.internal_step = 0
-
- def norm(self, images):
- if self.normalization is None:
- return images
-
- means, stds = map(lambda t: torch.as_tensor(t).to(images), self.normalization)
- arrange = "c -> () c () ()" if self.positional_dims == 2 else "c -> () c ()"
- means, stds = map(lambda t: rearrange(t, arrange), (means, stds))
- images = images.clone()
- images.sub_(means).div_(stds)
- return images
-
- def get_debug_values(self, step, __):
- if self.record_codes and self.total_codes > 0:
- # Report annealing schedule
- return {"histogram_codes": self.codes[: self.total_codes]}
- else:
- return {}
-
- @torch.no_grad()
- @eval_decorator
- def get_codebook_indices(self, images):
- img = self.norm(images)
- logits = self.encoder(img).permute((0, 2, 3, 1) if len(img.shape) == 4 else (0, 2, 1))
- sampled, codes, _ = self.codebook(logits)
- self.log_codes(codes)
- return codes
-
- def decode(self, img_seq):
- self.log_codes(img_seq)
- if hasattr(self.codebook, "embed_code"):
- image_embeds = self.codebook.embed_code(img_seq)
- else:
- image_embeds = F.embedding(img_seq, self.codebook.codebook)
- b, n, d = image_embeds.shape
-
- kwargs = {}
- if self.positional_dims == 1:
- arrange = "b n d -> b d n"
- else:
- h = w = int(sqrt(n))
- arrange = "b (h w) d -> b d h w"
- kwargs = {"h": h, "w": w}
- image_embeds = rearrange(image_embeds, arrange, **kwargs)
- images = [image_embeds]
- for layer in self.decoder:
- images.append(layer(images[-1]))
- return images[-1], images[-2]
-
- def infer(self, img):
- img = self.norm(img)
- logits = self.encoder(img).permute((0, 2, 3, 1) if len(img.shape) == 4 else (0, 2, 1))
- sampled, codes, commitment_loss = self.codebook(logits)
- return self.decode(codes)
-
- # Note: This module is not meant to be run in forward() except while training. It has special logic which performs
- # evaluation using quantized values when it detects that it is being run in eval() mode, which will be substantially
- # more lossy (but useful for determining network performance).
- def forward(self, img):
- img = self.norm(img)
- logits = self.encoder(img).permute((0, 2, 3, 1) if len(img.shape) == 4 else (0, 2, 1))
- sampled, codes, commitment_loss = self.codebook(logits)
- sampled = sampled.permute((0, 3, 1, 2) if len(img.shape) == 4 else (0, 2, 1))
-
- if self.training:
- out = sampled
- for d in self.decoder:
- out = d(out)
- self.log_codes(codes)
- else:
- # This is non-differentiable, but gives a better idea of how the network is actually performing.
- out, _ = self.decode(codes)
-
- # reconstruction loss
- recon_loss = self.loss_fn(img, out, reduction="none")
-
- return recon_loss, commitment_loss, out
-
- def log_codes(self, codes):
- # This is so we can debug the distribution of codes being learned.
- if self.record_codes and self.internal_step % 10 == 0:
- codes = codes.flatten()
- l = codes.shape[0]
- i = self.code_ind if (self.codes.shape[0] - self.code_ind) > l else self.codes.shape[0] - l
- self.codes[i : i + l] = codes.cpu()
- self.code_ind = self.code_ind + l
- if self.code_ind >= self.codes.shape[0]:
- self.code_ind = 0
- self.total_codes += 1
- self.internal_step += 1
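As a rough usage sketch (not the upstream training configuration), the 1-D DiscreteVAE above can be pointed at an 80-bin mel spectrogram as follows; the hyperparameters and the unit mel norms are placeholders, and the mel length is trimmed to a multiple of 4 so the decoder output length matches the input.

import torch

dvae = DiscreteVAE(
    positional_dims=1,        # 1-D: inputs are (batch, mel_bins, frames)
    num_tokens=1024,
    codebook_dim=512,
    num_layers=2,
    num_resnet_blocks=1,
    hidden_dim=64,
    channels=80,
)

wav = torch.randn(1, 22050)                                # one second of dummy audio
mel = dvae_wav_to_mel(wav, mel_norms=torch.ones(80))       # unit norms, just for the sketch
mel = mel[:, :, : mel.shape[-1] // 4 * 4]                  # trim to a multiple of the total stride

recon_loss, commitment_loss, recon = dvae(mel)             # training-style forward
codes = dvae.get_codebook_indices(mel)                     # (1, frames // 4) discrete tokens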
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES.py
deleted file mode 100644
index 5cc286aee78a997631413f5981ad94638954c394..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Cipher/DES.py : DES
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-"""
-Module's constants for the modes of operation supported with Single DES:
-
-:var MODE_ECB: :ref:`Electronic Code Book (ECB) `
-:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) `
-:var MODE_CFB: :ref:`Cipher FeedBack (CFB) `
-:var MODE_OFB: :ref:`Output FeedBack (OFB) `
-:var MODE_CTR: :ref:`CounTer Mode (CTR) `
-:var MODE_OPENPGP: :ref:`OpenPGP Mode `
-:var MODE_EAX: :ref:`EAX Mode `
-"""
-
-import sys
-
-from Crypto.Cipher import _create_cipher
-from Crypto.Util.py3compat import byte_string
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer,
- c_size_t, c_uint8_ptr)
-
-_raw_des_lib = load_pycryptodome_raw_lib(
- "Crypto.Cipher._raw_des",
- """
- int DES_start_operation(const uint8_t key[],
- size_t key_len,
- void **pResult);
- int DES_encrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int DES_decrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int DES_stop_operation(void *state);
- """)
-
-
-def _create_base_cipher(dict_parameters):
- """This method instantiates and returns a handle to a low-level
- base cipher. It will absorb named parameters in the process."""
-
- try:
- key = dict_parameters.pop("key")
- except KeyError:
- raise TypeError("Missing 'key' parameter")
-
- if len(key) != key_size:
- raise ValueError("Incorrect DES key length (%d bytes)" % len(key))
-
- start_operation = _raw_des_lib.DES_start_operation
- stop_operation = _raw_des_lib.DES_stop_operation
-
- cipher = VoidPointer()
- result = start_operation(c_uint8_ptr(key),
- c_size_t(len(key)),
- cipher.address_of())
- if result:
- raise ValueError("Error %X while instantiating the DES cipher"
- % result)
- return SmartPointer(cipher.get(), stop_operation)
-
-
-def new(key, mode, *args, **kwargs):
- """Create a new DES cipher.
-
- :param key:
- The secret key to use in the symmetric cipher.
- It must be 8 byte long. The parity bits will be ignored.
- :type key: bytes/bytearray/memoryview
-
- :param mode:
- The chaining mode to use for encryption or decryption.
- :type mode: One of the supported ``MODE_*`` constants
-
- :Keyword Arguments:
- * **iv** (*byte string*) --
- (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``,
- and ``MODE_OPENPGP`` modes).
-
- The initialization vector to use for encryption or decryption.
-
- For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long.
-
- For ``MODE_OPENPGP`` mode only,
- it must be 8 bytes long for encryption
- and 10 bytes for decryption (in the latter case, it is
- actually the *encrypted* IV which was prefixed to the ciphertext).
-
- If not provided, a random byte string is generated (you must then
- read its value with the :attr:`iv` attribute).
-
- * **nonce** (*byte string*) --
- (Only applicable for ``MODE_EAX`` and ``MODE_CTR``).
-
- A value that must never be reused for any other encryption done
- with this key.
-
- For ``MODE_EAX`` there are no
- restrictions on its length (recommended: **16** bytes).
-
- For ``MODE_CTR``, its length must be in the range **[0..7]**.
-
- If not provided for ``MODE_EAX``, a random byte string is generated (you
- can read it back via the ``nonce`` attribute).
-
- * **segment_size** (*integer*) --
- (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext
- are segmented in. It must be a multiple of 8.
- If not specified, it will be assumed to be 8.
-
- * **mac_len** : (*integer*) --
- (Only ``MODE_EAX``)
- Length of the authentication tag, in bytes.
- It must be no longer than 8 (default).
-
- * **initial_value** : (*integer*) --
- (Only ``MODE_CTR``). The initial value for the counter within
- the counter block. By default it is **0**.
-
- :Return: a DES object, of the applicable mode.
- """
-
- return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs)
-
-MODE_ECB = 1
-MODE_CBC = 2
-MODE_CFB = 3
-MODE_OFB = 5
-MODE_CTR = 6
-MODE_OPENPGP = 7
-MODE_EAX = 9
-
-# Size of a data block (in bytes)
-block_size = 8
-# Size of a key (in bytes)
-key_size = 8
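For context, a minimal encrypt/decrypt round trip with the module above, assuming PyCryptodome's documented public API; DES is shown purely for legacy interoperability and should not be chosen for new designs.

from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(8)                       # 8-byte key; parity bits are ignored
cipher = DES.new(key, DES.MODE_CBC)             # a random IV is generated automatically
ciphertext = cipher.encrypt(pad(b"attack at dawn", DES.block_size))

decrypter = DES.new(key, DES.MODE_CBC, iv=cipher.iv)
plaintext = unpad(decrypter.decrypt(ciphertext), DES.block_size)
assert plaintext == b"attack at dawn"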
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/Poly1305.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/Poly1305.py
deleted file mode 100644
index eb5e0dadba401ef75c8478af979ffde6c3f65c01..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/Poly1305.py
+++ /dev/null
@@ -1,217 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Hash/Poly1305.py - Implements the Poly1305 MAC
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-from binascii import unhexlify
-
-from Crypto.Util.py3compat import bord, tobytes, _copy_bytes
-
-from Crypto.Hash import BLAKE2s
-from Crypto.Random import get_random_bytes
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer,
- create_string_buffer,
- get_raw_buffer, c_size_t,
- c_uint8_ptr)
-
-
-_raw_poly1305 = load_pycryptodome_raw_lib("Crypto.Hash._poly1305",
- """
- int poly1305_init(void **state,
- const uint8_t *r,
- size_t r_len,
- const uint8_t *s,
- size_t s_len);
- int poly1305_destroy(void *state);
- int poly1305_update(void *state,
- const uint8_t *in,
- size_t len);
- int poly1305_digest(const void *state,
- uint8_t *digest,
- size_t len);
- """)
-
-
-class Poly1305_MAC(object):
- """An Poly1305 MAC object.
- Do not instantiate directly. Use the :func:`new` function.
-
- :ivar digest_size: the size in bytes of the resulting MAC tag
- :vartype digest_size: integer
- """
-
- digest_size = 16
-
- def __init__(self, r, s, data):
-
- if len(r) != 16:
- raise ValueError("Parameter r is not 16 bytes long")
- if len(s) != 16:
- raise ValueError("Parameter s is not 16 bytes long")
-
- self._mac_tag = None
-
- state = VoidPointer()
- result = _raw_poly1305.poly1305_init(state.address_of(),
- c_uint8_ptr(r),
- c_size_t(len(r)),
- c_uint8_ptr(s),
- c_size_t(len(s))
- )
- if result:
- raise ValueError("Error %d while instantiating Poly1305" % result)
- self._state = SmartPointer(state.get(),
- _raw_poly1305.poly1305_destroy)
- if data:
- self.update(data)
-
- def update(self, data):
- """Authenticate the next chunk of message.
-
- Args:
- data (byte string/byte array/memoryview): The next chunk of data
- """
-
- if self._mac_tag:
- raise TypeError("You can only call 'digest' or 'hexdigest' on this object")
-
- result = _raw_poly1305.poly1305_update(self._state.get(),
- c_uint8_ptr(data),
- c_size_t(len(data)))
- if result:
- raise ValueError("Error %d while hashing Poly1305 data" % result)
- return self
-
- def copy(self):
- raise NotImplementedError()
-
- def digest(self):
- """Return the **binary** (non-printable) MAC tag of the message
- authenticated so far.
-
- :return: The MAC tag digest, computed over the data processed so far.
- Binary form.
- :rtype: byte string
- """
-
- if self._mac_tag:
- return self._mac_tag
-
- bfr = create_string_buffer(16)
- result = _raw_poly1305.poly1305_digest(self._state.get(),
- bfr,
- c_size_t(len(bfr)))
- if result:
- raise ValueError("Error %d while creating Poly1305 digest" % result)
-
- self._mac_tag = get_raw_buffer(bfr)
- return self._mac_tag
-
- def hexdigest(self):
- """Return the **printable** MAC tag of the message authenticated so far.
-
- :return: The MAC tag, computed over the data processed so far.
- Hexadecimal encoded.
- :rtype: string
- """
-
- return "".join(["%02x" % bord(x)
- for x in tuple(self.digest())])
-
- def verify(self, mac_tag):
- """Verify that a given **binary** MAC (computed by another party)
- is valid.
-
- Args:
- mac_tag (byte string/byte array/memoryview): the expected MAC of the message.
-
- Raises:
- ValueError: if the MAC does not match. It means that the message
- has been tampered with or that the MAC key is incorrect.
- """
-
- secret = get_random_bytes(16)
-
- mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag)
- mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest())
-
- if mac1.digest() != mac2.digest():
- raise ValueError("MAC check failed")
-
- def hexverify(self, hex_mac_tag):
- """Verify that a given **printable** MAC (computed by another party)
- is valid.
-
- Args:
- hex_mac_tag (string): the expected MAC of the message,
- as a hexadecimal string.
-
- Raises:
- ValueError: if the MAC does not match. It means that the message
- has been tampered with or that the MAC key is incorrect.
- """
-
- self.verify(unhexlify(tobytes(hex_mac_tag)))
-
-
-
-def new(**kwargs):
- """Create a new Poly1305 MAC object.
-
- Args:
- key (bytes/bytearray/memoryview):
- The 32-byte key for the Poly1305 object.
- cipher (module from ``Crypto.Cipher``):
- The cipher algorithm to use for deriving the Poly1305
- key pair *(r, s)*.
- It can only be ``Crypto.Cipher.AES`` or ``Crypto.Cipher.ChaCha20``.
- nonce (bytes/bytearray/memoryview):
- Optional. The non-repeatable value to use for the MAC of this message.
- It must be 16 bytes long for ``AES`` and 8 or 12 bytes for ``ChaCha20``.
- If not passed, a random nonce is created; you will find it in the
- ``nonce`` attribute of the new object.
- data (bytes/bytearray/memoryview):
- Optional. The very first chunk of the message to authenticate.
- It is equivalent to an early call to ``update()``.
-
- Returns:
- A :class:`Poly1305_MAC` object
- """
-
- cipher = kwargs.pop("cipher", None)
- if not hasattr(cipher, '_derive_Poly1305_key_pair'):
- raise ValueError("Parameter 'cipher' must be AES or ChaCha20")
-
- cipher_key = kwargs.pop("key", None)
- if cipher_key is None:
- raise TypeError("You must pass a parameter 'key'")
-
- nonce = kwargs.pop("nonce", None)
- data = kwargs.pop("data", None)
-
- if kwargs:
- raise TypeError("Unknown parameters: " + str(kwargs))
-
- r, s, nonce = cipher._derive_Poly1305_key_pair(cipher_key, nonce)
-
- new_mac = Poly1305_MAC(r, s, data)
- new_mac.nonce = _copy_bytes(None, None, nonce) # nonce may still be just a memoryview
- return new_mac
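A short, illustrative sketch of one-shot message authentication with the module above, again assuming PyCryptodome's documented Poly1305/ChaCha20 API; the (key, nonce) pair must never be reused for two different messages.

from Crypto.Cipher import ChaCha20
from Crypto.Hash import Poly1305
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)
mac = Poly1305.new(key=key, cipher=ChaCha20, data=b"hello ")
mac.update(b"world")
tag = mac.hexdigest()
nonce = mac.nonce                                # must travel with the message

# The verifier recomputes the MAC from the same key, nonce, and message.
check = Poly1305.new(key=key, cipher=ChaCha20, nonce=nonce, data=b"hello world")
check.hexverify(tag)                             # raises ValueError if the tag does not match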
diff --git a/spaces/asdasdasdasd/Face-forgery-detection/detect_from_videos.py b/spaces/asdasdasdasd/Face-forgery-detection/detect_from_videos.py
deleted file mode 100644
index 993ba3fbf71a932a172d3bebac2ce399cde8efde..0000000000000000000000000000000000000000
--- a/spaces/asdasdasdasd/Face-forgery-detection/detect_from_videos.py
+++ /dev/null
@@ -1,236 +0,0 @@
-# coding: utf-8
-import os
-import argparse
-from os.path import join
-import cv2
-import dlib
-import torch
-import torch.nn as nn
-from PIL import Image as pil_image
-from tqdm import tqdm
-from model_core import Two_Stream_Net
-from torchvision import transforms
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-map_location=torch.device('cpu')
-
-xception_default_data_transforms_256 = {
- 'train': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5]*3, [0.5]*3)
- ]),
- 'val': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5] * 3, [0.5] * 3)
- ]),
- 'test': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5] * 3, [0.5] * 3)
- ]),
-}
-
-def get_boundingbox(face, width, height, scale=1.3, minsize=None):
- """
- Expects a dlib face to generate a quadratic bounding box.
- :param face: dlib face class
- :param width: frame width
- :param height: frame height
- :param scale: bounding box size multiplier to get a bigger face region
- :param minsize: set minimum bounding box size
- :return: x, y, bounding_box_size in opencv form
- """
- x1 = face.left()
- y1 = face.top()
- x2 = face.right()
- y2 = face.bottom()
- size_bb = int(max(x2 - x1, y2 - y1) * scale)
- if minsize:
- if size_bb < minsize:
- size_bb = minsize
- center_x, center_y = (x1 + x2) // 2, (y1 + y2) // 2
-
- # Check for out of bounds, x-y top left corner
- x1 = max(int(center_x - size_bb // 2), 0)
- y1 = max(int(center_y - size_bb // 2), 0)
- # Check for too big bb size for given x, y
- size_bb = min(width - x1, size_bb)
- size_bb = min(height - y1, size_bb)
-
- return x1, y1, size_bb
-
-
-def preprocess_image(image, cuda=True):
- """
- Preprocesses the image such that it can be fed into our network.
- During this process we envoke PIL to cast it into a PIL image.
-
- :param image: numpy image in opencv form (i.e., BGR and of shape [height, width, 3])
- :return: pytorch tensor of shape [1, 3, image_size, image_size], not
- necessarily casted to cuda
- """
- # Revert from BGR
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- # Preprocess using the preprocessing function used during training and
- # casting it to PIL image
- preprocess = xception_default_data_transforms_256['test']
- preprocessed_image = preprocess(pil_image.fromarray(image))
- # Add first dimension as the network expects a batch
- preprocessed_image = preprocessed_image.unsqueeze(0)
- if cuda:
- preprocessed_image = preprocessed_image.cuda()
- return preprocessed_image
-
-
-def predict_with_model(image, model, post_function=nn.Softmax(dim=1),
- cuda=True):
- """
- Predicts the label of an input image. Preprocesses the input image and
- casts it to cuda if required
-
- :param image: numpy image
- :param model: torch model with linear layer at the end
- :param post_function: e.g., softmax
- :param cuda: enables cuda, must be the same parameter as the model
- :return: prediction (0 = fake, 1 = real)
- """
- # Preprocess
- preprocessed_image = preprocess_image(image, cuda).to(device)  # keep on the module-level device instead of forcing CUDA
-
- # print(preprocessed_image.shape)
-
- # Model prediction
- output = model(preprocessed_image)
- # print(output)
- # output = post_function(output[0])
-
- # Cast to desired
- _, prediction = torch.max(output[0], 1) # argmax
- prediction = float(prediction.cpu().numpy())
- # print(prediction)
-
- return int(prediction), output
-
-
-def test_full_image_network(video_path, model_path, output_path,
- start_frame=0, end_frame=None, cuda=False):
- """
- Reads a video and evaluates a subset of frames with a detection network
- that takes in a full frame. Outputs are only given if a face is present
- and the face is highlighted using dlib.
- :param video_path: path to video file
- :param model_path: path to model file (should expect the full sized image)
- :param output_path: path where the output video is stored
- :param start_frame: first frame to evaluate
- :param end_frame: last frame to evaluate
- :param cuda: enable cuda
- :return:
- """
- print('Starting: {}'.format(video_path))
-
- if not os.path.exists(output_path):
- os.mkdir(output_path)
-
- # Read and write
- reader = cv2.VideoCapture(video_path)
-
- # video_fn = video_path.split('/')[-1].split('.')[0]+'.avi'
- video_fn = 'output_video.avi'
- os.makedirs(output_path, exist_ok=True)
- fourcc = cv2.VideoWriter_fourcc(*'MJPG')
- fps = reader.get(cv2.CAP_PROP_FPS)
- num_frames = int(reader.get(cv2.CAP_PROP_FRAME_COUNT))
- writer = None
-
- # Face detector
- face_detector = dlib.get_frontal_face_detector()
-
- # Load model
- # model, *_ = model_selection(modelname='xception', num_out_classes=2)
- model = Two_Stream_Net()
- model.load_state_dict(torch.load(model_path,map_location))
- model = model.to(device)
- model.eval()
-
- if cuda:
- model = model.cuda()
-
- # Text variables
- font_face = cv2.FONT_HERSHEY_SIMPLEX
- thickness = 2
- font_scale = 1
-
- frame_num = 0
- assert start_frame < num_frames - 1
- end_frame = end_frame if end_frame else num_frames
- pbar = tqdm(total=end_frame-start_frame)
-
- while reader.isOpened():
- _, image = reader.read()
- if image is None:
- break
- frame_num += 1
-
- if frame_num < start_frame:
- continue
- pbar.update(1)
-
- # Image size
- height, width = image.shape[:2]
-
- # Init output writer
- if writer is None:
- # writer = cv2.VideoWriter(join(output_path, video_fn), fourcc, fps,
- # (height, width)[::-1])
- writer = cv2.VideoWriter(video_fn, fourcc, fps,
- (height, width)[::-1])
-
- # 2. Detect with dlib
- gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
- faces = face_detector(gray, 1)
- if len(faces):
- # For now only take biggest face
- face = faces[0]
-
- # --- Prediction ---------------------------------------------------
- # Face crop with dlib and bounding box scale enlargement
- x, y, size = get_boundingbox(face, width, height)
- cropped_face = image[y:y+size, x:x+size]
-
- # Actual prediction using our model
- prediction, output = predict_with_model(cropped_face, model,
- cuda=cuda)
- # ------------------------------------------------------------------
-
- # Text and bb
- x = face.left()
- y = face.top()
- w = face.right() - x
- h = face.bottom() - y
- label = 'fake' if prediction == 0 else 'real'
- color = (0, 255, 0) if prediction == 1 else (0, 0, 255)
- output_list = ['{0:.2f}'.format(float(x)) for x in
- output[0].detach().cpu().numpy()[0]]
- cv2.putText(image, str(output_list)+'=>'+label, (x, y+h+30),
- font_face, font_scale,
- color, thickness, 2)
- # draw box over face
- cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
-
- if frame_num >= end_frame:
- break
-
- # Show
- # cv2.imshow('test', image)
- # cv2.waitKey(33) # About 30 fps
- writer.write(image)
- pbar.close()
- if writer is not None:
- writer.release()
- print('Finished! Output saved under {}'.format(output_path))
- else:
- print('Input video file was empty')
- return 'output_video.avi'
-
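A minimal invocation sketch for the detector above; the file paths and frame range are placeholders, and the checkpoint must match the Two_Stream_Net architecture imported at the top of the file.

if __name__ == "__main__":
    out_file = test_full_image_network(
        video_path="input.mp4",            # placeholder paths
        model_path="two_stream_net.pth",
        output_path="output/",
        start_frame=0,
        end_frame=100,
        cuda=False,                        # set True only if a GPU is available
    )
    print("Annotated video written to", out_file)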
diff --git a/spaces/ashishraics/MCQ-Generator/extract_config.py b/spaces/ashishraics/MCQ-Generator/extract_config.py
deleted file mode 100644
index 1e134fc579001b775e83b358a83471134e74b62c..0000000000000000000000000000000000000000
--- a/spaces/ashishraics/MCQ-Generator/extract_config.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from transformers import BertConfig,BertForMaskedLM
-
-config=BertConfig()
-model=BertForMaskedLM(config)
-
-print(config)
-
-print(model.config)
diff --git a/spaces/awacke1/SelfModifyStreamlitTest/app.py b/spaces/awacke1/SelfModifyStreamlitTest/app.py
deleted file mode 100644
index 0aa275f9557ab71fa8f8730edfbea52b48577286..0000000000000000000000000000000000000000
--- a/spaces/awacke1/SelfModifyStreamlitTest/app.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import streamlit as st
-import base64
-import os
-from datetime import datetime
-
-def read_app_code():
- with open('app.py', 'r') as f:
- return f.read()
-
-def write_app_code(modified_code):
- with open('app.py', 'w') as f:
- f.write(modified_code)
-
-def get_timestamp():
- return datetime.now().strftime("%Y%m%d_%H%M%S")
-
-def create_download_link(file_content, filename):
- b64 = base64.b64encode(file_content).decode()
- href = f'<a href="data:file/txt;base64,{b64}" download="{filename}">Download {filename}</a>'
- st.markdown(href, unsafe_allow_html=True)
-
-# Streamlit UI
-st.title("Self-Modifying Streamlit App")
-
-# Textbox for username
-username = st.text_input("Enter your username:", "anonymous")
-
-# File Upload
-uploaded_file = st.file_uploader("Choose a file")
-
-if uploaded_file is not None:
- file_content = uploaded_file.read()
- create_download_link(file_content, "your_file.txt")
-
- # Read and Modify app.py
- timestamp = get_timestamp()
- app_code = read_app_code()
- new_code = f"# Modified by {username} on {timestamp}\n"
- modified_app_code = app_code + "\n" + new_code
-
- write_app_code(modified_app_code)
-
- # Display the new code in a textbox
- st.text_area("Newly Modified app.py Code:", modified_app_code)
-
- # Create download link for modified app.py
- download_filename = f"modified_app_{timestamp}.py"
- create_download_link(modified_app_code.encode(), download_filename)
-
- # Refresh app
- st.experimental_rerun()  # 'streamlit rerun' is not a CLI command; rerun the script in-process instead
diff --git a/spaces/awacke1/VizLib-Numpy/app.py b/spaces/awacke1/VizLib-Numpy/app.py
deleted file mode 100644
index bf14fa36bf992ffed25117640cc10abd3f910e94..0000000000000000000000000000000000000000
--- a/spaces/awacke1/VizLib-Numpy/app.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import streamlit as st
-import numpy as np
-
-st.sidebar.title("NumPy Demo")
-
-# Array creation routines
-st.sidebar.header("Array creation routines")
-st.sidebar.write("np.zeros(5):", np.zeros(5))
-st.sidebar.write("np.ones((2, 3)):", np.ones((2, 3)))
-st.sidebar.write("np.arange(0, 10, 2):", np.arange(0, 10, 2))
-st.sidebar.write("np.linspace(0, 1, 5):", np.linspace(0, 1, 5))
-st.sidebar.write("np.eye(3):", np.eye(3))
-
-# Array manipulation routines
-st.sidebar.header("Array manipulation routines")
-arr = np.array([[1, 2], [3, 4]])
-st.sidebar.write("arr.flatten():", arr.flatten())
-st.sidebar.write("np.transpose(arr):", np.transpose(arr))
-st.sidebar.write("np.rot90(arr):", np.rot90(arr))
-
-# Binary operations
-st.sidebar.header("Binary operations")
-x = np.array([1, 2, 3])
-y = np.array([4, 5, 6])
-st.sidebar.write("np.add(x, y):", np.add(x, y))
-st.sidebar.write("np.subtract(x, y):", np.subtract(x, y))
-st.sidebar.write("np.multiply(x, y):", np.multiply(x, y))
-
-# String operations
-st.sidebar.header("String operations")
-st.sidebar.write("np.char.add(['hello', 'world'], ['!', '?']):", np.char.add(['hello', 'world'], ['!', '?']))
-st.sidebar.write("np.char.upper('numpy'):", np.char.upper('numpy'))
-st.sidebar.write("np.char.replace('numpy', 'py', 'ython'):", np.char.replace('numpy', 'py', 'ython'))
-
-# C-Types Foreign Function Interface (numpy.ctypeslib)
-st.sidebar.header("C-Types Foreign Function Interface (numpy.ctypeslib)")
-# Omitted for simplicity
-
-# Datetime Support Functions
-st.sidebar.header("Datetime Support Functions")
-st.sidebar.write("np.datetime64('2023-02-21'):", np.datetime64('2023-02-21'))
-st.sidebar.write("np.datetime64('2023-02-21 12:00:00'):", np.datetime64('2023-02-21 12:00:00'))
-
-# Data type routines
-st.sidebar.header("Data type routines")
-st.sidebar.write("np.dtype('float64'):", np.dtype('float64'))
-st.sidebar.write("np.issubdtype(np.float64, np.number):", np.issubdtype(np.float64, np.number))
-
-# Optionally SciPy-accelerated routines (numpy.dual)
-st.sidebar.header("Optionally SciPy-accelerated routines (numpy.dual)")
-# Omitted for simplicity
-
-# Mathematical functions with automatic domain
-st.sidebar.header("Mathematical functions with automatic domain")
-st.sidebar.write("np.sqrt(-1):", np.sqrt(-1))
-st.sidebar.write("np.log(0):", np.log(0))
-
-# Functional programming
-st.sidebar.header("Functional programming")
-st.sidebar.write("np.vectorize(np.square)([1, 2, 3]):", np.vectorize(np.square)([1, 2, 3]))
-
-# NumPy-specific help functions
-st.sidebar.header("NumPy-specific help functions")
-st.sidebar.write("np.info(np.add):", np.info(np.add))
-
-# Linear algebra (numpy.linalg)
-st.sidebar.header("Linear algebra (numpy.linalg)")
-mat = np.array([[1, 2], [3, 4]])
-st.sidebar.write("np.linalg.inv(mat):", np.linalg.inv(mat))
-st.sidebar.write("np.linalg.eig(mat):", np.linalg.eig(mat))
-
-# Logic functions
-st.sidebar.header("Logic functions")
-x = np.array([1, 2, 3])
-y = np.array([2, 2, 2])
-st.sidebar.write("np.logical_and(x > 1, y < 3):", np.logical_and(x > 1, y < 3))
-st.sidebar.write("np.logical_or(x > 2, y < 2):", np.logical_or(x > 2, y < 2))
-st.sidebar.write("np.logical_not(x > 2):", np.logical_not(x > 2))
-
-# Mathematical functions
-st.sidebar.header("Mathematical functions")
-x = np.array([0, 1, 2])
-st.sidebar.write("np.exp(x):", np.exp(x))
-st.sidebar.write("np.sin(x):", np.sin(x))
-st.sidebar.write("np.arctan(x):", np.arctan(x))
-
-# Miscellaneous routines
-st.sidebar.header("Miscellaneous routines")
-st.sidebar.write("np.percentile([1, 2, 3, 4, 5], 50):", np.percentile([1, 2, 3, 4, 5], 50))
-st.sidebar.write("np.histogram([1, 2, 1], bins=[0, 1, 2, 3]):", np.histogram([1, 2, 1], bins=[0, 1, 2, 3]))
-
-# Polynomials
-st.sidebar.header("Polynomials")
-st.sidebar.write("np.poly1d([1, 2, 3])(4):", np.poly1d([1, 2, 3])(4))
-
-# Random sampling (numpy.random)
-st.sidebar.header("Random sampling (numpy.random)")
-st.sidebar.write("np.random.rand(3, 2):", np.random.rand(3, 2))
-st.sidebar.write("np.random.normal(size=(2, 2)):", np.random.normal(size=(2, 2)))
-
-#Set routines
-st.sidebar.header("Set routines")
-x = np.array([1, 2, 3, 4])
-y = np.array([3, 4, 5, 6])
-st.sidebar.write("np.intersect1d(x, y):", np.intersect1d(x, y))
-st.sidebar.write("np.union1d(x, y):", np.union1d(x, y))
-st.sidebar.write("np.setdiff1d(x, y):", np.setdiff1d(x, y))
-
-#Sorting, searching, and counting
-st.sidebar.header("Sorting, searching, and counting")
-x = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
-st.sidebar.write("np.sort(x):", np.sort(x))
-st.sidebar.write("np.argsort(x):", np.argsort(x))
-st.sidebar.write("np.where(x == 5):", np.where(x == 5))
-st.sidebar.write("np.count_nonzero(x > 3):", np.count_nonzero(x > 3))
-
-# Statistics
-st.sidebar.header("Statistics")
-x = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
-st.sidebar.write("np.mean(x):", np.mean(x))
-st.sidebar.write("np.std(x):", np.std(x))
-st.sidebar.write("np.median(x):", np.median(x))
-
diff --git a/spaces/awen666/web-ui/_next/static/chunks/app/layout-15d71eaa391f3141.js b/spaces/awen666/web-ui/_next/static/chunks/app/layout-15d71eaa391f3141.js
deleted file mode 100644
index 0544bb13baccd784224b930bcca44e80470064b9..0000000000000000000000000000000000000000
--- a/spaces/awen666/web-ui/_next/static/chunks/app/layout-15d71eaa391f3141.js
+++ /dev/null
@@ -1 +0,0 @@
-(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[185],{8415:function(n,e,u){Promise.resolve().then(u.t.bind(u,98410,23))},98410:function(){}},function(n){n.O(0,[253,698,744],function(){return n(n.s=8415)}),_N_E=n.O()}]);
\ No newline at end of file
diff --git a/spaces/ayaanzaveri/whisper-webui/src/modelCache.py b/spaces/ayaanzaveri/whisper-webui/src/modelCache.py
deleted file mode 100644
index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000
--- a/spaces/ayaanzaveri/whisper-webui/src/modelCache.py
+++ /dev/null
@@ -1,17 +0,0 @@
-class ModelCache:
- def __init__(self):
- self._cache = dict()
-
- def get(self, model_key: str, model_factory):
- result = self._cache.get(model_key)
-
- if result is None:
- result = model_factory()
- self._cache[model_key] = result
- return result
-
- def clear(self):
- self._cache.clear()
-
-# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times.
-GLOBAL_MODEL_CACHE = ModelCache()
\ No newline at end of file
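A small usage sketch of the cache above; the key and the loader callable are placeholders for whatever expensive-to-build model the caller wants to construct only once per process.

def load_model():
    return object()                        # stands in for loading a real checkpoint

model_a = GLOBAL_MODEL_CACHE.get("whisper-base", load_model)
model_b = GLOBAL_MODEL_CACHE.get("whisper-base", load_model)
assert model_a is model_b                  # the second call is served from the cache
GLOBAL_MODEL_CACHE.clear()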
diff --git a/spaces/bigscience/SourcingCatalog/catalogue/__init__.py b/spaces/bigscience/SourcingCatalog/catalogue/__init__.py
deleted file mode 100644
index 67f5ee37503593e2ccd5120293b056731438d43b..0000000000000000000000000000000000000000
--- a/spaces/bigscience/SourcingCatalog/catalogue/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .geography import countries, make_choro_map, region_tree
diff --git a/spaces/bioriAsaeru/text-to-voice/Anurag 3.1 Software Keygen Free Download Reizende Schulrefera Features and Benefits.md b/spaces/bioriAsaeru/text-to-voice/Anurag 3.1 Software Keygen Free Download Reizende Schulrefera Features and Benefits.md
deleted file mode 100644
index 0c7e50bf2b8d0e1ec3e4812e9163c17ee558aafe..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Anurag 3.1 Software Keygen Free Download Reizende Schulrefera Features and Benefits.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-avengers infinity war comics pdf ebook full pdf download
-Avengers Infinity War Comics Full.
-Download Avengers Infinity War Comics Full, free online.
-Find more comics, reviews, scan and read about Avengers Infinity War Comics on the largest comics and manga.
-Download Avengers Infinity War Comics Full.
-Download Avengers Infinity War Comics Full, free online. 8a78ff9644
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Elliott Smith XO full album zip Download the 1998 masterpiece by the indie rock legend.md b/spaces/bioriAsaeru/text-to-voice/Elliott Smith XO full album zip Download the 1998 masterpiece by the indie rock legend.md
deleted file mode 100644
index 9e081f25e550abc9e9ac97a6a42bc57202bfafbd..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Elliott Smith XO full album zip Download the 1998 masterpiece by the indie rock legend.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Aplikasi Power Point Terbaru dan Dapatkan Bonus Menarik.md b/spaces/cihyFjudo/fairness-paper-search/Download Aplikasi Power Point Terbaru dan Dapatkan Bonus Menarik.md
deleted file mode 100644
index 3c6695c7fabc33a018844114bd34aad000ee2588..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Download Aplikasi Power Point Terbaru dan Dapatkan Bonus Menarik.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
PowerPoint 2016 telah memperkenalkan fitur tambahan dan merampingkan prosedur tertentu untuk membuatnya lebih efektif serta mengesankan dibanding pendahulunya. Sekarang Anda bisa lebih kreatif dengan tampilan tema Anda melalui beberapa variasi untuk memperhalus desainnya. Umpan balik, komen dan pertanyaan pun sudah bisa ditampilkan melalui panel komentar sehingga sangat berguna saat mengadakan konferensi. Beberapa fungsi telah diotomatisasi untuk meningkatkan kecepatan supaya Anda bisa mendapatkan penampilan yang mengesankan. Sebagai contoh, jika Anda memasukkan bullet point, Power Point akan menyarankan agar merubahnya menjadi grafik SmartArt yang lebih menarik perhatian. Jika Anda merasa PowerPoint versi 2013 sulit dimengerti, maka edisi terbarunya kini dilengkapi menu bantuan yang memberikan Anda saran terkait langkah-langkah agar hasilnya sesuai dengan yang Anda inginkan. Jika Anda selalu merasa bahwa PowerPoint hanya sebagai alat untuk presentasi, sekarang waktunya untuk berpikir di luar kotak dan memanfaatkan fungsinya dengan maksimal. Sebagai media kolaborasi untuk berbagi ide antar kolega, PowerPoint sangatlah sulit dikalahkan. Di samping itu, jika Anda ingin membantu pekerjan rumah anak-anak, kenapa tidak memanfaatkan aplikasi multifungsi ini untuk membuat Flash card agar membantunya mengingat detail yang jelimet?
-
Microsoft PowerPoint adalah sebuah program komputer untuk presentasi yang dikembangkan oleh Microsoft di dalam paket aplikasi kantoran mereka, Microsoft Office, selain Microsoft Word, Excel, Access dan beberapa program lainnya. PowerPoint berjalan di atas komputer PC berbasis sistem operasi Microsoft Windows dan juga Apple Macintosh yang menggunakan sistem operasi Apple Mac OS, meskipun pada awalnya aplikasi ini berjalan di atas sistem operasi Xenix. Aplikasi ini sangat banyak digunakan, apalagi oleh kalangan perkantoran dan pebisnis, para pendidik, siswa, dan trainer. Dimulai pada versi Microsoft Office System 2003, Microsoft mengganti nama dari sebelumnya Microsoft PowerPoint saja menjadi Microsoft Office PowerPoint. Lalu, pada Office 2013, namanya cukup disingkat PowerPoint. Versi terbaru dari PowerPoint adalah versi 15 (Microsoft Office PowerPoint 2013), yang tergabung ke dalam paket Microsoft Office 2013.
Cara download powerpoint di laptop sangat mudah dilakukan. Untuk melakukan presentasi biasanya Anda akan memakai powerpoint untuk membuat presentasinya lebih menarik saat menjelaskan materi.
-
Powerpoint ini merupakan salah satu aplikasi yang berasa dari microsoft sebagai media presentasi. Meski saat ini selain microsoft, sudah banyak vendor lain yang memiliki aplikasi dengan fungsi yang sama.
-
Tetapi tetap saja powerpoint yang berasal dari microsoft masih menjadi pilihan banyak orang dan tidak kalah bersaing dengan yang dikeluarkan vendor lain. hal ini dikarenakan aplikasi ini dianggap sangat user friendly dan mudah digunakan.
-
Microsoft PowerPoint serves as a tool that makes it easier for someone to give a presentation, turning the presentation material into a soft file that other people can easily access on their devices.
-
You have to download Microsoft Office as a whole, which already includes Microsoft Word, Excel, and the rest. Below is a tutorial on how to download PowerPoint on a laptop that you can follow.
-
-
Besides making presentations clearer and more engaging, PowerPoint also has hidden benefits that not everyone knows about yet. After explaining how to download PowerPoint on a laptop, we will tell you about the other benefits of PPT.
-
PowerPoint is one of the Microsoft Office applications most often used as the most effective presentation medium. It is recommended to download PowerPoint on a laptop only from the official site, since files taken from other sites risk infecting the laptop with malware.
-
To start interactive learning, a teacher can give a class code to every student who will join the online class. Students do not have to download an application to take part in a class held by the teacher.
Watch Video Download aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Irreplaceable (Harmony __LINK__.md b/spaces/cihyFjudo/fairness-paper-search/Irreplaceable (Harmony __LINK__.md
deleted file mode 100644
index 9ba1e89d1c6fe79b11f4ddc0a4c8c77ec83d298a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Irreplaceable (Harmony __LINK__.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
Beginning at age 3 and continuing through college, dance and piano lessons were routine. Singing in school and church choirs is a family tradition and has continued throughout her life. Kari joined Sweet Adelines in 2002 and has been an irreplaceable member of Crosstown Harmony Chorus since January 2005.
School is always the best place to cultivate young people's awareness of "sustainable development" and the living habits that go with it. According to the "National Statistical Bulletin on the Development of Education in 2018", issued by the Ministry of Education on July 24th, 2018, there were 518,800 schools at all levels in the country in 2018 and 276 million students across all levels and types of education. The best period of one's life is spent at school. Therefore, schools should attach importance to combining protection with education, actively explore the development model of a green school, and effectively promote the construction of green schools in China. Spending a few years at such a school will bring students life-long benefits. Building a green and sustainable school not only cultivates harmony between the younger generation and nature and fosters green production and lifestyles, but also provides an irreplaceable place for the healthy growth of China's "future flowers".
-
Sears, however, is chiefly concerned with soil conservation. During the great westward migration across the continent, he notes, pioneer farmers felt little obligation to conserve the soil, and the inevitable result was a "kind of predatory farming" (p. 48). Predatory farming meant that once the soil of a farmstead became exhausted, one could always move farther west to where it was rich once again. The forests were cut down and the grasslands plowed under; and when the rains and winds came, the soil washed and blew away. Predatory farming still exists, and the need for the vigilant practice of proper soil conservation techniques is as great now as at any time in the past. A variety of conservation measures are particularly needed in the Great Plains where only a delicate root system anchors the soil against the nearly constant wind. Once the grasslands have been destroyed by overgrazing or by plowing, drought and wind will play havoc with the soil. Still, the task is not to grow two blades of grass where only one grew before, but rather to develop a land utilization policy that will preserve the soil when only half a blade can be grown. If such a land utilization program is not instituted, Sears warns, the result will be future Dust Bowls and the irreplaceable loss of topsoil-all to the detriment of the world's food supply.
-
In broadening their focus out from the years around the first millennium, the editors have, seemingly unconsciously (for there is no mention of it in their introduction), taken up a project left unfinished by the early death of Tim Reuter in 2002. Reuter's seminal essays "The 'Imperial Church System'" and "A Europe of the Bishops" provide some of the most-cited references in this volume, and he was embarked on "a study of episcopal power across the longue durée" at the time of his death. [3] He published the early fruits of this, sadly unfinished study, amongst other places, in Gilsdorf's collection. [4] As Janet Nelson writes in her introduction to a posthumous collection of his essays, "he himself would have hoped that the project could be taken forward by other hands. It seems likelier to be attempted by a team than a lone scholar. For as well as being among the outstanding medieval historians of his generation, Tim combined knowledge, skills and interests in a unique and irreplaceable way." [5] Reuter is irreplaceable, but the appearance of this rich volume of essays by thirteen North American, British and French scholars, which has its origins in a 2003 Kalamazoo panel, demonstrates that he was not working in isolation, and that the baton has been successfully passed to others. The Bishop Reformed records an important stage in the path of this renewed collective effort to reconsider episcopal authority, with episcopal interest in reform as one aspect of that, across the three crucial centuries from 900 to 1200.
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Singham 4 Full Movie In Hindi 720p Free Download The Story Cast and Trailer of the Bollywood Hit.md b/spaces/cihyFjudo/fairness-paper-search/Singham 4 Full Movie In Hindi 720p Free Download The Story Cast and Trailer of the Bollywood Hit.md
deleted file mode 100644
index 96a57130e09159145ffc30c7a339afa05be92d7a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Singham 4 Full Movie In Hindi 720p Free Download The Story Cast and Trailer of the Bollywood Hit.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/uTorrent 3.5.5.45231 Activation Include File Download The Ultimate Guide.md b/spaces/cihyFjudo/fairness-paper-search/uTorrent 3.5.5.45231 Activation Include File Download The Ultimate Guide.md
deleted file mode 100644
index 5a2771af2c72b3785329ed24b29b326d785c41a0..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/uTorrent 3.5.5.45231 Activation Include File Download The Ultimate Guide.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
uTorrent (pronounced "MicroTorrent" mean micro μ) is a Peer-to-peer BitTorrent client, designed for the distribution of files at high speed.
With only 600k (approx) and 7MB memory, the software is very simple to use: to start a download the user has to simply inform the torrent file he wants to get address, so he can share and download large files very easily. If necessary, it is possible to adjust some settings: setting the bandwidth with priorities, scheduling downloads, RSS auto-downloading and DHT and downloading can begin.
The application supports downloading multiple files simultaneously and offers management of is appropriate UPnP.
It has a minimalist interface that immediately makes it an ideal choice for a novice user.
-
uTorrent 3.5.5.45231 Activation Include File Download
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cleanmaster/so-vits-svc-akagi/preprocess_flist_config.py b/spaces/cleanmaster/so-vits-svc-akagi/preprocess_flist_config.py
deleted file mode 100644
index 552e1ba9355de1d1ddc63240dee7ab84855b314b..0000000000000000000000000000000000000000
--- a/spaces/cleanmaster/so-vits-svc-akagi/preprocess_flist_config.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import os
-import argparse
-import re
-
-from tqdm import tqdm
-from random import shuffle
-import json
-config_template = {
- "train": {
- "log_interval": 200,
- "eval_interval": 1000,
- "seed": 1234,
- "epochs": 10000,
- "learning_rate": 1e-4,
- "betas": [0.8, 0.99],
- "eps": 1e-9,
- "batch_size": 12,
- "fp16_run": False,
- "lr_decay": 0.999875,
- "segment_size": 17920,
- "init_lr_ratio": 1,
- "warmup_epochs": 0,
- "c_mel": 45,
- "c_kl": 1.0,
- "use_sr": True,
- "max_speclen": 384,
- "port": "8001"
- },
- "data": {
- "training_files":"filelists/train.txt",
- "validation_files":"filelists/val.txt",
- "max_wav_value": 32768.0,
- "sampling_rate": 32000,
- "filter_length": 1280,
- "hop_length": 320,
- "win_length": 1280,
- "n_mel_channels": 80,
- "mel_fmin": 0.0,
- "mel_fmax": None
- },
- "model": {
- "inter_channels": 192,
- "hidden_channels": 192,
- "filter_channels": 768,
- "n_heads": 2,
- "n_layers": 6,
- "kernel_size": 3,
- "p_dropout": 0.1,
- "resblock": "1",
- "resblock_kernel_sizes": [3,7,11],
- "resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]],
- "upsample_rates": [10,8,2,2],
- "upsample_initial_channel": 512,
- "upsample_kernel_sizes": [16,16,4,4],
- "n_layers_q": 3,
- "use_spectral_norm": False,
- "gin_channels": 256,
- "ssl_dim": 256,
- "n_speakers": 0,
- },
- "spk":{
- "nen": 0,
- "paimon": 1,
- "yunhao": 2
- }
-}
-
-pattern = re.compile(r'^[\.a-zA-Z0-9_\/]+$')
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list")
- parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list")
- parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list")
- parser.add_argument("--source_dir", type=str, default="./dataset/32k", help="path to source dir")
- args = parser.parse_args()
-
- train = []
- val = []
- test = []
- idx = 0
- spk_dict = {}
- spk_id = 0
- for speaker in tqdm(os.listdir(args.source_dir)):
- spk_dict[speaker] = spk_id
- spk_id += 1
- wavs = ["/".join([args.source_dir, speaker, i]) for i in os.listdir(os.path.join(args.source_dir, speaker))]
- for wavpath in wavs:
- if not pattern.match(wavpath):
- print(f"warning:文件名{wavpath}中包含非字母数字下划线,可能会导致错误。(也可能不会)")
- if len(wavs) < 10:
- print(f"warning:{speaker}数据集数量小于10条,请补充数据")
- wavs = [i for i in wavs if i.endswith("wav")]
- shuffle(wavs)
- train += wavs[2:-2]
- val += wavs[:2]
- test += wavs[-2:]
- n_speakers = len(spk_dict.keys())*2
- shuffle(train)
- shuffle(val)
- shuffle(test)
-
- print("Writing", args.train_list)
- with open(args.train_list, "w") as f:
- for fname in tqdm(train):
- wavpath = fname
- f.write(wavpath + "\n")
-
- print("Writing", args.val_list)
- with open(args.val_list, "w") as f:
- for fname in tqdm(val):
- wavpath = fname
- f.write(wavpath + "\n")
-
- print("Writing", args.test_list)
- with open(args.test_list, "w") as f:
- for fname in tqdm(test):
- wavpath = fname
- f.write(wavpath + "\n")
-
- config_template["model"]["n_speakers"] = n_speakers
- config_template["spk"] = spk_dict
- print("Writing configs/config.json")
- with open("configs/config.json", "w") as f:
- json.dump(config_template, f, indent=2)
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/exceptiongroup/_exceptions.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/exceptiongroup/_exceptions.py
deleted file mode 100644
index 84e2b375954e2c8cd17bef0f94dc25f0c5fcbdce..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/exceptiongroup/_exceptions.py
+++ /dev/null
@@ -1,282 +0,0 @@
-from __future__ import annotations
-
-from collections.abc import Callable, Sequence
-from functools import partial
-from inspect import getmro, isclass
-from typing import TYPE_CHECKING, Generic, Type, TypeVar, cast, overload
-
-if TYPE_CHECKING:
- from typing import Self
-
-_BaseExceptionT_co = TypeVar("_BaseExceptionT_co", bound=BaseException, covariant=True)
-_BaseExceptionT = TypeVar("_BaseExceptionT", bound=BaseException)
-_ExceptionT_co = TypeVar("_ExceptionT_co", bound=Exception, covariant=True)
-_ExceptionT = TypeVar("_ExceptionT", bound=Exception)
-
-
-def check_direct_subclass(
- exc: BaseException, parents: tuple[type[BaseException]]
-) -> bool:
- for cls in getmro(exc.__class__)[:-1]:
- if cls in parents:
- return True
-
- return False
-
-
-def get_condition_filter(
- condition: type[_BaseExceptionT]
- | tuple[type[_BaseExceptionT], ...]
- | Callable[[_BaseExceptionT_co], bool]
-) -> Callable[[_BaseExceptionT_co], bool]:
- if isclass(condition) and issubclass(
- cast(Type[BaseException], condition), BaseException
- ):
- return partial(check_direct_subclass, parents=(condition,))
- elif isinstance(condition, tuple):
- if all(isclass(x) and issubclass(x, BaseException) for x in condition):
- return partial(check_direct_subclass, parents=condition)
- elif callable(condition):
- return cast("Callable[[BaseException], bool]", condition)
-
- raise TypeError("expected a function, exception type or tuple of exception types")
-
-
-class BaseExceptionGroup(BaseException, Generic[_BaseExceptionT_co]):
- """A combination of multiple unrelated exceptions."""
-
- def __new__(
- cls, __message: str, __exceptions: Sequence[_BaseExceptionT_co]
- ) -> Self:
- if not isinstance(__message, str):
- raise TypeError(f"argument 1 must be str, not {type(__message)}")
- if not isinstance(__exceptions, Sequence):
- raise TypeError("second argument (exceptions) must be a sequence")
- if not __exceptions:
- raise ValueError(
- "second argument (exceptions) must be a non-empty sequence"
- )
-
- for i, exc in enumerate(__exceptions):
- if not isinstance(exc, BaseException):
- raise ValueError(
- f"Item {i} of second argument (exceptions) is not an exception"
- )
-
- if cls is BaseExceptionGroup:
- if all(isinstance(exc, Exception) for exc in __exceptions):
- cls = ExceptionGroup
-
- if issubclass(cls, Exception):
- for exc in __exceptions:
- if not isinstance(exc, Exception):
- if cls is ExceptionGroup:
- raise TypeError(
- "Cannot nest BaseExceptions in an ExceptionGroup"
- )
- else:
- raise TypeError(
- f"Cannot nest BaseExceptions in {cls.__name__!r}"
- )
-
- instance = super().__new__(cls, __message, __exceptions)
- instance._message = __message
- instance._exceptions = __exceptions
- return instance
-
- def add_note(self, note: str) -> None:
- if not isinstance(note, str):
- raise TypeError(
- f"Expected a string, got note={note!r} (type {type(note).__name__})"
- )
-
- if not hasattr(self, "__notes__"):
- self.__notes__: list[str] = []
-
- self.__notes__.append(note)
-
- @property
- def message(self) -> str:
- return self._message
-
- @property
- def exceptions(
- self,
- ) -> tuple[_BaseExceptionT_co | BaseExceptionGroup[_BaseExceptionT_co], ...]:
- return tuple(self._exceptions)
-
- @overload
- def subgroup(
- self, __condition: type[_BaseExceptionT] | tuple[type[_BaseExceptionT], ...]
- ) -> BaseExceptionGroup[_BaseExceptionT] | None:
- ...
-
- @overload
- def subgroup(
- self: Self, __condition: Callable[[_BaseExceptionT_co], bool]
- ) -> Self | None:
- ...
-
- def subgroup(
- self: Self,
- __condition: type[_BaseExceptionT]
- | tuple[type[_BaseExceptionT], ...]
- | Callable[[_BaseExceptionT_co], bool],
- ) -> BaseExceptionGroup[_BaseExceptionT] | Self | None:
- condition = get_condition_filter(__condition)
- modified = False
- if condition(self):
- return self
-
- exceptions: list[BaseException] = []
- for exc in self.exceptions:
- if isinstance(exc, BaseExceptionGroup):
- subgroup = exc.subgroup(__condition)
- if subgroup is not None:
- exceptions.append(subgroup)
-
- if subgroup is not exc:
- modified = True
- elif condition(exc):
- exceptions.append(exc)
- else:
- modified = True
-
- if not modified:
- return self
- elif exceptions:
- group = self.derive(exceptions)
- group.__cause__ = self.__cause__
- group.__context__ = self.__context__
- group.__traceback__ = self.__traceback__
- return group
- else:
- return None
-
- @overload
- def split(
- self: Self,
- __condition: type[_BaseExceptionT] | tuple[type[_BaseExceptionT], ...],
- ) -> tuple[BaseExceptionGroup[_BaseExceptionT] | None, Self | None]:
- ...
-
- @overload
- def split(
- self: Self, __condition: Callable[[_BaseExceptionT_co], bool]
- ) -> tuple[Self | None, Self | None]:
- ...
-
- def split(
- self: Self,
- __condition: type[_BaseExceptionT]
- | tuple[type[_BaseExceptionT], ...]
- | Callable[[_BaseExceptionT_co], bool],
- ) -> (
- tuple[BaseExceptionGroup[_BaseExceptionT] | None, Self | None]
- | tuple[Self | None, Self | None]
- ):
- condition = get_condition_filter(__condition)
- if condition(self):
- return self, None
-
- matching_exceptions: list[BaseException] = []
- nonmatching_exceptions: list[BaseException] = []
- for exc in self.exceptions:
- if isinstance(exc, BaseExceptionGroup):
- matching, nonmatching = exc.split(condition)
- if matching is not None:
- matching_exceptions.append(matching)
-
- if nonmatching is not None:
- nonmatching_exceptions.append(nonmatching)
- elif condition(exc):
- matching_exceptions.append(exc)
- else:
- nonmatching_exceptions.append(exc)
-
- matching_group: Self | None = None
- if matching_exceptions:
- matching_group = self.derive(matching_exceptions)
- matching_group.__cause__ = self.__cause__
- matching_group.__context__ = self.__context__
- matching_group.__traceback__ = self.__traceback__
-
- nonmatching_group: Self | None = None
- if nonmatching_exceptions:
- nonmatching_group = self.derive(nonmatching_exceptions)
- nonmatching_group.__cause__ = self.__cause__
- nonmatching_group.__context__ = self.__context__
- nonmatching_group.__traceback__ = self.__traceback__
-
- return matching_group, nonmatching_group
-
- def derive(self: Self, __excs: Sequence[_BaseExceptionT_co]) -> Self:
- eg = BaseExceptionGroup(self.message, __excs)
- if hasattr(self, "__notes__"):
- # Create a new list so that add_note() only affects one exceptiongroup
- eg.__notes__ = list(self.__notes__)
-
- return eg
-
- def __str__(self) -> str:
- suffix = "" if len(self._exceptions) == 1 else "s"
- return f"{self.message} ({len(self._exceptions)} sub-exception{suffix})"
-
- def __repr__(self) -> str:
- return f"{self.__class__.__name__}({self.message!r}, {self._exceptions!r})"
-
-
-class ExceptionGroup(BaseExceptionGroup[_ExceptionT_co], Exception):
- def __new__(cls, __message: str, __exceptions: Sequence[_ExceptionT_co]) -> Self:
- return super().__new__(cls, __message, __exceptions)
-
- if TYPE_CHECKING:
-
- @property
- def exceptions(
- self,
- ) -> tuple[_ExceptionT_co | ExceptionGroup[_ExceptionT_co], ...]:
- ...
-
- @overload # type: ignore[override]
- def subgroup(
- self, __condition: type[_ExceptionT] | tuple[type[_ExceptionT], ...]
- ) -> ExceptionGroup[_ExceptionT] | None:
- ...
-
- @overload
- def subgroup(
- self: Self, __condition: Callable[[_ExceptionT_co], bool]
- ) -> Self | None:
- ...
-
- def subgroup(
- self: Self,
- __condition: type[_ExceptionT]
- | tuple[type[_ExceptionT], ...]
- | Callable[[_ExceptionT_co], bool],
- ) -> ExceptionGroup[_ExceptionT] | Self | None:
- return super().subgroup(__condition)
-
- @overload # type: ignore[override]
- def split(
- self: Self, __condition: type[_ExceptionT] | tuple[type[_ExceptionT], ...]
- ) -> tuple[ExceptionGroup[_ExceptionT] | None, Self | None]:
- ...
-
- @overload
- def split(
- self: Self, __condition: Callable[[_ExceptionT_co], bool]
- ) -> tuple[Self | None, Self | None]:
- ...
-
- def split(
- self: Self,
- __condition: type[_ExceptionT]
- | tuple[type[_ExceptionT], ...]
- | Callable[[_ExceptionT_co], bool],
- ) -> (
- tuple[ExceptionGroup[_ExceptionT] | None, Self | None]
- | tuple[Self | None, Self | None]
- ):
- return super().split(__condition)
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_unix.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_unix.py
deleted file mode 100644
index 40cec0ab189762ac9b4a0a950e65daf53bc5be16..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_unix.py
+++ /dev/null
@@ -1,63 +0,0 @@
-from __future__ import annotations
-
-import os
-import sys
-from contextlib import suppress
-from errno import ENOSYS
-from typing import cast
-
-from ._api import BaseFileLock
-
-#: a flag to indicate if the fcntl API is available
-has_fcntl = False
-if sys.platform == "win32": # pragma: win32 cover
-
- class UnixFileLock(BaseFileLock):
- """Uses the :func:`fcntl.flock` to hard lock the lock file on unix systems."""
-
- def _acquire(self) -> None:
- raise NotImplementedError
-
- def _release(self) -> None:
- raise NotImplementedError
-
-else: # pragma: win32 no cover
- try:
- import fcntl
- except ImportError:
- pass
- else:
- has_fcntl = True
-
- class UnixFileLock(BaseFileLock):
- """Uses the :func:`fcntl.flock` to hard lock the lock file on unix systems."""
-
- def _acquire(self) -> None:
- open_flags = os.O_RDWR | os.O_CREAT | os.O_TRUNC
- fd = os.open(self.lock_file, open_flags, self._context.mode)
- with suppress(PermissionError): # This lock is not owned by this UID
- os.fchmod(fd, self._context.mode)
- try:
- fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
- except OSError as exception:
- os.close(fd)
- if exception.errno == ENOSYS: # NotImplemented error
- msg = "FileSystem does not appear to support flock; user SoftFileLock instead"
- raise NotImplementedError(msg) from exception
- else:
- self._context.lock_file_fd = fd
-
- def _release(self) -> None:
- # Do not remove the lockfile:
- # https://github.com/tox-dev/py-filelock/issues/31
- # https://stackoverflow.com/questions/17708885/flock-removing-locked-file-without-race-condition
- fd = cast(int, self._context.lock_file_fd)
- self._context.lock_file_fd = None
- fcntl.flock(fd, fcntl.LOCK_UN)
- os.close(fd)
-
-
-__all__ = [
- "has_fcntl",
- "UnixFileLock",
-]
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/geometry.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/geometry.py
deleted file mode 100644
index 1ce161bfa117df1632b507d161f0dd4abb633bcc..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/geometry.py
+++ /dev/null
@@ -1,143 +0,0 @@
-"""Helpers for manipulating 2D points and vectors in COLR table."""
-
-from math import copysign, cos, hypot, isclose, pi
-from fontTools.misc.roundTools import otRound
-
-
-def _vector_between(origin, target):
- return (target[0] - origin[0], target[1] - origin[1])
-
-
-def _round_point(pt):
- return (otRound(pt[0]), otRound(pt[1]))
-
-
-def _unit_vector(vec):
- length = hypot(*vec)
- if length == 0:
- return None
- return (vec[0] / length, vec[1] / length)
-
-
-_CIRCLE_INSIDE_TOLERANCE = 1e-4
-
-
-# The unit vector's X and Y components are respectively
-# U = (cos(α), sin(α))
-# where α is the angle between the unit vector and the positive x axis.
-_UNIT_VECTOR_THRESHOLD = cos(3 / 8 * pi) # == sin(1/8 * pi) == 0.38268343236508984
-
-
-def _rounding_offset(direction):
- # Return 2-tuple of -/+ 1.0 or 0.0 approximately based on the direction vector.
- # We divide the unit circle in 8 equal slices oriented towards the cardinal
- # (N, E, S, W) and intermediate (NE, SE, SW, NW) directions. To each slice we
- # map one of the possible cases: -1, 0, +1 for either X and Y coordinate.
- # E.g. Return (+1.0, -1.0) if unit vector is oriented towards SE, or
- # (-1.0, 0.0) if it's pointing West, etc.
- uv = _unit_vector(direction)
- if not uv:
- return (0, 0)
-
- result = []
- for uv_component in uv:
- if -_UNIT_VECTOR_THRESHOLD <= uv_component < _UNIT_VECTOR_THRESHOLD:
- # unit vector component near 0: direction almost orthogonal to the
- # direction of the current axis, thus keep coordinate unchanged
- result.append(0)
- else:
- # nudge coord by +/- 1.0 in direction of unit vector
- result.append(copysign(1.0, uv_component))
- return tuple(result)
-
-
-class Circle:
- def __init__(self, centre, radius):
- self.centre = centre
- self.radius = radius
-
- def __repr__(self):
- return f"Circle(centre={self.centre}, radius={self.radius})"
-
- def round(self):
- return Circle(_round_point(self.centre), otRound(self.radius))
-
- def inside(self, outer_circle, tolerance=_CIRCLE_INSIDE_TOLERANCE):
- dist = self.radius + hypot(*_vector_between(self.centre, outer_circle.centre))
- return (
- isclose(outer_circle.radius, dist, rel_tol=_CIRCLE_INSIDE_TOLERANCE)
- or outer_circle.radius > dist
- )
-
- def concentric(self, other):
- return self.centre == other.centre
-
- def move(self, dx, dy):
- self.centre = (self.centre[0] + dx, self.centre[1] + dy)
-
-
-def round_start_circle_stable_containment(c0, r0, c1, r1):
- """Round start circle so that it stays inside/outside end circle after rounding.
-
- The rounding of circle coordinates to integers may cause an abrupt change
- if the start circle c0 is so close to the end circle c1's perimeter that
- it ends up falling outside (or inside) as a result of the rounding.
- To keep the gradient unchanged, we nudge it in the right direction.
-
- See:
- https://github.com/googlefonts/colr-gradients-spec/issues/204
- https://github.com/googlefonts/picosvg/issues/158
- """
- start, end = Circle(c0, r0), Circle(c1, r1)
-
- inside_before_round = start.inside(end)
-
- round_start = start.round()
- round_end = end.round()
- inside_after_round = round_start.inside(round_end)
-
- if inside_before_round == inside_after_round:
- return round_start
- elif inside_after_round:
- # start was outside before rounding: we need to push start away from end
- direction = _vector_between(round_end.centre, round_start.centre)
- radius_delta = +1.0
- else:
- # start was inside before rounding: we need to push start towards end
- direction = _vector_between(round_start.centre, round_end.centre)
- radius_delta = -1.0
- dx, dy = _rounding_offset(direction)
-
- # At most 2 iterations ought to be enough to converge. Before the loop, we
- # know the start circle didn't keep containment after normal rounding; thus
- # we continue adjusting by -/+ 1.0 until containment is restored.
- # Normal rounding can at most move each coordinates -/+0.5; in the worst case
- # both the start and end circle's centres and radii will be rounded in opposite
- # directions, e.g. when they move along a 45 degree diagonal:
- # c0 = (1.5, 1.5) ===> (2.0, 2.0)
- # r0 = 0.5 ===> 1.0
- # c1 = (0.499, 0.499) ===> (0.0, 0.0)
- # r1 = 2.499 ===> 2.0
- # In this example, the relative distance between the circles, calculated
- # as r1 - (r0 + distance(c0, c1)) is initially 0.57437 (c0 is inside c1), and
- # -1.82842 after rounding (c0 is now outside c1). Nudging c0 by -1.0 on both
- # x and y axes moves it towards c1 by hypot(-1.0, -1.0) = 1.41421. Two of these
- # moves cover twice that distance, which is enough to restore containment.
- max_attempts = 2
- for _ in range(max_attempts):
- if round_start.concentric(round_end):
- # can't move c0 towards c1 (they are the same), so we change the radius
- round_start.radius += radius_delta
- assert round_start.radius >= 0
- else:
- round_start.move(dx, dy)
- if inside_before_round == round_start.inside(round_end):
- break
- else: # likely a bug
- raise AssertionError(
- f"Rounding circle {start} "
- f"{'inside' if inside_before_round else 'outside'} "
- f"{end} failed after {max_attempts} attempts!"
- )
-
- return round_start
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/psOperators.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/psOperators.py
deleted file mode 100644
index d0ef432f5243e5ed0c8fa5b02f4c147dfcb032c2..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/psOperators.py
+++ /dev/null
@@ -1,574 +0,0 @@
-_accessstrings = {0: "", 1: "readonly", 2: "executeonly", 3: "noaccess"}
-
-
-class ps_object(object):
-
- literal = 1
- access = 0
- value = None
-
- def __init__(self, value):
- self.value = value
- self.type = self.__class__.__name__[3:] + "type"
-
- def __repr__(self):
- return "<%s %s>" % (self.__class__.__name__[3:], repr(self.value))
-
-
-class ps_operator(ps_object):
-
- literal = 0
-
- def __init__(self, name, function):
- self.name = name
- self.function = function
- self.type = self.__class__.__name__[3:] + "type"
-
- def __repr__(self):
- return "" % self.name
-
-
-class ps_procedure(ps_object):
- literal = 0
-
- def __repr__(self):
- return ""
-
- def __str__(self):
- psstring = "{"
- for i in range(len(self.value)):
- if i:
- psstring = psstring + " " + str(self.value[i])
- else:
- psstring = psstring + str(self.value[i])
- return psstring + "}"
-
-
-class ps_name(ps_object):
- literal = 0
-
- def __str__(self):
- if self.literal:
- return "/" + self.value
- else:
- return self.value
-
-
-class ps_literal(ps_object):
- def __str__(self):
- return "/" + self.value
-
-
-class ps_array(ps_object):
- def __str__(self):
- psstring = "["
- for i in range(len(self.value)):
- item = self.value[i]
- access = _accessstrings[item.access]
- if access:
- access = " " + access
- if i:
- psstring = psstring + " " + str(item) + access
- else:
- psstring = psstring + str(item) + access
- return psstring + "]"
-
- def __repr__(self):
- return ""
-
-
-_type1_pre_eexec_order = [
- "FontInfo",
- "FontName",
- "Encoding",
- "PaintType",
- "FontType",
- "FontMatrix",
- "FontBBox",
- "UniqueID",
- "Metrics",
- "StrokeWidth",
-]
-
-_type1_fontinfo_order = [
- "version",
- "Notice",
- "FullName",
- "FamilyName",
- "Weight",
- "ItalicAngle",
- "isFixedPitch",
- "UnderlinePosition",
- "UnderlineThickness",
-]
-
-_type1_post_eexec_order = ["Private", "CharStrings", "FID"]
-
-
-def _type1_item_repr(key, value):
- psstring = ""
- access = _accessstrings[value.access]
- if access:
- access = access + " "
- if key == "CharStrings":
- psstring = psstring + "/%s %s def\n" % (
- key,
- _type1_CharString_repr(value.value),
- )
- elif key == "Encoding":
- psstring = psstring + _type1_Encoding_repr(value, access)
- else:
- psstring = psstring + "/%s %s %sdef\n" % (str(key), str(value), access)
- return psstring
-
-
-def _type1_Encoding_repr(encoding, access):
- encoding = encoding.value
- psstring = "/Encoding 256 array\n0 1 255 {1 index exch /.notdef put} for\n"
- for i in range(256):
- name = encoding[i].value
- if name != ".notdef":
- psstring = psstring + "dup %d /%s put\n" % (i, name)
- return psstring + access + "def\n"
-
-
-def _type1_CharString_repr(charstrings):
- items = sorted(charstrings.items())
- return "xxx"
-
-
-class ps_font(ps_object):
- def __str__(self):
- psstring = "%d dict dup begin\n" % len(self.value)
- for key in _type1_pre_eexec_order:
- try:
- value = self.value[key]
- except KeyError:
- pass
- else:
- psstring = psstring + _type1_item_repr(key, value)
- items = sorted(self.value.items())
- for key, value in items:
- if key not in _type1_pre_eexec_order + _type1_post_eexec_order:
- psstring = psstring + _type1_item_repr(key, value)
- psstring = psstring + "currentdict end\ncurrentfile eexec\ndup "
- for key in _type1_post_eexec_order:
- try:
- value = self.value[key]
- except KeyError:
- pass
- else:
- psstring = psstring + _type1_item_repr(key, value)
- return (
- psstring
- + "dup/FontName get exch definefont pop\nmark currentfile closefile\n"
- + 8 * (64 * "0" + "\n")
- + "cleartomark"
- + "\n"
- )
-
- def __repr__(self):
- return ""
-
-
-class ps_file(ps_object):
- pass
-
-
-class ps_dict(ps_object):
- def __str__(self):
- psstring = "%d dict dup begin\n" % len(self.value)
- items = sorted(self.value.items())
- for key, value in items:
- access = _accessstrings[value.access]
- if access:
- access = access + " "
- psstring = psstring + "/%s %s %sdef\n" % (str(key), str(value), access)
- return psstring + "end "
-
- def __repr__(self):
- return ""
-
-
-class ps_mark(ps_object):
- def __init__(self):
- self.value = "mark"
- self.type = self.__class__.__name__[3:] + "type"
-
-
-class ps_procmark(ps_object):
- def __init__(self):
- self.value = "procmark"
- self.type = self.__class__.__name__[3:] + "type"
-
-
-class ps_null(ps_object):
- def __init__(self):
- self.type = self.__class__.__name__[3:] + "type"
-
-
-class ps_boolean(ps_object):
- def __str__(self):
- if self.value:
- return "true"
- else:
- return "false"
-
-
-class ps_string(ps_object):
- def __str__(self):
- return "(%s)" % repr(self.value)[1:-1]
-
-
-class ps_integer(ps_object):
- def __str__(self):
- return repr(self.value)
-
-
-class ps_real(ps_object):
- def __str__(self):
- return repr(self.value)
-
-
-class PSOperators(object):
- def ps_def(self):
- obj = self.pop()
- name = self.pop()
- self.dictstack[-1][name.value] = obj
-
- def ps_bind(self):
- proc = self.pop("proceduretype")
- self.proc_bind(proc)
- self.push(proc)
-
- def proc_bind(self, proc):
- for i in range(len(proc.value)):
- item = proc.value[i]
- if item.type == "proceduretype":
- self.proc_bind(item)
- else:
- if not item.literal:
- try:
- obj = self.resolve_name(item.value)
- except:
- pass
- else:
- if obj.type == "operatortype":
- proc.value[i] = obj
-
- def ps_exch(self):
- if len(self.stack) < 2:
- raise RuntimeError("stack underflow")
- obj1 = self.pop()
- obj2 = self.pop()
- self.push(obj1)
- self.push(obj2)
-
- def ps_dup(self):
- if not self.stack:
- raise RuntimeError("stack underflow")
- self.push(self.stack[-1])
-
- def ps_exec(self):
- obj = self.pop()
- if obj.type == "proceduretype":
- self.call_procedure(obj)
- else:
- self.handle_object(obj)
-
- def ps_count(self):
- self.push(ps_integer(len(self.stack)))
-
- def ps_eq(self):
- any1 = self.pop()
- any2 = self.pop()
- self.push(ps_boolean(any1.value == any2.value))
-
- def ps_ne(self):
- any1 = self.pop()
- any2 = self.pop()
- self.push(ps_boolean(any1.value != any2.value))
-
- def ps_cvx(self):
- obj = self.pop()
- obj.literal = 0
- self.push(obj)
-
- def ps_matrix(self):
- matrix = [
- ps_real(1.0),
- ps_integer(0),
- ps_integer(0),
- ps_real(1.0),
- ps_integer(0),
- ps_integer(0),
- ]
- self.push(ps_array(matrix))
-
- def ps_string(self):
- num = self.pop("integertype").value
- self.push(ps_string("\0" * num))
-
- def ps_type(self):
- obj = self.pop()
- self.push(ps_string(obj.type))
-
- def ps_store(self):
- value = self.pop()
- key = self.pop()
- name = key.value
- for i in range(len(self.dictstack) - 1, -1, -1):
- if name in self.dictstack[i]:
- self.dictstack[i][name] = value
- break
- self.dictstack[-1][name] = value
-
- def ps_where(self):
- name = self.pop()
- # XXX
- self.push(ps_boolean(0))
-
- def ps_systemdict(self):
- self.push(ps_dict(self.dictstack[0]))
-
- def ps_userdict(self):
- self.push(ps_dict(self.dictstack[1]))
-
- def ps_currentdict(self):
- self.push(ps_dict(self.dictstack[-1]))
-
- def ps_currentfile(self):
- self.push(ps_file(self.tokenizer))
-
- def ps_eexec(self):
- f = self.pop("filetype").value
- f.starteexec()
-
- def ps_closefile(self):
- f = self.pop("filetype").value
- f.skipwhite()
- f.stopeexec()
-
- def ps_cleartomark(self):
- obj = self.pop()
- while obj != self.mark:
- obj = self.pop()
-
- def ps_readstring(self, ps_boolean=ps_boolean, len=len):
- s = self.pop("stringtype")
- oldstr = s.value
- f = self.pop("filetype")
- # pad = file.value.read(1)
- # for StringIO, this is faster
- f.value.pos = f.value.pos + 1
- newstr = f.value.read(len(oldstr))
- s.value = newstr
- self.push(s)
- self.push(ps_boolean(len(oldstr) == len(newstr)))
-
- def ps_known(self):
- key = self.pop()
- d = self.pop("dicttype", "fonttype")
- self.push(ps_boolean(key.value in d.value))
-
- def ps_if(self):
- proc = self.pop("proceduretype")
- if self.pop("booleantype").value:
- self.call_procedure(proc)
-
- def ps_ifelse(self):
- proc2 = self.pop("proceduretype")
- proc1 = self.pop("proceduretype")
- if self.pop("booleantype").value:
- self.call_procedure(proc1)
- else:
- self.call_procedure(proc2)
-
- def ps_readonly(self):
- obj = self.pop()
- if obj.access < 1:
- obj.access = 1
- self.push(obj)
-
- def ps_executeonly(self):
- obj = self.pop()
- if obj.access < 2:
- obj.access = 2
- self.push(obj)
-
- def ps_noaccess(self):
- obj = self.pop()
- if obj.access < 3:
- obj.access = 3
- self.push(obj)
-
- def ps_not(self):
- obj = self.pop("booleantype", "integertype")
- if obj.type == "booleantype":
- self.push(ps_boolean(not obj.value))
- else:
- self.push(ps_integer(~obj.value))
-
- def ps_print(self):
- str = self.pop("stringtype")
- print("PS output --->", str.value)
-
- def ps_anchorsearch(self):
- seek = self.pop("stringtype")
- s = self.pop("stringtype")
- seeklen = len(seek.value)
- if s.value[:seeklen] == seek.value:
- self.push(ps_string(s.value[seeklen:]))
- self.push(seek)
- self.push(ps_boolean(1))
- else:
- self.push(s)
- self.push(ps_boolean(0))
-
- def ps_array(self):
- num = self.pop("integertype")
- array = ps_array([None] * num.value)
- self.push(array)
-
- def ps_astore(self):
- array = self.pop("arraytype")
- for i in range(len(array.value) - 1, -1, -1):
- array.value[i] = self.pop()
- self.push(array)
-
- def ps_load(self):
- name = self.pop()
- self.push(self.resolve_name(name.value))
-
- def ps_put(self):
- obj1 = self.pop()
- obj2 = self.pop()
- obj3 = self.pop("arraytype", "dicttype", "stringtype", "proceduretype")
- tp = obj3.type
- if tp == "arraytype" or tp == "proceduretype":
- obj3.value[obj2.value] = obj1
- elif tp == "dicttype":
- obj3.value[obj2.value] = obj1
- elif tp == "stringtype":
- index = obj2.value
- obj3.value = obj3.value[:index] + chr(obj1.value) + obj3.value[index + 1 :]
-
- def ps_get(self):
- obj1 = self.pop()
- if obj1.value == "Encoding":
- pass
- obj2 = self.pop(
- "arraytype", "dicttype", "stringtype", "proceduretype", "fonttype"
- )
- tp = obj2.type
- if tp in ("arraytype", "proceduretype"):
- self.push(obj2.value[obj1.value])
- elif tp in ("dicttype", "fonttype"):
- self.push(obj2.value[obj1.value])
- elif tp == "stringtype":
- self.push(ps_integer(ord(obj2.value[obj1.value])))
- else:
- assert False, "shouldn't get here"
-
- def ps_getinterval(self):
- obj1 = self.pop("integertype")
- obj2 = self.pop("integertype")
- obj3 = self.pop("arraytype", "stringtype")
- tp = obj3.type
- if tp == "arraytype":
- self.push(ps_array(obj3.value[obj2.value : obj2.value + obj1.value]))
- elif tp == "stringtype":
- self.push(ps_string(obj3.value[obj2.value : obj2.value + obj1.value]))
-
- def ps_putinterval(self):
- obj1 = self.pop("arraytype", "stringtype")
- obj2 = self.pop("integertype")
- obj3 = self.pop("arraytype", "stringtype")
- tp = obj3.type
- if tp == "arraytype":
- obj3.value[obj2.value : obj2.value + len(obj1.value)] = obj1.value
- elif tp == "stringtype":
- newstr = obj3.value[: obj2.value]
- newstr = newstr + obj1.value
- newstr = newstr + obj3.value[obj2.value + len(obj1.value) :]
- obj3.value = newstr
-
- def ps_cvn(self):
- self.push(ps_name(self.pop("stringtype").value))
-
- def ps_index(self):
- n = self.pop("integertype").value
- if n < 0:
- raise RuntimeError("index may not be negative")
- self.push(self.stack[-1 - n])
-
- def ps_for(self):
- proc = self.pop("proceduretype")
- limit = self.pop("integertype", "realtype").value
- increment = self.pop("integertype", "realtype").value
- i = self.pop("integertype", "realtype").value
- while 1:
- if increment > 0:
- if i > limit:
- break
- else:
- if i < limit:
- break
- if type(i) == type(0.0):
- self.push(ps_real(i))
- else:
- self.push(ps_integer(i))
- self.call_procedure(proc)
- i = i + increment
-
- def ps_forall(self):
- proc = self.pop("proceduretype")
- obj = self.pop("arraytype", "stringtype", "dicttype")
- tp = obj.type
- if tp == "arraytype":
- for item in obj.value:
- self.push(item)
- self.call_procedure(proc)
- elif tp == "stringtype":
- for item in obj.value:
- self.push(ps_integer(ord(item)))
- self.call_procedure(proc)
- elif tp == "dicttype":
- for key, value in obj.value.items():
- self.push(ps_name(key))
- self.push(value)
- self.call_procedure(proc)
-
- def ps_definefont(self):
- font = self.pop("dicttype")
- name = self.pop()
- font = ps_font(font.value)
- self.dictstack[0]["FontDirectory"].value[name.value] = font
- self.push(font)
-
- def ps_findfont(self):
- name = self.pop()
- font = self.dictstack[0]["FontDirectory"].value[name.value]
- self.push(font)
-
- def ps_pop(self):
- self.pop()
-
- def ps_dict(self):
- self.pop("integertype")
- self.push(ps_dict({}))
-
- def ps_begin(self):
- self.dictstack.append(self.pop("dicttype").value)
-
- def ps_end(self):
- if len(self.dictstack) > 2:
- del self.dictstack[-1]
- else:
- raise RuntimeError("dictstack underflow")
-
-
-notdef = ".notdef"
-from fontTools.encodings.StandardEncoding import StandardEncoding
-
-ps_StandardEncoding = list(map(ps_name, StandardEncoding))
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/cli.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/cli.py
deleted file mode 100644
index a07fd6dcd0d8256b4bb8db45a8d88cdf2d381ff2..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/cli.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import os
-import argparse
-import logging
-from fontTools.misc.cliTools import makeOutputFileName
-from fontTools.ttLib import TTFont
-from fontTools.pens.qu2cuPen import Qu2CuPen
-from fontTools.pens.ttGlyphPen import TTGlyphPen
-import fontTools
-
-
-logger = logging.getLogger("fontTools.qu2cu")
-
-
-def _font_to_cubic(input_path, output_path=None, **kwargs):
- font = TTFont(input_path)
- logger.info("Converting curves for %s", input_path)
-
- stats = {} if kwargs["dump_stats"] else None
- qu2cu_kwargs = {
- "stats": stats,
- "max_err": kwargs["max_err_em"] * font["head"].unitsPerEm,
- "all_cubic": kwargs["all_cubic"],
- }
-
- assert "gvar" not in font, "Cannot convert variable font"
- glyphSet = font.getGlyphSet()
- glyphOrder = font.getGlyphOrder()
- glyf = font["glyf"]
- for glyphName in glyphOrder:
- glyph = glyphSet[glyphName]
- ttpen = TTGlyphPen(glyphSet)
- pen = Qu2CuPen(ttpen, **qu2cu_kwargs)
- glyph.draw(pen)
- glyf[glyphName] = ttpen.glyph(dropImpliedOnCurves=True)
-
- font["head"].glyphDataFormat = 1
-
- if kwargs["dump_stats"]:
- logger.info("Stats: %s", stats)
-
- logger.info("Saving %s", output_path)
- font.save(output_path)
-
-
-def main(args=None):
- """Convert an OpenType font from quadratic to cubic curves"""
- parser = argparse.ArgumentParser(prog="qu2cu")
- parser.add_argument("--version", action="version", version=fontTools.__version__)
- parser.add_argument(
- "infiles",
- nargs="+",
- metavar="INPUT",
- help="one or more input TTF source file(s).",
- )
- parser.add_argument("-v", "--verbose", action="count", default=0)
- parser.add_argument(
- "-e",
- "--conversion-error",
- type=float,
- metavar="ERROR",
- default=0.001,
- help="maxiumum approximation error measured in EM (default: 0.001)",
- )
- parser.add_argument(
- "-c",
- "--all-cubic",
- default=False,
- action="store_true",
- help="whether to only use cubic curves",
- )
-
- output_parser = parser.add_mutually_exclusive_group()
- output_parser.add_argument(
- "-o",
- "--output-file",
- default=None,
- metavar="OUTPUT",
- help=("output filename for the converted TTF."),
- )
- output_parser.add_argument(
- "-d",
- "--output-dir",
- default=None,
- metavar="DIRECTORY",
- help="output directory where to save converted TTFs",
- )
-
- options = parser.parse_args(args)
-
- if not options.verbose:
- level = "WARNING"
- elif options.verbose == 1:
- level = "INFO"
- else:
- level = "DEBUG"
- logging.basicConfig(level=level)
-
- if len(options.infiles) > 1 and options.output_file:
- parser.error("-o/--output-file can't be used with multile inputs")
-
- if options.output_dir:
- output_dir = options.output_dir
- if not os.path.exists(output_dir):
- os.mkdir(output_dir)
- elif not os.path.isdir(output_dir):
- parser.error("'%s' is not a directory" % output_dir)
- output_paths = [
- os.path.join(output_dir, os.path.basename(p)) for p in options.infiles
- ]
- elif options.output_file:
- output_paths = [options.output_file]
- else:
- output_paths = [
- makeOutputFileName(p, overWrite=True, suffix=".cubic")
- for p in options.infiles
- ]
-
- kwargs = dict(
- dump_stats=options.verbose > 0,
- max_err_em=options.conversion_error,
- all_cubic=options.all_cubic,
- )
-
- for input_path, output_path in zip(options.infiles, output_paths):
- _font_to_cubic(input_path, output_path, **kwargs)
diff --git a/spaces/cncn102/bingo1/src/components/tailwind-indicator.tsx b/spaces/cncn102/bingo1/src/components/tailwind-indicator.tsx
deleted file mode 100644
index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000
--- a/spaces/cncn102/bingo1/src/components/tailwind-indicator.tsx
+++ /dev/null
@@ -1,14 +0,0 @@
-export function TailwindIndicator() {
- if (process.env.NODE_ENV === 'production') return null
-
- return (
-
-
xs
-
sm
-
md
-
lg
-
xl
-
2xl
-
- )
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideoencdsp_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideoencdsp_init_arm.c
deleted file mode 100644
index a95b5bebe9a63e6525faad730fe059cc15d41f78..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideoencdsp_init_arm.c
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-
-#include "libavutil/attributes.h"
-#include "libavutil/cpu.h"
-#include "libavutil/arm/cpu.h"
-#include "libavcodec/avcodec.h"
-#include "libavcodec/mpegvideoencdsp.h"
-
-int ff_pix_norm1_armv6(const uint8_t *pix, int line_size);
-int ff_pix_sum_armv6(const uint8_t *pix, int line_size);
-
-av_cold void ff_mpegvideoencdsp_init_arm(MpegvideoEncDSPContext *c,
- AVCodecContext *avctx)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_armv6(cpu_flags)) {
- c->pix_norm1 = ff_pix_norm1_armv6;
- c->pix_sum = ff_pix_sum_armv6;
- }
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Climb Higher and Faster with Getting Over It Mod APK.md b/spaces/congsaPfin/Manga-OCR/logs/Climb Higher and Faster with Getting Over It Mod APK.md
deleted file mode 100644
index e9a7bce79a5633d3dc38b096f9ac323f874a39ae..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Climb Higher and Faster with Getting Over It Mod APK.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Getting Over It Apk with Mods: A Guide for Beginners
-
If you are looking for a game that will test your patience, skill, and sanity, you might want to try Getting Over It with Bennett Foddy. This is a game that has become famous (or infamous) for its extreme difficulty and frustrating gameplay. In this game, you control a man in a pot who has to climb a mountain using only a hammer. The game has no checkpoints, no save system, and no mercy. One wrong move can send you tumbling down to the bottom, undoing all your progress.
However, despite (or because of) its challenge, Getting Over It has also attracted a large fan base who enjoy the game's unique concept, humorous narration, and rewarding feeling of accomplishment. Some fans have even taken it to the next level by creating mods for the game. Mods are modifications that alter the game's appearance, gameplay, or features. Some mods make the game easier, some make it harder, and some just make it more fun.
-
If you are curious about how to play Getting Over It with mods, this article will guide you through the process of downloading and installing them on your Android device. You will also learn about some of the mod features and how to use them, as well as some tips and tricks for getting over it.
-
How to Download and Install Getting Over It Apk with Mods
-
The first step to playing Getting Over It with mods is to download the apk file and the mod files. An apk file is an Android application package that contains all the files needed to run an app on your device. A mod file is a file that contains the modified code or assets for the game.
-
getting over it apk mod unlimited money
-getting over it apk mod free download
-getting over it apk mod latest version
-getting over it apk mod android 1
-getting over it apk mod no ads
-getting over it apk mod revdl
-getting over it apk mod hack
-getting over it apk mod unlocked
-getting over it apk mod rexdl
-getting over it apk mod 2023
-getting over it apk mod unlimited lives
-getting over it apk mod offline
-getting over it apk mod god mode
-getting over it apk mod mega
-getting over it apk mod mediafıre
-getting over it apk mod obb
-getting over it apk mod premium
-getting over it apk mod full version
-getting over it apk mod unlimited coins
-getting over it apk mod all levels
-getting over it apk mod easy mode
-getting over it apk mod online
-getting over it apk mod cheats
-getting over it apk mod happy mod
-getting over it apk mod 1.9.4
-getting over it apk mod 1.9.3
-getting over it apk mod 1.9.2
-getting over it apk mod 1.9.1
-getting over it apk mod 1.9.0
-getting over it apk mod 1.8.9
-getting over it apk mod 1.8.8
-getting over it apk mod 1.8.7
-getting over it apk mod 1.8.6
-getting over it apk mod 1.8.5
-getting over it apk mod 1.8.4
-getting over it apk mod 1.8.3
-getting over it apk mod 1.8.2
-getting over it apk mod 1.8.1
-getting over it apk mod 1.8.0
-getting over it apk mod 1.7.9
-getting over it apk mod 1.7.8
-getting over it apk mod 1.7.7
-getting over it apk mod 1.7.6
-getting over it apk mod 1.7.5
-getting over it apk mod 1.7.4
-getting over it apk mod 1.7.3
-getting over it apk mod 1.7.2
-getting over it apk mod 1.7.1
-
There are many websites that offer apk files and mod files for Getting Over It, but not all of them are safe or reliable. You should always be careful when downloading files from unknown sources, as they may contain viruses or malware that can harm your device or steal your data. One of the websites that we recommend is [APKDone](^1^), which has a large collection of apk files and mod files for various games, including Getting Over It.
-
To download Getting Over It apk with mods from APKDone, follow these steps:
-
-
Go to the APKDone website in your browser.
-
Search for "Getting Over It" in the search bar.
-
Select the version of the game that you want to download. You can choose between the original version or the modded version. The modded version has some features unlocked, such as unlimited gravity, speed, scale, etc.
-
Click on "Download APK" or "Download MOD APK" depending on your choice.
-
Wait for the download to finish.
-
-
Once you have downloaded the apk file and the mod file (if any), you need to install them on your device. To do this, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store.
-
To enable unknown sources on your device settings, follow these steps:
-
-
Go to Settings > Security > Unknown Sources.
-
Toggle on the switch to allow installation of apps from unknown sources.
-
Confirm your choice by tapping OK.
-
-
Now that you have enabled unknown sources, you can install Getting Over It apk with mods on your device. To do this, follow these steps:
-
-
Locate the apk file and the mod file (if any) on your device storage. You can use a file manager app to do this.
-
Tap on the apk file to start the installation process. You may see a warning message that says "This type of file can harm your device". Ignore it and tap on "Install anyway".
-
Wait for the installation to finish. You may see a message that says "App installed". Tap on "Open" to launch the game.
-
If you have downloaded a mod file, you need to replace the original files with the modded ones. To do this, go to Android > Data > com.noodlecake.gettingoverit > files > Managed and delete the Assembly-CSharp.dll file. Then, copy and paste the modded Assembly-CSharp.dll file from your download folder to the same location.
-
-
Congratulations! You have successfully installed Getting Over It apk with mods on your device. You can now enjoy the game with some extra features and options.
-
How to Play Getting Over It with Mods
-
Playing Getting Over It with mods is not much different from playing the original game. You still have to use your hammer to climb the mountain and avoid falling down. However, with mods, you can also access some additional features and options that can enhance your gameplay experience.
-
Some of the mod features that you can use are:
-
-
Unlimited gravity: This feature allows you to adjust the gravity level in the game. You can make it higher or lower depending on your preference. Higher gravity makes the game harder, while lower gravity makes it easier.
-
Unlimited speed: This feature allows you to increase or decrease the speed of your movement. You can make it faster or slower depending on your preference. Faster speed makes the game more exciting, while slower speed makes it more relaxing.
-
Unlimited scale: This feature allows you to change the size of your character and the objects in the game. You can make them bigger or smaller depending on your preference. Bigger size makes the game more challenging, while smaller size makes it more manageable.
-
Unlimited rotation: This feature allows you to rotate your character and the objects in the game. You can make them spin clockwise or counterclockwise depending on your preference. Rotation adds some variety and fun to the game.
-
Unlimited color: This feature allows you to change the color of your character and the objects in the game. You can choose from a range of colors depending on your preference. Color adds some customization and flair to the game.
-
-
To use these mod features, you need to access the mod menu in the game. To do this, tap on the screen with three fingers at the same time. You will see a pop-up window that shows the mod options. You can toggle them on or off by tapping on them. You can also adjust their values by sliding the bars.
-
Here are some tips and tricks for playing Getting Over It with mods:
-
-
Experiment with different combinations of mod features and see how they affect your gameplay. You may find some settings that suit your style or mood better than others.
-
Use moderation when using mod features. Don't make the game too easy or too hard for yourself, as that may ruin the fun and challenge of the game. Find a balance that works for you.
-
Don't forget to enjoy the game's original aspects, such as its narration, music, and graphics. Mods are meant to enhance, not replace, the game's core elements.
-
Don't get discouraged if you fail or fall down in the game. Remember that Getting Over It is a game about perseverance, resilience, and humor. Learn from your mistakes and try again.
-
-
Conclusion
-
Getting Over It with Bennett Foddy is a game that will test your skills, patience, and sanity like no other. It is a game that will make you rage, laugh, cry, and celebrate. It is a game that will challenge you, reward you, and inspire you.
-
If you want to spice up your gameplay experience, you can try playing Getting Over It with mods. Mods are modifications that alter the game's appearance, gameplay, or features. Some mods make the game easier, some make it harder, and some just make it more fun.
-
In this article, we have shown you how to download and install Getting Over It apk with mods on your Android device. We have also shown you how to use some of the mod features and how to play Getting Over It with mods. We hope that this article has been helpful and informative for you.
-
If you are ready to try out Getting Over It with mods, you can download the game and the mods from the link below. Have fun and good luck!
-
[Download Getting Over It Apk with Mods]
-
FAQs
-
Here are some frequently asked questions about Getting Over It with mods:
-
What is the difference between apk and mod?
-
An apk file is an Android application package that contains all the files needed to run an app on your device. A mod file is a file that contains the modified code or assets for the game. You need both files to play Getting Over It with mods.
-
Is it safe to download and install Getting Over It apk with mods?
-
It depends on where you download the files from. Some websites may offer fake or malicious files that can harm your device or steal your data. You should always be careful when downloading files from unknown sources and scan them for viruses or malware before installing them. One of the websites that we recommend is APKDone, which has a large collection of apk files and mod files for various games, including Getting Over It.
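-
As an extra precaution, you can also compare the downloaded file's checksum with the value published by the site you got it from, when one is available. Below is a small, generic Python sketch that prints a file's SHA-256 hash; the file name used here is only an example:
-
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("getting-over-it-mod.apk"))  # compare this value with the published checksum
-
If the two values do not match, the file was altered or corrupted and should not be installed.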
-
Can I play Getting Over It with mods online or offline?
-
You can play Getting Over It with mods offline, as the game does not require an internet connection to run. However, you cannot play Getting Over It with mods online, as the game does not support multiplayer or online features.
-
Can I use mods on other platforms besides Android?
-
No, you cannot use mods on other platforms besides Android. Mods are only compatible with Android devices and cannot be used on iOS, Windows, Mac, or Linux devices.
-
Can I uninstall Getting Over It apk with mods?
-
Yes, you can uninstall Getting Over It apk with mods if you want to. To do this, go to Settings > Apps > Getting Over It > Uninstall. This will remove the game and the mods from your device.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution APK A Must-Have Game for Android Users.md b/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution APK A Must-Have Game for Android Users.md
deleted file mode 100644
index 62ecc4183c2f038603cb538591024b298af73071..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution APK A Must-Have Game for Android Users.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
How to Download Hungry Shark Evolution APKPure
-
Do you love sharks? Do you love arcade games? Do you love eating everything in sight? If you answered yes to any of these questions, then you will love Hungry Shark Evolution, a fun and addictive game where you take control of a hungry shark and go on a feeding frenzy in a vast ocean full of prey and predators.
-
But what if you don't have access to Google Play Store or you want to save some storage space on your device? Don't worry, there's a solution for that. You can download APKPure, a third-party app store that offers free and safe downloads of Android apps and games.
In this article, we will show you how to download Hungry Shark Evolution APKPure on your Android device. We will also give you an overview of the features of Hungry Shark Evolution, some tips and tricks for playing it, and our personal review of the game.
-
Features of Hungry Shark Evolution
-
Hungry Shark Evolution is one of the most popular shark games on Android. It has over 100 million downloads and a 4.5-star rating on Google Play Store. It is also the official game for Shark Week, which is an annual event that celebrates these amazing creatures.
-
So what makes Hungry Shark Evolution so awesome? Here are some of its main features:
-
Sharks
-
The game lets you choose from over 20 different sharks to play with, each with its own unique abilities and appearance. You can start with a small Reef Shark and work your way up to bigger and more powerful sharks like the Great White, Megalodon, or even a prehistoric Mosasaurus.
-
You can also unlock special sharks that have special abilities or features. For example, there's a Robo Shark that can shoot lasers from its eyes, a Zombie Shark that can regenerate health by eating zombies, or a Pyro Shark that can breathe fire and fly.
-
You can also customize your shark with various accessories, such as hats, sunglasses, headphones, or even a crown. These accessories not only make your shark look cool, but also give you some bonuses, such as extra coins, health, or speed.
-
World
-
Hungry Shark Evolution features a huge open world that you can explore freely. You can swim in different areas of the ocean, such as the surface, the deep sea, the arctic, or the prehistoric. Each area has its own scenery, creatures, and secrets to discover.
You can also find portals that take you to other worlds, such as a medieval castle, a pirate ship, or a space station. These worlds have their own challenges and rewards, such as treasure chests, enemies, or power-ups.
-
Missions
-
The game has over 250 missions that you can complete to earn coins and gems. These missions range from simple tasks like eating a certain number of fish, turtles, or humans, to more complex ones like finding hidden objects, performing stunts, or defeating bosses.
-
Some missions are specific to each shark, while others are common to all sharks. Completing missions not only gives you rewards, but also increases your shark's level and stats.
-
Equipment
-
The game also lets you equip your shark with various gadgets that enhance its abilities or give it new ones. For example, you can equip a jetpack that lets you fly in the air, a laser that lets you shoot beams from your eyes, or a magnet that attracts coins and gems.
-
You can also equip baby sharks that follow you around and help you eat more creatures. There are over 30 baby sharks to choose from, each with its own special ability or feature. For example, there's a baby Hammerhead that gives you extra health, a baby Killer Whale that gives you extra speed, or a baby Ghost Shark that makes you invisible.
-
Gold Rush
-
The game also has a special mode called Gold Rush that can be triggered by eating enough gold creatures. Gold creatures are marked with a yellow glow and include fish, crabs, jellyfish, and even humans.
-
When Gold Rush is activated, your shark becomes invincible and can eat anything in its path. It also grows bigger and faster, and earns more coins and points. Gold Rush is a great way to boost your score and have some fun.
-
How to Download Hungry Shark Evolution APKPure
-
Now that you know what Hungry Shark Evolution is all about, you might be wondering how to download it from APKPure. Don't worry, it's very easy and safe. Just follow these simple steps:
-
-
Go to APKPure.com on your Android device's browser. You can also scan the QR code below to go directly to the website.
-
Search for Hungry Shark Evolution in the search bar or browse the categories until you find it.
-
Tap on the green Download APK button to start downloading the game file. You might see a warning message saying that this type of file can harm your device. Ignore it and tap OK.
-
Once the download is complete, open the file and tap Install. You might need to enable Unknown Sources in your device's settings to allow the installation of apps from sources other than Google Play Store.
-
Wait for the installation to finish and then tap Open to launch the game.
-
Enjoy playing Hungry Shark Evolution APKPure on your Android device!
-
-
(Screenshots of the download process were included at this point in the original article.)
-
Tips and Tricks for Hungry Shark Evolution
-
Hungry Shark Evolution is a fun and easy game to play, but it can also be challenging and addictive. Here are some tips and tricks that will help you survive longer, earn more coins and gems, and have more fun playing Hungry Shark Evolution:
-
-
Keep an eye on your shark's health bar. It will decrease over time and when you get hurt by enemies or hazards. To replenish it, you need to eat constantly. Try to eat a variety of creatures, as some give you more health than others.
-
Use the map to find your way around the ocean. You can access it by tapping on the compass icon on the top right corner of the screen. The map will show you where you are, where the portals are, where the treasure chests are, and where the enemies and hazards are.
-
Collect coins and gems as much as you can. Coins are used to buy and upgrade sharks and equipment, while gems are used to revive your shark when it dies or to unlock special sharks. You can find coins and gems by eating gold creatures, opening treasure chests, completing missions, or watching ads.
-
Use the equipment wisely. Each equipment has its own benefits and drawbacks, so choose the ones that suit your play style and your shark's abilities. For example, if you have a fast shark, you might want to equip a jetpack to fly faster. If you have a slow shark, you might want to equip a magnet to attract coins and gems.
-
Trigger Gold Rush as often as you can. Gold Rush is the best way to increase your score and earn more coins and gems. To trigger it, you need to eat enough gold creatures in a short time. You can also use some equipment or baby sharks that increase your Gold Rush meter faster.
-
Avoid enemies and hazards that are bigger or stronger than you. They will damage your shark and reduce your health. You can tell if an enemy or hazard is dangerous by looking at its color. If it's green, it's safe to eat. If it's yellow, it's risky to eat. If it's red, it's deadly to eat.
-
Explore the different areas of the ocean and find hidden secrets. There are many things to discover in Hungry Shark Evolution, such as sunken ships, ancient ruins, underwater caves, and more. Some of these secrets contain valuable rewards, such as coins, gems, or power-ups.
-
-
Review of Hungry Shark Evolution APKPure
-
Now that you know how to download and play Hungry Shark Evolution APKPure, you might be wondering what we think of the game. Well, here is our honest review of Hungry Shark Evolution APKPure:
-
We think Hungry Shark Evolution APKPure is a great game for anyone who loves sharks and arcade games. It has amazing graphics, sound effects, and gameplay that make you feel like you are really a hungry shark in a vast ocean.
-
We love how the game offers so much variety and content for players to enjoy. There are so many sharks to choose from, each with its own personality and abilities. There are so many areas to explore, each with its own scenery and creatures. There are so many missions to complete, each with its own challenges and rewards.
-
We also love how the game is easy to play but hard to master. It has simple controls that anyone can learn quickly, but it also has a lot of depth and strategy that require skill and practice. It has a lot of fun and excitement that keep us hooked for hours.
-
The only thing we don't like about the game is that it has too many ads that interrupt the gameplay. We understand that ads are necessary for free games, but we wish they were less frequent or less intrusive. We also wish there was an option to remove them by paying a small fee.
-
Overall, we give Hungry Shark Evolution APKPure a rating of 4 out of 5 stars. We think it is one of the best shark games on Android and we highly recommend it to anyone who likes sharks or arcade games.
-
Conclusion
-
In conclusion, Hungry Shark Evolution APKPure is a fun and addictive game where you take control of a hungry shark and go on a feeding frenzy in a vast ocean full of prey and predators.
-
You can download Hungry Shark Evolution APKPure from APKPure.com, a third-party app store that offers free and safe downloads of Android apps and games.
-
You can also enjoy the features of Hungry Shark Evolution APKPure, such as the different sharks, the open world, the missions, the equipment, and the gold rush.
-
You can also use our tips and tricks for Hungry Shark Evolution APKPure to survive longer, earn more coins and gems, and have more fun playing the game.
-
You can also read our review of Hungry Shark Evolution APKPure to see what we think of the game.
-
We hope you enjoyed this article and found it helpful. If you have any feedback or questions about Hungry Shark Evolution APKPure, please feel free to leave a comment below. We would love to hear from you.
-
Thank you for reading and happy shark hunting!
-
FAQs
-
Here are some frequently asked questions about Hungry Shark Evolution APKPure:
-
-
Is Hungry Shark Evolution APKPure safe to download and install?
-
Yes, Hungry Shark Evolution APKPure is safe to download and install. APKPure is a reputable and trusted app store that verifies the security and authenticity of all the apps and games it offers. You can download Hungry Shark Evolution APKPure without any worries.
-
What are the differences between Hungry Shark Evolution APKPure and Hungry Shark Evolution Google Play Store?
-
There are not many differences between Hungry Shark Evolution APKPure and Hungry Shark Evolution Google Play Store. They are both the same game with the same features and content. The only difference is that Hungry Shark Evolution APKPure is downloaded from APKPure.com, while Hungry Shark Evolution Google Play Store is downloaded from Google Play Store.
-
What are the requirements for playing Hungry Shark Evolution APKPure?
-
The requirements for playing Hungry Shark Evolution APKPure are not very high. You need an Android device with Android 4.1 or higher, at least 100 MB of free storage space, and a stable internet connection.
-
How can I update Hungry Shark Evolution APKPure?
-
You can update Hungry Shark Evolution APKPure by visiting APKPure.com and downloading the latest version of the game. You can also enable the auto-update feature in the APKPure app settings to get notified and updated automatically when a new version is available.
-
How can I contact the developers of Hungry Shark Evolution?
-
You can contact the developers of Hungry Shark Evolution by visiting their official website, Facebook page, Twitter account, or YouTube channel. You can also send them an email at support@fgol.co.uk or use the in-game feedback option.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Dragon Ball Z Shin Budokai 7 with PPSSPP The Complete Tutorial.md b/spaces/congsaPfin/Manga-OCR/logs/Play Dragon Ball Z Shin Budokai 7 with PPSSPP The Complete Tutorial.md
deleted file mode 100644
index 1b3b988ff5bdecb81e8d4356aee318337556ae6b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Play Dragon Ball Z Shin Budokai 7 with PPSSPP The Complete Tutorial.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
Dragon Ball Z Shin Budokai 7 PPSSPP Download Romsmania: A Guide for Fans of Anime and Fighting Games
-
If you are a fan of dragon ball z, one of the most popular anime series of all time, and you love fighting games, then you might be interested in dragon ball z shin budokai 7 ppsspp, a fan-made mod of the original dragon ball z shin budokai game for the PlayStation Portable (PSP). In this article, we will tell you everything you need to know about this game, its features, gameplay, requirements, review, and download link. So, let's get started!
-
What is dragon ball z shin budokai 7 ppsspp?
-
Dragon ball z shin budokai 7 ppsspp is a modded version of dragon ball z shin budokai, a fighting game based on the dragon ball z anime series. The game was released for the PSP in 2006 and was developed by Dimps and published by Atari. The game features a story mode that follows the events of the anime from the Saiyan Saga to the Majin Buu Saga, as well as a versus mode, a tournament mode, a practice mode, and an item shop. The game also has a wireless multiplayer mode that allows up to two players to battle each other using their PSP devices.
-
The modded version of the game, dragon ball z shin budokai 7 ppsspp, adds many new features and improvements to the original game. The mod was created by fans of the anime and the game, who wanted to make it more fun and challenging. The mod includes many new characters, stages, skills, attacks, transformations, and modes from the latest dragon ball z series, such as dragon ball super and dragon ball heroes. The mod also enhances the graphics, sound effects, music, and gameplay of the original game.
-
Why is it popular among fans of the anime and fighting games?
-
Dragon ball z shin budokai 7 ppsspp is popular among fans of the anime and fighting games because it offers them a chance to experience the epic battles and adventures of their favorite characters from the dragon ball z universe. The game has a large roster of characters from different sagas and timelines of the anime, such as Goku, Vegeta, Gohan, Piccolo, Frieza, Cell, Buu, Beerus, Whis, Jiren, Broly, Zamasu, Goku Black, Vegito, Gogeta, Kefla, Caulifla, Kale, Hit, Cabba, Frost, Android 17, Android 18, Trunks, Gotenks, Bardock, Raditz, Nappa, Turles, Cooler, Janemba, Bojack, Omega Shenron, and many more. The game also has many different stages from different planets and dimensions of the anime, such as Earth, Namek, Planet Vegeta, King Kai's Planet, Supreme Kai's Planet, Hell, Heaven, Future Earth, Universe 6, Universe 11, Tournament of Power Arena, and many more. The game also has many different skills and attacks from different forms and techniques of the characters, such as Kamehameha, Galick Gun, Final Flash, Spirit Bomb, Big Bang Attack, Masenko, Special Beam Cannon, Destructo Disc, Death Beam, Solar Flare, Instant Transmission, Kaio-Ken, Super Saiyan, Super Saiyan God, Super Saiyan Blue, Ultra Instinct, Fusion Dance, Potara Earrings, and many more. The game also has many different modes that add variety and challenge to the gameplay, such as story mode, arcade mode, survival mode, time attack mode, team battle mode, and dragon ball collection mode.
-
How does the game play on the PSP emulator?
-
Dragon ball z shin budokai 7 ppsspp is a game that can be played on the PSP emulator, which is a software that allows you to run PSP games on your PC, Android, or iOS devices. The PSP emulator that is recommended for playing this game is PPSSPP, which is a free and open-source emulator that supports many PSP games and features. PPSSPP can run the game smoothly and with high-quality graphics and sound, as long as you have a compatible device and a good configuration.
-
The game plays on the PSP emulator like any other fighting game, with a simple and intuitive control scheme that uses the buttons and analog sticks of the PSP device or the keyboard and mouse of the PC device. The game has a 2D fighting system that allows you to move your character left and right, jump, crouch, dash, guard, and perform various attacks and combos. The game also has a 3D fighting system that allows you to move your character in any direction, fly, teleport, and perform more advanced attacks and combos. The game also has a ki system that allows you to charge your energy, use special skills and transformations, and unleash ultimate attacks. The game also has a dragon rush system that allows you to initiate a cinematic sequence of attacks and counters with your opponent.
-
What are the minimum and recommended requirements for playing the game on the PSP emulator?
-
The minimum and recommended requirements for playing dragon ball z shin budokai 7 ppsspp on the PSP emulator are as follows:
-
-
-
Device
-
Minimum Requirements
-
Recommended Requirements
-
-
-
PC
-
- Windows 7 or higher - 2 GB RAM - 2 GHz dual-core CPU - OpenGL 2.0 compatible GPU - DirectX 9.0c compatible sound card - 1 GB free disk space - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file
-
- Windows 10 or higher - 4 GB RAM or more - 3 GHz quad-core CPU or better - OpenGL 3.0 compatible GPU or better - DirectX 11 compatible sound card or better - 2 GB free disk space or more - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file
-
-
-
Android
-
- Android 4.1 or higher - 1 GB RAM - 1 GHz dual-core CPU - OpenGL ES 2.0 compatible GPU - 1 GB free storage space - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file
-
- Android 6.0 or higher - 2 GB RAM or more - 2 GHz quad-core CPU or better - OpenGL ES 3.0 compatible GPU or better - 2 GB free storage space or more - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file
-
-
-
iOS
-
- iOS 9.0 or higher - iPhone 5s or higher - iPad Air or higher - iPod Touch 6th generation or higher - PPSSPP emulator (jailbroken device required) - Dragon ball z shin budokai 7 ppsspp ISO file
-
- iOS 11.0 or higher - iPhone 6s or higher - iPad Pro or higher - iPod Touch 7th generation or higher - PPSSPP emulator (jailbroken device required) - Dragon ball z shin budokai 7 ppsspp ISO file
-
-
-
How to install and configure the game and the emulator?
-
To install and configure dragon ball z shin budokai 7 ppsspp and the PPSSPP emulator on your device, you need to follow these steps:
-
-
Download the PPSSPP emulator from its official website (https://www.ppsspp.org/) or from the Google Play Store (for Android devices) or from Cydia (for jailbroken iOS devices).
-
Download the dragon ball z shin budokai 7 ppsspp ISO file from its download link (https://romsmania.cc/roms/playstation-portable/dragon-ball-z-shin-budokai-another-road-275007) or from any other trusted source.
-
Extract the ISO file from the zip file using any file extractor app (such as WinRAR, 7-Zip, ZArchiver, etc.).
-
Copy the ISO file to a folder on your device where you can easily access it (such as Downloads, Documents, PSP, etc.).
-
Launch the PPSSPP emulator on your device and tap on the "Games" tab.
-
Navigate to the folder where you copied the ISO file and tap on it to start the game.
-
Enjoy playing dragon ball z shin budokai 7 ppsspp on your device!
-
-
You can also customize the settings of the game and the emulator according to your preferences and device specifications. You can change the graphics, sound, controls, system, and network settings of the emulator by tapping on the "Settings" tab. You can also change the difficulty, language, sound, and display settings of the game by tapping on the "Options" tab in the game menu.
-
-
What are the pros and cons of dragon ball z shin budokai 7 ppsspp?
-
Dragon ball z shin budokai 7 ppsspp is a game that has many pros and cons that you should consider before playing it. Here are some of them:
-
Pros
-
-
The game has a large and diverse roster of characters from different sagas and timelines of the dragon ball z universe.
-
The game has many new and improved features and modes that make it more fun and challenging than the original game.
-
The game has high-quality graphics and sound effects that enhance the immersion and excitement of the gameplay.
-
The game has a simple and intuitive control scheme that makes it easy to play on any device.
-
The game has a wireless multiplayer mode that allows you to battle with your friends or other players online.
-
The game is free to download and play on any device that supports the PSP emulator.
-
-
Cons
-
-
The game is not an official product of Dimps or Atari, but a fan-made mod that may have some bugs and glitches.
-
The game may not run smoothly or properly on some devices that do not meet the minimum or recommended requirements.
-
The game may require some configuration and optimization of the emulator settings to achieve the best performance and quality.
-
The game may have some compatibility issues with some versions or updates of the emulator or the device software.
-
The game may have some legal issues with some regions or countries that do not allow downloading or playing pirated or modded games.
-
-
How does it compare to other dragon ball z games and fighting games on the PSP emulator?
-
Dragon ball z shin budokai 7 ppsspp is a game that compares favorably to other dragon ball z games and fighting games on the PSP emulator. The game has more content, features, modes, characters, stages, skills, attacks, transformations, and options than most of the other games in its genre. The game also has better graphics, sound effects, music, gameplay, and controls than most of the other games in its genre. The game also has a higher replay value, challenge level, and fun factor than most of the other games in its genre. The game also has a loyal fan base, community support, and regular updates than most of the other games in its genre. The game is one of the best dragon ball z games and fighting games on the PSP emulator that you can play right now.
-
Conclusion
-
In conclusion, dragon ball z shin budokai 7 ppsspp is a fan-made mod of dragon ball z shin budokai that adds many new features and improvements to the original game. The game is based on the dragon ball z anime series and features a story mode, a versus mode, a tournament mode, a practice mode, an item shop, and a wireless multiplayer mode. It also offers the large roster of characters, the stages drawn from the different planets and dimensions of the anime, and the many skills, attacks, and transformations described above, from Kamehameha and Final Flash to Super Saiyan Blue, Ultra Instinct, and the fusions. On top of that, it combines a 2D and a 3D fighting system that lets you move and fight in any direction, a ki system for charging energy, using special skills and transformations, and unleashing ultimate attacks, and a dragon rush system that starts a cinematic sequence of attacks and counters with your opponent.
-
Dragon ball z shin budokai 7 ppsspp is a game that can be played on the PSP emulator, which is a software that allows you to run PSP games on your PC, Android, or iOS devices. The PSP emulator that is recommended for playing this game is PPSSPP, which is a free and open-source emulator that supports many PSP games and features. PPSSPP can run the game smoothly and with high-quality graphics and sound, as long as you have a compatible device and a good configuration. The game plays on the PSP emulator like any other fighting game, with a simple and intuitive control scheme that uses the buttons and analog sticks of the PSP device or the keyboard and mouse of the PC device.
-
Dragon ball z shin budokai 7 ppsspp is a game that has many pros and cons that you should consider before playing it. The game has more content, features, modes, characters, stages, skills, attacks, transformations, and options than most of the other games in its genre. The game also has better graphics, sound effects, music, gameplay, and controls than most of the other games in its genre. The game also has a higher replay value, challenge level, and fun factor than most of the other games in its genre. The game also has a loyal fan base, community support, and regular updates than most of the other games in its genre. The game is one of the best dragon ball z games and fighting games on the PSP emulator that you can play right now.
-
However, the game is not an official product of Dimps or Atari, but a fan-made mod that may have some bugs and glitches. The game may not run smoothly or properly on some devices that do not meet the minimum or recommended requirements. The game may require some configuration and optimization of the emulator settings to achieve the best performance and quality. The game may have some compatibility issues with some versions or updates of the emulator or the device software. The game may have some legal issues with some regions or countries that do not allow downloading or playing pirated or modded games.
-
Therefore, if you are a fan of dragon ball z and fighting games, and you want to experience the epic battles and adventures of your favorite characters from the dragon ball z universe, then you should definitely try dragon ball z shin budokai 7 ppsspp on your device. The game will give you hours of fun and entertainment, as well as challenge and satisfaction. The game is free to download and play on any device that supports the PSP emulator. You can download the game from its download link (https://romsmania.cc/roms/playstation-portable/dragon-ball-z-shin-budokai-another-road-275007) or from any other trusted source. You can also follow the instructions given in this article to install and configure the game and the emulator on your device.
-
We hope you enjoyed this article and found it helpful and informative. If you have any questions or feedback about the game or the article, please feel free to leave them in the comments section below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about dragon ball z shin budokai 7 ppsspp:
-
Q: Is dragon ball z shin budokai 7 ppsspp an official game?
-
A: No, dragon ball z shin budokai 7 ppsspp is not an official game, but a fan-made mod of dragon ball z shin budokai, a fighting game based on the dragon ball z anime series.
-
Q: How can I play dragon ball z shin budokai 7 ppsspp on my device?
-
A: You can play dragon ball z shin budokai 7 ppsspp on your device by using the PSP emulator, which is a software that allows you to run PSP games on your PC, Android, or iOS devices. The PSP emulator that is recommended for playing this game is PPSSPP, which is a free and open-source emulator that supports many PSP games and features. You also need to download the dragon ball z shin budokai 7 ppsspp ISO file from its download link or from any other trusted source. You can follow the steps given in this article to install and configure the game and the emulator on your device.
-
Q: What are the differences between dragon ball z shin budokai 7 ppsspp and dragon ball z shin budokai?
-
A: Dragon ball z shin budokai 7 ppsspp is a modded version of dragon ball z shin budokai, which adds many new features and improvements to the original game. The mod includes many new characters, stages, skills, attacks, transformations, and modes from the latest dragon ball z series, such as dragon ball super and dragon ball heroes. The mod also enhances the graphics, sound effects, music, and gameplay of the original game.
-
Q: Is dragon ball z shin budokai 7 ppsspp safe to download and play?
-
A: Dragon ball z shin budokai 7 ppsspp is safe to download and play as long as you download it from a trusted source and scan it for viruses or malware before installing it on your device. You should also make sure that your device meets the minimum or recommended requirements for playing the game on the PSP emulator. You should also be aware of the legal issues that may arise from downloading or playing pirated or modded games in some regions or countries.
-
Q: How can I get more updates and support for dragon ball z shin budokai 7 ppsspp?
-
A: You can get more updates and support for dragon ball z shin budokai 7 ppsspp by following its official Facebook page (https://www.facebook.com/DBZSB7/) or its YouTube channel (https://www.youtube.com/channel/UCiXyfZPwqRKXx69c-5n-MpA). You can also join its Discord server (https://discord.gg/4JNvzGk) or its Reddit community (https://www.reddit.com/r/dbzsb7/) to interact with other fans and players of the game. You can also contact the developers of the mod by sending them an email (dbzsb7@gmail.com) or a message on their social media accounts.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Astebreed Definitive Edition Download] [hack] - How to Unlock All Features and Modes.md b/spaces/contluForse/HuggingGPT/assets/Astebreed Definitive Edition Download] [hack] - How to Unlock All Features and Modes.md
deleted file mode 100644
index 730739240a7431a43cc3d9f0fe13678a96bcff50..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Astebreed Definitive Edition Download] [hack] - How to Unlock All Features and Modes.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Cmo crear un personaje principal de una obra literaria memorable y verosmil.md b/spaces/contluForse/HuggingGPT/assets/Cmo crear un personaje principal de una obra literaria memorable y verosmil.md
deleted file mode 100644
index 4214f99f48b4b5c717cb03ea704159c6d60b22c7..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Cmo crear un personaje principal de una obra literaria memorable y verosmil.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
When we talk about a character, we are referring to the human, animal, or other kinds of individuals, usually fictional, fantastic, or imaginary, that take part in the plot of an artistic work, such as a film narrative, a painting, or a literary story.
Characters are created to inhabit the possible world of the work of art, more or less inspired by the beings we find in the real world, and the plot of such narratives usually revolves around their adventures and misadventures. In media such as film or theatre they are also embodied by actors or represented through illustrations, three-dimensional figures, and so on.
-
In this way, the reader or viewer of a work must accept the existence of the characters as if they were real, even when they are mythological, religious, or fantastic beings, in order to follow them through the story.
-
There are several types of characters in a short story, novel, or any other narrative work. Their classification varies according to their degree of participation, the psychological characterization the author has given them, their evolution within the plot, and so on.
-
The protagonist, therefore, carries out the most important actions of the story. Without their participation, the plot would make no sense. The protagonist's opponent is known as the antagonist, the character who puts obstacles in the way of the main character's goals. A work can have several protagonists and antagonists.
-
A clear example of the difference between protagonist and antagonist can be seen in the hugely successful literary saga built around the figure of Harry Potter. The writer J.K. Rowling made the young wizard who gives the work its title the main character: a boy who studies magic at Hogwarts and lives through countless adventures alongside his inseparable friends Ron and Hermione.
-
However, it is important to note that in literature, as in television series and films, there are also what are known as secondary characters. They play a smaller part in the story being told, but at certain moments they too become especially relevant; it is at those points that the reason for their presence in the work becomes clear.
-
The actor who plays the main character of a work is also called the protagonist. In that sense, the term applies to the real person rather than to the fictional character.
-
It should be said that the characters considered principal within a dramatic work are the protagonist and the antagonist. The characters who confront each other in a story or narrative are known as the antagonist and the protagonist.
-
The main and secondary characters of Don Quijote are what keep the novel moving, which is why in this unPROFESOR lesson we want to explain their main features.
-
The main characters are the ones who make it possible for the plot of the work to advance. Without them there would be no novel, and that is why Cervantes chose the traits of each of these figures carefully, so that his great masterpiece would hold together. Here is an overview of the protagonists of Don Quijote.
-
In the work he is called by different names. His knightly name is Don Quijote de La Mancha, but the novel reveals, in its final pages, the protagonist's true name: Alonso Quijano. The work begins with this main character at home, in a small village in an unidentified part of La Mancha. Don Quijote has gone mad from reading so many books of chivalry, and so he decides to set out and live his own adventure.
-
He is a man of about fifty who lives in his own fantasy world. He leaves the village with some old weapons kept at home as an inheritance from his grandparents, a very old suit of armour, and his horse Rocinante, who will accompany him throughout the work. He is, without doubt, the main character of the work that bears his name.
-
Rocinante is another of the characters of Don Quijote. It is unusual for a horse to appear among the main characters of a novel, but this animal is present in each and every one of the adventures of Don Quijote and Sancho Panza. He is the hidalgo's horse and, although he walks rather slowly, he is very loyal to his master. He is always exhausted and physically as thin as his owner.
-
The truth is that Dulcinea is not often physically present in the work, but she is a main character because she is always on Don Quijote's lips. She is a very beautiful farm girl with whom the hidalgo fell in love. Her real name is Aldonza Lorenzo, but Don Quijote decides to call her Dulcinea del Toboso, because he considers it a name better suited to a novel of chivalry.
-
The secondary characters do not carry the weight of the work, but they are the ones who hold it up. Without their appearances, the protagonists would be left without a story. That is why it is so important to know the secondary characters of Don Quijote and the main features that define them.
-
Now you know the main and secondary characters of Don Quijote and some of the traits of each of them. If you are interested in learning more about this book or a similar one, have a look at our reading section.
-
For that reason, to analyse a work and understand it fully, you need to know what a character is, their hierarchy, their function, their physical and psychological identity, and their role within the whole narrative framework in relation to the other characters.
-
It is worth adding that in the case of Harry Potter and other long sagas, a character gains or loses prominence across the different instalments, as happens with Draco Malfoy, who becomes more important towards the end of the saga.
-
The main character of a narrative is the protagonist: the character who matters most to the story's actions, since (almost) everything in the narrative happens because of them and for them.
-
The main character is also the best developed in a story, the one we know best inside and out and, generally, the one we connect with most, because the story is all about them. They are also the character who evolves the most, who has the strongest motivations, and who has the most to gain, or to lose, in everything that is at stake in the story.
-
After the main character come the secondary characters, who help or prevent the main character from accomplishing their mission. They are the characters who appear often in the story and who manage to pull the strings of the plot, although not as much as the protagonist, of course.
-
As you already know, these are characters of lesser importance than the previous ones, but ones who, at some point in the plot, help or hinder the main character, or the secondary characters, in achieving their goal.
-
It is important to note that, in general, the more important a character is, the more dynamic they are. Thus, in the modern novel, the main character tends to be very dynamic, and the tertiary characters very static.
-
I hope this has helped you understand and appreciate literary works more fully, beyond simply saying that you "love" or "hate" a character. These are strategies I use in all my own literary analyses, so I hope you will put them into practice as well. Happy reading!
-
On 15 February 1929 the Venezuelan writer and politician Rómulo Gallegos published one of his best-known novels, Doña Bárbara, so this Monday we invite you to identify what the main characters of this literary work represent.
-
Cruelty, dictatorship, corruption, barbarism, injustice, mestizaje, class struggle, female empowerment, progress: these are some of the themes reflected in the experiences and traits of the characters of the novel Doña Bárbara, which is why it is worth identifying the importance and meaning of the three principal figures in the text.
-
The title of the novel, Doña Bárbara, refers to its main protagonist, a name with which Gallegos alludes to barbarism through her arbitrary, violent, and malicious behaviour.
-
Marisela, for her part, represents the transition from barbarism and wildness towards progress and development. In this literary work, Gallegos introduces this character as the symbol of the evolution from the primitive and the savage towards refinement and the ideal of civilization.
-
One of the most important classics in the history of literature is Homer's Odyssey. It is an epic poem, published after the Iliad, that recounts the adventures of Odysseus (Ulises, in the Spanish translation) as he tries to return to Ithaca after the Trojan War. In this unPROFESOR lesson we want to explore this literary work in depth and, for that reason, we are going to analyse the main and secondary characters of the Odyssey, all of them essential to the development of the plot. Dive in and discover one of the essential classics of world literature.
-
If we talk about the characters of the Odyssey, we must make special mention of the protagonist of the work: Odysseus. We already met this hero through his part in the Iliad, Homer's account of everything that happened in the Trojan War. Thanks to that poem we know that Odysseus was one of the most important Greek heroes of that war and that, once it was over, he wanted to return to Ithaca, the land he ruled.
-
-
\ No newline at end of file
diff --git a/spaces/cr7-gjx/Suspicion-Agent-Demo/util.py b/spaces/cr7-gjx/Suspicion-Agent-Demo/util.py
deleted file mode 100644
index e9837ba1f053e0711943461e1c4ad13414f64806..0000000000000000000000000000000000000000
--- a/spaces/cr7-gjx/Suspicion-Agent-Demo/util.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import json
-import os
-from pathlib import Path
-from typing import Any, Dict
-
-from model import load_embedding_from_config, load_llm_from_config
-from setting import Settings
-import logging
-from pythonjsonlogger import jsonlogger
-
-
-
-def verify_openai_token(token: str) -> str:
- import openai
-
- openai.api_key = token
- try:
- openai.Completion.create(
- model="text-ada-001",
- prompt="Hello",
- temperature=0,
- max_tokens=10,
- top_p=1,
- frequency_penalty=0.5,
- presence_penalty=0,
- )
- return "OK"
- except Exception as e:
- return str(e)
-
-def get_logging(logger_name,content=''):
- logger = logging.getLogger(logger_name)
- if not logger.handlers:
- logger.setLevel(logging.DEBUG)
- logHandlerJson = logging.FileHandler('./memory_data/'+logger_name+'.json')
- formatter = jsonlogger.JsonFormatter()
- logHandlerJson.setFormatter(formatter)
-
- # handler = logging.FileHandler('./memory_data/'+logger_name+'.txt')
- # handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
- logger.addHandler(logHandlerJson)
- logger.info(content)
-
-
-def verify_model_initialization(settings: Settings) -> str:
- try:
- load_llm_from_config(settings.model.llm)
- except Exception as e:
- return f"LLM initialization check failed: {e}"
-
- try:
- load_embedding_from_config(settings.model.embedding)
- except Exception as e:
- return f"Embedding initialization check failed: {e}"
-
- return "OK"
-
-
-def verify_pinecone_token(token: str) -> str:
- return "OK"
-
-
-def verify_discord_token(token: str) -> str:
- return "OK"
-
-
-def load_json_value(filepath: Path, key: str, default_value: Any) -> Any:
- if not Path(filepath).exists():
- return default_value
- json_obj = load_json(filepath)
- if key not in json_obj:
- return default_value
- return json_obj[key]
-
-
-def set_json_value(filepath: Path, key: str, value: Any) -> None:
- # key needs to follow python naming convention, such as trial_id
- json_obj = load_json(filepath)
- json_obj[key] = value
- with open(filepath, "w+") as json_file:
- json.dump(json_obj, json_file, sort_keys=True)
- json_file.flush()
-
-
-def load_json(filepath: Path) -> Dict:
- if not Path(filepath).exists():
- return {}
- with open(filepath, "r") as file:
- try:
- json_obj = json.load(file)
- return json_obj
- except json.JSONDecodeError as e:
- if os.stat(filepath).st_size == 0:
- # Empty file
- return {}
- else:
- raise e
-
-def load_log(file_name, key_name):
- content_list = []
- key_list = []
- with open('./memory_data/'+file_name) as f:
- contents = f.readlines()
- for i in contents:
- print(i)
- contents = json.loads(i)
- content_list.append(list(contents.values())[1][key_name])
- key_list.append(list(contents.keys())[1])
- return content_list, key_list
-
-def load_log_full(file_name, key_name):
- content_list = []
- key_list = []
- with open(file_name) as f:
- contents = f.readlines()
- for i in contents:
- #print(i)
- contents = json.loads(i)
- if key_name is None:
- content_list.append(list(contents.values())[1])
- else:
- content_list.append(list(contents.values())[1][key_name])
- key_list.append(list(contents.keys())[1])
- return content_list, key_list
-
-def get_checkpoint_dir(agent_file: str) -> str:
- return "./{}.cpt".format(os.path.basename(agent_file))
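-
-
-if __name__ == "__main__":
-    # Minimal usage sketch of the JSON helpers above; the file name and key
-    # used here are purely illustrative.
-    demo_path = Path("demo_state.json")
-    set_json_value(demo_path, "trial_id", 1)
-    print(load_json_value(demo_path, "trial_id", default_value=0))  # -> 1
-    print(load_json(demo_path))  # -> {'trial_id': 1}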
diff --git a/spaces/crimeacs/phase-hunter/README.md b/spaces/crimeacs/phase-hunter/README.md
deleted file mode 100644
index a6230fafc35b487b1cb97fa310608f2f3f171ede..0000000000000000000000000000000000000000
--- a/spaces/crimeacs/phase-hunter/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Phase Hunter
-emoji: 🏹
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/align_trans.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/align_trans.py
deleted file mode 100644
index 07f1eb365462c2ec5bbac6d1854c786b6fd6be90..0000000000000000000000000000000000000000
--- a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/align_trans.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import cv2
-import numpy as np
-
-from .matlab_cp2tform import get_similarity_transform_for_cv2
-
-# reference facial points, a list of coordinates (x,y)
-REFERENCE_FACIAL_POINTS = [[30.29459953, 51.69630051], [65.53179932, 51.50139999], [48.02519989, 71.73660278],
- [33.54930115, 92.3655014], [62.72990036, 92.20410156]]
-
-DEFAULT_CROP_SIZE = (96, 112)
-
-
-class FaceWarpException(Exception):
-
- def __str__(self):
- return 'In File {}:{}'.format(__file__, super.__str__(self))
-
-
-def get_reference_facial_points(output_size=None, inner_padding_factor=0.0, outer_padding=(0, 0), default_square=False):
- """
- Function:
- ----------
- get reference 5 key points according to crop settings:
- 0. Set default crop_size:
- if default_square:
- crop_size = (112, 112)
- else:
- crop_size = (96, 112)
- 1. Pad the crop_size by inner_padding_factor in each side;
- 2. Resize crop_size into (output_size - outer_padding*2),
- pad into output_size with outer_padding;
- 3. Output reference_5point;
- Parameters:
- ----------
- @output_size: (w, h) or None
- size of aligned face image
- @inner_padding_factor: (w_factor, h_factor)
- padding factor for inner (w, h)
- @outer_padding: (w_pad, h_pad)
- padding (in pixels) added on each side of the output image
- @default_square: True or False
- if True:
- default crop_size = (112, 112)
- else:
- default crop_size = (96, 112);
- !!! make sure, if output_size is not None:
- (output_size - outer_padding)
- = some_scale * (default crop_size * (1.0 +
- inner_padding_factor))
- Returns:
- ----------
- @reference_5point: 5x2 np.array
- each row is a pair of transformed coordinates (x, y)
- """
-
- tmp_5pts = np.array(REFERENCE_FACIAL_POINTS)
- tmp_crop_size = np.array(DEFAULT_CROP_SIZE)
-
- # 0) make the inner region a square
- if default_square:
- size_diff = max(tmp_crop_size) - tmp_crop_size
- tmp_5pts += size_diff / 2
- tmp_crop_size += size_diff
-
- if (output_size and output_size[0] == tmp_crop_size[0] and output_size[1] == tmp_crop_size[1]):
-
- return tmp_5pts
-
- if (inner_padding_factor == 0 and outer_padding == (0, 0)):
- if output_size is None:
- return tmp_5pts
- else:
- raise FaceWarpException('No paddings to do, output_size must be None or {}'.format(tmp_crop_size))
-
- # check output size
- if not (0 <= inner_padding_factor <= 1.0):
- raise FaceWarpException('Not (0 <= inner_padding_factor <= 1.0)')
-
- if ((inner_padding_factor > 0 or outer_padding[0] > 0 or outer_padding[1] > 0) and output_size is None):
- output_size = (tmp_crop_size * (1 + inner_padding_factor * 2)).astype(np.int32)  # scale first, then cast to int
- output_size += np.array(outer_padding)
- if not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1]):
- raise FaceWarpException('Not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1])')
-
- # 1) pad the inner region according inner_padding_factor
- if inner_padding_factor > 0:
- size_diff = tmp_crop_size * inner_padding_factor * 2
- tmp_5pts += size_diff / 2
- tmp_crop_size += np.round(size_diff).astype(np.int32)
-
- # 2) resize the padded inner region
- size_bf_outer_pad = np.array(output_size) - np.array(outer_padding) * 2
-
- if size_bf_outer_pad[0] * tmp_crop_size[1] != size_bf_outer_pad[1] * tmp_crop_size[0]:
- raise FaceWarpException('Must have (output_size - outer_padding)'
- '= some_scale * (crop_size * (1.0 + inner_padding_factor)')
-
- scale_factor = size_bf_outer_pad[0].astype(np.float32) / tmp_crop_size[0]
- tmp_5pts = tmp_5pts * scale_factor
- # size_diff = tmp_crop_size * (scale_factor - min(scale_factor))
- # tmp_5pts = tmp_5pts + size_diff / 2
- tmp_crop_size = size_bf_outer_pad
-
- # 3) add outer_padding to make output_size
- reference_5point = tmp_5pts + np.array(outer_padding)
- tmp_crop_size = output_size
-
- return reference_5point
-
-
-def get_affine_transform_matrix(src_pts, dst_pts):
- """
- Function:
- ----------
- get affine transform matrix 'tfm' from src_pts to dst_pts
- Parameters:
- ----------
- @src_pts: Kx2 np.array
- source points matrix, each row is a pair of coordinates (x, y)
- @dst_pts: Kx2 np.array
- destination points matrix, each row is a pair of coordinates (x, y)
- Returns:
- ----------
- @tfm: 2x3 np.array
- transform matrix from src_pts to dst_pts
- """
-
- tfm = np.float32([[1, 0, 0], [0, 1, 0]])
- n_pts = src_pts.shape[0]
- ones = np.ones((n_pts, 1), src_pts.dtype)
- src_pts_ = np.hstack([src_pts, ones])
- dst_pts_ = np.hstack([dst_pts, ones])
-
- A, res, rank, s = np.linalg.lstsq(src_pts_, dst_pts_)
-
- if rank == 3:
- tfm = np.float32([[A[0, 0], A[1, 0], A[2, 0]], [A[0, 1], A[1, 1], A[2, 1]]])
- elif rank == 2:
- tfm = np.float32([[A[0, 0], A[1, 0], 0], [A[0, 1], A[1, 1], 0]])
-
- return tfm
-
-
-def warp_and_crop_face(src_img, facial_pts, reference_pts=None, crop_size=(96, 112), align_type='similarity'):
- """
- Function:
- ----------
- apply affine transform 'trans' to uv
- Parameters:
- ----------
- @src_img: 3x3 np.array
- input image
- @facial_pts: could be
- 1)a list of K coordinates (x,y)
- or
- 2) Kx2 or 2xK np.array
- each row or col is a pair of coordinates (x, y)
- @reference_pts: could be
- 1) a list of K coordinates (x,y)
- or
- 2) Kx2 or 2xK np.array
- each row or col is a pair of coordinates (x, y)
- or
- 3) None
- if None, use default reference facial points
- @crop_size: (w, h)
- output face image size
- @align_type: transform type, could be one of
- 1) 'similarity': use similarity transform
- 2) 'cv2_affine': use the first 3 points to do affine transform,
- by calling cv2.getAffineTransform()
- 3) 'affine': use all points to do affine transform
- Returns:
- ----------
- @face_img: output face image with size (w, h) = @crop_size
- """
-
- if reference_pts is None:
- if crop_size[0] == 96 and crop_size[1] == 112:
- reference_pts = REFERENCE_FACIAL_POINTS
- else:
- default_square = False
- inner_padding_factor = 0
- outer_padding = (0, 0)
- output_size = crop_size
-
- reference_pts = get_reference_facial_points(output_size, inner_padding_factor, outer_padding,
- default_square)
-
- ref_pts = np.float32(reference_pts)
- ref_pts_shp = ref_pts.shape
- if max(ref_pts_shp) < 3 or min(ref_pts_shp) != 2:
- raise FaceWarpException('reference_pts.shape must be (K,2) or (2,K) and K>2')
-
- if ref_pts_shp[0] == 2:
- ref_pts = ref_pts.T
-
- src_pts = np.float32(facial_pts)
- src_pts_shp = src_pts.shape
- if max(src_pts_shp) < 3 or min(src_pts_shp) != 2:
- raise FaceWarpException('facial_pts.shape must be (K,2) or (2,K) and K>2')
-
- if src_pts_shp[0] == 2:
- src_pts = src_pts.T
-
- if src_pts.shape != ref_pts.shape:
- raise FaceWarpException('facial_pts and reference_pts must have the same shape')
-
- if align_type == 'cv2_affine':
- tfm = cv2.getAffineTransform(src_pts[0:3], ref_pts[0:3])
- elif align_type == 'affine':
- tfm = get_affine_transform_matrix(src_pts, ref_pts)
- else:
- tfm = get_similarity_transform_for_cv2(src_pts, ref_pts)
-
- face_img = cv2.warpAffine(src_img, tfm, (crop_size[0], crop_size[1]))
-
- return face_img
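-
-
-# Usage sketch (illustrative only; the landmark coordinates below are made up):
-#
-#   ref5 = get_reference_facial_points(output_size=(112, 112), default_square=True)
-#   landmarks5 = np.array([[215., 180.], [280., 178.], [248., 220.],
-#                          [222., 260.], [276., 258.]])  # eyes, nose tip, mouth corners
-#   aligned = warp_and_crop_face(img_bgr, landmarks5, reference_pts=ref5, crop_size=(112, 112))
-#   # img_bgr is any HxWx3 uint8 image containing the face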
diff --git a/spaces/datasciencedojo/Question-Generator/README.md b/spaces/datasciencedojo/Question-Generator/README.md
deleted file mode 100644
index f6e56b8e9d470fc19d4c053d05c6383d8c1d4e79..0000000000000000000000000000000000000000
--- a/spaces/datasciencedojo/Question-Generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Question Generator
-emoji: 🔥
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/inputs.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/inputs.py
deleted file mode 100644
index 9345530649a0b8843c27d7a0f965ac73bfcce7d6..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/inputs.py
+++ /dev/null
@@ -1,451 +0,0 @@
-# type: ignore
-"""
-This module defines various classes that can serve as the `input` to an interface. Each class must inherit from
-`InputComponent`, and each class must define a path to its template. All of the subclasses of `InputComponent` are
-automatically added to a registry, which allows them to be easily referenced in other parts of the code.
-"""
-
-from __future__ import annotations
-
-from typing import Any, Optional
-
-from gradio import components
-from gradio.deprecation import warn_deprecation
-
-
-def warn_inputs_deprecation():
- warn_deprecation(
- "Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components",
- )
-
-
-class Textbox(components.Textbox):
- def __init__(
- self,
- lines: int = 1,
- placeholder: Optional[str] = None,
- default: str = "",
- numeric: Optional[bool] = False,
- type: Optional[str] = "text",
- label: Optional[str] = None,
- optional: bool = False,
- ):
- warn_inputs_deprecation()
- super().__init__(
- value=default,
- lines=lines,
- placeholder=placeholder,
- label=label,
- numeric=numeric,
- type=type,
- optional=optional,
- )
-
-
-class Number(components.Number):
- """
- Component creates a field for user to enter numeric input. Provides a number as an argument to the wrapped function.
- Input type: float
- """
-
- def __init__(
- self,
- default: Optional[float] = None,
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- default (float): default value.
- label (str): component name in interface.
- optional (bool): If True, the interface can be submitted with no value for this component.
- """
- warn_inputs_deprecation()
- super().__init__(value=default, label=label, optional=optional)
-
-
-class Slider(components.Slider):
- """
- Component creates a slider that ranges from `minimum` to `maximum`. Provides number as an argument to the wrapped function.
- Input type: float
- """
-
- def __init__(
- self,
- minimum: float = 0,
- maximum: float = 100,
- step: Optional[float] = None,
- default: Optional[float] = None,
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- minimum (float): minimum value for slider.
- maximum (float): maximum value for slider.
- step (float): increment between slider values.
- default (float): default value.
- label (str): component name in interface.
- optional (bool): this parameter is ignored.
- """
- warn_inputs_deprecation()
-
- super().__init__(
- value=default,
- minimum=minimum,
- maximum=maximum,
- step=step,
- label=label,
- optional=optional,
- )
-
-
-class Checkbox(components.Checkbox):
- """
- Component creates a checkbox that can be set to `True` or `False`. Provides a boolean as an argument to the wrapped function.
- Input type: bool
- """
-
- def __init__(
- self,
- default: bool = False,
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- label (str): component name in interface.
- default (bool): if True, checked by default.
- optional (bool): this parameter is ignored.
- """
- warn_inputs_deprecation()
- super().__init__(value=default, label=label, optional=optional)
-
-
-class CheckboxGroup(components.CheckboxGroup):
- """
- Component creates a set of checkboxes of which a subset can be selected. Provides a list of strings representing the selected choices as an argument to the wrapped function.
- Input type: Union[List[str], List[int]]
- """
-
- def __init__(
- self,
- choices: list[str],
- default: list[str] | None = None,
- type: str = "value",
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- choices (List[str]): list of options to select from.
- default (List[str]): default selected list of options.
- type (str): Type of value to be returned by component. "value" returns the list of strings of the choices selected, "index" returns the list of indices of the choices selected.
- label (str): component name in interface.
- optional (bool): this parameter is ignored.
- """
- if default is None:
- default = []
- warn_inputs_deprecation()
- super().__init__(
- value=default,
- choices=choices,
- type=type,
- label=label,
- optional=optional,
- )
-
-
-class Radio(components.Radio):
- """
- Component creates a set of radio buttons of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function.
- Input type: Union[str, int]
- """
-
- def __init__(
- self,
- choices: list[str],
- type: str = "value",
- default: Optional[str] = None,
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- choices (List[str]): list of options to select from.
- type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected.
- default (str): the button selected by default. If None, no button is selected by default.
- label (str): component name in interface.
- optional (bool): this parameter is ignored.
- """
- warn_inputs_deprecation()
- super().__init__(
- choices=choices,
- type=type,
- value=default,
- label=label,
- optional=optional,
- )
-
-
-class Dropdown(components.Dropdown):
- """
- Component creates a dropdown of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function.
- Input type: Union[str, int]
- """
-
- def __init__(
- self,
- choices: list[str],
- type: str = "value",
- default: Optional[str] = None,
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- choices (List[str]): list of options to select from.
- type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected.
- default (str): default value selected in dropdown. If None, no value is selected by default.
- label (str): component name in interface.
- optional (bool): this parameter is ignored.
- """
- warn_inputs_deprecation()
- super().__init__(
- choices=choices,
- type=type,
- value=default,
- label=label,
- optional=optional,
- )
-
-
-class Image(components.Image):
- """
- Component creates an image upload box with editing capabilities.
- Input type: Union[numpy.array, PIL.Image, file-object]
- """
-
- def __init__(
- self,
- shape: tuple[int, int] = None,
- image_mode: str = "RGB",
- invert_colors: bool = False,
- source: str = "upload",
- tool: str = "editor",
- type: str = "numpy",
- label: str = None,
- optional: bool = False,
- ):
- """
- Parameters:
- shape (Tuple[int, int]): (width, height) shape to crop and resize image to; if None, matches input image size.
- image_mode (str): How to process the uploaded image. Accepts any of the PIL image modes, e.g. "RGB" for color images, "RGBA" to include the transparency mask, "L" for black-and-white images.
- invert_colors (bool): whether to invert the image as a preprocessing step.
- source (str): Source of image. "upload" creates a box where user can drop an image file, "webcam" allows user to take snapshot from their webcam, "canvas" defaults to a white image that can be edited and drawn upon with tools.
- tool (str): Tools used for editing. "editor" allows a full screen editor, "select" provides a cropping and zoom tool.
- type (str): Type of value to be returned by component. "numpy" returns a numpy array with shape (height, width, 3) and values from 0 to 255, "pil" returns a PIL image object, "file" returns a temporary file object whose path can be retrieved by file_obj.name, "filepath" returns the path directly.
- label (str): component name in interface.
- optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None.
- """
- warn_inputs_deprecation()
- super().__init__(
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- optional=optional,
- )
-
-
-class Video(components.Video):
- """
- Component creates a video file upload that is converted to a file path.
-
- Input type: filepath
- """
-
- def __init__(
- self,
- type: Optional[str] = None,
- source: str = "upload",
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- type (str): Type of video format to be returned by component, such as 'avi' or 'mp4'. If set to None, video will keep uploaded format.
- source (str): Source of video. "upload" creates a box where user can drop an video file, "webcam" allows user to record a video from their webcam.
- label (str): component name in interface.
- optional (bool): If True, the interface can be submitted with no uploaded video, in which case the input value is None.
- """
- warn_inputs_deprecation()
- super().__init__(format=type, source=source, label=label, optional=optional)
-
-
-class Audio(components.Audio):
- """
- Component accepts audio input files.
- Input type: Union[Tuple[int, numpy.array], file-object, numpy.array]
- """
-
- def __init__(
- self,
- source: str = "upload",
- type: str = "numpy",
- label: str = None,
- optional: bool = False,
- ):
- """
- Parameters:
- source (str): Source of audio. "upload" creates a box where user can drop an audio file, "microphone" creates a microphone input.
- type (str): Type of value to be returned by component. "numpy" returns a 2-set tuple with an integer sample_rate and the data numpy.array of shape (samples, 2), "file" returns a temporary file object whose path can be retrieved by file_obj.name, "filepath" returns the path directly.
- label (str): component name in interface.
- optional (bool): If True, the interface can be submitted with no uploaded audio, in which case the input value is None.
- """
- warn_inputs_deprecation()
- super().__init__(source=source, type=type, label=label, optional=optional)
-
-
-class File(components.File):
- """
- Component accepts generic file uploads.
- Input type: Union[file-object, bytes, List[Union[file-object, bytes]]]
- """
-
- def __init__(
- self,
- file_count: str = "single",
- type: str = "file",
- label: Optional[str] = None,
- keep_filename: bool = True,
- optional: bool = False,
- ):
- """
- Parameters:
-            file_count (str): if "single", allows user to upload one file. If "multiple", user uploads multiple files. If "directory", user uploads all files in selected directory. Return type will be a list of files in the case of "multiple" or "directory".
-            type (str): Type of value to be returned by component. "file" returns a temporary file object whose path can be retrieved by file_obj.name, "binary" returns a bytes object.
- label (str): component name in interface.
- keep_filename (bool): DEPRECATED. Original filename always kept.
- optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None.
- """
- warn_inputs_deprecation()
- super().__init__(
- file_count=file_count,
- type=type,
- label=label,
- keep_filename=keep_filename,
- optional=optional,
- )
-
-
-class Dataframe(components.Dataframe):
- """
- Component accepts 2D input through a spreadsheet interface.
- Input type: Union[pandas.DataFrame, numpy.array, List[Union[str, float]], List[List[Union[str, float]]]]
- """
-
- def __init__(
- self,
- headers: Optional[list[str]] = None,
- row_count: int = 3,
- col_count: Optional[int] = 3,
- datatype: str | list[str] = "str",
- col_width: int | list[int] = None,
- default: Optional[list[list[Any]]] = None,
- type: str = "pandas",
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- headers (List[str]): Header names to dataframe. If None, no headers are shown.
- row_count (int): Limit number of rows for input.
- col_count (int): Limit number of columns for input. If equal to 1, return data will be one-dimensional. Ignored if `headers` is provided.
- datatype (Union[str, List[str]]): Datatype of values in sheet. Can be provided per column as a list of strings, or for the entire sheet as a single string. Valid datatypes are "str", "number", "bool", and "date".
- col_width (Union[int, List[int]]): Width of columns in pixels. Can be provided as single value or list of values per column.
- default (List[List[Any]]): Default value
- type (str): Type of value to be returned by component. "pandas" for pandas dataframe, "numpy" for numpy array, or "array" for a Python array.
- label (str): component name in interface.
- optional (bool): this parameter is ignored.
- """
- warn_inputs_deprecation()
- super().__init__(
- value=default,
- headers=headers,
- row_count=row_count,
- col_count=col_count,
- datatype=datatype,
- col_width=col_width,
- type=type,
- label=label,
- optional=optional,
- )
-
-
-class Timeseries(components.Timeseries):
- """
- Component accepts pandas.DataFrame uploaded as a timeseries csv file.
- Input type: pandas.DataFrame
- """
-
- def __init__(
- self,
- x: Optional[str] = None,
- y: str | list[str] = None,
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- x (str): Column name of x (time) series. None if csv has no headers, in which case first column is x series.
- y (Union[str, List[str]]): Column name of y series, or list of column names if multiple series. None if csv has no headers, in which case every column after first is a y series.
- label (str): component name in interface.
- optional (bool): If True, the interface can be submitted with no uploaded csv file, in which case the input value is None.
- """
- warn_inputs_deprecation()
- super().__init__(x=x, y=y, label=label, optional=optional)
-
-
-class State(components.State):
- """
- Special hidden component that stores state across runs of the interface.
- Input type: Any
- """
-
- def __init__(
- self,
- label: str = None,
- default: Any = None,
- ):
- """
- Parameters:
- label (str): component name in interface (not used).
- default (Any): the initial value of the state.
- optional (bool): this parameter is ignored.
- """
- warn_inputs_deprecation()
- super().__init__(value=default, label=label)
-
-
-class Image3D(components.Model3D):
- """
- Used for 3D image model output.
- Input type: File object of type (.obj, glb, or .gltf)
- """
-
- def __init__(
- self,
- label: Optional[str] = None,
- optional: bool = False,
- ):
- """
- Parameters:
- label (str): component name in interface.
- optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None.
- """
- warn_inputs_deprecation()
- super().__init__(label=label, optional=optional)
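Since every class in the deleted module above simply forwards legacy `gradio.inputs` arguments to its `gradio.components` counterpart, the migration that `warn_inputs_deprecation()` points to is mechanical. A hedged sketch (argument values are arbitrary):

```python
import gradio as gr

# legacy, deprecated style (argument names follow the Textbox wrapper above):
#   inp = gr.inputs.Textbox(lines=2, placeholder="Type here", default="hi", label="Input")

# modern equivalent, mirroring the super().__init__ call in that wrapper:
inp = gr.Textbox(lines=2, placeholder="Type here", value="hi", label="Input")
```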
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/subcomponents.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/subcomponents.py
deleted file mode 100644
index 225d86e72d59bba808b00c59f59d6489eda8ccc7..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/subcomponents.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""
-A benchmark which tries to compare the possible slow subparts of validation.
-"""
-from referencing import Registry
-from referencing.jsonschema import DRAFT202012
-from rpds import HashTrieMap, HashTrieSet
-
-from jsonschema import Draft202012Validator
-
-schema = {
- "type": "array",
- "minLength": 1,
- "maxLength": 1,
- "items": {"type": "integer"}
-}
-
-hmap = HashTrieMap()
-hset = HashTrieSet()
-
-registry = Registry()
-
-v = Draft202012Validator(schema)
-
-
-def registry_data_structures():
- return hmap.insert("foo", "bar"), hset.insert("foo")
-
-
-def registry_add():
- resource = DRAFT202012.create_resource(schema)
- return registry.with_resource(uri="urn:example", resource=resource)
-
-
-if __name__ == "__main__":
- from pyperf import Runner
- runner = Runner()
-
- runner.bench_func("HashMap/HashSet insertion", registry_data_structures)
- runner.bench_func("Registry insertion", registry_add)
- runner.bench_func("Success", lambda: v.is_valid([1]))
- runner.bench_func("Failure", lambda: v.is_valid(["foo"]))
- runner.bench_func("Metaschema validation", lambda: v.check_schema(schema))
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py
deleted file mode 100644
index 3dbbdd1d480ecc5ace6529f9005d40d5985529ae..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-"""Functions for parsing Links
-"""
-__all__ = ("parseLinkLabel", "parseLinkDestination", "parseLinkTitle")
-from .parse_link_destination import parseLinkDestination
-from .parse_link_label import parseLinkLabel
-from .parse_link_title import parseLinkTitle
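These helpers are plain functions over a source string and a position. A hedged usage sketch (the exact fields of the returned result object are assumed from typical markdown-it-py usage, not confirmed from this file):

```python
from markdown_it.helpers import parseLinkDestination

src = "<https://example.com> trailing text"
res = parseLinkDestination(src, 0, len(src))
print(res.ok)   # True if a destination was parsed starting at position 0
print(res.str)  # the parsed destination string
print(res.pos)  # index just past the parsed destination
```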
diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py
deleted file mode 100644
index 24a19bacd0a4b789415cfccbce1f8bc99bc493ed..0000000000000000000000000000000000000000
--- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py
+++ /dev/null
@@ -1,301 +0,0 @@
-import json
-import logging
-import math
-import os
-import time
-from contextlib import suppress
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-from open_clip import LPLoss, LPMetrics, lp_gather_features
-from open_clip.utils import do_mixup, get_mix_lambda
-from .distributed import is_master
-from .zero_shot import zero_shot_eval
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
-
-def unwrap_model(model):
- if hasattr(model, "module"):
- return model.module
- else:
- return model
-
-
-def train_one_epoch(
- model,
- data,
- epoch,
- optimizer,
- scaler,
- scheduler,
- args,
- tb_writer=None,
- extra_suffix="",
-):
- device = torch.device(args.device)
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- model.train()
- loss = LPLoss(args.lp_loss)
-
- dataloader, sampler = data["train"].dataloader, data["train"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- num_batches_per_epoch = dataloader.num_batches
- sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10))
-
- # for toy dataset
- if args.dataset_type == "toy":
- dataloader.dataset.generate_queue()
-
- loss_m = AverageMeter()
- batch_time_m = AverageMeter()
- data_time_m = AverageMeter()
- end = time.time()
-
- for i, batch in enumerate(dataloader):
- step = num_batches_per_epoch * epoch + i
-
- if isinstance(scheduler, dict):
- for s in scheduler.values():
- s(step)
- else:
- scheduler(step)
-
- audio = batch # contains mel_spec, wavform, and longer list
- class_label = batch["class_label"]
- # audio = audio.to(device=device, non_blocking=True)
- class_label = class_label.to(device=device, non_blocking=True)
-
- if args.mixup:
- # https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/utils.py#L146
- mix_lambda = torch.from_numpy(
- get_mix_lambda(0.5, len(audio["waveform"]))
- ).to(device)
- class_label = do_mixup(class_label, mix_lambda)
- else:
- mix_lambda = None
-
- data_time_m.update(time.time() - end)
- if isinstance(optimizer, dict):
- for o_ in optimizer.values():
- o_.zero_grad()
- else:
- optimizer.zero_grad()
-
- with autocast():
- pred = model(audio, mix_lambda=mix_lambda, device=device)
- total_loss = loss(pred, class_label)
-
- if isinstance(optimizer, dict):
- if scaler is not None:
- scaler.scale(total_loss).backward()
- for o_ in optimizer.values():
- if args.horovod:
- o_.synchronize()
- scaler.unscale_(o_)
- with o_.skip_synchronize():
- scaler.step(o_)
- else:
- scaler.step(o_)
- scaler.update()
- else:
- total_loss.backward()
- for o_ in optimizer.values():
- o_.step()
- else:
- if scaler is not None:
- scaler.scale(total_loss).backward()
- if args.horovod:
- optimizer.synchronize()
- scaler.unscale_(optimizer)
- with optimizer.skip_synchronize():
- scaler.step(optimizer)
- else:
- scaler.step(optimizer)
- scaler.update()
- else:
- total_loss.backward()
- optimizer.step()
-
- # Note: we clamp to 4.6052 = ln(100), as in the original paper.
- with torch.no_grad():
- unwrap_model(model).clap_model.logit_scale_a.clamp_(0, math.log(100))
- unwrap_model(model).clap_model.logit_scale_t.clamp_(0, math.log(100))
-
- batch_time_m.update(time.time() - end)
- end = time.time()
- batch_count = i + 1
-
- if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch):
- if isinstance(audio, dict):
- batch_size = len(audio["waveform"])
- else:
- batch_size = len(audio)
- num_samples = batch_count * batch_size * args.world_size
- samples_per_epoch = dataloader.num_samples
- percent_complete = 100.0 * batch_count / num_batches_per_epoch
-
- # NOTE loss is coarsely sampled, just master node and per log update
- loss_m.update(total_loss.item(), batch_size)
- if isinstance(optimizer, dict):
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]}"
- )
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()],
- }
- else:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {optimizer.param_groups[0]['lr']:5f} "
- )
-
- # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "lr": optimizer.param_groups[0]["lr"],
- }
- for name, val in log_data.items():
- name = f"train{extra_suffix}/{name}"
- if tb_writer is not None:
- tb_writer.add_scalar(name, val, step)
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- wandb.log({name: val, "step": step})
-
- # resetting batch / data time meters per log window
- batch_time_m.reset()
- data_time_m.reset()
- # end for
-
-
-def evaluate(model, data, epoch, args, tb_writer=None, extra_suffix=""):
- metrics = {}
- if not args.parallel_eval:
- if not is_master(args):
- return metrics
- device = torch.device(args.device)
- model.eval()
-
- # CHANGE
- # zero_shot_metrics = zero_shot_eval(model, data, epoch, args)
- # metrics.update(zero_shot_metrics)
- if is_master(args):
- print("Evaluating...")
- metric_names = args.lp_metrics.split(",")
- eval_tool = LPMetrics(metric_names=metric_names)
-
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- if "val" in data and (
- args.val_frequency
- and ((epoch % args.val_frequency) == 0 or epoch == args.epochs)
- ):
- if args.parallel_eval:
- dataloader, sampler = data["val"].dataloader, data["val"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- samples_per_val = dataloader.num_samples
- else:
- dataloader = data["val"].dataloader
- num_samples = 0
- samples_per_val = dataloader.num_samples
-
- eval_info = {"pred": [], "target": []}
- with torch.no_grad():
- for i, batch in enumerate(dataloader):
- audio = batch # contains mel_spec, wavform, and longer list
- class_label = batch["class_label"]
-
- # audio = audio.to(device=device, non_blocking=True)
- class_label = class_label.to(device=device, non_blocking=True)
-
- with autocast():
- pred = model(audio, device=device)
- if args.parallel_eval:
- pred, class_label = lp_gather_features(
- pred, class_label, args.world_size, args.horovod
- )
- eval_info["pred"].append(pred)
- eval_info["target"].append(class_label)
-
- num_samples += class_label.shape[0]
-
- if (i % 100) == 0: # and i != 0:
- logging.info(
- f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]"
- )
-
- if is_master(args):
- eval_info["pred"] = torch.cat(eval_info["pred"], 0).cpu()
- eval_info["target"] = torch.cat(eval_info["target"], 0).cpu()
- metric_dict = eval_tool.evaluate_mertics(
- eval_info["pred"], eval_info["target"]
- )
- metrics.update(metric_dict)
- if "epoch" not in metrics.keys():
- metrics.update({"epoch": epoch})
-
- if is_master(args):
- if not metrics:
- return metrics
-
- logging.info(
- f"Eval Epoch: {epoch} "
- + "\n".join(
- ["\t".join([f"{m}: {round(metrics[m], 4):.4f}"]) for m in metrics]
- )
- )
- if args.save_logs:
- for name, val in metrics.items():
- if tb_writer is not None:
- tb_writer.add_scalar(f"val{extra_suffix}/{name}", val, epoch)
-
- with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f:
- f.write(json.dumps(metrics))
- f.write("\n")
-
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- for name, val in metrics.items():
- wandb.log({f"val{extra_suffix}/{name}": val, "epoch": epoch})
-
- return metrics
- else:
- return metrics
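A minimal sketch of how the `AverageMeter` defined at the top of this file behaves, with made-up batch losses (the class above is assumed to be in scope):

```python
meter = AverageMeter()
meter.update(0.9, n=32)   # e.g. batch loss 0.9 averaged over 32 samples
meter.update(0.7, n=32)

print(meter.val)  # 0.7 -> most recent value
print(meter.avg)  # 0.8 -> running average over all 64 samples
```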
diff --git a/spaces/deepthiaj/Electro_oneAPI/app.py b/spaces/deepthiaj/Electro_oneAPI/app.py
deleted file mode 100644
index 4248939773909bfb5072743b694546737b9a7142..0000000000000000000000000000000000000000
--- a/spaces/deepthiaj/Electro_oneAPI/app.py
+++ /dev/null
@@ -1,376 +0,0 @@
-# Import Libraries
-import streamlit as st
-import pandas as pd
-import pickle
-import xgboost as xgb
-import numpy as np
-import sklearn
-from sklearn.metrics import confusion_matrix, classification_report
-import seaborn as sns
-import matplotlib.pyplot as plt
-from io import StringIO
-from scipy import signal
-import daal4py as d4p
-import time
-from sklearn.model_selection import train_test_split
-import tensorflow as tf
-from tensorflow.keras.models import Sequential
-from tensorflow.keras.layers import Dense
-from tensorflow.keras.optimizers import Adam
-from tensorflow.keras.callbacks import EarlyStopping
-from tensorflow.keras.utils import to_categorical
-from sklearnex import patch_sklearn
-patch_sklearn()
-
-# Define Methods
-def diagnostic_models_evaluation(X_train, X_test, y_train, y_test):
-
- # Define the model parameters
- model_params = {
- 'objective': 'multi:softmax',
- 'num_class': 11,
- 'random_state': 42
- }
-
- # Create and train the XGBoost model including early stopping to avoid overfitting
- xgb_model = xgb.XGBClassifier(**model_params)
- eval_set = [(X_test, y_test)]
- xgb_model.fit(X_train, y_train, early_stopping_rounds=15, eval_set=eval_set, verbose=True)
-
- # DAAL model
- daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster())
-
- st.subheader(":blue[Performance evaluation of the Automated Diagnosis Model]")
-
- st.divider()
-
- # Evaluate the model on the entire dataset
- # XGBoost prediction (for accuracy comparison)
- t0 = time.time()
- y_pred = xgb_model.predict(X_test)
- t1 = time.time()
- xgb_errors_count = np.count_nonzero(y_pred - np.ravel(y_test))
-
- xgb_total = t1-t0
- st.write("Prediction time using XGBoost model is ", xgb_total)
- accuracy = np.sum(y_pred == y_test) / len(y_test) # Calculate accuracy
-    acc = accuracy * 100
- st.write("The accuracy of the diagnosis report is: ", acc, "%")
-
-
- st.divider()
-
- # Evaluate the model on the entire dataset
- # Calculate evaluation metrics
- classification_metrics = classification_report(y_test, y_pred, output_dict=True)
- st.caption(":blue[Classification Metrics]")
-
- st.table(classification_metrics)
-
- # st.write("1: Myocardial infarction, 2: Bundle branch block, 3: Dysrhythmia , 4: Valvular heart disease, 5: Myocarditis")
-
- st.divider()
-
- # Calculate confusion matrix
- confusion_mat = confusion_matrix(y_test, y_pred)
-
- # Plot confusion matrix
- htmap = sns.heatmap(confusion_mat, annot=True, fmt="d", cmap="Blues")
- htmap = htmap.figure
- st.pyplot(htmap)
-
- st.divider()
-
- # Make a faster prediction with oneDAL
- n_classes = 11
- # daal4py prediction for increased performance
- daal_predict_algo = d4p.gbt_classification_prediction(
- nClasses=n_classes,
- resultsToEvaluate="computeClassLabels",
- fptype='float'
- )
- t0 = time.time()
- daal_prediction = daal_predict_algo.compute(X_test, daal_model)
- t1 = time.time()
- daal_errors_count = np.count_nonzero(np.ravel(daal_prediction.prediction) - np.ravel(y_test))
-
- d4p_total = t1-t0
- st.write("Prediction time using DAAL model is ", xgb_total)
-
- y_test = np.ravel(y_test)
- daal_prediction = np.ravel(daal_prediction.prediction)
- xgb_prediction = y_pred
-
- st.subheader(":blue[Accuracy & Performance Comparison:]")
- st.subheader(":blue[XGBooster Prediction vs. Daal4py Prediction]")
- st.write("\nXGBoost prediction results (first 10 rows):\n", xgb_prediction[0:10])
- st.write("\ndaal4py prediction results (first 10 rows):\n", daal_prediction[0:10])
- st.write("\nGround truth (first 10 rows):\n", y_test[0:10])
-
- st.write("XGBoost errors count:", xgb_errors_count)
- st.write("XGBoost accuracy score:", 1 - xgb_errors_count / xgb_prediction.shape[0])
-
- st.write("\ndaal4py errors count:", daal_errors_count)
- st.write("daal4py accuracy score:", 1 - daal_errors_count / daal_prediction.shape[0])
-
- st.write("\n XGBoost Prediction Time:", xgb_total)
- st.write("\n daal4py Prediction Time:", d4p_total)
-
- st.subheader("Visualizations")
- st.write("Performance: 'XGBoost Prediction' vs. 'daal4py Prediction'")
-
- pred_times = [xgb_total, d4p_total]
- st.bar_chart(pred_times)
- st.write("speedup:",xgb_total/d4p_total)
- st.write("Accuracy")
-
- xgb_acc = 1 - xgb_errors_count / xgb_prediction.shape[0]
- d4p_acc = 1 - daal_errors_count / daal_prediction.shape[0]
- pred_acc = [xgb_acc, d4p_acc]
- st.bar_chart(pred_acc)
- st.write("Accuracy Difference",xgb_acc-d4p_acc)
-
- st.divider()
-
- return xgb_model, daal_model
-
-
-def DL_diagnostic_model_eval(ECG_data_type, X_train, X_test, y_train, y_test):
-
- num_classes = 11
-
- if ECG_data_type == "15_Leads_ECG_data":
- input_shape = 15
- elif ECG_data_type == "12_Leads_ECG_data":
- input_shape = 12
-
- batch_size = 64 # 32, 64, 128
- num_epochs = 100
-
- dl_model = Sequential()
- dl_model.add(Dense(128, activation='relu', input_shape=(input_shape,))) # Adjust the input_shape to match data
- dl_model.add(Dense(64, activation='relu'))
- dl_model.add(Dense(32, activation='relu'))
- dl_model.add(Dense(num_classes, activation='softmax')) # Adjust the num_classes to match data
-
- dl_model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
-
- # Define the early stopping criteria
- early_stopping = EarlyStopping(monitor='val_loss', patience=10, verbose=1)
-
- # Encode target values as one-hot vectors
- y_train_encoded = to_categorical(y_train, num_classes=11)
- y_test_encoded = to_categorical(y_test, num_classes=11)
-
- # Train the deep learning model with early stopping
- history = dl_model.fit(X_train, y_train_encoded, batch_size=batch_size, epochs=num_epochs,
- validation_data=(X_test, y_test_encoded), verbose=1, callbacks=[early_stopping])
-
- score = dl_model.evaluate(X_test, y_test_encoded, verbose=0)
- st.write('Model test loss:', score[0])
- st.write('Model test accuracy:', score[1])
-
-
- return dl_model
-
-
-def model_gen(signal_data_type):
-
- enc_dat = pd.read_csv("PTB_ECG_df2_enc_f.csv")
-
- if signal_data_type == '15_Leads_ECG_data':
- # Split the dataset into features (X) and target (y)
- X = enc_dat.iloc[:, :-1].values # Features (all columns except the last one)
- y = enc_dat.iloc[:, -1].values # Target (last column "diagnosis")
- # # Map the existing class labels to the expected class values
- # class_mapping = {0: 0, 1: 1, 3: 2, 4: 3, 6: 4, 7: 5}
- # mapped_labels = np.array([class_mapping[label] for label in y])
-
- # split data into train and test sets
- seed = 10
- test_size = 0.10
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
- elif signal_data_type == '12_Leads_ECG_data':
- # Split the dataset into features (X) and target (y) from 12 lead data alone in PTB ECG Diagnostic database
- X = enc_dat.iloc[:, :12].values # Features (all columns except the last one) #CALL PREPROCESSING FROM JUPYTERHUB
- y = enc_dat.iloc[:, -1].values # Target (last column "diagnosis")
- # # Map the existing class labels to the expected class values
- # class_mapping = {0: 0, 1: 1, 3: 2, 4: 3, 6: 4, 7: 5}
- # mapped_labels = np.array([class_mapping[label] for label in y])
-
- # split data into train and test sets
- seed = 10
- test_size = 0.10
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
- else:
- st.write("Please upload a 12-leads ECG data or 15 Leads ECG (12 + 3 Frank vx,vy,vz leads) data to perform the diagnosis for Heart condition")
-
- return X_train, X_test, y_train, y_test
-
-
-def diagnosis_report(predicted_class):
- if (predicted_class == 0).any():
- st.write("Your heart is in good health.")
- st.write("Kindly follow your regular checkup routine.")
- elif (predicted_class == 1).any():
- st.write("You are diagnosed with possibility of Myocardial infarction.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 2).any():
- st.write("You are diagnosed with possibility of Cardiomyopathy.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 3).any():
- st.write("You are diagnosed with possibility of Bundle branch block.")
- st.write("It is recommended that you consult a doctor to the necessary treatment.")
- elif (predicted_class == 4).any():
- st.write("You are diagnosed with possibility of Dysrhythmia.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 5).any():
- st.write("You are diagnosed with possibility of Hypertrophy.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 6).any():
- st.write("You are diagnosed with possibility of Valvular heart disease.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 7).any():
- st.write("You are diagnosed with possibility of Myocarditis.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 8).any():
- st.write("You are diagnosed with possibility of Stable angina.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 9).any():
- st.write("You are diagnosed with possibility of Palpitation.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- elif (predicted_class == 10).any():
- st.write("You are diagnosed with possibility of Unstable angina.")
- st.write("It is recommended that you consult a doctor to take the necessary treatment.")
- else:
- st.write("Sorry, We cannot give your diagnosis report at the moment. Kindly consult a doctor in person.")
-
-
-def ECG_data_uploader(uploaded_file):
- dataframe = X[0]
- if uploaded_file is not None:
- df = pd.read_csv(uploaded_file)
-
- if df.columns[-1] == 'diagnosis':
- data_frame = df.iloc[0,:-1].transpose() #data_frame.iloc[0,:-1]
- st.write("The ECG data uploaded except diagnosis is \n", df.iloc[:,:-1])
- else:
- data_frame = df.transpose() #data_frame.iloc[0,:-1]
- st.write("The ECG data uploaded is \n", df)
- dataframe = data_frame.values # values attribute returns the underlying data of the DataFrame as a 2D ndarray.
-
- else:
- st.sidebar.write("No ECG patient data uploaded")
- return dataframe
-
-
-def preprocess(ecg_test_data):
- st.write('')
-
-#..........................................................................................................................................................#
-# Streamlit App Interface for Diagnosis
-
-st.title("Automated Diagnosis of Heart health condition from Electrocardiogram (ECG) using Intel oneAPI")
-st.write('This app is a prototype for diagnosing heart health condition using Electrocardiogram (ECG).')
-
-st.divider()
-
-with st.container():
- st.subheader(":red[PTB ECG Diagnostic Dataset used for Model deployment]")
-
- if st.button("Visualize ECG data distribution based on diagnosis in PTB ECG Diagnostic Dataset provided by Health Practitioners"):
- ecg_train_dat = pd.read_csv("PTB_ECG_df2.csv")
- diagnosis_counts = ecg_train_dat["diagnosis"].value_counts()
- st.bar_chart(diagnosis_counts)
-
-st.divider()
-
-enc_dat = pd.read_csv("PTB_ECG_df2_enc_f.csv")
-X = enc_dat.iloc[:, :-1].values # Features (all columns except the last one)
-patient_ecg_sel = "Patient001"
-ECG_data_type = "15_Leads_ECG_data"
-
-st.subheader(":red[Prototype Test and Evaluation]")
-patient_enc_data = {"Patient001":X[0],"Patient002":X[100],"Patient003":X[200],"Patient004":X[50],"Patient005":X[40],"Patient006":X[30],"Patient007":X[20],"Patient008":X[10],"Patient009":X[60],"Patient010":X[110],"Patient011":X[120],"Patient012":X[130],"Patient013":X[140],"Patient014":X[150],"Patient015":X[160],"Patient016":X[170],"Patient017":X[180],"Patient018":X[190],"Patient019":X[210],"Patient020":X[220],"Patient021":X[21],"Patient022":X[22],"Patient023":X[23],"Patient024":X[24],"Patient025":X[25],"Patient026":X[26],"Patient027":X[27],"Patient028":X[28],"Patient029":X[29],"Patient030":X[31],"Patient031":X[41],"Patient032":X[42],"Patient033":X[43],"Patient034":X[44],"Patient035":X[45],"Patient036":X[46],"Patient037":X[47],"Patient038":X[48],"Patient039":X[49],"Patient040":X[51],"Patient41":X[61],"Patient042":X[62],"Patient043":X[63],"Patient044":X[64],"Patient045":X[65],"Patient046":X[66],"Patient047":X[67],"Patient048":X[68],"Patient049":X[69],"Patient050":X[71], }
-patient_ecg_sel = st.selectbox( "Select an ECG data of a single patient from the given list", list(patient_enc_data.keys()))
-ecg_test_data = patient_enc_data[patient_ecg_sel]
-
-st.subheader("Diagnosis Report: ")
-st.caption(patient_ecg_sel)
-if st.button("Diagnose"):
- X_train, X_test, y_train, y_test = model_gen(ECG_data_type)
-
- xgb_model, daal_model = diagnostic_models_evaluation(X_train, X_test, y_train, y_test)
- predicted_class_xgb = xgb_model.predict(np.array([ecg_test_data]))
- st.caption("Diagnosis using XGBooster: ")
- diagnosis_report(predicted_class_xgb)
-
- n_classes = 11
- predicted_class_daal = d4p.gbt_classification_prediction(
- nClasses=n_classes,
- resultsToEvaluate="computeClassLabels",
- fptype='float'
- ).compute(np.array([ecg_test_data]), daal_model)
- st.caption("Diagnosis using daal4py: ")
- diagnosis_report(predicted_class_daal.prediction)
-
- st.caption("Diagnosis using Deep Learning: ")
- dl_model = DL_diagnostic_model_eval(ECG_data_type, X_train, X_test, y_train, y_test)
- predicted_class_dl = dl_model.predict(np.array([ecg_test_data]))
- diagnosis_report(predicted_class_dl)
-
-else:
- st.write("Press 'Diagnose' button after selecting the patient data from the dropdown menu.")
-
-
-st.sidebar.subheader('Diagnose Heart Health')
-
-uploaded_file = st.sidebar.file_uploader("Upload ECG file of a single patient in CSV format")
-ecg_test_data = ECG_data_uploader(uploaded_file)
-
-ECG_data_type = "15_Leads_ECG_upload_data"
-ECG_data_types= {"15_Leads_ECG_upload_data":"15 Leads", "12_Leads_ECG_upload_data":"12 Leads"}
-ECG_data_type= st.sidebar.selectbox("Select the number of signal leads used in the ECG data ",list(ECG_data_types.keys()))
-
-
-if st.sidebar.button("Check Your Heart health"):
- st.caption(ECG_data_type)
-
- if ECG_data_type == "15_Leads_ECG_upload_data" :
- ECG_data_type = "15_Leads_ECG_data"
- elif ECG_data_type == "12_Leads_ECG_upload_data" :
- ECG_data_type = "12_Leads_ECG_data"
-
- X_train, X_test, y_train, y_test = model_gen(ECG_data_type)
-
- xgb_model, daal_model = diagnostic_models_evaluation(X_train, X_test, y_train, y_test)
- ecg_test_data_xgb = np.array([ecg_test_data]) # Convert to 2-dimensional array
- ecg_test_data_xgb = np.reshape(ecg_test_data_xgb, (1, -1)) # Reshape to (1, -1) dimensions
-
- predicted_class_xgb = xgb_model.predict(ecg_test_data_xgb)
- st.caption("Diagnosis using XGBooster: ")
- diagnosis_report(predicted_class_xgb)
-
- n_classes = 11
- predicted_class_daal = d4p.gbt_classification_prediction(
- nClasses=n_classes,
- resultsToEvaluate="computeClassLabels",
- fptype='float'
- ).compute(ecg_test_data_xgb, daal_model)
- st.caption("Diagnosis using daal4py: ")
- diagnosis_report(predicted_class_daal.prediction)
-
- st.caption("Diagnosis using Deep Learning: ")
- dl_model = DL_diagnostic_model_eval(ECG_data_type, X_train, X_test, y_train, y_test)
- predicted_class_dl = dl_model.predict(np.array([ecg_test_data]))
- diagnosis_report(predicted_class_dl)
-
-else:
- st.write('')
-
-
-
-
-
-
-
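The core oneAPI hand-off in the app above is converting a trained XGBoost booster into a daal4py gradient-boosted-tree model and predicting through `gbt_classification_prediction`. A standalone sketch with synthetic data (feature and class counts are arbitrary, not the PTB ECG setup):

```python
import numpy as np
import xgboost as xgb
import daal4py as d4p

X = np.random.rand(200, 12).astype(np.float32)   # synthetic "12-lead" features
y = np.random.randint(0, 3, size=200)            # 3 synthetic classes

clf = xgb.XGBClassifier(objective="multi:softmax", num_class=3, random_state=42)
clf.fit(X, y)

# hand the trained booster to daal4py for faster inference
daal_model = d4p.get_gbt_model_from_xgboost(clf.get_booster())
daal_pred = d4p.gbt_classification_prediction(
    nClasses=3, resultsToEvaluate="computeClassLabels", fptype="float"
).compute(X, daal_model)

print(daal_pred.prediction[:5].ravel())  # should generally match clf.predict(X)[:5]
```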
diff --git a/spaces/derek-thomas/top2vec/app/utilities.py b/spaces/derek-thomas/top2vec/app/utilities.py
deleted file mode 100644
index 16446f82e639a8a7c03df769b20772c063894812..0000000000000000000000000000000000000000
--- a/spaces/derek-thomas/top2vec/app/utilities.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from logging import getLogger
-from pathlib import Path
-
-import joblib
-import pandas as pd
-import streamlit as st
-from top2vec import Top2Vec
-
-logger = getLogger(__name__)
-
-proj_dir = Path(__file__).parents[1]
-
-
-def initialization():
- with st.spinner("Loading app..."):
- if 'model' not in st.session_state:
- model = Top2Vec.load('models/model.pkl')
- model._check_model_status()
- model.hierarchical_topic_reduction(num_topics=20)
-
- st.session_state.model = model
- st.session_state.umap_model = joblib.load(proj_dir / 'models' / 'umap.sav')
- logger.info("loading data...")
-
- if 'data' not in st.session_state:
- logger.info("loading data...")
- data = pd.read_csv(proj_dir / 'data' / 'data.csv')
- data['topic_id'] = data['topic_id'].apply(lambda x: f'{x:02d}')
- st.session_state.data = data
- st.session_state.selected_data = data
- st.session_state.all_topics = list(data.topic_id.unique())
-
- if 'topics' not in st.session_state:
- logger.info("loading topics...")
- topics = pd.read_csv(proj_dir / 'data' / 'topics.csv')
- topics['topic_id'] = topics['topic_id'].apply(lambda x: f'{x:02d}')
- st.session_state.topics = topics
- topics_dict = topics[['topic_id', 'topic_0']].to_dict()
- topic_str_to_word = {topics_dict['topic_id'][i]: topics_dict['topic_0'][i] for i in range(20)}
- st.session_state.topic_str_to_word = topic_str_to_word
-
- if 'selected_points' not in st.session_state:
- st.session_state.selected_points = []
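The `initialization()` helper above leans on Streamlit's `st.session_state` as a per-session cache so the Top2Vec model and CSVs are loaded only once. The general pattern, sketched with a stand-in loader:

```python
import time
import streamlit as st

def expensive_load():
    time.sleep(1)                 # stand-in for loading a Top2Vec model / CSVs
    return {"ready": True}

def get_resource():
    if "resource" not in st.session_state:   # runs only on the first call per session
        st.session_state.resource = expensive_load()
    return st.session_state.resource
```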
diff --git a/spaces/diacanFperku/AutoGPT/Buku Matematika Smp Kelas 8 Semester 2 Erlangga.md b/spaces/diacanFperku/AutoGPT/Buku Matematika Smp Kelas 8 Semester 2 Erlangga.md
deleted file mode 100644
index cd5a272427eb1217ff0cf4543b9fb538490dfe00..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Buku Matematika Smp Kelas 8 Semester 2 Erlangga.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-seluruh dunia ajaib
-
-terima kasih telah mengikuti saya
-
-Russian:
-
-в одном тесте мы изучаем непростые задачи
-
-Непростая программа - это такое значение
-
-В этом тесте вам предстоит задать
-
-Семинаром разбираться со сложными программами,
-
-что будет для вас проще, чем думать
-
-Именно поэтому ваши школьники и студенты в этом учились
-
-каждый через один и тот же день
-
-под любой программой или задачей
-
-Заряжающая программа строит блоки арифметики
-
-В данном тесте мы просто должны будем подсчитать целое число
-
-Мы должны прос 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/File Backup Mikrotik Rb750.epub.md b/spaces/diacanFperku/AutoGPT/File Backup Mikrotik Rb750.epub.md
deleted file mode 100644
index 021983a3b55adc35c41ddb2d6325fbda0d15300b..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/File Backup Mikrotik Rb750.epub.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
- );
-};
diff --git a/spaces/dpe1/beat_manipulator/examples.py b/spaces/dpe1/beat_manipulator/examples.py
deleted file mode 100644
index 7e33ae272ffc3ea9570739d91171e4b2ba03a6b8..0000000000000000000000000000000000000000
--- a/spaces/dpe1/beat_manipulator/examples.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import beat_manipulator as bm, os, random
-
-path = 'F:/Stuff/Music/Tracks/'
-song = 'Phonetick - You.mp3'
-song = path + song
-
-#bm.presets.savetest(song, scale = 1, shift = 0)
-
-bm.beatswap(song, 'random', scale = 1, shift = 0)
-
-#bm.presets.use(song = song, preset = 'dotted snares fast 1', scale = 1)
\ No newline at end of file
diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py
deleted file mode 100644
index 10c0920c1a217af5bb3e1b13077568035ab3b7b5..0000000000000000000000000000000000000000
--- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-DETR Transformer class.
-
-Copy-paste from torch.nn.Transformer with modifications:
- * positional encodings are passed in MHattention
- * extra LN at the end of encoder is removed
- * decoder returns a stack of activations from all decoding layers
-"""
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-from torch import Tensor, nn
-
-from .utils import (
- MLP,
- _get_activation_fn,
- _get_clones,
- gen_encoder_output_proposals,
- gen_sineembed_for_position,
- sigmoid_focal_loss,
-)
-
-
-class TextTransformer(nn.Module):
- def __init__(self, num_layers, d_model=256, nheads=8, dim_feedforward=2048, dropout=0.1):
- super().__init__()
- self.num_layers = num_layers
- self.d_model = d_model
- self.nheads = nheads
- self.dim_feedforward = dim_feedforward
- self.norm = None
-
- single_encoder_layer = TransformerEncoderLayer(
- d_model=d_model, nhead=nheads, dim_feedforward=dim_feedforward, dropout=dropout
- )
- self.layers = _get_clones(single_encoder_layer, num_layers)
-
- def forward(self, memory_text: torch.Tensor, text_attention_mask: torch.Tensor):
- """
-
- Args:
- text_attention_mask: bs, num_token
- memory_text: bs, num_token, d_model
-
- Raises:
- RuntimeError: _description_
-
- Returns:
- output: bs, num_token, d_model
- """
-
- output = memory_text.transpose(0, 1)
-
- for layer in self.layers:
- output = layer(output, src_key_padding_mask=text_attention_mask)
-
- if self.norm is not None:
- output = self.norm(output)
-
- return output.transpose(0, 1)
-
-
-class TransformerEncoderLayer(nn.Module):
- def __init__(
- self,
- d_model,
- nhead,
- dim_feedforward=2048,
- dropout=0.1,
- activation="relu",
- normalize_before=False,
- ):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
-
- self.norm1 = nn.LayerNorm(d_model)
- self.norm2 = nn.LayerNorm(d_model)
- self.dropout1 = nn.Dropout(dropout)
- self.dropout2 = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
- self.nhead = nhead
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward(
- self,
- src,
- src_mask: Optional[Tensor] = None,
- src_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- ):
- # repeat attn mask
-        if src_mask is not None and src_mask.dim() == 3 and src_mask.shape[0] == src.shape[1]:
- # bs, num_q, num_k
- src_mask = src_mask.repeat(self.nhead, 1, 1)
-
- q = k = self.with_pos_embed(src, pos)
-
- src2 = self.self_attn(q, k, value=src, attn_mask=src_mask)[0]
-
- # src2 = self.self_attn(q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0]
- src = src + self.dropout1(src2)
- src = self.norm1(src)
- src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
- src = src + self.dropout2(src2)
- src = self.norm2(src)
- return src
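A hedged usage sketch for the `TextTransformer` above with dummy tensors; shapes follow its `forward()` docstring (`memory_text` is `[bs, num_token, d_model]`, the mask is `[bs, num_token]`), and the classes defined above are assumed to be in scope:

```python
import torch

bs, num_token, d_model = 2, 16, 256
text_encoder = TextTransformer(num_layers=2, d_model=d_model, nheads=8)

memory_text = torch.randn(bs, num_token, d_model)
text_attention_mask = torch.zeros(bs, num_token, dtype=torch.bool)  # no padding

out = text_encoder(memory_text, text_attention_mask)
print(out.shape)  # torch.Size([2, 16, 256])
```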
diff --git a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/nodes/0.2a6e7c35.js b/spaces/dylanebert/gaussian-viewer/public/_app/immutable/nodes/0.2a6e7c35.js
deleted file mode 100644
index 3404e30105b90e1692045141cd9385bee8b3e801..0000000000000000000000000000000000000000
--- a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/nodes/0.2a6e7c35.js
+++ /dev/null
@@ -1 +0,0 @@
-import{s as l,c as r,u as i,g as u,d as _}from"../chunks/scheduler.8b74b908.js";import{S as f,i as c,d as p,t as d}from"../chunks/index.c146e4e6.js";const m=!0,S=Object.freeze(Object.defineProperty({__proto__:null,prerender:m},Symbol.toStringTag,{value:"Module"}));function $(n){let s;const a=n[1].default,e=r(a,n,n[0],null);return{c(){e&&e.c()},l(t){e&&e.l(t)},m(t,o){e&&e.m(t,o),s=!0},p(t,[o]){e&&e.p&&(!s||o&1)&&i(e,a,t,t[0],s?_(a,t[0],o,null):u(t[0]),null)},i(t){s||(p(e,t),s=!0)},o(t){d(e,t),s=!1},d(t){e&&e.d(t)}}}function g(n,s,a){let{$$slots:e={},$$scope:t}=s;return n.$$set=o=>{"$$scope"in o&&a(0,t=o.$$scope)},[t,e]}class v extends f{constructor(s){super(),c(this,s,g,$,l,{})}}export{v as component,S as universal};
diff --git a/spaces/edemgold/conversation-bot/README.md b/spaces/edemgold/conversation-bot/README.md
deleted file mode 100644
index d5b60c29689b12f25f587c065b0c65d30a8bf021..0000000000000000000000000000000000000000
--- a/spaces/edemgold/conversation-bot/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Conversation Bot
-emoji: 🐢
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/utils/visualizer.py b/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/utils/visualizer.py
deleted file mode 100644
index 8c4a1fba06bf6bc680aa59bf645f796283f6f1c6..0000000000000000000000000000000000000000
--- a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/utils/visualizer.py
+++ /dev/null
@@ -1,605 +0,0 @@
-# python 3.7
-"""Utility functions for visualizing results on html page."""
-
-import base64
-import os.path
-import cv2
-import numpy as np
-
-__all__ = [
- 'get_grid_shape', 'get_blank_image', 'load_image', 'save_image',
- 'resize_image', 'add_text_to_image', 'fuse_images', 'HtmlPageVisualizer',
- 'VideoReader', 'VideoWriter', 'adjust_pixel_range'
-]
-
-
-def adjust_pixel_range(images, min_val=-1.0, max_val=1.0, channel_order='NCHW'):
- """Adjusts the pixel range of the input images.
-
- This function assumes the input array (image batch) is with shape [batch_size,
- channel, height, width] if `channel_order = NCHW`, or with shape [batch_size,
- height, width] if `channel_order = NHWC`. The returned images are with shape
- [batch_size, height, width, channel] and pixel range [0, 255].
-
- NOTE: The channel order of output images will remain the same as the input.
-
- Args:
- images: Input images to adjust pixel range.
- min_val: Min value of the input images. (default: -1.0)
- max_val: Max value of the input images. (default: 1.0)
- channel_order: Channel order of the input array. (default: NCHW)
-
- Returns:
- The postprocessed images with dtype `numpy.uint8` and range [0, 255].
-
- Raises:
- ValueError: If the input `images` are not with type `numpy.ndarray` or the
- shape is invalid according to `channel_order`.
- """
- if not isinstance(images, np.ndarray):
- raise ValueError(f'Images should be with type `numpy.ndarray`!')
-
- channel_order = channel_order.upper()
- if channel_order not in ['NCHW', 'NHWC']:
- raise ValueError(f'Invalid channel order `{channel_order}`!')
-
- if images.ndim != 4:
- raise ValueError(f'Input images are expected to be with shape `NCHW` or '
- f'`NHWC`, but `{images.shape}` is received!')
- if channel_order == 'NCHW' and images.shape[1] not in [1, 3]:
- raise ValueError(f'Input images should have 1 or 3 channels under `NCHW` '
- f'channel order!')
- if channel_order == 'NHWC' and images.shape[3] not in [1, 3]:
- raise ValueError(f'Input images should have 1 or 3 channels under `NHWC` '
- f'channel order!')
-
- images = images.astype(np.float32)
- images = (images - min_val) * 255 / (max_val - min_val)
- images = np.clip(images + 0.5, 0, 255).astype(np.uint8)
- if channel_order == 'NCHW':
- images = images.transpose(0, 2, 3, 1)
-
- return images
-
-
-def get_grid_shape(size, row=0, col=0, is_portrait=False):
- """Gets the shape of a grid based on the size.
-
- This function makes greatest effort on making the output grid square if
- neither `row` nor `col` is set. If `is_portrait` is set as `False`, the height
- will always be equal to or smaller than the width. For example, if input
- `size = 16`, output shape will be `(4, 4)`; if input `size = 15`, output shape
- will be (3, 5). Otherwise, the height will always be equal to or larger than
- the width.
-
- Args:
- size: Size (height * width) of the target grid.
- is_portrait: Whether to return a portrait size of a landscape size.
- (default: False)
-
- Returns:
- A two-element tuple, representing height and width respectively.
- """
- assert isinstance(size, int)
- assert isinstance(row, int)
- assert isinstance(col, int)
- if size == 0:
- return (0, 0)
-
- if row > 0 and col > 0 and row * col != size:
- row = 0
- col = 0
-
- if row > 0 and size % row == 0:
- return (row, size // row)
- if col > 0 and size % col == 0:
- return (size // col, col)
-
- row = int(np.sqrt(size))
- while row > 0:
- if size % row == 0:
- col = size // row
- break
- row = row - 1
-
- return (col, row) if is_portrait else (row, col)
-
-
-def get_blank_image(height, width, channels=3, is_black=True):
- """Gets a blank image, either white of black.
-
- NOTE: This function will always return an image with `RGB` channel order for
- color image and pixel range [0, 255].
-
- Args:
- height: Height of the returned image.
- width: Width of the returned image.
- channels: Number of channels. (default: 3)
- is_black: Whether to return a black image or white image. (default: True)
- """
- shape = (height, width, channels)
- if is_black:
- return np.zeros(shape, dtype=np.uint8)
- return np.ones(shape, dtype=np.uint8) * 255
-
-
-def load_image(path):
- """Loads an image from disk.
-
- NOTE: This function will always return an image with `RGB` channel order for
- color image and pixel range [0, 255].
-
- Args:
- path: Path to load the image from.
-
- Returns:
- An image with dtype `np.ndarray` or `None` if input `path` does not exist.
- """
- if not os.path.isfile(path):
- return None
-
- image = cv2.imread(path)
- return image[:, :, ::-1]
-
-
-def save_image(path, image):
- """Saves an image to disk.
-
- NOTE: The input image (if colorful) is assumed to be with `RGB` channel order
- and pixel range [0, 255].
-
- Args:
- path: Path to save the image to.
- image: Image to save.
- """
- if image is None:
- return
-
- assert len(image.shape) == 3 and image.shape[2] in [1, 3]
- cv2.imwrite(path, image[:, :, ::-1])
-
-
-def resize_image(image, *args, **kwargs):
- """Resizes image.
-
- This is a wrap of `cv2.resize()`.
-
-  NOTE: The channel order of the input image will not be changed.
-
-  Args:
-    image: Image to resize.
-    *args: Additional positional arguments passed to `cv2.resize()`.
-    **kwargs: Additional keyword arguments passed to `cv2.resize()`.
- """
- if image is None:
- return None
-
- assert image.ndim == 3 and image.shape[2] in [1, 3]
- image = cv2.resize(image, *args, **kwargs)
- if image.ndim == 2:
- return image[:, :, np.newaxis]
- return image
-
-
-def add_text_to_image(image,
- text='',
- position=None,
- font=cv2.FONT_HERSHEY_TRIPLEX,
- font_size=1.0,
- line_type=cv2.LINE_8,
- line_width=1,
- color=(255, 255, 255)):
- """Overlays text on given image.
-
- NOTE: The input image is assumed to be with `RGB` channel order.
-
- Args:
- image: The image to overlay text on.
- text: Text content to overlay on the image. (default: '')
- position: Target position (bottom-left corner) to add text. If not set,
- center of the image will be used by default. (default: None)
- font: Font of the text added. (default: cv2.FONT_HERSHEY_TRIPLEX)
- font_size: Font size of the text added. (default: 1.0)
- line_type: Line type used to depict the text. (default: cv2.LINE_8)
- line_width: Line width used to depict the text. (default: 1)
- color: Color of the text added in `RGB` channel order. (default:
- (255, 255, 255))
-
- Returns:
- An image with target text overlayed on.
- """
- if image is None or not text:
- return image
-
- cv2.putText(img=image,
- text=text,
- org=position,
- fontFace=font,
- fontScale=font_size,
- color=color,
- thickness=line_width,
- lineType=line_type,
- bottomLeftOrigin=False)
-
- return image
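-
-# Editorial usage sketch (not part of the original file); the file names are
-# hypothetical. Overlay a label on an RGB image loaded with `load_image()`:
-#   img = load_image('sample.jpg')
-#   if img is not None:
-#     img = add_text_to_image(img, text='seed 0', position=(10, 30))
-#     save_image('sample_labeled.jpg', img)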
-
-
-def fuse_images(images,
- image_size=None,
- row=0,
- col=0,
- is_row_major=True,
- is_portrait=False,
- row_spacing=0,
- col_spacing=0,
- border_left=0,
- border_right=0,
- border_top=0,
- border_bottom=0,
- black_background=True):
- """Fuses a collection of images into an entire image.
-
- Args:
- images: A collection of images to fuse. Should be with shape [num, height,
- width, channels].
- image_size: Int or two-element tuple. This field is used to resize the image
- before fusing. `None` disables resizing. (default: None)
-    row: Number of rows used for image fusion. If not set, this field will be
-      automatically assigned based on `col` and total number of images.
-      (default: 0)
-    col: Number of columns used for image fusion. If not set, this field will be
-      automatically assigned based on `row` and total number of images.
-      (default: 0)
- is_row_major: Whether the input images should be arranged row-major or
- column-major. (default: True)
- is_portrait: Only active when both `row` and `col` should be assigned
- automatically. (default: False)
- row_spacing: Space between rows. (default: 0)
- col_spacing: Space between columns. (default: 0)
- border_left: Width of left border. (default: 0)
- border_right: Width of right border. (default: 0)
- border_top: Width of top border. (default: 0)
-    border_bottom: Width of bottom border. (default: 0)
-    black_background: Whether to fill the background (and borders) with black
-      (`True`) or white (`False`). (default: True)
-
- Returns:
- The fused image.
-
- Raises:
-    ValueError: If the input `images` is not with shape [num, height, width,
-      channels].
- """
- if images is None:
- return images
-
- if not images.ndim == 4:
- raise ValueError(f'Input `images` should be with shape [num, height, '
- f'width, channels], but {images.shape} is received!')
-
- num, image_height, image_width, channels = images.shape
- if image_size is not None:
- if isinstance(image_size, int):
- image_size = (image_size, image_size)
- assert isinstance(image_size, (list, tuple)) and len(image_size) == 2
- width, height = image_size
- else:
- height, width = image_height, image_width
- row, col = get_grid_shape(num, row=row, col=col, is_portrait=is_portrait)
- fused_height = (
- height * row + row_spacing * (row - 1) + border_top + border_bottom)
- fused_width = (
- width * col + col_spacing * (col - 1) + border_left + border_right)
- fused_image = get_blank_image(
- fused_height, fused_width, channels=channels, is_black=black_background)
- images = images.reshape(row, col, image_height, image_width, channels)
- if not is_row_major:
- images = images.transpose(1, 0, 2, 3, 4)
-
- for i in range(row):
- y = border_top + i * (height + row_spacing)
- for j in range(col):
- x = border_left + j * (width + col_spacing)
- if image_size is not None:
- image = cv2.resize(images[i, j], image_size)
- else:
- image = images[i, j]
- fused_image[y:y + height, x:x + width] = image
-
- return fused_image
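-
-# Editorial usage sketch (not part of the original file): tiling 16 images of
-# shape [16, 64, 64, 3] into a 4x4 grid with 2-pixel gaps between cells:
-#   grid = fuse_images(images, row=4, col=4, row_spacing=2, col_spacing=2)
-#   # grid.shape == (4 * 64 + 3 * 2, 4 * 64 + 3 * 2, 3) == (262, 262, 3)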
-
-
-def get_sortable_html_header(column_name_list, sort_by_ascending=False):
- """Gets header for sortable html page.
-
- Basically, the html page contains a sortable table, where user can sort the
- rows by a particular column by clicking the column head.
-
- Example:
-
- column_name_list = [name_1, name_2, name_3]
- header = get_sortable_html_header(column_name_list)
- footer = get_sortable_html_footer()
- sortable_table = ...
- html_page = header + sortable_table + footer
-
- Args:
- column_name_list: List of column header names.
- sort_by_ascending: Default sorting order. If set as `True`, the html page
- will be sorted by ascending order when the header is clicked for the first
- time.
-
- Returns:
- A string, which represents for the header for a sortable html page.
- """
-  header = '\n'.join([
-      '<script type="text/javascript">',
-      'var column_idx;',
-      'var sort_by_ascending = ' + str(sort_by_ascending).lower() + ';',
-      '',
-      'function sorting(tbody, column_idx){',
-      '  this.column_idx = column_idx;',
-      '  Array.from(tbody.rows)',
-      '       .sort(compareCells)',
-      '       .forEach(function(row) { tbody.appendChild(row); });',
-      '  sort_by_ascending = !sort_by_ascending;',
-      '}',
-      '',
-      'function compareCells(row_a, row_b) {',
-      '  var val_a = row_a.cells[column_idx].innerText;',
-      '  var val_b = row_b.cells[column_idx].innerText;',
-      '  var flag = sort_by_ascending ? 1 : -1;',
-      '  return flag * (val_a > val_b ? 1 : -1);',
-      '}',
-      '</script>',
-      '',
-      '<html>',
-      '',
-      '<head>',
-      '<style>',
-      '  table { border-spacing: 0; border: 1px solid black; }',
-      '  th { cursor: pointer; }',
-      '  th, td { text-align: left; vertical-align: middle;',
-      '           border-collapse: collapse; border: 0.5px solid black;',
-      '           padding: 8px; }',
-      '  tr:nth-child(even) { background-color: #d2d2d2; }',
-      '</style>',
-      '</head>',
-      '',
-      '<body>',
-      '',
-      '<table>',
-      '<thead>',
-      '<tr>',
-      ''])
-  for idx, column_name in enumerate(column_name_list):
-    header += f'  <th onclick="sorting(tbody, {idx})">{column_name}</th>\n'
-  header += '</tr>\n'
-  header += '</thead>\n'
-  header += '<tbody id="tbody">\n'
-
-  return header
-
-
-def get_sortable_html_footer():
- """Gets footer for sortable html page.
-
- Check function `get_sortable_html_header()` for more details.
- """
-  return '</tbody>\n</table>\n\n</body>\n</html>\n'
-
-
-def encode_image_to_html_str(image, image_size=None):
- """Encodes an image to html language.
-
- Args:
- image: The input image to encode. Should be with `RGB` channel order.
- image_size: Int or two-element tuple. This field is used to resize the image
- before encoding. `None` disables resizing. (default: None)
-
- Returns:
- A string which represents the encoded image.
- """
- if image is None:
- return ''
-
- assert len(image.shape) == 3 and image.shape[2] in [1, 3]
-
- # Change channel order to `BGR`, which is opencv-friendly.
- image = image[:, :, ::-1]
-
- # Resize the image if needed.
- if image_size is not None:
- if isinstance(image_size, int):
- image_size = (image_size, image_size)
- assert isinstance(image_size, (list, tuple)) and len(image_size) == 2
- image = cv2.resize(image, image_size)
-
- # Encode the image to html-format string.
- encoded_image = cv2.imencode(".jpg", image)[1].tostring()
- encoded_image_base64 = base64.b64encode(encoded_image).decode('utf-8')
-  html_str = f'<img src="data:image/jpeg;base64, {encoded_image_base64}"/>'
-
- return html_str
-
-
-class HtmlPageVisualizer(object):
- """Defines the html page visualizer.
-
- This class can be used to visualize image results as html page. Basically, it
- is based on an html-format sorted table with helper functions
- `get_sortable_html_header()`, `get_sortable_html_footer()`, and
- `encode_image_to_html_str()`. To simplify the usage, specifying the following
- fields is enough to create a visualization page:
-
- (1) num_rows: Number of rows of the table (header-row exclusive).
- (2) num_cols: Number of columns of the table.
- (3) header contents (optional): Title of each column.
-
- NOTE: `grid_size` can be used to assign `num_rows` and `num_cols`
- automatically.
-
- Example:
-
- html = HtmlPageVisualizer(num_rows, num_cols)
- html.set_headers([...])
- for i in range(num_rows):
- for j in range(num_cols):
- html.set_cell(i, j, text=..., image=...)
- html.save('visualize.html')
- """
-
- def __init__(self,
- num_rows=0,
- num_cols=0,
- grid_size=0,
- is_portrait=False,
- viz_size=None):
- if grid_size > 0:
- num_rows, num_cols = get_grid_shape(
- grid_size, row=num_rows, col=num_cols, is_portrait=is_portrait)
- assert num_rows > 0 and num_cols > 0
-
- self.num_rows = num_rows
- self.num_cols = num_cols
- self.viz_size = viz_size
- self.headers = ['' for _ in range(self.num_cols)]
- self.cells = [[{
- 'text': '',
- 'image': '',
- } for _ in range(self.num_cols)] for _ in range(self.num_rows)]
-
- def set_header(self, column_idx, content):
- """Sets the content of a particular header by column index."""
- self.headers[column_idx] = content
-
- def set_headers(self, contents):
- """Sets the contents of all headers."""
- if isinstance(contents, str):
- contents = [contents]
- assert isinstance(contents, (list, tuple))
- assert len(contents) == self.num_cols
- for column_idx, content in enumerate(contents):
- self.set_header(column_idx, content)
-
- def set_cell(self, row_idx, column_idx, text='', image=None):
- """Sets the content of a particular cell.
-
- Basically, a cell contains some text as well as an image. Both text and
- image can be empty.
-
- Args:
- row_idx: Row index of the cell to edit.
- column_idx: Column index of the cell to edit.
- text: Text to add into the target cell.
- image: Image to show in the target cell. Should be with `RGB` channel
- order.
- """
- self.cells[row_idx][column_idx]['text'] = text
- self.cells[row_idx][column_idx]['image'] = encode_image_to_html_str(
- image, self.viz_size)
-
- def save(self, save_path):
- """Saves the html page."""
- html = ''
- for i in range(self.num_rows):
-      html += f'<tr>\n'
-      for j in range(self.num_cols):
-        text = self.cells[i][j]['text']
-        image = self.cells[i][j]['image']
-        if text:
-          html += f'  <td>{text}<br><br>{image}</td>\n'
-        else:
-          html += f'  <td>{image}</td>\n'
-      html += f'</tr>\n'
-
- header = get_sortable_html_header(self.headers)
- footer = get_sortable_html_footer()
-
- with open(save_path, 'w') as f:
- f.write(header + html + footer)
-
-
-class VideoReader(object):
- """Defines the video reader.
-
- This class can be used to read frames from a given video.
- """
-
- def __init__(self, path):
- """Initializes the video reader by loading the video from disk."""
- if not os.path.isfile(path):
- raise ValueError(f'Video `{path}` does not exist!')
-
- self.path = path
- self.video = cv2.VideoCapture(path)
- assert self.video.isOpened()
- self.position = 0
-
- self.length = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT))
- self.frame_height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT))
- self.frame_width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH))
- self.fps = self.video.get(cv2.CAP_PROP_FPS)
-
- def __del__(self):
- """Releases the opened video."""
- self.video.release()
-
- def read(self, position=None):
- """Reads a certain frame.
-
- NOTE: The returned frame is assumed to be with `RGB` channel order.
-
- Args:
- position: Optional. If set, the reader will read frames from the exact
- position. Otherwise, the reader will read next frames. (default: None)
- """
- if position is not None and position < self.length:
- self.video.set(cv2.CAP_PROP_POS_FRAMES, position)
- self.position = position
-
- success, frame = self.video.read()
- self.position = self.position + 1
-
- return frame[:, :, ::-1] if success else None
-
-
-class VideoWriter(object):
- """Defines the video writer.
-
- This class can be used to create a video.
-
-  NOTE: The `.avi` container together with the `DIVX` codec is the most
-  recommended format, since it does not rely on other dependencies.
- """
-
- def __init__(self, path, frame_height, frame_width, fps=24, codec='DIVX'):
- """Creates the video writer."""
- self.path = path
- self.frame_height = frame_height
- self.frame_width = frame_width
- self.fps = fps
- self.codec = codec
-
- self.video = cv2.VideoWriter(filename=path,
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=fps,
- frameSize=(frame_width, frame_height))
-
- def __del__(self):
- """Releases the opened video."""
- self.video.release()
-
- def write(self, frame):
- """Writes a target frame.
-
- NOTE: The input frame is assumed to be with `RGB` channel order.
- """
- self.video.write(frame[:, :, ::-1])
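-
-# Editorial usage sketch (not part of the original file); the file names are
-# hypothetical. Copying a clip frame by frame with the reader/writer pair above:
-#   reader = VideoReader('input.avi')
-#   writer = VideoWriter('output.avi', reader.frame_height, reader.frame_width,
-#                        fps=reader.fps)
-#   frame = reader.read()
-#   while frame is not None:
-#     writer.write(frame)
-#     frame = reader.read()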
diff --git a/spaces/ennet/ChatDev/camel/agents/embodied_agent.py b/spaces/ennet/ChatDev/camel/agents/embodied_agent.py
deleted file mode 100644
index a9bf44872d25216f70296df5ccf9aeecf0ed22b1..0000000000000000000000000000000000000000
--- a/spaces/ennet/ChatDev/camel/agents/embodied_agent.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-from typing import Any, Dict, List, Optional, Tuple
-
-from colorama import Fore
-
-from camel.agents import BaseToolAgent, ChatAgent, HuggingFaceToolAgent
-from camel.messages import ChatMessage, SystemMessage
-from camel.typing import ModelType
-from camel.utils import print_text_animated
-
-
-class EmbodiedAgent(ChatAgent):
- r"""Class for managing conversations of CAMEL Embodied Agents.
-
- Args:
- system_message (SystemMessage): The system message for the chat agent.
- model (ModelType, optional): The LLM model to use for generating
- responses. (default :obj:`ModelType.GPT_4`)
- model_config (Any, optional): Configuration options for the LLM model.
- (default: :obj:`None`)
- message_window_size (int, optional): The maximum number of previous
- messages to include in the context window. If `None`, no windowing
- is performed. (default: :obj:`None`)
- action_space (List[Any], optional): The action space for the embodied
- agent. (default: :obj:`None`)
- verbose (bool, optional): Whether to print the critic's messages.
- logger_color (Any): The color of the logger displayed to the user.
- (default: :obj:`Fore.MAGENTA`)
- """
-
- def __init__(
- self,
- system_message: SystemMessage,
- model: ModelType = ModelType.GPT_4,
- model_config: Optional[Any] = None,
- message_window_size: Optional[int] = None,
- action_space: Optional[List[BaseToolAgent]] = None,
- verbose: bool = False,
- logger_color: Any = Fore.MAGENTA,
- ) -> None:
- default_action_space = [
- HuggingFaceToolAgent('hugging_face_tool_agent', model=model.value),
- ]
- self.action_space = action_space or default_action_space
- action_space_prompt = self.get_action_space_prompt()
- system_message.content = system_message.content.format(
- action_space=action_space_prompt)
- self.verbose = verbose
- self.logger_color = logger_color
- super().__init__(
- system_message=system_message,
- model=model,
- model_config=model_config,
- message_window_size=message_window_size,
- )
-
- def get_action_space_prompt(self) -> str:
- r"""Returns the action space prompt.
-
- Returns:
- str: The action space prompt.
- """
- return "\n".join([
- f"*** {action.name} ***:\n {action.description}"
- for action in self.action_space
- ])
-
- def step(
- self,
- input_message: ChatMessage,
- ) -> Tuple[ChatMessage, bool, Dict[str, Any]]:
- r"""Performs a step in the conversation.
-
- Args:
- input_message (ChatMessage): The input message.
-
- Returns:
- Tuple[ChatMessage, bool, Dict[str, Any]]: A tuple
- containing the output messages, termination status, and
- additional information.
- """
- response = super().step(input_message)
-
- if response.msgs is None or len(response.msgs) == 0:
- raise RuntimeError("Got None output messages.")
- if response.terminated:
- raise RuntimeError(f"{self.__class__.__name__} step failed.")
-
- # NOTE: Only single output messages are supported
- explanations, codes = response.msg.extract_text_and_code_prompts()
-
- if self.verbose:
- for explanation, code in zip(explanations, codes):
- print_text_animated(self.logger_color +
- f"> Explanation:\n{explanation}")
- print_text_animated(self.logger_color + f"> Code:\n{code}")
-
- if len(explanations) > len(codes):
- print_text_animated(self.logger_color +
- f"> Explanation:\n{explanations}")
-
- content = response.msg.content
-
- if codes is not None:
- content = "\n> Executed Results:"
- global_vars = {action.name: action for action in self.action_space}
- for code in codes:
- executed_outputs = code.execute(global_vars)
- content += (
- f"- Python standard output:\n{executed_outputs[0]}\n"
- f"- Local variables:\n{executed_outputs[1]}\n")
- content += "*" * 50 + "\n"
-
- # TODO: Handle errors
- content = input_message.content + (Fore.RESET +
- f"\n> Embodied Actions:\n{content}")
- message = ChatMessage(input_message.role_name, input_message.role_type,
- input_message.meta_dict, input_message.role,
- content)
- return message, response.terminated, response.info
diff --git a/spaces/evaluate-metric/code_eval/code_eval.py b/spaces/evaluate-metric/code_eval/code_eval.py
deleted file mode 100644
index 0885712e698a34067e8faabe6b029ea8d719e024..0000000000000000000000000000000000000000
--- a/spaces/evaluate-metric/code_eval/code_eval.py
+++ /dev/null
@@ -1,213 +0,0 @@
-# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""The CodeEval metric estimates the pass@k metric for code synthesis.
-This is an evaluation harness for the HumanEval problem solving dataset
-described in the paper "Evaluating Large Language Models Trained on Code"
-(https://arxiv.org/abs/2107.03374)."""
-
-import itertools
-import os
-from collections import Counter, defaultdict
-from concurrent.futures import ThreadPoolExecutor, as_completed
-
-import datasets
-import numpy as np
-
-import evaluate
-
-from .execute import check_correctness
-
-
-_CITATION = """\
-@misc{chen2021evaluating,
- title={Evaluating Large Language Models Trained on Code},
- author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan \
-and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards \
-and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray \
-and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf \
-and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray \
-and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser \
-and Mohammad Bavarian and Clemens Winter and Philippe Tillet \
-and Felipe Petroski Such and Dave Cummings and Matthias Plappert \
-and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss \
-and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak \
-and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain \
-and William Saunders and Christopher Hesse and Andrew N. Carr \
-and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa \
-and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati \
-and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei \
-and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
- year={2021},
- eprint={2107.03374},
- archivePrefix={arXiv},
- primaryClass={cs.LG}
-}
-"""
-
-_DESCRIPTION = """\
-This metric implements the evaluation harness for the HumanEval problem solving dataset
-described in the paper "Evaluating Large Language Models Trained on Code"
-(https://arxiv.org/abs/2107.03374).
-"""
-
-
-_KWARGS_DESCRIPTION = """
-Calculates how good are predictions given some references, using certain scores
-Args:
- predictions: list of candidates to evaluate. Each candidates should be a list
- of strings with several code candidates to solve the problem.
- references: a list with a test for each prediction. Each test should evaluate the
- correctness of a code candidate.
- k: number of code candidates to consider in the evaluation (Default: [1, 10, 100])
-    num_workers: number of workers used to evaluate the candidate programs (Default: 4).
-    timeout: maximum time in seconds allowed for each candidate program before it
-        is marked as failed (Default: 3.0).
-Returns:
- pass_at_k: dict with pass rates for each k
- results: dict with granular results of each unittest
-Examples:
- >>> code_eval = evaluate.load("code_eval")
- >>> test_cases = ["assert add(2,3)==5"]
- >>> candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]]
- >>> pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
- >>> print(pass_at_k)
- {'pass@1': 0.5, 'pass@2': 1.0}
-"""
-
-
-_WARNING = """
-################################################################################
- !!!WARNING!!!
-################################################################################
-The "code_eval" metric executes untrusted model-generated code in Python.
-Although it is highly unlikely that model-generated code will do something
-overtly malicious in response to this test suite, model-generated code may act
-destructively due to a lack of model capability or alignment.
-Users are strongly encouraged to sandbox this evaluation suite so that it
-does not perform destructive actions on their host or network. For more
-information on how OpenAI sandboxes its code, see the paper "Evaluating Large
-Language Models Trained on Code" (https://arxiv.org/abs/2107.03374).
-
-Once you have read this disclaimer and taken appropriate precautions,
-set the environment variable HF_ALLOW_CODE_EVAL="1". Within Python you can do this
-with:
-
->>> import os
->>> os.environ["HF_ALLOW_CODE_EVAL"] = "1"
-
-################################################################################\
-"""
-
-_LICENSE = """The MIT License
-
-Copyright (c) OpenAI (https://openai.com)
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE."""
-
-
-@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
-class CodeEval(evaluate.Metric):
- def _info(self):
- return evaluate.MetricInfo(
- # This is the description that will appear on the metrics page.
- description=_DESCRIPTION,
- citation=_CITATION,
- inputs_description=_KWARGS_DESCRIPTION,
- # This defines the format of each prediction and reference
- features=datasets.Features(
- {
- "predictions": datasets.Sequence(datasets.Value("string")),
- "references": datasets.Value("string"),
- }
- ),
- homepage="https://github.com/openai/human-eval",
- codebase_urls=["https://github.com/openai/human-eval"],
- reference_urls=["https://github.com/openai/human-eval"],
- license=_LICENSE,
- )
-
- def _compute(self, predictions, references, k=[1, 10, 100], num_workers=4, timeout=3.0):
- """Returns the scores"""
-
- if os.getenv("HF_ALLOW_CODE_EVAL", 0) != "1":
- raise ValueError(_WARNING)
-
- if os.name == "nt":
- raise NotImplementedError("This metric is currently not supported on Windows.")
-
- with ThreadPoolExecutor(max_workers=num_workers) as executor:
- futures = []
- completion_id = Counter()
- n_samples = 0
- results = defaultdict(list)
-
- for task_id, (candidates, test_case) in enumerate(zip(predictions, references)):
- for candidate in candidates:
- test_program = candidate + "\n" + test_case
- args = (test_program, timeout, task_id, completion_id[task_id])
- future = executor.submit(check_correctness, *args)
- futures.append(future)
- completion_id[task_id] += 1
- n_samples += 1
-
- for future in as_completed(futures):
- result = future.result()
- results[result["task_id"]].append((result["completion_id"], result))
-
- total, correct = [], []
- for result in results.values():
- result.sort()
- passed = [r[1]["passed"] for r in result]
- total.append(len(passed))
- correct.append(sum(passed))
- total = np.array(total)
- correct = np.array(correct)
-
- ks = k
- pass_at_k = {f"pass@{k}": estimate_pass_at_k(total, correct, k).mean() for k in ks if (total >= k).all()}
-
- return pass_at_k, results
-
-
-def estimate_pass_at_k(num_samples, num_correct, k):
- """Estimates pass@k of each problem and returns them in an array."""
-
- def estimator(n: int, c: int, k: int) -> float:
- """Calculates 1 - comb(n - c, k) / comb(n, k)."""
- if n - c < k:
- return 1.0
- return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
-
- if isinstance(num_samples, int):
- num_samples_it = itertools.repeat(num_samples, len(num_correct))
- else:
- assert len(num_samples) == len(num_correct)
- num_samples_it = iter(num_samples)
-
- return np.array([estimator(int(n), int(c), k) for n, c in zip(num_samples_it, num_correct)])
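-
-# Editorial note (not part of the original file): a small worked example of the
-# unbiased estimator above. With n = 5 samples, c = 2 correct and k = 1:
-#   pass@1 = 1 - C(n - c, k) / C(n, k) = 1 - C(3, 1) / C(5, 1) = 1 - 3 / 5 = 0.4
-# which matches the product form: 1 - (1 - 1/4) * (1 - 1/5) = 1 - 12/20 = 0.4.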
diff --git a/spaces/evaluate-metric/rl_reliability/rl_reliability.py b/spaces/evaluate-metric/rl_reliability/rl_reliability.py
deleted file mode 100644
index 34a9c4570cbc2fcd7f4392886b32de6fa17e4dfd..0000000000000000000000000000000000000000
--- a/spaces/evaluate-metric/rl_reliability/rl_reliability.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Computes the RL Reliability Metrics."""
-
-import datasets
-import numpy as np
-from rl_reliability_metrics.evaluation import eval_metrics
-from rl_reliability_metrics.metrics import metrics_offline, metrics_online
-
-import evaluate
-
-
-logger = evaluate.logging.get_logger(__name__)
-
-DEFAULT_EVAL_POINTS = [
- 50000,
- 150000,
- 250000,
- 350000,
- 450000,
- 550000,
- 650000,
- 750000,
- 850000,
- 950000,
- 1050000,
- 1150000,
- 1250000,
- 1350000,
- 1450000,
- 1550000,
- 1650000,
- 1750000,
- 1850000,
- 1950000,
-]
-
-N_RUNS_RECOMMENDED = 10
-
-_CITATION = """\
-@conference{rl_reliability_metrics,
- title = {Measuring the Reliability of Reinforcement Learning Algorithms},
- author = {Stephanie CY Chan, Sam Fishman, John Canny, Anoop Korattikara, and Sergio Guadarrama},
- booktitle = {International Conference on Learning Representations, Addis Ababa, Ethiopia},
- year = 2020,
-}
-"""
-
-_DESCRIPTION = """\
-Computes the RL reliability metrics from a set of experiments. There is an `"online"` and `"offline"` configuration for evaluation.
-"""
-
-
-_KWARGS_DESCRIPTION = """
-Computes the RL reliability metrics from a set of experiments. There is an `"online"` and `"offline"` configuration for evaluation.
-Args:
-    timesteps: list of timestep lists/arrays that serve as index.
- rewards: list of reward lists/arrays of each experiment.
-Returns:
- dictionary: a set of reliability metrics
-Examples:
- >>> import numpy as np
- >>> rl_reliability = evaluate.load("rl_reliability", "online")
- >>> results = rl_reliability.compute(
- ... timesteps=[np.linspace(0, 2000000, 1000)],
- ... rewards=[np.linspace(0, 100, 1000)]
- ... )
- >>> print(results["LowerCVaROnRaw"].round(4))
- [0.0258]
-"""
-
-
-@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
-class RLReliability(evaluate.Metric):
- """Computes the RL Reliability Metrics."""
-
- def _info(self):
- if self.config_name not in ["online", "offline"]:
- raise KeyError("""You should supply a configuration name selected in '["online", "offline"]'""")
-
- return evaluate.MetricInfo(
- module_type="metric",
- description=_DESCRIPTION,
- citation=_CITATION,
- inputs_description=_KWARGS_DESCRIPTION,
- features=datasets.Features(
- {
- "timesteps": datasets.Sequence(datasets.Value("int64")),
- "rewards": datasets.Sequence(datasets.Value("float")),
- }
- ),
- homepage="https://github.com/google-research/rl-reliability-metrics",
- )
-
- def _compute(
- self,
- timesteps,
- rewards,
- baseline="default",
- freq_thresh=0.01,
- window_size=100000,
- window_size_trimmed=99000,
- alpha=0.05,
- eval_points=None,
- ):
- if len(timesteps) < N_RUNS_RECOMMENDED:
- logger.warning(
- f"For robust statistics it is recommended to use at least {N_RUNS_RECOMMENDED} runs whereas you provided {len(timesteps)}."
- )
-
- curves = []
- for timestep, reward in zip(timesteps, rewards):
- curves.append(np.stack([timestep, reward]))
-
- if self.config_name == "online":
- if baseline == "default":
- baseline = "curve_range"
- if eval_points is None:
- eval_points = DEFAULT_EVAL_POINTS
-
- metrics = [
- metrics_online.HighFreqEnergyWithinRuns(thresh=freq_thresh),
- metrics_online.IqrWithinRuns(
- window_size=window_size_trimmed, eval_points=eval_points, baseline=baseline
- ),
- metrics_online.IqrAcrossRuns(
- lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline
- ),
- metrics_online.LowerCVaROnDiffs(baseline=baseline),
- metrics_online.LowerCVaROnDrawdown(baseline=baseline),
- metrics_online.LowerCVaROnAcross(
- lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline
- ),
- metrics_online.LowerCVaROnRaw(alpha=alpha, baseline=baseline),
- metrics_online.MadAcrossRuns(
- lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline
- ),
- metrics_online.MadWithinRuns(
- eval_points=eval_points, window_size=window_size_trimmed, baseline=baseline
- ),
- metrics_online.MaxDrawdown(),
- metrics_online.StddevAcrossRuns(
- lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline
- ),
- metrics_online.StddevWithinRuns(
- eval_points=eval_points, window_size=window_size_trimmed, baseline=baseline
- ),
- metrics_online.UpperCVaROnAcross(
- alpha=alpha,
- lowpass_thresh=freq_thresh,
- eval_points=eval_points,
- window_size=window_size,
- baseline=baseline,
- ),
- metrics_online.UpperCVaROnDiffs(alpha=alpha, baseline=baseline),
- metrics_online.UpperCVaROnDrawdown(alpha=alpha, baseline=baseline),
- metrics_online.UpperCVaROnRaw(alpha=alpha, baseline=baseline),
- metrics_online.MedianPerfDuringTraining(window_size=window_size, eval_points=eval_points),
- ]
- else:
- if baseline == "default":
- baseline = "median_perf"
-
- metrics = [
- metrics_offline.MadAcrossRollouts(baseline=baseline),
- metrics_offline.IqrAcrossRollouts(baseline=baseline),
- metrics_offline.StddevAcrossRollouts(baseline=baseline),
- metrics_offline.LowerCVaRAcrossRollouts(alpha=alpha, baseline=baseline),
- metrics_offline.UpperCVaRAcrossRollouts(alpha=alpha, baseline=baseline),
- metrics_offline.MedianPerfAcrossRollouts(baseline=None),
- ]
-
- evaluator = eval_metrics.Evaluator(metrics=metrics)
- result = evaluator.compute_metrics(curves)
- return result
diff --git a/spaces/ezioruan/roop/roop/__init__.py b/spaces/ezioruan/roop/roop/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/f2api/gpt-academic/docs/WithFastapi.md b/spaces/f2api/gpt-academic/docs/WithFastapi.md
deleted file mode 100644
index 188b52716485f15e528772c6454ee7839ced4406..0000000000000000000000000000000000000000
--- a/spaces/f2api/gpt-academic/docs/WithFastapi.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Running with fastapi
-
-We currently support fastapi in order to solve the sub-path deployment issue.
-
-1. Change the CUSTOM_PATH setting in `config.py`
-
-``` sh
-nano config.py
-```
-
-2. Edit main.py
-
-```diff
- auto_opentab_delay()
- - demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
- + demo.queue(concurrency_count=CONCURRENT_COUNT)
-
- - # If you need to run under a sub-path
- - # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
- - # if CUSTOM_PATH != "/":
- - # from toolbox import run_gradio_in_subpath
- - # run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
- - # else:
- - # demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
-
- + # If you need to run under a sub-path
- + CUSTOM_PATH, = get_conf('CUSTOM_PATH')
- + if CUSTOM_PATH != "/":
- + from toolbox import run_gradio_in_subpath
- + run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
- + else:
- + demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
-
-if __name__ == "__main__":
- main()
-```
-
-
-3. Go!
-
-``` sh
-python main.py
-```
diff --git a/spaces/facebook/XLS-R-2B-22-16/README.md b/spaces/facebook/XLS-R-2B-22-16/README.md
deleted file mode 100644
index c596d2a9156a632ef6f2a10c83672e3abfdec202..0000000000000000000000000000000000000000
--- a/spaces/facebook/XLS-R-2B-22-16/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: XLS-R All-to-All 2B
-emoji: 🌎
-colorFrom: gray
-colorTo: red
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/facebook/incoder-demo/modules/app.py b/spaces/facebook/incoder-demo/modules/app.py
deleted file mode 100644
index 28ad5b07bcec3c7a0d80684f7404b80eb41548e0..0000000000000000000000000000000000000000
--- a/spaces/facebook/incoder-demo/modules/app.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import sys
-from typing import List
-import traceback
-import os
-import base64
-
-import logging
-logging.basicConfig(level=logging.INFO)
-import modules.cloud_logging
-
-import tokenizers
-import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import json
-import pprint
-
-# needs to be imported *before* transformers
-if os.path.exists('debug'):
- BIG_MODEL = False
- CUDA = False
-else:
- BIG_MODEL = True
- CUDA = True
-
-# from flask import Flask, request, render_template
-# from flask_cors import CORS
-# app = Flask(__name__, static_folder='static')
-# app.config['TEMPLATES_AUTO_RELOAD'] = True
-# CORS(app, resources= {
-# r"/generate": {"origins": origins},
-# r"/infill": {"origins": origins},
-# })
-# origins=[f"http://localhost:{PORT}", "https://huggingface.co", "https://hf.space"]
-
-PORT = 7860
-VERBOSE = False
-
-if os.path.exists('unlock'):
- MAX_LENGTH = 2048
-else:
- MAX_LENGTH = 256+64
-TRUNCATION_MESSAGE = f'warning: This demo is limited to {MAX_LENGTH} tokens in the document for efficiency.'
-
-if BIG_MODEL:
- model_name = "facebook/incoder-6B"
- kwargs = dict(
- revision="float16",
- torch_dtype=torch.float16,
- low_cpu_mem_usage=True,
- )
-else:
- model_name = "facebook/incoder-1B"
- kwargs = dict()
-
-from fastapi import FastAPI, Request
-from fastapi.staticfiles import StaticFiles
-from fastapi.responses import FileResponse, StreamingResponse
-app = FastAPI(docs_url=None, redoc_url=None)
-app.mount("/static", StaticFiles(directory="static"), name="static")
-
-
-logging.info("loading model")
-model = AutoModelForCausalLM.from_pretrained(model_name, **kwargs)
-logging.info("loading tokenizer")
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-logging.info("loading complete")
-
-if CUDA:
- model = model.half().cuda()
-
-BOS = "<|endoftext|>"
-EOM = "<|endofmask|>"
-
-def make_sentinel(i):
- return f"<|mask:{i}|>"
-
-SPECIAL_TOKENS = [make_sentinel(i) for i in range(256)] + [EOM]
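-# Editorial note (not part of the original file): make_sentinel(0) produces the
-# literal string "<|mask:0|>". These sentinels mark the spans the model is asked
-# to fill in, and EOM ("<|endofmask|>") terminates each generated infill.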
-
-def generate(input, length_limit=None, temperature=None):
- input_ids = tokenizer(input, return_tensors="pt").input_ids
- if CUDA:
- input_ids = input_ids.cuda()
- current_length = input_ids.flatten().size(0)
- max_length = length_limit + current_length
- truncated = False
- if max_length > MAX_LENGTH:
- max_length = MAX_LENGTH
- truncated = True
- if max_length == current_length:
- return input, True
- output = model.generate(input_ids=input_ids, do_sample=True, top_p=0.95, temperature=temperature, max_length=max_length)
- detok_hypo_str = tokenizer.decode(output.flatten())
- if detok_hypo_str.startswith(BOS):
- detok_hypo_str = detok_hypo_str[len(BOS):]
- return detok_hypo_str, truncated
-
-def infill(parts: List[str], length_limit=None, temperature=None, extra_sentinel=False, max_retries=1):
- assert isinstance(parts, list)
- retries_attempted = 0
- done = False
-
-
- while (not done) and (retries_attempted < max_retries):
- any_truncated = False
- retries_attempted += 1
- if VERBOSE:
- logging.info(f"retry {retries_attempted}")
- if len(parts) == 1:
- prompt = parts[0]
- else:
- prompt = ""
- # encode parts separated by sentinel
- for sentinel_ix, part in enumerate(parts):
- prompt += part
- if extra_sentinel or (sentinel_ix < len(parts) - 1):
- prompt += make_sentinel(sentinel_ix)
-
- # prompt += TokenizerWrapper.make_sentinel(0)
-
- infills = []
- complete = []
-
- done = True
-
- for sentinel_ix, part in enumerate(parts[:-1]):
- complete.append(part)
- prompt += make_sentinel(sentinel_ix)
- completion, this_truncated = generate(prompt, length_limit, temperature)
- any_truncated |= this_truncated
- completion = completion[len(prompt):]
- if EOM not in completion:
- if VERBOSE:
- logging.info(f"warning: {EOM} not found")
- completion += EOM
- # TODO: break inner loop here
- done = False
- completion = completion[:completion.index(EOM) + len(EOM)]
- infilled = completion[:-len(EOM)]
- infills.append(infilled)
- complete.append(infilled)
- prompt += completion
- complete.append(parts[-1])
- text = ''.join(complete)
-
- if VERBOSE:
- logging.info("generated text:")
- logging.info(prompt)
-            logging.info("")
- logging.info("parts:")
- logging.info(parts)
-            logging.info("")
- logging.info("infills:")
- logging.info(infills)
-            logging.info("")
- logging.info("restitched text:")
- logging.info(text)
-            logging.info("")
-
- return {
- 'text': text,
- 'parts': parts,
- 'infills': infills,
- 'retries_attempted': retries_attempted,
- 'truncated': any_truncated,
- }
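-
-# Editorial usage sketch (not part of the original file); the code fragment is
-# hypothetical. Filling a single masked span between a prefix and a suffix:
-#   result = infill(['def add(a, b):\n    return ', '\n'],
-#                   length_limit=32, temperature=0.6)
-#   print(result['text'])   # prefix + generated infill + suffix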
-
-
-@app.head("/")
-@app.get("/")
-def index() -> FileResponse:
- return FileResponse(path="static/index.html", media_type="text/html")
-
-@app.get('/generate')
-# async def generate_maybe(request: Request):
-async def generate_maybe(info: str):
- # form = await info.json()
- # form = await request.json()
- # info is a base64-encoded, url-escaped json string (since GET doesn't support a body, and POST leads to CORS issues)
- # fix padding, following https://stackoverflow.com/a/9956217/1319683
- info = base64.urlsafe_b64decode(info + '=' * (4 - len(info) % 4)).decode('utf-8')
- form = json.loads(info)
- # print(form)
- prompt = form['prompt']
- length_limit = int(form['length'])
- temperature = float(form['temperature'])
- logging.info(json.dumps({
- 'length': length_limit,
- 'temperature': temperature,
- 'prompt': prompt,
- }))
- try:
- generation, truncated = generate(prompt, length_limit, temperature)
- if truncated:
- message = TRUNCATION_MESSAGE
- else:
- message = ''
- return {'result': 'success', 'type': 'generate', 'prompt': prompt, 'text': generation, 'message': message}
- except Exception as e:
- traceback.print_exception(*sys.exc_info())
- logging.error(e)
- return {'result': 'error', 'type': 'generate', 'prompt': prompt, 'message': f'Error: {e}.'}
-
-@app.get('/infill')
-# async def infill_maybe(request: Request):
-async def infill_maybe(info: str):
- # form = await info.json()
- # form = await request.json()
- # info is a base64-encoded, url-escaped json string (since GET doesn't support a body, and POST leads to CORS issues)
- # fix padding, following https://stackoverflow.com/a/9956217/1319683
- info = base64.urlsafe_b64decode(info + '=' * (4 - len(info) % 4)).decode('utf-8')
- form = json.loads(info)
- length_limit = int(form['length'])
- temperature = float(form['temperature'])
- max_retries = 1
- extra_sentinel = True
- logging.info(json.dumps({
- 'length': length_limit,
- 'temperature': temperature,
- 'parts_joined': ''.join(form['parts']),
- }))
- try:
- if len(form['parts']) > 4:
- return {'result': 'error', 'text': ''.join(form['parts']), 'type': 'infill', 'message': f"error: Can't use more than 3 tokens in this demo (for efficiency)."}
- generation = infill(form['parts'], length_limit, temperature, extra_sentinel=extra_sentinel, max_retries=max_retries)
- generation['result'] = 'success'
- generation['type'] = 'infill'
- if generation['truncated']:
- generation['message'] = TRUNCATION_MESSAGE
- else:
- generation['message'] = ''
- return generation
- # return {'result': 'success', 'prefix': prefix, 'suffix': suffix, 'text': generation['text']}
- except Exception as e:
- traceback.print_exception(*sys.exc_info())
- logging.error(e)
- return {'result': 'error', 'type': 'infill', 'message': f'Error: {e}.'}
-
-
-if __name__ == "__main__":
-    # `app` is a FastAPI application, so it is served with uvicorn (the standard
-    # ASGI server, assumed to be installed); Flask-style `app.run()` does not
-    # exist on FastAPI apps.
-    import uvicorn
-    uvicorn.run(app, host="0.0.0.0", port=PORT)
diff --git a/spaces/falterWliame/Face_Mask_Detection/3d Sexvilla 2 Everlust Unlock All TOP Crack 4sharedrar.md b/spaces/falterWliame/Face_Mask_Detection/3d Sexvilla 2 Everlust Unlock All TOP Crack 4sharedrar.md
deleted file mode 100644
index 4b155ff0f5b69873b12312aac859bdc403b7495e..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/3d Sexvilla 2 Everlust Unlock All TOP Crack 4sharedrar.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
How to download Naruto Shippuden series movies for free. http://www.sify.com/watch/naruto-shippuden-season-10/eng-sub/1080p. https://szdesign.com/videodownload/xhwd-exwi50dpsmvkklq/download. http://videocdn.movierumorsites.com/3d-sexvilla-2-everlust-unlock-all-crack-4sharedrar-free-download.
-
3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar
3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar Category : 3D Sexvilla 2 Everlust Unlock All Crack 4sharedrar Download. rar 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Naruto Shippuden Season 10 (Sub) 1080p (07,0 Mb) in Full HD 1080p from vidme and hundreds of other compatible sources. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar.
-
Tuskalott.TV Movie Hindi Dubbed 1080p X264-AC3.mkv -6thGarde Resuscitate 2014 Hindi Movie Full HD 1080p Subtitle In Maya Diakhaby As Rogue 2015 Full Movie XXX FIFA 15 Game Full Cracked APK + DATA FULL Nike Jordan 4 2016 Full Black On X264-FSH 5. https://eggnogg.us/download/xvzv-exhz50dgvqvmq/3d-sexvilla-2-everlust-unlock-all-crack-4sharedrar.rar 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar.
-
3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. In fact, you can at least burn an image onto a CD using some libraries to see whether it works or not. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. i would recommend that you give it a shot and hopefully, you end up having more options available to you. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Some years ago, as a child, I used to visit my aunt's house in a small village in the interior of the country. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Check out our full list of cracks and keygens. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Post at bmehsu.com to reach me: http://www.iibuck.com/Free-IPhone-games.html 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Her [url=http://www.uncut-studios.com]3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar[/url] became love at first sight for Todd and she asks him to try to convince her parents to let him marry her. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. I Downloaded From Crackle Of Xvid Is Paul Blart Movie. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. All you have to do is to take this crack. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Download __FULL__ Shadow Of The Colossus Pc Full 12.md b/spaces/falterWliame/Face_Mask_Detection/Download __FULL__ Shadow Of The Colossus Pc Full 12.md
deleted file mode 100644
index 7818fda0ecfb5ab85988d8a6ed652c647f74a9b1..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Download __FULL__ Shadow Of The Colossus Pc Full 12.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-March's Free PS Plus Games: Shadow of the Colossus and Sonic Forces ... As a reminder, you've still got time to download this month's PS Plus games. ... You can even team up with friends on PC with full cross-play support through the Predator: Hunting Grounds trial. ... March 3, 2020 at 12:55 am PST. 1fdad05405
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Justice League (English) Movie Tamil Dubbed In 720p.md b/spaces/falterWliame/Face_Mask_Detection/Justice League (English) Movie Tamil Dubbed In 720p.md
deleted file mode 100644
index 76a776b79a728a2e9288ee35887faf62b924bae1..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Justice League (English) Movie Tamil Dubbed In 720p.md
+++ /dev/null
@@ -1,82 +0,0 @@
-## Justice League (English) movie tamil dubbed in 720p
-
-
-
-
-
- 
-
-
-
-
-
-**Download File ->>->>->> [https://miimms.com/2tyiYO](https://miimms.com/2tyiYO)**
-
-
-
-
-
-
-
-
-
-
-
- Here is a possible title and article with HTML formatting for the keyword "Justice League (English) movie tamil dubbed in 720p":
-
-# Justice League: A Superhero Spectacle in Tamil
-
-
-
-Justice League is a 2017 American superhero film based on the DC Comics team of the same name. The film features Batman, Superman, Wonder Woman, Flash, Aquaman and Cyborg as they unite to save the world from the evil Steppenwolf and his army of Parademons. The film is directed by Zack Snyder, with additional scenes by Joss Whedon, and stars Ben Affleck, Henry Cavill, Gal Gadot, Ezra Miller, Jason Momoa and Ray Fisher.
-
-
-
-The film was released in English and several other languages worldwide, including Tamil. Tamil is a Dravidian language spoken by millions of people in India, Sri Lanka and other countries. Tamil cinema is one of the largest and most popular film industries in India, producing hundreds of films every year. Tamil dubbed films are also very popular among the Tamil audience, who enjoy watching Hollywood blockbusters in their native language.
-
-
-
-Justice League was dubbed in Tamil by a team of professional voice actors, who matched the tone and personality of the original actors. The Tamil dubbing also added some local flavor and humor to the dialogues, making them more appealing and relatable to the Tamil audience. The Tamil dubbed version of Justice League was released in theaters and online platforms along with the original version. The film received positive reviews from critics and fans alike, who praised the action sequences, visual effects, performances and soundtrack of the film.
-
-
-
-If you are a fan of superhero films and want to watch Justice League in Tamil, you can find it online in 720p quality. 720p is a high-definition video resolution that offers clear and crisp images on your screen. You can watch Justice League in Tamil dubbed in 720p on various online platforms such as YouTube, Netflix, Amazon Prime Video and others. You can also download the film from torrent sites or other sources, but be careful of viruses and malware that may harm your device.
-
-
-
-Justice League is a must-watch film for all superhero lovers, especially in Tamil. The film offers a thrilling and entertaining experience that will keep you hooked till the end. Watch Justice League in Tamil dubbed in 720p today and enjoy the superhero spectacle on your screen.
-
-Here is a possible continuation of the article with HTML formatting:
-
-## Justice League 2: The Unlikely Sequel to Zack Snyder's Vision
-
-
-
-While Justice League was originally planned as a two-part saga, the disappointing reception of the 2017 theatrical cut and the departure of Zack Snyder from the project put an end to those ambitions. However, thanks to the relentless campaign of fans and the launch of HBO Max, Snyder was given the opportunity to release his four-hour director's cut of Justice League in 2021, which restored his original vision and set up a potential sequel.
-
-
-
-Zack Snyder's Justice League ends with a cliffhanger that teases the arrival of Darkseid, the tyrannical ruler of Apokolips and the ultimate threat to the DC universe. The film also features a "Knightmare" sequence that shows a dystopian future where Darkseid has conquered Earth, Superman has turned evil, and Batman leads a resistance group that includes Cyborg, Flash, Mera, Deathstroke and Joker. The film suggests that this nightmare scenario can be prevented if Flash travels back in time and warns Bruce Wayne about Lois Lane's death, which triggers Superman's fall to the dark side.
-
-
-
-However, despite the positive response from critics and fans to Zack Snyder's Justice League, Warner Bros. has not shown any interest in greenlighting a sequel. The studio has stated that Snyder's cut is a "storytelling cul-de-sac" that does not fit with their current plans for the DC Extended Universe (DCEU), which include standalone films like The Batman, Black Adam and The Suicide Squad, as well as spin-offs like Peacemaker and The Trench. The studio has also expressed its desire to diversify its superhero slate and explore different tones and genres.
-
-
-
-Zack Snyder has acknowledged that Justice League 2 is unlikely to happen, but he has also revealed his plans for what it would have been like. According to Snyder, Justice League 2 would have followed the heroes as they travel to Apokolips to face Darkseid and his army, while also dealing with Lex Luthor's formation of the Legion of Doom on Earth. The film would have featured epic battles, sacrifices and betrayals, as well as the introduction of new characters like Green Lantern and Martian Manhunter. The film would have ended with Darkseid killing Lois Lane and Superman succumbing to the Anti-Life Equation, setting up Justice League 3.
-
-
-
-Justice League 3 would have been the final chapter of Snyder's trilogy, which would have focused on Batman's attempt to undo Darkseid's victory by using Flash's time travel abilities. The film would have shown the Knightmare timeline in more detail, as well as Batman's redemption arc and ultimate sacrifice to save Lois Lane and restore Superman's humanity. The film would have also featured a massive showdown between Darkseid and Superman, as well as the birth of Bruce Wayne and Lois Lane's son, who would become the new Batman in the future.
-
-
-
-While these plans sound ambitious and exciting for many fans, they also seem very unlikely to ever materialize on screen. Zack Snyder has moved on to other projects, such as Army of the Dead for Netflix, and Warner Bros. has shifted its focus to other DC properties and filmmakers. However, as Zack Snyder's Justice League has proven, nothing is impossible in the world of superheroes. Perhaps one day, fans will get to see Justice League 2 and Justice League 3 in some form or another.
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Oceans Eight Tamil Dubbed Movie Torrent.md b/spaces/falterWliame/Face_Mask_Detection/Oceans Eight Tamil Dubbed Movie Torrent.md
deleted file mode 100644
index 1cef63deef2e6588e9a464ff0a48ab73d3f92281..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Oceans Eight Tamil Dubbed Movie Torrent.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-**Table S1** Primer sequences for RT‐PCR and miRNA array analysis
-
-Click here for additional data file.
-
-**Table S2** Differential expression of miRNAs between fetal and adult samples
-
-**Table S3** Differential expression of miRNAs between breast and prostate 4fefd39f24
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Password Pro100 5.20.txt.md b/spaces/falterWliame/Face_Mask_Detection/Password Pro100 5.20.txt.md
deleted file mode 100644
index 71f78731da706a73f967947f5d5925e1d6fa3a51..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Password Pro100 5.20.txt.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
2. A weak or default password is more likely to be guessed or guessable. Email, Password Recovery" etc. download Basic password recovery Pack 8MB. The private key can then be used to sign all new. Password: Wrong Password!. I believe that this is due to hash collisions, but this is a guess. help.link/mediawiki/index.html. Password: Wrong Password! This is an example of an "unprotected file system.
download Advanced password recovery Pack 8MB. the key cn be used to. This tool detects some weak password that can be guessed. .hxuemhvuirmhdhujdhdwjbdbdbvwvwbkb.com. Password: Wrong Password!. Download.-. Download EaseFab Video Converter Pro Key Generator.. I could not find any damaging exploit. . another have a more secure password. - Password: Wrong Password! Attachments are not. with the tools above for password recovery. There are ways to recover the password even if you know it.
-
Password: Wrong Password!. or by brute force attack.Password. The public key will be uploaded to the server, and it will be auto-updated. .lhqsh.com. Password: Wrong Password!. . " (https://www.nethaxo.com/password-recovery-for-windows-1-0-and-1-1-2. Password: Wrong Password!. Password. " http://www. .
-
instead of having to remember endless combinations of username and password or having to enter credit card numbers and sensitive data. . There are ways to recover the password even if you know it.
-
5. Download Password Recovery for Windows 1., 0. you don't have to send a credit card and the payment page can't be brute forced. This would open up the possiblity of brute forcing the whole Internet. the public key will be uploaded to the server, and it will be auto-updated. Haxo Password Recovery. ., 9. Instead of having to remember endless combinations of username and password or having to enter credit card numbers and sensitive data. . Consider Password Recovery.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Attack on Titan - Fan Game The Best Way to Relive the Anime on Android.md b/spaces/fatiXbelha/sd/Attack on Titan - Fan Game The Best Way to Relive the Anime on Android.md
deleted file mode 100644
index 6520e514fabd31397dbd0af1ac8348d30d384511..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Attack on Titan - Fan Game The Best Way to Relive the Anime on Android.md
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
Attack on Titan Download for Android: How to Enjoy the Epic Anime on Your Phone
-
If you are a fan of anime, you have probably heard of Attack on Titan, one of the most popular and acclaimed anime series of all time. But did you know that you can download Attack on Titan for Android and watch it on your phone anytime, anywhere? In this article, we will show you how to do that, as well as give you some tips and tricks to enjoy this epic anime on your phone.
-
What is Attack on Titan?
-
Attack on Titan is a Japanese manga series written and illustrated by Hajime Isayama, which was adapted into an anime television series by Wit Studio and MAPPA. The story is set in a world where humanity lives inside cities surrounded by enormous walls that protect them from giant humanoid Titans, who devour humans on sight. The story follows Eren Yeager, who vows to exterminate the Titans after they bring about the destruction of his hometown and the death of his mother.
The anime series consists of four seasons, with the first three seasons covering the first 27 volumes of the manga, and the fourth season covering the remaining 7 volumes. The first season aired from April to September 2013, followed by a 12-episode second season from April to June 2017. The third season was split into two parts, with the first 12 episodes airing from July to October 2018, and the last 10 episodes airing from April to July 2019. The fourth and final season premiered in December 2020, airing 16 episodes in its first part. A second part consisting of 12 episodes aired from January to April 2022, and a third and final part will air in two halves; the first half premiered in March 2023, and the second half will premiere in late 2023.
-
The main features of the anime series
-
Attack on Titan is known for its dark and gritty tone, its complex and compelling plot, its stunning animation and sound design, its memorable characters and themes, and its thrilling action scenes. Some of the main features of the anime series are:
-
-
The use of 3D maneuver gear, a device that allows humans to move freely in the air using gas-powered grappling hooks, which is essential for fighting against the Titans.
-
The different types of Titans, such as the Colossal Titan, the Armored Titan, the Female Titan, and the Beast Titan, each with their own abilities and weaknesses.
-
The mystery behind the origin and purpose of the Titans, as well as the secrets hidden within the walls and beyond.
-
The moral dilemmas and conflicts faced by the characters, such as whether to fight or flee, whether to trust or betray, whether to kill or spare, and whether to seek freedom or peace.
-
The exploration of themes such as survival, humanity, freedom, oppression, revenge, loyalty, sacrifice, identity, and hope.
-
-
Why Download Attack on Titan for Android?
-
If you are a fan of Attack on Titan, or if you are curious about this anime series, you might want to download it for Android and watch it on your phone. There are several reasons why this is a good idea:
-
The benefits of watching anime on your phone
-
Some of the benefits of watching anime on your phone are:
-
-
You can watch it anytime, anywhere, without being tied to a TV or a computer. You can watch it while commuting, traveling, waiting, or relaxing.
-
You can watch it offline, without worrying about internet connection or data usage. You can download the episodes beforehand and watch them later at your convenience.
-
You can watch it privately, without disturbing others or being disturbed by others. You can use headphones or earphones to enjoy the sound effects and music, and you can adjust the brightness and volume to suit your preference.
-
You can watch it comfortably, without straining your eyes or neck. You can hold your phone at a comfortable distance and angle, and you can pause, rewind, or skip the episodes as you wish.
-
-
The best apps and websites to download Attack on Titan for Android
-
There are many apps and websites that allow you to download Attack on Titan for Android, but not all of them are reliable, safe, or legal. Some of them may contain viruses, malware, or spyware that can harm your phone or steal your personal information. Some of them may have low-quality videos, incomplete episodes, or annoying ads. Some of them may violate the copyright laws and infringe on the rights of the creators and distributors of the anime series.
-
To avoid these problems, you should only use trusted and reputable apps and websites that offer high-quality videos, complete episodes, and no ads. Some of the best apps and websites to download Attack on Titan for Android are:
-
| App/Website | Description | Pros | Cons |
| --- | --- | --- | --- |
| Crunchyroll | A popular streaming service that offers a large collection of anime, manga, and drama. It has the official license to stream Attack on Titan in various regions and languages. | High-quality videos with subtitles and dubbing options; complete episodes with fast updates; no ads for premium users; offline viewing for premium users; compatible with various devices and platforms. | Requires subscription for premium features; not available in some countries or regions; may have some bugs or glitches. |
| Funimation | A leading streaming service that specializes in anime and animation. It has the exclusive license to stream Attack on Titan in English-speaking countries. | High-quality videos with subtitles and dubbing options; complete episodes with fast updates; no ads for premium users; offline viewing for premium users; compatible with various devices and platforms. | Requires subscription for premium features; not available in some countries or regions; may have some bugs or glitches. |
| AnimeLab | A dedicated streaming service that offers a wide range of anime titles. It has the official license to stream Attack on Titan in Australia and New Zealand. | High-quality videos with subtitles and dubbing options; complete episodes with fast updates; no ads for premium users; offline viewing for premium users; compatible with various devices and platforms. | Requires subscription for premium features; not available in some countries or regions; may have some bugs or glitches. |
| AnimeFreak | A free streaming website that provides a huge library of anime shows and movies. It does not have the official license to stream Attack on Titan, but it hosts the videos from other sources. | Free to use with no registration required; high-quality videos with subtitles and dubbing options; complete episodes with regular updates; compatible with various devices and platforms. | Contains ads that may be intrusive or inappropriate; may not be legal or ethical to use; may have some bugs or glitches. |
| Kissanime | A free streaming website that offers a vast selection of anime genres and categories. It does not have the official license to stream Attack on Titan, but it hosts the videos from other sources. | Free to use with no registration required; high-quality videos with subtitles and dubbing options; complete episodes with regular updates; compatible with various devices and platforms. | Contains ads that may be intrusive or inappropriate; may not be legal or ethical to use; may have some bugs or glitches. |
-
How to Download Attack on Titan for Android?
-
Now that you know the best apps and websites to download Attack on Titan for Android, you might be wondering how to do it. Here are some simple steps to follow:
-
-
A step-by-step guide to download Attack on Titan for Android using an app
-
-
Choose an app that suits your needs and preferences, such as Crunchyroll, Funimation, or AnimeLab. You can find them on the Google Play Store or their official websites.
-
Download and install the app on your phone. Make sure you have enough storage space and a stable internet connection.
-
Open the app and sign up for an account if you don't have one already. You may need to pay for a subscription to access the premium features, such as offline viewing.
-
Search for Attack on Titan in the app's library or browse through the categories. You can also use the filters and sorting options to narrow down your search.
-
Select the season and episode you want to watch. You can also choose the language and quality of the video.
-
Tap on the download icon or button to start downloading the episode. You can see the progress and status of the download in the app's menu or notification bar.
-
Once the download is complete, you can watch the episode offline by tapping on the play icon or button. You can also delete the episode after watching it to free up some space.
-
-
A step-by-step guide to download Attack on Titan for Android using a website
-
-
Choose a website that offers high-quality videos and complete episodes of Attack on Titan, such as AnimeFreak or Kissanime. You can find them on your web browser or search engine.
-
Go to the website and look for Attack on Titan in its library or search bar. You can also use the filters and sorting options to narrow down your search.
-
Select the season and episode you want to watch. You can also choose the language and quality of the video.
-
Tap on the download icon or button to start downloading the episode. You may need to wait for a few seconds or minutes before the download link appears.
-
Once the download link appears, tap on it and choose a location to save the file on your phone. Make sure you have enough storage space and a stable internet connection.
-
Once the download is complete, you can watch the episode offline by opening it with a video player app on your phone. You can also delete the file after watching it to free up some space.
-
-
Tips and Tricks to Enjoy Attack on Titan on Your Phone
-
Downloading Attack on Titan for Android is not enough to enjoy this epic anime on your phone. You also need some tips and tricks to enhance your viewing experience and avoid any problems. Here are some of them:
-
How to optimize your phone settings for the best viewing experience
-
-
Make sure your phone is fully charged or plugged in before watching an episode, as downloading and playing videos can drain your battery quickly.
-
Turn off any notifications or alerts that may interrupt or distract you while watching an episode, such as calls, messages, emails, or social media updates.
-
Adjust your screen brightness and contrast to suit your eyesight and lighting conditions, as too bright or too dark screens can strain your eyes or affect your visibility.
-
Adjust your sound volume and quality to suit your hearing and environment, as too loud or too low sounds can damage your ears or affect your immersion.
-
Use headphones or earphones to enjoy the sound effects and music better, as well as to block out any background noise or interference.
-
-
How to avoid spoilers and stay updated with the latest episodes
-
-
Avoid browsing through social media, forums, blogs, or websites that may contain spoilers or discussions about Attack on Titan, especially if you are not caught up with the latest episodes.
-
Avoid clicking on any links, images, videos, or articles that may reveal spoilers or details about Attack on Titan, especially if they have misleading or vague titles or thumbnails.
-
Avoid talking to anyone who has watched ahead of you or who may spoil you intentionally or unintentionally about Attack on Titan, especially if they are not respectful of your preferences or boundaries.
-
Stay updated with the release dates and schedules of Attack on Titan episodes, as well as any news or announcements about the anime series, by following its official website, social media accounts, or streaming platforms.
-
Watch each episode as soon as possible after it is released, preferably within 24 hours, to avoid missing out on any important events or developments in Attack on Titan.
-
-
How to join the Attack on Titan fan community and share your thoughts
-
One of the best ways to enjoy Attack on Titan is to join the fan community and share your thoughts, opinions, theories, and emotions with other fans. You can also learn more about the anime series, discover new perspectives, and make new friends. Here are some ways to join the Attack on Titan fan community and share your thoughts:
-
-
Join online platforms that are dedicated to Attack on Titan, such as Reddit, Discord, Twitter, Facebook, Instagram, YouTube, or Tumblr. You can find various groups, channels, pages, accounts, or blogs that focus on Attack on Titan and interact with other fans.
-
Join offline events that are related to Attack on Titan, such as conventions, screenings, meetups, or cosplay. You can find local or international events that celebrate Attack on Titan and meet other fans in person.
-
Share your own content that is inspired by Attack on Titan, such as fan art, fan fiction, fan videos, fan podcasts, or fan games. You can showcase your creativity and passion for Attack on Titan and receive feedback and support from other fans.
-
Respect the rules and etiquette of the fan community and be polite and friendly to other fans. You can have different opinions and preferences, but you should not insult, harass, or spoil anyone. You should also respect the creators and distributors of Attack on Titan and avoid piracy or plagiarism.
-
-
Conclusion
-
Attack on Titan is an epic anime series that you can download for Android and watch on your phone. In this article, we have shown you what Attack on Titan is, why you should download it for Android, how to download it for Android using an app or a website, and how to enjoy it on your phone. We hope you have found this article helpful and informative. Now you can download Attack on Titan for Android and enjoy this amazing anime on your phone.
-
If you have any questions or comments about this article, feel free to leave them below. We would love to hear from you. And if you liked this article, please share it with your friends and family who might be interested in Attack on Titan. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about Attack on Titan download for Android:
-
-
Q: Is Attack on Titan download for Android legal? A: It depends on the app or website you use to download it. If you use an app or website that has the official license to stream Attack on Titan in your region or country, such as Crunchyroll, Funimation, or AnimeLab, then it is legal. However, if you use an app or website that does not have the official license to stream Attack on Titan in your region or country, such as AnimeFreak or Kissanime, then it may not be legal. You should check the terms and conditions of the app or website before using it.
-
Q: Is Attack on Titan download for Android safe? A: It depends on the app or website you use to download it. If you use an app or website that is trusted and reputable, such as Crunchyroll, Funimation, or AnimeLab, then it is safe. However, if you use an app or website that is not trusted or reputable, such as AnimeFreak or Kissanime, then it may not be safe. You should check the reviews and ratings of the app or website before using it.
-
Q: Is Attack on Titan download for Android free? A: It depends on the app or website you use to download it. Some apps and websites offer free access to Attack on Titan episodes with ads or limited features, such as AnimeFreak or Kissanime. However, some apps and websites require a subscription fee to access Attack on Titan episodes without ads or with premium features, such as Crunchyroll, Funimation, or AnimeLab. You should compare the prices and benefits of the apps and websites before using them.
-
Q: How many episodes are there in Attack on Titan? A: There are currently 87 episodes in the Attack on Titan anime series. The first season has 25 episodes; the second season has 12 episodes; the third season has 22 episodes; and the fourth season has 28 episodes so far (16 in its first part and 12 in its second part), with a third and final part still to air.
-
Q: When will the final part of the Attack on Titan anime series air? A: The final part of the anime series will air in two halves; the first half premiered in March 2023, and the second half will premiere in late 2023.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/CARS24 Desktop App How It Can Save You Time and Money on Used Cars.md b/spaces/fatiXbelha/sd/CARS24 Desktop App How It Can Save You Time and Money on Used Cars.md
deleted file mode 100644
index 298d62af59b562ca4cbbcb9f66496ec9bb9b0996..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/CARS24 Desktop App How It Can Save You Time and Money on Used Cars.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
How to Download and Use Cars24 App on Your PC
-
If you are looking for a convenient and hassle-free way to buy or sell used cars online, you might want to check out the Cars24 app. This app allows you to browse through thousands of certified cars, get instant quotes, book test drives, apply for financing, and get free home delivery or pickup. But what if you want to use this app on your PC instead of your phone? In this article, we will show you how to download and use Cars24 app on your PC using two methods: an Android emulator or Windows Subsystem for Android.
-
What is Cars24 App and Why You Should Use It
-
Cars24 is an innovative platform that aims to revolutionize the used car trading industry in India. It offers a new experience that helps you buy or sell your used car online in a convenient, safe, and easy way. Here are some of the features and benefits of using Cars24 app:
You can choose from more than 1,500 well-known car brands on the app, with detailed information, photos, and videos.
-
You can use the virtual 360-degree view feature to see every car in detail as if you were walking around it yourself.
-
You can book a free test drive for any car you like, with no obligation to buy.
-
You can get an online quote for your car in minutes, by providing a few details about your car.
-
You can sell your car online in a single visit, with instant payment and free RC transfer.
-
You can apply for zero down payment, quick financing, with easy documentation and flexible EMIs.
-
You can get a free warranty for six months on every car you buy, with free after-sale service and support.
-
You can return any car you buy within seven days if you are not satisfied, with a full refund policy.
-
You can buy or sell your car online anytime, anywhere, with free home delivery or pickup at a Cars24 service center.
-
-
Cars24 App Requirements and Compatibility
-
The Cars24 app is compatible with Android devices running Android 5.0 or higher. You can download it from Google Play Store or from the official website. However, if you want to use it on your PC, you will need either an Android emulator or Windows Subsystem for Android. We will explain these methods in the following sections.
-
How to Install Cars24 App on Your PC Using an Android Emulator
-
An Android emulator is a software that simulates the Android environment on your PC and allows you to download and run Android apps from Google Play Store or other sources. One of the most popular and recommended Android emulators is Bluestacks, which supports Windows 7/8/10/11. Here are the steps to install Cars24 app on your PC using Bluestacks:
-
What is an Android Emulator and How It Works
-
An Android emulator is a software that creates a virtual machine on your PC that runs the Android operating system. This way, you can access the Google Play Store and other Android apps on your PC as if you were using an Android device. An Android emulator uses virtualization technology to emulate the hardware and software components of an Android device, such as CPU, RAM, storage, sensors, camera, etc.
How to Download and Install Bluestacks on Your PC
-
To download and install Bluestacks on your PC, follow these steps:
-
-
Go to the official website of Bluestacks and click on the "Download Bluestacks" button.
-
Wait for the download to finish and then run the installer file.
-
Follow the instructions on the screen to complete the installation process.
-
Launch Bluestacks and sign in with your Google account or create a new one.
-
Once you are logged in, you will see the Bluestacks home screen with various app icons.
-
-
How to Download and Install Cars24 App on Bluestacks
-
To download and install Cars24 app on Bluestacks, follow these steps:
-
-
On the Bluestacks home screen, click on the "Google Play Store" icon.
-
In the search bar, type "Cars24" and hit enter.
-
From the search results, click on the "Cars24 - Buy & Sell Used Cars Online" app by CARS24 SERVICES PRIVATE LIMITED.
-
Click on the "Install" button and wait for the app to download and install.
-
Once the installation is done, you will see the "Cars24" app icon on the Bluestacks home screen.
-
Click on the "Cars24" app icon to launch it and start using it on your PC.
-
-
How to Install Cars24 App on Your PC Using Windows Subsystem for Android
-
If you have a Windows 11 PC, you can also use Windows Subsystem for Android (WSA) to run Android apps on your PC. WSA is a feature that allows you to install and run Android apps from the Microsoft Store or from the Amazon Appstore. Here are the steps to install Cars24 app on your PC using WSA:
-
-
What is Windows Subsystem for Android and How It Works
-
Windows Subsystem for Android is a feature that enables you to run Android apps natively on your Windows 11 PC. It uses virtualization technology to create a Linux-based environment that runs the Android operating system. This way, you can access Android apps from the Microsoft Store or from the Amazon Appstore on your PC as if you were using an Android device. WSA supports most of the Android features and capabilities, such as touch, audio, camera, sensors, etc.
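Because WSA exposes a standard Android Debug Bridge endpoint, there is also a more hands-on route worth knowing: sideloading an APK file you already have. The sketch below is illustrative only. It assumes developer mode is enabled in the Windows Subsystem for Android settings (which displays the local address to connect to, typically 127.0.0.1:58526), that the `adb` command-line tool is installed on your PC, and that `cars24.apk` is just a placeholder file name rather than an official download.

```python
import subprocess

WSA_ADDR = "127.0.0.1:58526"  # address shown in the WSA Settings app when developer mode is on
APK_PATH = "cars24.apk"       # placeholder: use an APK file you obtained legitimately

# Connect adb to the running Windows Subsystem for Android instance.
subprocess.run(["adb", "connect", WSA_ADDR], check=True)

# Install the package; once installed, it appears in the Windows Start menu like a native app.
subprocess.run(["adb", "-s", WSA_ADDR, "install", APK_PATH], check=True)
```

For most people, the Amazon Appstore route described in the following steps is simpler, so treat this only as a fallback.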
-
How to Update Your Windows 11 and Microsoft Store
-
To use WSA, you need to have Windows 11 version 22000.194 or higher and Microsoft Store version 22110.1401.6.0 or higher. To update your Windows 11 and Microsoft Store, follow these steps:
-
-
Go to Settings > Windows Update and click on "Check for updates". If there are any available updates, download and install them.
-
Go to Settings > Apps > Apps & features and click on "Microsoft Store". Then click on the three dots icon and select "Advanced options".
-
Scroll down and click on "Repair" or "Reset" if available. This will fix any issues with the Microsoft Store app.
-
Restart your PC and check if your Windows 11 and Microsoft Store are updated.
-
-
How to Download and Install Amazon Appstore and Windows Subsystem for Android
-
To download and install Amazon Appstore and WSA, follow these steps:
-
-
Go to the Microsoft Store app and search for "Amazon Appstore". Click on the "Get" button and wait for the app to download and install.
-
Launch the Amazon Appstore app and sign in with your Amazon account or create a new one.
-
Go back to the Microsoft Store app and search for "Windows Subsystem for Android". Click on the "Get" button and wait for the feature to download and install.
-
Restart your PC and check if you have WSA enabled on your PC.
-
-
How to Download and Install Cars24 App from Amazon Appstore
-
To download and install Cars24 app from Amazon Appstore, follow these steps:
-
-
Launch the Amazon Appstore app and search for "Cars24". Click on the "Cars24 - Buy & Sell Used Cars Online" app by CARS24 SERVICES PRIVATE LIMITED.
-
Click on the "Get" button and wait for the app to download and install.
-
Once the installation is done, you will see the "Cars24" app icon on your desktop or in your Start menu.
-
Click on the "Cars24" app icon to launch it and start using it on your PC.
-
-
Conclusion
-
In this article, we have shown you how to download and use Cars24 app on your PC using two methods: an Android emulator or Windows Subsystem for Android. Both methods have their advantages and disadvantages, so you can choose the one that suits your needs and preferences. With Cars24 app, you can buy or sell your used car online in a convenient, safe, and easy way. You can also enjoy various features and benefits that make your car trading experience more enjoyable and rewarding. So, what are you waiting for? Download Cars24 app today and get started!
-
FAQs
-
Q: Is Cars24 app free to download and use?
-
A: Yes, Cars24 app is free to download and use. However, you may need to pay some fees or charges when you buy or sell your car through the app, such as registration fee, service fee, delivery fee, etc.
-
Q: Is Cars24 app safe and secure?
-
A: Yes, Cars24 app is safe and secure. It uses encryption and authentication technologies to protect your personal and financial information. It also verifies the identity and background of the buyers and sellers to ensure a fair and transparent deal.
-
Q: How can I contact Cars24 customer support?
-
A: You can contact Cars24 customer support by calling their toll-free number 1800 258 5656 or by emailing them at care@cars24.com. You can also visit their website or app and click on the "Help" or "Contact Us" option.
-
Q: How can I update or uninstall Cars24 app on my PC?
-
A: To update or uninstall Cars24 app on your PC, follow these steps:
-
-
If you are using Bluestacks, go to the Bluestacks home screen and click on the "My Apps" tab. Then click on the "Cars24" app icon and select "Update" or "Uninstall" from the menu.
-
If you are using WSA, go to the Start menu and click on the "Settings" icon. Then go to "Apps" > "Apps & features" and find the "Cars24" app from the list. Click on it and select "Modify" or "Uninstall" from the menu.
-
-
Q: What are some alternatives to Cars24 app?
-
A: Some alternatives to Cars24 app are:
-
-
CarDekho - A platform that offers new and used cars, car loans, insurance, reviews, news, etc.
-
CarTrade - A platform that offers new and used cars, car valuation, finance, insurance, auctions, etc.
-
Droom - A platform that offers new and used cars, bikes, scooters, planes, boats, etc.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Enjoy 50 Classic Solitaire Games for Your Mac - Download Now.md b/spaces/fatiXbelha/sd/Enjoy 50 Classic Solitaire Games for Your Mac - Download Now.md
deleted file mode 100644
index 5f0ce88aafaf934b8a0b1f790ff79a160a838277..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Enjoy 50 Classic Solitaire Games for Your Mac - Download Now.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
How to Download Solitaire for Mac
-
Solitaire is one of the most played video games of all time. It is a card game that can be enjoyed by anyone, regardless of age or skill level. Solitaire is also a great way to relax, have fun, and exercise your brain.
If you are a Mac user, you might be wondering how to download solitaire for your device. There are many options available, depending on your preferences and needs. In this article, we will show you how to download solitaire from the Mac App Store, as well as from other sources. We will also review some of the best solitaire games for Mac that you can try today.
-
Ready to play some solitaire on your Mac? Let's get started!
-
How to Download Solitaire from the Mac App Store
-
The easiest and safest way to download solitaire for your Mac is from the Mac App Store. The Mac App Store is a digital distribution platform that allows you to browse, buy, and download apps for your Mac. You can access the Mac App Store from your Dock, Launchpad, or Finder.
-
To download solitaire from the Mac App Store, follow these steps:
-
-
-
Open the Mac App Store on your device.
-
In the search box, type "solitaire" and hit enter.
-
You will see a list of solitaire apps that are compatible with your device. You can filter the results by category, price, rating, or popularity.
-
Choose the solitaire app that you want to download and click on its icon.
-
You will see a page with more information about the app, such as its description, screenshots, reviews, and ratings. You can also see if the app is free or paid, and if it offers in-app purchases.
-
If you want to download the app, click on the "Get" button if it is free, or the price button if it is paid. You might need to enter your Apple ID and password to confirm your purchase.
-
The app will start downloading and installing on your device. You can see the progress in the Launchpad or in the Dock.
-
Once the app is installed, you can open it from the Launchpad or the Dock and start playing solitaire on your Mac.
-
-
There are many solitaire apps for Mac that you can download from the Mac App Store. Here are some of the best ones that we recommend:
-
Solitaire! (Klondike)
-
Solitaire! (Klondike) is a free version of the classic Klondike game, which most people just call "solitaire". It features options for one- or three-card draws from the stock, unlimited recycle of the stock, smart-dragging, one-click moves, autoplay, custom card backs and backgrounds, undo/redo, statistics, and game save/restore. It is a simple and elegant solitaire game that you can enjoy on your Mac.
- Full Deck Solitaire
-
Full Deck Solitaire is a free solitaire app that offers 22 different solitaire games, such as Klondike, Spider, FreeCell, Pyramid, Tri Peaks, Golf, and more. It has beautiful graphics, animations, sound effects, and music. It also has features like hints, undo/redo, auto-complete, statistics, leaderboards, and achievements. You can customize the card backs, backgrounds, and card faces. You can also choose from different difficulty levels and game modes. Full Deck Solitaire is a fun and challenging solitaire app that will keep you entertained for hours.
-
Microsoft Solitaire Collection
-
Microsoft Solitaire Collection is a free solitaire app that brings the classic Windows solitaire games to your Mac. It includes five solitaire games: Klondike, Spider, FreeCell, Pyramid, and TriPeaks. It also has daily challenges, events, themes, achievements, and cloud sync. You can play online or offline, and adjust the settings and preferences to your liking. Microsoft Solitaire Collection is a nostalgic and addictive solitaire app that will make you feel like you are playing on a Windows PC.
-
How to Download Solitaire from Other Sources
-
If you don't want to download solitaire from the Mac App Store, you can also download it from other sources. However, you need to be careful when doing so, as some of these sources might contain malware or viruses that can harm your device. You should always check the reputation and reviews of the source before downloading anything from it. You should also scan the downloaded file with antivirus software before opening it.
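Beyond an antivirus scan, one extra precaution, assuming the download site publishes a checksum for its installer (not all of them do), is to verify that the file you received matches it. Here is a minimal sketch; the file name and expected value are placeholders:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 checksum of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

downloaded_file = "solitaire_installer.dmg"              # placeholder file name
published_checksum = "<sha-256 value from the website>"  # placeholder value

if sha256_of(downloaded_file) == published_checksum.lower():
    print("Checksum matches: the download is intact.")
else:
    print("Checksum mismatch: do not open this file.")
```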
-
Another option is to play solitaire online on your browser without downloading anything. There are many websites that offer solitaire games for free that you can access from your Mac. Here are some of the best ones that we recommend:
-
World of Solitaire
-
World of Solitaire is a website that offers over 100 solitaire games for free. You can play classic solitaire games like Klondike, Spider, FreeCell, Pyramid, Golf, and more. You can also play unique solitaire games like Scorpion, Yukon, Russian Solitaire, and more. You can customize the card backs, backgrounds, card faces, animations, sounds, and options. You can also track your statistics, scores, and time. World of Solitaire is a comprehensive and user-friendly website that will satisfy your solitaire cravings.
-
Solitr.com
-
Solitr.com is a website that offers two solitaire games for free: Klondike and Spider. You can choose from one- or three-card draws for Klondike, and one-, two-, or four-suit modes for Spider. You can also change the theme and the card size. The website has a simple and clean design that allows you to focus on the game. You can also undo/redo moves, see your score and time, and restart the game. Solitr.com is a fast and easy website that will let you play solitaire in seconds.
-
247 Solitaire
-
247 Solitaire is a website that offers 12 solitaire games for free. You can play popular solitaire games like Klondike, Spider, FreeCell, Pyramid, Golf, and more. You can also play less common solitaire games like Wasp, Scorpion, Yukon, and more. You can customize the card backs, backgrounds, and card faces. You can also see your statistics, scores, and time. 247 Solitaire is a colorful and fun website that will give you plenty of solitaire options.
-
Conclusion
-
Solitaire is a classic and enjoyable card game that you can play on your Mac. You can download solitaire from the Mac App Store or from other sources, depending on your preferences and needs. You can also play solitaire online on your browser without downloading anything. There are many solitaire games for Mac that you can choose from, such as Klondike, Spider, FreeCell, Pyramid, Golf, and more. Solitaire is a great way to relax, have fun, and exercise your brain.
-
So what are you waiting for? Download or play solitaire on your Mac today and see how much you love it!
-
FAQs
-
Is solitaire free for Mac?
-
Yes, there are many solitaire apps and websites that are free for Mac. However, some of them might have ads or offer in-app purchases for extra features or content. You can also find paid solitaire apps for Mac that might have more options and quality.
-
How do I uninstall solitaire from my Mac?
-
If you want to uninstall solitaire from your Mac, you can follow these steps:
-
-
Open the Finder on your device.
-
Go to the Applications folder and locate the solitaire app that you want to uninstall.
-
Drag the app icon to the Trash or right-click on it and choose Move to Trash.
-
Empty the Trash to complete the uninstallation.
-
-
How do I play solitaire offline on my Mac?
-
If you want to play solitaire offline on your Mac, you need to download a solitaire app that does not require an internet connection. You can find such apps on the Mac App Store or from other sources. Once you download the app, you can open it and play solitaire offline on your Mac.
-
How do I change the settings and preferences of solitaire on my Mac?
-
If you want to change the settings and preferences of solitaire on your Mac, you need to open the solitaire app that you are using and look for the settings or options menu. There you can change things like the card backs, backgrounds, card faces, sounds, animations, difficulty levels, game modes, hints, undo/redo, auto-complete, statistics, and more.
-
How do I improve my solitaire skills on my Mac?
-
If you want to improve your solitaire skills on your Mac, you need to practice regularly and learn from your mistakes. You can also try different solitaire games and modes to challenge yourself and learn new strategies. You can also read tips and tricks online or watch tutorials and videos from other players.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/docs/README.md.German.md b/spaces/fb700/chatglm-fitness-RLHF/docs/README.md.German.md
deleted file mode 100644
index 0fe200cf690b6c9ff699e2e19bb53fd3cd60c201..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/docs/README.md.German.md
+++ /dev/null
@@ -1,307 +0,0 @@
-> **Hinweis**
->
-> Bei der Installation von Abhängigkeiten sollten nur die in **requirements.txt** **angegebenen Versionen** streng ausgewählt werden.
->
-> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
-
-# GPT Akademisch optimiert (GPT Academic)
-
-**Wenn Ihnen dieses Projekt gefällt, geben Sie ihm bitte einen Stern; wenn Sie bessere Tastenkombinationen oder Funktions-Plugins entwickelt haben, können Sie gerne einen Pull Request eröffnen.**
-
-Wenn Sie dieses Projekt mögen, geben Sie ihm bitte einen Stern. Wenn Sie weitere nützliche wissenschaftliche Abkürzungen oder funktionale Plugins entwickelt haben, können Sie gerne ein Problem oder eine Pull-Anforderung öffnen. Wir haben auch ein README in [Englisch|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md), das von diesem Projekt selbst übersetzt wurde.
-Um dieses Projekt in eine beliebige Sprache mit GPT zu übersetzen, lesen Sie `multi_language.py` (experimentell).
-
-> **Hinweis**
->
-> 1. Beachten Sie bitte, dass nur Funktionserweiterungen (Schaltflächen) mit **roter Farbe** Dateien lesen können und einige Erweiterungen im **Dropdown-Menü** des Erweiterungsbereichs zu finden sind. Außerdem begrüßen wir jede neue Funktionserweiterung mit **höchster Priorität** und bearbeiten sie.
->
-> 2. Die Funktionalität jeder Datei in diesem Projekt wird in der Selbstanalyse [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) detailliert beschrieben. Mit der Weiterentwicklung der Versionen können Sie jederzeit die zugehörigen Funktions-Erweiterungen aufrufen, um durch Aufruf von GPT einen Selbstanalysebericht des Projekts zu erstellen. Häufig gestellte Fragen finden Sie in der [`Wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installationsanweisungen](#Installation).
->
-> 3. Dieses Projekt ist kompatibel und fördert die Verwendung von inländischen Sprachmodellen wie ChatGLM und RWKV, Pangu, etc. Es unterstützt das Vorhandensein mehrerer api-keys, die in der Konfigurationsdatei wie folgt angegeben werden können: `API_KEY="openai-key1,openai-key2,api2d-key3"`. Wenn ein `API_KEY` temporär geändert werden muss, geben Sie den temporären `API_KEY` im Eingabebereich ein und drücken Sie dann die Eingabetaste, um ihn zu übernehmen.
-
-Funktion | Beschreibung
---- | ---
-Ein-Klick-Polieren | Unterstützt ein-Klick-Polieren und ein-Klick-Suche nach grammatikalischen Fehlern in wissenschaftlichen Arbeiten
-Ein-Klick Chinesisch-Englisch Übersetzung | Ein-Klick Chinesisch-Englisch Übersetzung
-Ein-Klick-Code-Erklärung | Zeigt Code, erklärt Code, erzeugt Code und fügt Kommentare zum Code hinzu
-[Benutzerdefinierte Tastenkombinationen](https://www.bilibili.com/video/BV14s4y1E7jN) | Unterstützt benutzerdefinierte Tastenkombinationen
-Modulare Gestaltung | Unterstützt leistungsstarke individuelle [Funktions-Plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions). Plugins unterstützen [Hot-Updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Selbstprogramm-Analyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] [Ein-Klick Verstehen](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) der Quellcode dieses Projekts
-[Programmanalyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] Ein-Klick-Analyse des Projektbaums anderer Python/C/C++/Java/Lua/...-Projekte
-Lesen von Papieren, [Übersetzen](https://www.bilibili.com/video/BV1KT411x7Wn) von Papieren | [Funktions-Plugin] Ein-Klick Erklärung des gesamten LaTeX/PDF-Artikels und Erstellung einer Zusammenfassung
-LaTeX-Volltext-Übersetzung und [Polieren](https://www.bilibili.com/video/BV1FT411H7c5/) | [Funktions-Plugin] Ein-Klick-Übersetzung oder-Polieren des LaTeX-Artikels
-Bulk-Kommentargenerierung | [Funktions-Plugin] Ein-Klick Massenerstellung von Funktionskommentaren
-Markdown [Chinesisch-Englisch Übersetzung](https://www.bilibili.com/video/BV1yo4y157jV/) | [Funktions-Plugin] Haben Sie die [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in den oben genannten 5 Sprachen gesehen?
-Analyse-Berichtserstellung von chat | [Funktions-Plugin] Automatische Zusammenfassung nach der Ausführung
-[Funktion zur vollständigen Übersetzung von PDF-Artikeln](https://www.bilibili.com/video/BV1KT411x7Wn) | [Funktions-Plugin] Extrahiert Titel und Zusammenfassung der PDF-Artikel und übersetzt den gesamten Text (mehrere Threads)
-[Arxiv-Assistent](https://www.bilibili.com/video/BV1LM4y1279X) | [Funktions-Plugin] Geben Sie die Arxiv-Artikel-URL ein und klicken Sie auf Eine-Klick-Übersetzung-Zusammenfassung + PDF-Download
-[Google Scholar Integrations-Assistent](https://www.bilibili.com/video/BV19L411U7ia) | [Funktions-Plugin] Geben Sie eine beliebige Google Scholar Such-URL ein und lassen Sie gpt Ihnen bei der Erstellung von [relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) helfen
-Internet-Informationen Aggregation + GPT | [Funktions-Plugin] Lassen Sie GPT eine Frage beantworten, indem es [zuerst Informationen aus dem Internet](https://www.bilibili.com/video/BV1om4y127ck/) sammelt und so die Informationen nie veralten
-Anzeige von Formeln / Bildern / Tabellen | Zeigt Formeln in beiden Formen, [TeX-Format und gerendeter Form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), unterstützt Formeln und Code-Highlights
-Unterstützung von PlugIns mit mehreren Threads | Unterstützt den Aufruf mehrerer Threads in Chatgpt, um Text oder Programme [Batch zu verarbeiten](https://www.bilibili.com/video/BV1FT411H7c5/)
-Starten Sie das dunkle Gradio-[Thema](https://github.com/binary-husky/chatgpt_academic/issues/173) | Fügen Sie ```/?__theme=dark``` an das Ende der Browser-URL an, um das dunkle Thema zu aktivieren
-[Unterstützung für mehrere LLM-Modelle](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) Interface-Unterstützung | Das Gefühl, gleichzeitig von GPT3.5, GPT4, [Tshinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) bedient zu werden, muss toll sein, oder?
-Zugriff auf weitere LLM-Modelle, Unterstützung von [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Hinzufügen der Newbing-Schnittstelle (neues Bing), Einführung der Unterstützung von [Jittorllms](https://github.com/Jittor/JittorLLMs) der Tsinghua-Universität, [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) und [Pangu alpha](https://openi.org.cn/pangu/)
-Weitere neue Funktionen (wie Bildgenerierung) …… | Siehe Ende dieses Dokuments ……
-
-- Neue Oberfläche (Ändern Sie die LAYOUT-Option in `config.py`, um zwischen "Seitenlayout" und "Oben-unten-Layout" zu wechseln)
-
-
-
- All buttons are dynamically generated by reading `functional.py`, and custom functions can be easily added, freeing up the clipboard.
-
-
-
-
-- Proofreading/Correcting
-
-
-
-
-- If the output contains formulas, they will be displayed in both tex format and rendered format for easy copying and reading.
-
-
-
-
-- Don't feel like reading the project code? Show off the entire project to chatgpt.
-
-
-
-
-- Multiple large language models are mixed and called together (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
-
-
-
-
----
-# Installation
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configure API_KEY
-
-Configure API KEY and other settings in `config.py`. [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program is running, it will first check whether there is a "config_private.py" private configuration file, and use the configuration defined in it to override the configuration of "config.py". Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named "config_private.py" next to "config.py" and transfer (copy) the configurations in "config.py" to "config_private.py". "config_private.py" is not controlled by git, which can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` >`config.py`)
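-
-For orientation, a `config_private.py` can be as small as a handful of overridden options. The sketch below is only an example: the authoritative option names and comments live in `config.py`, and every value here is a placeholder.
-
-```python
-# config_private.py: anything defined here overrides the option with the same name in config.py.
-API_KEY = "sk-xxxxxxxxxxxxxxxx"    # placeholder; several keys can be combined: "openai-key1,openai-key2,api2d-key3"
-USE_PROXY = True                   # only needed if you reach the API through a proxy
-proxies = {"http": "socks5h://localhost:11284", "https": "socks5h://localhost:11284"}
-WEB_PORT = 50923                   # placeholder port for the web UI
-```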
-
-
-3. Install dependencies
-```sh
-# (Option I: If familiar with Python) (Python version 3.9 or above, the newer the better). Note: use the official pip source or the Ali pip source; temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: If not familiar with Python) Use anaconda with similar steps (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # Create an anaconda environment
-conda activate gptac_venv # Activate the anaconda environment
-python -m pip install -r requirements.txt # Same step as pip installation
-```
-
-Click to expand if supporting Tsinghua ChatGLM/Fudan MOSS as backend
-
-
-[Optional Step] If supporting Tsinghua ChatGLM/Fudan MOSS as backend, additional dependencies need to be installed (Prerequisites: Familiar with Python + Used Pytorch + Sufficient computer configuration):
-```sh
-# [Optional Step I] Support Tsinghua ChatGLM. Remark: If encountering "Call ChatGLM fail Cannot load ChatGLM parameters", please refer to the following: 1: The above default installation is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient machine configuration, you can modify the model precision in `request_llm/bridge_chatglm.py`, and modify all AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# [Optional Step II] Support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the project root path
-
-# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently supported models are as follows (jittorllms series currently only supports docker solutions):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Run
-```sh
-python main.py
-```
-
-5. Testing Function Plugin
-```
-- Test function plugin template function (requires gpt to answer what happened today in history), you can use this function as a template to implement more complex functions
- Click "[Function Plugin Template Demo] Today in History"
-```
-
-## Installation-Method 2: Using Docker
-
-1. Only ChatGPT (Recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/chatgpt_academic.git # Download the project
-cd chatgpt_academic # Enter the path
-nano config.py # Edit config.py with any text editor; configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
-docker build -t gpt-academic . # Install
-
-# (Last step-option 1) Under Linux environment, use `--net=host` is more convenient and quick
-docker run --rm -it --net=host gpt-academic
-# (Last step-option 2) Under macOS/windows environment, can only use the -p option to expose the container's port(eg.50923) to the port on the host.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (Requires familiarity with Docker)
-
-``` sh
-# Modify docker-compose.yml, delete solution 1 and solution 3, and retain solution 2. Modify the configuration of solution 2 in docker-compose.yml, referring to the comments in it.
-docker-compose up
-```
-
-3. ChatGPT+LLAMA+Pangu+RWKV(Requires familiarity with Docker)
-``` sh
-# Modify docker-compose.yml, delete solution 1 and solution 2, and retain solution 3. Modify the configuration of solution 3 in docker-compose.yml, referring to the comments in it.
-docker-compose up
-```
-
-
-## Installation-Method 3: Other Deployment Options
-
-1. How to use reverse proxy URL/Microsoft Azure API
-Configure API_URL_REDIRECT according to the instructions in `config.py`.
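-
-For illustration only, the redirect can be thought of as a mapping from the official endpoint to your own URL. Check the comments in `config.py` for the authoritative format; the right-hand URL below is a placeholder.
-
-```python
-# Sketch of an API_URL_REDIRECT entry; the right-hand URL is a placeholder.
-API_URL_REDIRECT = {
-    "https://api.openai.com/v1/chat/completions": "https://your-reverse-proxy.example.com/v1/chat/completions"
-}
-```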
-
-2. Remote cloud server deployment (requires cloud server knowledge and experience)
-Please visit [Deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL 2 (Windows subsystem for Linux)
-Please visit [Deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run at a secondary URL (such as `http://localhost/subpath`)
-Please visit [FastAPI operating instructions](docs/WithFastapi.md)
-
-5. Use docker-compose to run
-Please read docker-compose.yml and follow the prompts to operate.
-
----
-# Advanced Usage
-## Customize new convenience buttons / custom function plugins.
-
-1. Customize new convenience buttons (Academic Shortcut Keys)
-Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, then the prefix and suffix can be hot-modified, and it will take effect without restarting the program.)
-For example
-```
-"Super English to Chinese": {
- # Prefix, will be added before your input. For example, used to describe your requirements, such as translation, explaining code, polishing, etc.
- "Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain the proper nouns that appear in the text one by one:\n\n",
-
- # Suffix, will be added after your input. For example, combined with prefix, you can enclose your input content in quotes.
- "Suffix": "",
-},
-```
-
-
-
-
-2. Custom function plugins
-
-Write powerful function plugins to perform any task you want and can't think of.
-The difficulty of plugin writing and debugging is very low in this project. As long as you have a certain knowledge of Python, you can implement your own plugin functions by imitating the template we provided.
-For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-
----
-# Latest Update
-## New feature dynamics
-
-1. Funktion zur Speicherung von Dialogen. Rufen Sie im Bereich der Funktions-Plugins "Aktuellen Dialog speichern" auf, um den aktuellen Dialog als lesbare und wiederherstellbare HTML-Datei zu speichern. Darüber hinaus können Sie im Funktions-Plugin-Bereich (Dropdown-Menü) "Laden von Dialogverlauf" aufrufen, um den vorherigen Dialog wiederherzustellen. Tipp: Wenn Sie keine Datei angeben und stattdessen direkt auf "Laden des Dialogverlaufs" klicken, können Sie das HTML-Cache-Archiv anzeigen. Durch Klicken auf "Löschen aller lokalen Dialogverlaufsdatensätze" können alle HTML-Archiv-Caches gelöscht werden.
-
-
-
-
-2. Report generation. Most plugins generate a work report after they finish running.
-
-
-
-
-
-
-3. Modular function design: simple interfaces that support powerful functions.
-
-
-
-
-
-4. This is an open-source project that can "translate itself".
-
-
-
-
-5. Translating other open-source projects is no problem either.
-
-
-
-
-
-
-
-
-6. A small feature that decorates the UI with [`live2d`](https://github.com/fghrsh/live2d_demo) (disabled by default; requires changes to `config.py`).
-
-
-
-
-7. New MOSS language model support.
-
-
-
-
-8. OpenAI image generation.
-
-
-
-
-9. OpenAI audio parsing and summarization.
-
-
-
-
-10. LaTeX proofreading of the full text.
-
-
-
-
-
-## Version:
-- Version 3.5 (Todo): Call every function plugin of this project using natural language (high priority).
-- Version 3.4 (Todo): Improved multi-threading support for locally deployed large language models (LLMs).
-- Version 3.3: Added an internet information synthesis function.
-- Version 3.2: Function plugins support more parameter interfaces (dialog saving, interpreting code in any language, querying arbitrary combinations of LLMs at the same time).
-- Version 3.1: Support for querying multiple GPT models simultaneously! Support for API2D and for load balancing across multiple API keys.
-- Version 3.0: Support for ChatGLM and other small LLMs.
-- Version 2.6: Restructured the plugin architecture to improve interactivity; added more plugins.
-- Version 2.5: Self-updating; fixed the problem of overly long text and token overflow when summarizing the source code of large projects.
-- Version 2.4: (1) Added full-text PDF translation; (2) added switching the position of the input area; (3) added a vertical layout option; (4) optimized multi-threaded function plugins.
-- Version 2.3: Improved multi-threaded interactivity.
-- Version 2.2: Function plugins support hot reloading.
-- Version 2.1: Collapsible layout.
-- Version 2.0: Introduced modular function plugins.
-- Version 1.0: Basic functions.
-
-gpt_academic developer QQ group 2: 610599535
-
-- Known issues
-  - Some browser translation plugins can interfere with the front-end of this software.
-  - A Gradio version that is either too new or too old will cause various exceptions.
-
-## Reference and learning
-
-```
-The code references the designs of many other excellent projects, in particular:
-
-# Project 1: Tsinghua University's ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua University's JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/spaces/fclong/summary/fengshen/models/auto/tokenization_auto.py b/spaces/fclong/summary/fengshen/models/auto/tokenization_auto.py
deleted file mode 100644
index 6555191bef55336708cabc5e9b17c0322318a417..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/models/auto/tokenization_auto.py
+++ /dev/null
@@ -1,449 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Auto Tokenizer class."""
-
-import importlib
-import json
-import os
-from collections import OrderedDict
-from pathlib import Path
-from typing import TYPE_CHECKING, Dict, Optional, Tuple, Union
-
-from transformers.configuration_utils import PretrainedConfig
-from transformers.file_utils import (
- cached_path,
- get_list_of_files,
- hf_bucket_url,
- is_offline_mode,
- is_sentencepiece_available,
- is_tokenizers_available,
-)
-from transformers.tokenization_utils import PreTrainedTokenizer
-from transformers.tokenization_utils_base import TOKENIZER_CONFIG_FILE
-from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
-from transformers.utils import logging
-# from ..encoder_decoder import EncoderDecoderConfig
-from .auto_factory import _LazyAutoMapping
-from .configuration_auto import (
- CONFIG_MAPPING_NAMES,
- AutoConfig,
- config_class_to_model_type,
- model_type_to_module_name,
- replace_list_option_in_docstrings,
-)
-from .dynamic import get_class_from_dynamic_module
-
-
-logger = logging.get_logger(__name__)
-
-if TYPE_CHECKING:
- # This significantly improves completion suggestion performance when
- # the transformers package is used with Microsoft's Pylance language server.
- TOKENIZER_MAPPING_NAMES: OrderedDict[str,
- Tuple[Optional[str], Optional[str]]] = OrderedDict()
-else:
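-    # Maps each model type name to a tuple of (slow tokenizer class name, fast tokenizer class name).
-    # Entries registered here are looked up lazily through TOKENIZER_MAPPING / tokenizer_class_from_name below.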
- TOKENIZER_MAPPING_NAMES = OrderedDict(
- [
- ("roformer", ("RoFormerTokenizer", None)),
- ("longformer", ("LongformerTokenizer", None)),
- ]
- )
-
-TOKENIZER_MAPPING = _LazyAutoMapping(
- CONFIG_MAPPING_NAMES, TOKENIZER_MAPPING_NAMES)
-
-CONFIG_TO_TYPE = {v: k for k, v in CONFIG_MAPPING_NAMES.items()}
-
-
-def tokenizer_class_from_name(class_name: str):
- if class_name == "PreTrainedTokenizerFast":
- return PreTrainedTokenizerFast
-
- for module_name, tokenizers in TOKENIZER_MAPPING_NAMES.items():
- if class_name in tokenizers:
- module_name = model_type_to_module_name(module_name)
-
- module = importlib.import_module(
- f".{module_name}", "transformers.models")
- return getattr(module, class_name)
-
- for config, tokenizers in TOKENIZER_MAPPING._extra_content.items():
- for tokenizer in tokenizers:
- if getattr(tokenizer, "__name__", None) == class_name:
- return tokenizer
-
- return None
-
-
-def get_tokenizer_config(
- pretrained_model_name_or_path: Union[str, os.PathLike],
- cache_dir: Optional[Union[str, os.PathLike]] = None,
- force_download: bool = False,
- resume_download: bool = False,
- proxies: Optional[Dict[str, str]] = None,
- use_auth_token: Optional[Union[bool, str]] = None,
- revision: Optional[str] = None,
- local_files_only: bool = False,
- **kwargs,
-):
- """
- Loads the tokenizer configuration from a pretrained model tokenizer configuration.
-
- Args:
- pretrained_model_name_or_path (`str` or `os.PathLike`):
- This can be either:
-
- - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
- huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced
- under a user or organization name, like `dbmdz/bert-base-german-cased`.
- - a path to a *directory* containing a configuration file saved using the
- [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`.
-
- cache_dir (`str` or `os.PathLike`, *optional*):
- Path to a directory in which a downloaded pretrained model configuration should be cached if the standard
- cache should not be used.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether or not to force to (re-)download the configuration files and override the cached versions if they
- exist.
- resume_download (`bool`, *optional*, defaults to `False`):
-            Whether or not to delete incompletely received files. Attempts to resume the download if such a file exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
- use_auth_token (`str` or *bool*, *optional*):
- The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `transformers-cli login` (stored in `~/.huggingface`).
- revision(`str`, *optional*, defaults to `"main"`):
- The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
- git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
- identifier allowed by git.
- local_files_only (`bool`, *optional*, defaults to `False`):
- If `True`, will only try to load the tokenizer configuration from local files.
-
-
-
- Passing `use_auth_token=True` is required when you want to use a private model.
-
-
-
- Returns:
- `Dict`: The configuration of the tokenizer.
-
- Examples:
-
- ```python
- # Download configuration from huggingface.co and cache.
- tokenizer_config = get_tokenizer_config("bert-base-uncased")
- # This model does not have a tokenizer config so the result will be an empty dict.
- tokenizer_config = get_tokenizer_config("xlm-roberta-base")
-
- # Save a pretrained tokenizer locally and you can reload its config
- from transformers import AutoTokenizer
-
- tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
- tokenizer.save_pretrained("tokenizer-test")
- tokenizer_config = get_tokenizer_config("tokenizer-test")
- ```"""
- if is_offline_mode() and not local_files_only:
- logger.info("Offline mode: forcing local_files_only=True")
- local_files_only = True
-
- # Will raise a ValueError if `pretrained_model_name_or_path` is not a valid path or model identifier
- repo_files = get_list_of_files(
- pretrained_model_name_or_path,
- revision=revision,
- use_auth_token=use_auth_token,
- local_files_only=local_files_only,
- )
- if TOKENIZER_CONFIG_FILE not in [Path(f).name for f in repo_files]:
- return {}
-
- pretrained_model_name_or_path = str(pretrained_model_name_or_path)
- if os.path.isdir(pretrained_model_name_or_path):
- config_file = os.path.join(
- pretrained_model_name_or_path, TOKENIZER_CONFIG_FILE)
- else:
- config_file = hf_bucket_url(
- pretrained_model_name_or_path, filename=TOKENIZER_CONFIG_FILE, revision=revision, mirror=None
- )
-
- try:
- # Load from URL or cache if already cached
- resolved_config_file = cached_path(
- config_file,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- )
-
- except EnvironmentError:
- logger.info(
- "Could not locate the tokenizer configuration file, will try to use the model config instead.")
- return {}
-
- with open(resolved_config_file, encoding="utf-8") as reader:
- return json.load(reader)
-
-
-class AutoTokenizer:
- r"""
- This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when
- created with the [`AutoTokenizer.from_pretrained`] class method.
-
- This class cannot be instantiated directly using `__init__()` (throws an error).
- """
-
- def __init__(self):
- raise EnvironmentError(
- "AutoTokenizer is designed to be instantiated "
- "using the `AutoTokenizer.from_pretrained(pretrained_model_name_or_path)` method."
- )
-
- @classmethod
- @replace_list_option_in_docstrings(TOKENIZER_MAPPING_NAMES)
- def from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs):
- r"""
- Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.
-
- The tokenizer class to instantiate is selected based on the `model_type` property of the config object (either
- passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
- falling back to using pattern matching on `pretrained_model_name_or_path`:
-
- List options
-
- Params:
- pretrained_model_name_or_path (`str` or `os.PathLike`):
- Can be either:
-
- - A string, the *model id* of a predefined tokenizer hosted inside a model repo on huggingface.co.
- Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a
- user or organization name, like `dbmdz/bert-base-german-cased`.
- - A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved
- using the [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`.
- - A path or url to a single saved vocabulary file if and only if the tokenizer only requires a
- single vocabulary file (like Bert or XLNet), e.g.: `./my_model_directory/vocab.txt`. (Not
- applicable to all derived classes)
- inputs (additional positional arguments, *optional*):
- Will be passed along to the Tokenizer `__init__()` method.
- config ([`PretrainedConfig`], *optional*)
-                The configuration object used to determine the tokenizer class to instantiate.
- cache_dir (`str` or `os.PathLike`, *optional*):
- Path to a directory in which a downloaded pretrained model configuration should be cached if the
- standard cache should not be used.
- force_download (`bool`, *optional*, defaults to `False`):
-                Whether or not to force the (re-)download of the model weights and configuration files, overriding the
-                cached versions if they exist.
- resume_download (`bool`, *optional*, defaults to `False`):
- Whether or not to delete incompletely received files. Will attempt to resume the download if such a
- file exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- revision(`str`, *optional*, defaults to `"main"`):
- The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
- git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
- identifier allowed by git.
- subfolder (`str`, *optional*):
- In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for
- facebook/rag-token-base), specify it here.
- use_fast (`bool`, *optional*, defaults to `True`):
- Whether or not to try to load the fast version of the tokenizer.
- tokenizer_type (`str`, *optional*):
- Tokenizer type to be loaded.
- trust_remote_code (`bool`, *optional*, defaults to `False`):
- Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
- should only be set to `True` for repositories you trust and in which you have read the code, as it will
- execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, *optional*):
- Will be passed to the Tokenizer `__init__()` method. Can be used to set special tokens like
- `bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`,
- `additional_special_tokens`. See parameters in the `__init__()` for more details.
-
- Examples:
-
- ```python
- >>> from transformers import AutoTokenizer
-
- >>> # Download vocabulary from huggingface.co and cache.
- >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
-
- >>> # Download vocabulary from huggingface.co (user-uploaded) and cache.
- >>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
-
- >>> # If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*)
- >>> tokenizer = AutoTokenizer.from_pretrained("./test/bert_saved_model/")
- ```"""
- config = kwargs.pop("config", None)
- kwargs["_from_auto"] = True
-
- use_fast = kwargs.pop("use_fast", True)
- tokenizer_type = kwargs.pop("tokenizer_type", None)
- trust_remote_code = kwargs.pop("trust_remote_code", False)
-
- # First, let's see whether the tokenizer_type is passed so that we can leverage it
- if tokenizer_type is not None:
- tokenizer_class = None
- tokenizer_class_tuple = TOKENIZER_MAPPING_NAMES.get(
- tokenizer_type, None)
-
- if tokenizer_class_tuple is None:
- raise ValueError(
- f"Passed `tokenizer_type` {tokenizer_type} does not exist. `tokenizer_type` should be one of "
- f"{', '.join(c for c in TOKENIZER_MAPPING_NAMES.keys())}."
- )
-
- tokenizer_class_name, tokenizer_fast_class_name = tokenizer_class_tuple
-
- if use_fast and tokenizer_fast_class_name is not None:
- tokenizer_class = tokenizer_class_from_name(
- tokenizer_fast_class_name)
-
- if tokenizer_class is None:
- tokenizer_class = tokenizer_class_from_name(
- tokenizer_class_name)
-
- if tokenizer_class is None:
- raise ValueError(
- f"Tokenizer class {tokenizer_class_name} is not currently imported.")
-
- return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
-
- # Next, let's try to use the tokenizer_config file to get the tokenizer class.
- tokenizer_config = get_tokenizer_config(
- pretrained_model_name_or_path, **kwargs)
-
- config_tokenizer_class = tokenizer_config.get("tokenizer_class")
- tokenizer_auto_map = tokenizer_config.get("auto_map")
-
- # If that did not work, let's try to use the config.
- if config_tokenizer_class is None:
- if not isinstance(config, PretrainedConfig):
- config = AutoConfig.from_pretrained(
- pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs
- )
- config_tokenizer_class = config.tokenizer_class
- if hasattr(config, "auto_map") and "AutoTokenizer" in config.auto_map:
- tokenizer_auto_map = config.auto_map["AutoTokenizer"]
-
- # If we have the tokenizer class from the tokenizer config or the model config we're good!
- if config_tokenizer_class is not None:
- tokenizer_class = None
- if tokenizer_auto_map is not None:
- if not trust_remote_code:
- raise ValueError(
- f"Loading {pretrained_model_name_or_path} requires you to execute the tokenizer file in that repo "
- "on your local machine. Make sure you have read the code there to avoid malicious use, then set "
- "the option `trust_remote_code=True` to remove this error."
- )
- if kwargs.get("revision", None) is None:
- logger.warn(
- "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure "
- "no malicious code has been contributed in a newer revision."
- )
-
- if use_fast and tokenizer_auto_map[1] is not None:
- class_ref = tokenizer_auto_map[1]
- else:
- class_ref = tokenizer_auto_map[0]
-
- module_file, class_name = class_ref.split(".")
- tokenizer_class = get_class_from_dynamic_module(
- pretrained_model_name_or_path, module_file + ".py", class_name, **kwargs
- )
-
- elif use_fast and not config_tokenizer_class.endswith("Fast"):
- tokenizer_class_candidate = f"{config_tokenizer_class}Fast"
- tokenizer_class = tokenizer_class_from_name(
- tokenizer_class_candidate)
- if tokenizer_class is None:
- tokenizer_class_candidate = config_tokenizer_class
- tokenizer_class = tokenizer_class_from_name(
- tokenizer_class_candidate)
-
- if tokenizer_class is None:
- raise ValueError(
- f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
- )
- return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
-
- model_type = config_class_to_model_type(type(config).__name__)
- if model_type is not None:
- tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(
- config)]
- if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):
- return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
- else:
- if tokenizer_class_py is not None:
- return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
- else:
- raise ValueError(
- "This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed "
- "in order to use this tokenizer."
- )
-
- raise ValueError(
- f"Unrecognized configuration class {config.__class__} to build an AutoTokenizer.\n"
- f"Model type should be one of {', '.join(c.__name__ for c in TOKENIZER_MAPPING.keys())}."
- )
-
- def register(config_class, slow_tokenizer_class=None, fast_tokenizer_class=None):
- """
- Register a new tokenizer in this mapping.
-
-
- Args:
- config_class ([`PretrainedConfig`]):
- The configuration corresponding to the model to register.
-            slow_tokenizer_class ([`PreTrainedTokenizer`], *optional*):
-                The slow tokenizer to register.
-            fast_tokenizer_class ([`PreTrainedTokenizerFast`], *optional*):
-                The fast tokenizer to register.
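-
-        Example (illustrative; `MyConfig`, `MyTokenizer` and `MyTokenizerFast` stand in for user-defined classes):
-
-        ```python
-        AutoTokenizer.register(MyConfig, slow_tokenizer_class=MyTokenizer, fast_tokenizer_class=MyTokenizerFast)
-        ```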
- """
- if slow_tokenizer_class is None and fast_tokenizer_class is None:
- raise ValueError(
-                "You need to pass either a `slow_tokenizer_class` or a `fast_tokenizer_class`")
- if slow_tokenizer_class is not None and issubclass(slow_tokenizer_class, PreTrainedTokenizerFast):
- raise ValueError(
- "You passed a fast tokenizer in the `slow_tokenizer_class`.")
- if fast_tokenizer_class is not None and issubclass(fast_tokenizer_class, PreTrainedTokenizer):
- raise ValueError(
- "You passed a slow tokenizer in the `fast_tokenizer_class`.")
-
- if (
- slow_tokenizer_class is not None
- and fast_tokenizer_class is not None
- and issubclass(fast_tokenizer_class, PreTrainedTokenizerFast)
- and fast_tokenizer_class.slow_tokenizer_class != slow_tokenizer_class
- ):
- raise ValueError(
- "The fast tokenizer class you are passing has a `slow_tokenizer_class` attribute that is not "
- "consistent with the slow tokenizer class you passed (fast tokenizer has "
- f"{fast_tokenizer_class.slow_tokenizer_class} and you passed {slow_tokenizer_class}. Fix one of those "
- "so they match!"
- )
-
- # Avoid resetting a set slow/fast tokenizer if we are passing just the other ones.
- if config_class in TOKENIZER_MAPPING._extra_content:
- existing_slow, existing_fast = TOKENIZER_MAPPING[config_class]
- if slow_tokenizer_class is None:
- slow_tokenizer_class = existing_slow
- if fast_tokenizer_class is None:
- fast_tokenizer_class = existing_fast
-
- TOKENIZER_MAPPING.register(
- config_class, (slow_tokenizer_class, fast_tokenizer_class))
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Ultimate Mortal Kombat X APK for Android - The Ultimate Fighting Challenge Awaits You.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Ultimate Mortal Kombat X APK for Android - The Ultimate Fighting Challenge Awaits You.md
deleted file mode 100644
index 303bade983922751a6da9cd02733a413edc9f537..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Ultimate Mortal Kombat X APK for Android - The Ultimate Fighting Challenge Awaits You.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
Ultimate Mortal Kombat X APK Download: Everything You Need to Know
-
If you are a fan of fighting games, you have probably heard of Mortal Kombat X, one of the most popular and brutal titles in the genre. But did you know that you can download and play the game on your Android device for free? In this article, we will tell you everything you need to know about the ultimate Mortal Kombat X APK download, including what it is, why you should get it, how to get it, and how to play it. Read on and get ready to unleash your inner fighter!
-
What is Mortal Kombat X?
-
Mortal Kombat X is a fighting game developed by NetherRealm Studios and published by Warner Bros. Interactive Entertainment in 2015. It is the tenth main installment in the Mortal Kombat series, and a sequel to Mortal Kombat (2011). The game features a roster of 33 characters, including new ones like Cassie Cage, D'Vorah, Erron Black, and Ferra/Torr, as well as guest characters like Alien, Predator, Jason Voorhees, and Leatherface. Each character has three different variations that affect their abilities and fighting style.
The game has a rich and cinematic story mode that spans 25 years after the events of Mortal Kombat (2011), as well as various single-player and multiplayer modes such as Tower, Test Your Luck, Faction Wars, King of the Hill, and more. The game also boasts stunning graphics, smooth animations, realistic physics, and gore-filled fatalities that make every fight a spectacle.
-
Why download the APK version?
-
The official Mortal Kombat X app is available on Google Play Store for free, but it has some limitations and drawbacks. For one thing, it requires a lot of storage space (around 2 GB) and a stable internet connection to run properly. For another thing, it has a lot of in-app purchases and ads that can interrupt your gameplay and make it harder to progress. Moreover, some regions may not have access to the app due to licensing issues or censorship.
-
That's why downloading the APK version of Mortal Kombat X can be a better option for some players. The APK version is a modified version of the app that bypasses these limitations and drawbacks. By downloading the APK file from a trusted source, you can enjoy the following benefits:
-
-
You can play the game offline without any internet connection.
-
You can save storage space by choosing which files to download (such as languages, graphics quality, etc.).
-
You can unlock all the characters, skins, items, and features without spending any money.
-
You can remove all the ads and pop-ups that can annoy you.
-
You can access the game from any region without any restrictions.
-
-
How to download and install the APK file?
-
Downloading and installing the APK file of Mortal Kombat X is not difficult, but it requires some steps and precautions. Here is a step-by-step guide on how to do it:
-
A step-by-step guide on how to get the APK file from a reliable source
-
-
Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install apps that are not from Google Play Store.
-
Go to a reliable website that offers the APK file of Mortal Kombat X. You can search for "Mortal Kombat X APK" on Google or use one of these links: . Make sure to check the reviews and ratings of the website before downloading anything.
-
Download the APK file and the OBB file (which contains the game data) to your device. The files should be around 1 GB in total, depending on the version you choose.
-
Locate the downloaded files on your device using a file manager app. You can use any app you like, such as ES File Explorer, File Manager, or ZArchiver.
-
-
A step-by-step guide on how to install the APK file on your Android device
-
-
Tap on the APK file and follow the instructions to install it. You may need to grant some permissions to the app during the installation process.
-
Do not open the app yet. Instead, go to the OBB file and extract it using a file manager app. You should get a folder named "com.wb.goog.mkx" or something similar.
-
Move the extracted folder to the Android/OBB directory on your device. This is where the game data will be stored.
-
Now you can open the app and enjoy Mortal Kombat X on your Android device!
-
-
How to play Mortal Kombat X with the APK version?
-
Playing Mortal Kombat X with the APK version is not much different from playing it with the official app. The gameplay and controls are the same, except that you have access to more features and options. Here is a brief overview of how to play the game and some tips and tricks to help you improve your skills and win more matches.
-
A brief overview of the gameplay and controls
-
Mortal Kombat X is a 2D fighting game that pits two characters against each other in a best-of-three match. You can choose from 33 characters, each with three variations that affect their abilities and fighting style. You can also customize your character's appearance, equipment, and skills using coins and souls that you earn by playing the game.
-
The game has two main modes: story mode and battle mode. In story mode, you follow a cinematic narrative that spans 25 years after the events of Mortal Kombat (2011). You play as different characters in each chapter and face various enemies and bosses. In battle mode, you can play solo or online against other players in different modes such as Tower, Test Your Luck, Faction Wars, King of the Hill, and more.
-
-
The game uses a simple and intuitive control scheme that consists of four buttons: attack, block, special, and switch. You can tap, swipe, or hold these buttons to perform different moves and combos. You can also use special items such as x-rays, fatalities, brutalities, and faction kills to finish off your opponents in spectacular ways.
-
Tips and tricks to improve your skills and win more matches
-
Mortal Kombat X is a game that requires practice, strategy, and skill to master. Here are some tips and tricks that can help you become a better fighter:
-
-
Learn your character's moves and combos. Each character has a unique set of moves and combos that you can find in the move list menu. Practice them in training mode or offline matches until you memorize them and execute them flawlessly.
-
Choose your character's variation wisely. Each character has three variations that affect their abilities and fighting style. Some variations are better suited for certain situations or opponents than others. Experiment with different variations and find out which one works best for you.
-
Use your special meter wisely. Your special meter fills up as you deal or receive damage. You can use it to perform x-rays, breakers, or enhanced specials. X-rays are powerful attacks that deal massive damage and break through blocks. Breakers are defensive moves that interrupt your opponent's combo and push them back. Enhanced specials are upgraded versions of your normal specials that have additional effects or damage. Know when to use each of these options depending on your situation.
-
Know your opponent's moves and tendencies. The best way to counter your opponent is to know what they are capable of and what they are likely to do. Study their move list, watch their patterns, and anticipate their actions. Use blocks, dodges, counters, and punishes to avoid or exploit their weaknesses.
-
Use environmental interactions to your advantage. The game features various environmental objects that you can use to interact with during a fight. You can use them to escape, attack, or defend yourself depending on the object and your position. For example, you can throw barrels at your opponent, jump off walls to avoid attacks, or use weapons to deal extra damage. Be aware of your surroundings and use them creatively.
-
-
Conclusion
-
Mortal Kombat X is a thrilling and brutal fighting game that you can enjoy on your Android device for free. By downloading the APK version of the game, you can unlock all the features and options that the official app does not offer. You can also play the game offline without any internet connection or ads. However, you need to be careful and follow the steps and precautions we mentioned in this article to download and install the APK file safely and correctly. Once you do that, you can start playing the game and unleash your inner fighter!
-
Do you have any questions or comments about the ultimate Mortal Kombat X APK download? Let us know in the comment section below. And if you liked this article, please share it with your friends and fellow gamers. Thank you for reading!
-
FAQs
-
Here are some of the most frequently asked questions about the ultimate Mortal Kombat X APK download:
-
-
Is the APK version of Mortal Kombat X legal?
-
The APK version of Mortal Kombat X is not legal, as it violates the terms and conditions of the game's developers and publishers. However, it is unlikely that you will face any legal consequences for downloading and playing it, as long as you do not distribute or sell it to others.
-
Is the APK version of Mortal Kombat X safe?
-
The APK version of Mortal Kombat X is safe, as long as you download it from a reliable source and follow the steps and precautions we mentioned in this article. However, there is always a risk of malware or viruses when downloading any file from the internet, so make sure to scan your device regularly and use a good antivirus app.
-
Can I play Mortal Kombat X with the APK version on other devices?
-
The APK version of Mortal Kombat X is designed for Android devices only. You cannot play it on iOS, Windows, or any other platform. However, you can use an Android emulator on your PC or Mac to run the APK file and play the game on your computer.
-
Can I play Mortal Kombat X with the APK version with other players online?
-
The APK version of Mortal Kombat X allows you to play online with other players who have the same version of the game. However, you cannot play with players who have the official app or a different version of the game. You may also experience some lag or connection issues when playing online with the APK version.
-
Can I update Mortal Kombat X with the APK version?
-
The APK version of Mortal Kombat X does not support automatic updates. You will have to manually download and install a new version of the APK file whenever there is an update available. Make sure to back up your data before updating, as you may lose your progress or settings.
-
-
\ No newline at end of file
diff --git a/spaces/fffffu/bing/src/lib/bots/bing/utils.ts b/spaces/fffffu/bing/src/lib/bots/bing/utils.ts
deleted file mode 100644
index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000
--- a/spaces/fffffu/bing/src/lib/bots/bing/utils.ts
+++ /dev/null
@@ -1,87 +0,0 @@
-import { ChatResponseMessage, BingChatResponse } from './types'
-
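-// Returns the plain text for internal search-query messages; otherwise returns the first
-// TextBlock found in the message's adaptive cards, or an empty string when none is present.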
-export function convertMessageToMarkdown(message: ChatResponseMessage): string {
- if (message.messageType === 'InternalSearchQuery') {
- return message.text
- }
- for (const card of message.adaptiveCards??[]) {
- for (const block of card.body) {
- if (block.type === 'TextBlock') {
- return block.text
- }
- }
- }
- return ''
-}
-
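-// ASCII 0x1E (record separator): the chat websocket protocol delimits each JSON payload with this
-// character, so packMessage appends it and unpackMessage splits on it before parsing each chunk.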
-const RecordSeparator = String.fromCharCode(30)
-
-export const websocketUtils = {
- packMessage(data: any) {
- return `${JSON.stringify(data)}${RecordSeparator}`
- },
- unpackMessage(data: string | ArrayBuffer | Blob) {
- if (!data) return {}
- return data
- .toString()
- .split(RecordSeparator)
- .filter(Boolean)
- .map((s) => {
- try {
- return JSON.parse(s)
- } catch (e) {
- return {}
- }
- })
- },
-}
-
-export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise<string | undefined> {
- const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`,
- {
- method: 'HEAD',
- headers,
- redirect: 'manual'
- },
- );
-
- if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) {
- throw new Error('请求异常,请检查 cookie 是否有效')
- }
-
- const resultId = RegExp.$1;
- let count = 0
- const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`;
-
- do {
- await sleep(3000);
- const content = await fetch(imageThumbUrl, { headers, method: 'GET' })
-
- // @ts-ignore
- if (content.headers.get('content-length') > 1) {
- const text = await content.text()
-      return (text?.match(/<img class="mimg"([^>]+)>/g) ?? []).map(target => target?.split('src="').pop()?.replace(/&amp;/g, '&'))
-        .map(img => `<img src="${img}" />`).join(' ')
- }
- } while(count ++ < 10);
-}
-
-
-export async function* streamAsyncIterable(stream: ReadableStream) {
- const reader = stream.getReader()
- try {
- while (true) {
- const { done, value } = await reader.read()
- if (done) {
- return
- }
- yield value
- }
- } finally {
- reader.releaseLock()
- }
-}
-
-export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))
-
diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio.py
deleted file mode 100644
index 1829d7db4ef832ad65598b471caa7d256a06d012..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio.py
+++ /dev/null
@@ -1,213 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write).
-We rely on the av library for faster reads when possible, and fall back to torchaudio otherwise.
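-
-Illustrative usage (the file paths are placeholders):
-
-    wav, sr = audio_read("some_file.mp3", seek_time=1.0, duration=2.0)
-    audio_write("out/clip", wav, sr, format="wav", strategy="peak")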
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
-    # torchaudio no longer returns useful duration information for some formats like mp3s.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
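-        wav (torch.Tensor): Audio data to save.
-        sample_rate (int): Sample rate of the audio data.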
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
- sample_rate=sample_rate, stem_name=str(stem_name))
- kwargs: dict = {}
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/inspector.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/inspector.d.ts
deleted file mode 100644
index eba0b55d8bca0ef10cbf24922fb899b67c35f3a9..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/inspector.d.ts
+++ /dev/null
@@ -1,2741 +0,0 @@
-// eslint-disable-next-line dt-header
-// Type definitions for inspector
-
-// These definitions are auto-generated.
-// Please see https://github.com/DefinitelyTyped/DefinitelyTyped/pull/19330
-// for more information.
-
-// tslint:disable:max-line-length
-
-/**
- * The `inspector` module provides an API for interacting with the V8 inspector.
- *
- * It can be accessed using:
- *
- * ```js
- * const inspector = require('inspector');
- * ```
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/inspector.js)
- */
-declare module 'inspector' {
- import EventEmitter = require('node:events');
- interface InspectorNotification {
- method: string;
- params: T;
- }
- namespace Schema {
- /**
- * Description of the protocol domain.
- */
- interface Domain {
- /**
- * Domain name.
- */
- name: string;
- /**
- * Domain version.
- */
- version: string;
- }
- interface GetDomainsReturnType {
- /**
- * List of supported domains.
- */
- domains: Domain[];
- }
- }
- namespace Runtime {
- /**
- * Unique script identifier.
- */
- type ScriptId = string;
- /**
- * Unique object identifier.
- */
- type RemoteObjectId = string;
- /**
- * Primitive value which cannot be JSON-stringified.
- */
- type UnserializableValue = string;
- /**
- * Mirror object referencing original JavaScript object.
- */
- interface RemoteObject {
- /**
- * Object type.
- */
- type: string;
- /**
- * Object subtype hint. Specified for object type values only.
- */
- subtype?: string | undefined;
- /**
- * Object class (constructor) name. Specified for object type values only.
- */
- className?: string | undefined;
- /**
- * Remote object value in case of primitive values or JSON values (if it was requested).
- */
- value?: any;
- /**
- * Primitive value which can not be JSON-stringified does not have value, but gets this property.
- */
- unserializableValue?: UnserializableValue | undefined;
- /**
- * String representation of the object.
- */
- description?: string | undefined;
- /**
- * Unique object identifier (for non-primitive values).
- */
- objectId?: RemoteObjectId | undefined;
- /**
- * Preview containing abbreviated property values. Specified for object type values only.
- * @experimental
- */
- preview?: ObjectPreview | undefined;
- /**
- * @experimental
- */
- customPreview?: CustomPreview | undefined;
- }
- /**
- * @experimental
- */
- interface CustomPreview {
- header: string;
- hasBody: boolean;
- formatterObjectId: RemoteObjectId;
- bindRemoteObjectFunctionId: RemoteObjectId;
- configObjectId?: RemoteObjectId | undefined;
- }
- /**
- * Object containing abbreviated remote object value.
- * @experimental
- */
- interface ObjectPreview {
- /**
- * Object type.
- */
- type: string;
- /**
- * Object subtype hint. Specified for object type values only.
- */
- subtype?: string | undefined;
- /**
- * String representation of the object.
- */
- description?: string | undefined;
- /**
- * True iff some of the properties or entries of the original object did not fit.
- */
- overflow: boolean;
- /**
- * List of the properties.
- */
- properties: PropertyPreview[];
- /**
- * List of the entries. Specified for map and set subtype values only.
- */
- entries?: EntryPreview[] | undefined;
- }
- /**
- * @experimental
- */
- interface PropertyPreview {
- /**
- * Property name.
- */
- name: string;
- /**
- * Object type. Accessor means that the property itself is an accessor property.
- */
- type: string;
- /**
- * User-friendly property value string.
- */
- value?: string | undefined;
- /**
- * Nested value preview.
- */
- valuePreview?: ObjectPreview | undefined;
- /**
- * Object subtype hint. Specified for object type values only.
- */
- subtype?: string | undefined;
- }
- /**
- * @experimental
- */
- interface EntryPreview {
- /**
- * Preview of the key. Specified for map-like collection entries.
- */
- key?: ObjectPreview | undefined;
- /**
- * Preview of the value.
- */
- value: ObjectPreview;
- }
- /**
- * Object property descriptor.
- */
- interface PropertyDescriptor {
- /**
- * Property name or symbol description.
- */
- name: string;
- /**
- * The value associated with the property.
- */
- value?: RemoteObject | undefined;
- /**
- * True if the value associated with the property may be changed (data descriptors only).
- */
- writable?: boolean | undefined;
- /**
- * A function which serves as a getter for the property, or undefined if there is no getter (accessor descriptors only).
- */
- get?: RemoteObject | undefined;
- /**
- * A function which serves as a setter for the property, or undefined if there is no setter (accessor descriptors only).
- */
- set?: RemoteObject | undefined;
- /**
- * True if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object.
- */
- configurable: boolean;
- /**
- * True if this property shows up during enumeration of the properties on the corresponding object.
- */
- enumerable: boolean;
- /**
- * True if the result was thrown during the evaluation.
- */
- wasThrown?: boolean | undefined;
- /**
- * True if the property is owned for the object.
- */
- isOwn?: boolean | undefined;
- /**
- * Property symbol object, if the property is of the symbol type.
- */
- symbol?: RemoteObject | undefined;
- }
- /**
- * Object internal property descriptor. This property isn't normally visible in JavaScript code.
- */
- interface InternalPropertyDescriptor {
- /**
- * Conventional property name.
- */
- name: string;
- /**
- * The value associated with the property.
- */
- value?: RemoteObject | undefined;
- }
- /**
- * Represents function call argument. Either remote object id objectId, primitive value, unserializable primitive value or neither of (for undefined) them should be specified.
- */
- interface CallArgument {
- /**
- * Primitive value or serializable javascript object.
- */
- value?: any;
- /**
- * Primitive value which can not be JSON-stringified.
- */
- unserializableValue?: UnserializableValue | undefined;
- /**
- * Remote object handle.
- */
- objectId?: RemoteObjectId | undefined;
- }
- /**
- * Id of an execution context.
- */
- type ExecutionContextId = number;
- /**
- * Description of an isolated world.
- */
- interface ExecutionContextDescription {
- /**
- * Unique id of the execution context. It can be used to specify in which execution context script evaluation should be performed.
- */
- id: ExecutionContextId;
- /**
- * Execution context origin.
- */
- origin: string;
- /**
- * Human readable name describing given context.
- */
- name: string;
- /**
- * Embedder-specific auxiliary data.
- */
- auxData?: {} | undefined;
- }
- /**
- * Detailed information about exception (or error) that was thrown during script compilation or execution.
- */
- interface ExceptionDetails {
- /**
- * Exception id.
- */
- exceptionId: number;
- /**
- * Exception text, which should be used together with exception object when available.
- */
- text: string;
- /**
- * Line number of the exception location (0-based).
- */
- lineNumber: number;
- /**
- * Column number of the exception location (0-based).
- */
- columnNumber: number;
- /**
- * Script ID of the exception location.
- */
- scriptId?: ScriptId | undefined;
- /**
- * URL of the exception location, to be used when the script was not reported.
- */
- url?: string | undefined;
- /**
- * JavaScript stack trace if available.
- */
- stackTrace?: StackTrace | undefined;
- /**
- * Exception object if available.
- */
- exception?: RemoteObject | undefined;
- /**
- * Identifier of the context where exception happened.
- */
- executionContextId?: ExecutionContextId | undefined;
- }
- /**
- * Number of milliseconds since epoch.
- */
- type Timestamp = number;
- /**
- * Stack entry for runtime errors and assertions.
- */
- interface CallFrame {
- /**
- * JavaScript function name.
- */
- functionName: string;
- /**
- * JavaScript script id.
- */
- scriptId: ScriptId;
- /**
- * JavaScript script name or url.
- */
- url: string;
- /**
- * JavaScript script line number (0-based).
- */
- lineNumber: number;
- /**
- * JavaScript script column number (0-based).
- */
- columnNumber: number;
- }
- /**
- * Call frames for assertions or error messages.
- */
- interface StackTrace {
- /**
- * String label of this stack trace. For async traces this may be a name of the function that initiated the async call.
- */
- description?: string | undefined;
- /**
- * JavaScript function name.
- */
- callFrames: CallFrame[];
- /**
- * Asynchronous JavaScript stack trace that preceded this stack, if available.
- */
- parent?: StackTrace | undefined;
- /**
- * Asynchronous JavaScript stack trace that preceded this stack, if available.
- * @experimental
- */
- parentId?: StackTraceId | undefined;
- }
- /**
- * Unique identifier of current debugger.
- * @experimental
- */
- type UniqueDebuggerId = string;
- /**
- * If debuggerId is set stack trace comes from another debugger and can be resolved there. This allows to track cross-debugger calls. See Runtime.StackTrace and Debugger.paused for usages.
- * @experimental
- */
- interface StackTraceId {
- id: string;
- debuggerId?: UniqueDebuggerId | undefined;
- }
- interface EvaluateParameterType {
- /**
- * Expression to evaluate.
- */
- expression: string;
- /**
- * Symbolic group name that can be used to release multiple objects.
- */
- objectGroup?: string | undefined;
- /**
- * Determines whether Command Line API should be available during the evaluation.
- */
- includeCommandLineAPI?: boolean | undefined;
- /**
- * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state.
- */
- silent?: boolean | undefined;
- /**
- * Specifies in which execution context to perform evaluation. If the parameter is omitted the evaluation will be performed in the context of the inspected page.
- */
- contextId?: ExecutionContextId | undefined;
- /**
- * Whether the result is expected to be a JSON object that should be sent by value.
- */
- returnByValue?: boolean | undefined;
- /**
- * Whether preview should be generated for the result.
- * @experimental
- */
- generatePreview?: boolean | undefined;
- /**
- * Whether execution should be treated as initiated by user in the UI.
- */
- userGesture?: boolean | undefined;
- /**
- * Whether execution should await for resulting value and return once awaited promise is resolved.
- */
- awaitPromise?: boolean | undefined;
- }
- interface AwaitPromiseParameterType {
- /**
- * Identifier of the promise.
- */
- promiseObjectId: RemoteObjectId;
- /**
- * Whether the result is expected to be a JSON object that should be sent by value.
- */
- returnByValue?: boolean | undefined;
- /**
- * Whether preview should be generated for the result.
- */
- generatePreview?: boolean | undefined;
- }
- interface CallFunctionOnParameterType {
- /**
- * Declaration of the function to call.
- */
- functionDeclaration: string;
- /**
- * Identifier of the object to call function on. Either objectId or executionContextId should be specified.
- */
- objectId?: RemoteObjectId | undefined;
- /**
- * Call arguments. All call arguments must belong to the same JavaScript world as the target object.
- */
- arguments?: CallArgument[] | undefined;
- /**
- * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state.
- */
- silent?: boolean | undefined;
- /**
- * Whether the result is expected to be a JSON object which should be sent by value.
- */
- returnByValue?: boolean | undefined;
- /**
- * Whether preview should be generated for the result.
- * @experimental
- */
- generatePreview?: boolean | undefined;
- /**
- * Whether execution should be treated as initiated by user in the UI.
- */
- userGesture?: boolean | undefined;
- /**
- * Whether execution should await for resulting value and return once awaited promise is resolved.
- */
- awaitPromise?: boolean | undefined;
- /**
- * Specifies the execution context whose global object will be used to call the function on. Either executionContextId or objectId should be specified.
- */
- executionContextId?: ExecutionContextId | undefined;
- /**
- * Symbolic group name that can be used to release multiple objects. If objectGroup is not specified and objectId is, objectGroup will be inherited from object.
- */
- objectGroup?: string | undefined;
- }
- interface GetPropertiesParameterType {
- /**
- * Identifier of the object to return properties for.
- */
- objectId: RemoteObjectId;
- /**
- * If true, returns properties belonging only to the element itself, not to its prototype chain.
- */
- ownProperties?: boolean | undefined;
- /**
- * If true, returns accessor properties (with getter/setter) only; internal properties are not returned either.
- * @experimental
- */
- accessorPropertiesOnly?: boolean | undefined;
- /**
- * Whether preview should be generated for the results.
- * @experimental
- */
- generatePreview?: boolean | undefined;
- }
- interface ReleaseObjectParameterType {
- /**
- * Identifier of the object to release.
- */
- objectId: RemoteObjectId;
- }
- interface ReleaseObjectGroupParameterType {
- /**
- * Symbolic object group name.
- */
- objectGroup: string;
- }
- interface SetCustomObjectFormatterEnabledParameterType {
- enabled: boolean;
- }
- interface CompileScriptParameterType {
- /**
- * Expression to compile.
- */
- expression: string;
- /**
- * Source url to be set for the script.
- */
- sourceURL: string;
- /**
- * Specifies whether the compiled script should be persisted.
- */
- persistScript: boolean;
- /**
- * Specifies in which execution context to perform script run. If the parameter is omitted the evaluation will be performed in the context of the inspected page.
- */
- executionContextId?: ExecutionContextId | undefined;
- }
- interface RunScriptParameterType {
- /**
- * Id of the script to run.
- */
- scriptId: ScriptId;
- /**
- * Specifies in which execution context to perform script run. If the parameter is omitted the evaluation will be performed in the context of the inspected page.
- */
- executionContextId?: ExecutionContextId | undefined;
- /**
- * Symbolic group name that can be used to release multiple objects.
- */
- objectGroup?: string | undefined;
- /**
- * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state.
- */
- silent?: boolean | undefined;
- /**
- * Determines whether Command Line API should be available during the evaluation.
- */
- includeCommandLineAPI?: boolean | undefined;
- /**
- * Whether the result is expected to be a JSON object which should be sent by value.
- */
- returnByValue?: boolean | undefined;
- /**
- * Whether preview should be generated for the result.
- */
- generatePreview?: boolean | undefined;
- /**
- * Whether execution should await for resulting value and return once awaited promise is resolved.
- */
- awaitPromise?: boolean | undefined;
- }
- interface QueryObjectsParameterType {
- /**
- * Identifier of the prototype to return objects for.
- */
- prototypeObjectId: RemoteObjectId;
- }
- interface GlobalLexicalScopeNamesParameterType {
- /**
- * Specifies in which execution context to lookup global scope variables.
- */
- executionContextId?: ExecutionContextId | undefined;
- }
- interface EvaluateReturnType {
- /**
- * Evaluation result.
- */
- result: RemoteObject;
- /**
- * Exception details.
- */
- exceptionDetails?: ExceptionDetails | undefined;
- }
- interface AwaitPromiseReturnType {
- /**
- * Promise result. Will contain rejected value if promise was rejected.
- */
- result: RemoteObject;
- /**
- * Exception details if stack trace is available.
- */
- exceptionDetails?: ExceptionDetails | undefined;
- }
- interface CallFunctionOnReturnType {
- /**
- * Call result.
- */
- result: RemoteObject;
- /**
- * Exception details.
- */
- exceptionDetails?: ExceptionDetails | undefined;
- }
- interface GetPropertiesReturnType {
- /**
- * Object properties.
- */
- result: PropertyDescriptor[];
- /**
- * Internal object properties (only of the element itself).
- */
- internalProperties?: InternalPropertyDescriptor[] | undefined;
- /**
- * Exception details.
- */
- exceptionDetails?: ExceptionDetails | undefined;
- }
- interface CompileScriptReturnType {
- /**
- * Id of the script.
- */
- scriptId?: ScriptId | undefined;
- /**
- * Exception details.
- */
- exceptionDetails?: ExceptionDetails | undefined;
- }
- interface RunScriptReturnType {
- /**
- * Run result.
- */
- result: RemoteObject;
- /**
- * Exception details.
- */
- exceptionDetails?: ExceptionDetails | undefined;
- }
- interface QueryObjectsReturnType {
- /**
- * Array with objects.
- */
- objects: RemoteObject;
- }
- interface GlobalLexicalScopeNamesReturnType {
- names: string[];
- }
- interface ExecutionContextCreatedEventDataType {
- /**
- * A newly created execution context.
- */
- context: ExecutionContextDescription;
- }
- interface ExecutionContextDestroyedEventDataType {
- /**
- * Id of the destroyed context
- */
- executionContextId: ExecutionContextId;
- }
- interface ExceptionThrownEventDataType {
- /**
- * Timestamp of the exception.
- */
- timestamp: Timestamp;
- exceptionDetails: ExceptionDetails;
- }
- interface ExceptionRevokedEventDataType {
- /**
- * Reason describing why exception was revoked.
- */
- reason: string;
- /**
- * The id of revoked exception, as reported in exceptionThrown.
- */
- exceptionId: number;
- }
- interface ConsoleAPICalledEventDataType {
- /**
- * Type of the call.
- */
- type: string;
- /**
- * Call arguments.
- */
- args: RemoteObject[];
- /**
- * Identifier of the context where the call was made.
- */
- executionContextId: ExecutionContextId;
- /**
- * Call timestamp.
- */
- timestamp: Timestamp;
- /**
- * Stack trace captured when the call was made.
- */
- stackTrace?: StackTrace | undefined;
- /**
- * Console context descriptor for calls on non-default console context (not console.*): 'anonymous#unique-logger-id' for call on unnamed context, 'name#unique-logger-id' for call on named context.
- * @experimental
- */
- context?: string | undefined;
- }
- interface InspectRequestedEventDataType {
- object: RemoteObject;
- hints: {};
- }
- }
- namespace Debugger {
- /**
- * Breakpoint identifier.
- */
- type BreakpointId = string;
- /**
- * Call frame identifier.
- */
- type CallFrameId = string;
- /**
- * Location in the source code.
- */
- interface Location {
- /**
- * Script identifier as reported in the Debugger.scriptParsed.
- */
- scriptId: Runtime.ScriptId;
- /**
- * Line number in the script (0-based).
- */
- lineNumber: number;
- /**
- * Column number in the script (0-based).
- */
- columnNumber?: number | undefined;
- }
- /**
- * Location in the source code.
- * @experimental
- */
- interface ScriptPosition {
- lineNumber: number;
- columnNumber: number;
- }
- /**
- * JavaScript call frame. Array of call frames form the call stack.
- */
- interface CallFrame {
- /**
- * Call frame identifier. This identifier is only valid while the virtual machine is paused.
- */
- callFrameId: CallFrameId;
- /**
- * Name of the JavaScript function called on this call frame.
- */
- functionName: string;
- /**
- * Location in the source code.
- */
- functionLocation?: Location | undefined;
- /**
- * Location in the source code.
- */
- location: Location;
- /**
- * JavaScript script name or url.
- */
- url: string;
- /**
- * Scope chain for this call frame.
- */
- scopeChain: Scope[];
- /**
- * this object for this call frame.
- */
- this: Runtime.RemoteObject;
- /**
- * The value being returned, if the function is at return point.
- */
- returnValue?: Runtime.RemoteObject | undefined;
- }
- /**
- * Scope description.
- */
- interface Scope {
- /**
- * Scope type.
- */
- type: string;
- /**
- * Object representing the scope. For global and with scopes it represents the actual object; for the rest of the scopes, it is an artificial transient object enumerating scope variables as its properties.
- */
- object: Runtime.RemoteObject;
- name?: string | undefined;
- /**
- * Location in the source code where scope starts
- */
- startLocation?: Location | undefined;
- /**
- * Location in the source code where scope ends
- */
- endLocation?: Location | undefined;
- }
- /**
- * Search match for resource.
- */
- interface SearchMatch {
- /**
- * Line number in resource content.
- */
- lineNumber: number;
- /**
- * Line with match content.
- */
- lineContent: string;
- }
- interface BreakLocation {
- /**
- * Script identifier as reported in the Debugger.scriptParsed.
- */
- scriptId: Runtime.ScriptId;
- /**
- * Line number in the script (0-based).
- */
- lineNumber: number;
- /**
- * Column number in the script (0-based).
- */
- columnNumber?: number | undefined;
- type?: string | undefined;
- }
- interface SetBreakpointsActiveParameterType {
- /**
- * New value for breakpoints active state.
- */
- active: boolean;
- }
- interface SetSkipAllPausesParameterType {
- /**
- * New value for skip pauses state.
- */
- skip: boolean;
- }
- interface SetBreakpointByUrlParameterType {
- /**
- * Line number to set breakpoint at.
- */
- lineNumber: number;
- /**
- * URL of the resources to set breakpoint on.
- */
- url?: string | undefined;
- /**
- * Regex pattern for the URLs of the resources to set breakpoints on. Either url or urlRegex must be specified.
- */
- urlRegex?: string | undefined;
- /**
- * Script hash of the resources to set breakpoint on.
- */
- scriptHash?: string | undefined;
- /**
- * Offset in the line to set breakpoint at.
- */
- columnNumber?: number | undefined;
- /**
- * Expression to use as a breakpoint condition. When specified, debugger will only stop on the breakpoint if this expression evaluates to true.
- */
- condition?: string | undefined;
- }
- interface SetBreakpointParameterType {
- /**
- * Location to set breakpoint in.
- */
- location: Location;
- /**
- * Expression to use as a breakpoint condition. When specified, debugger will only stop on the breakpoint if this expression evaluates to true.
- */
- condition?: string | undefined;
- }
- interface RemoveBreakpointParameterType {
- breakpointId: BreakpointId;
- }
- interface GetPossibleBreakpointsParameterType {
- /**
- * Start of range to search possible breakpoint locations in.
- */
- start: Location;
- /**
- * End of range to search possible breakpoint locations in (exclusive). When not specified, the end of the script is used as the end of the range.
- */
- end?: Location | undefined;
- /**
- * Only consider locations which are in the same (non-nested) function as start.
- */
- restrictToFunction?: boolean | undefined;
- }
- interface ContinueToLocationParameterType {
- /**
- * Location to continue to.
- */
- location: Location;
- targetCallFrames?: string | undefined;
- }
- interface PauseOnAsyncCallParameterType {
- /**
- * Debugger will pause when async call with given stack trace is started.
- */
- parentStackTraceId: Runtime.StackTraceId;
- }
- interface StepIntoParameterType {
- /**
- * Debugger will issue additional Debugger.paused notification if any async task is scheduled before next pause.
- * @experimental
- */
- breakOnAsyncCall?: boolean | undefined;
- }
- interface GetStackTraceParameterType {
- stackTraceId: Runtime.StackTraceId;
- }
- interface SearchInContentParameterType {
- /**
- * Id of the script to search in.
- */
- scriptId: Runtime.ScriptId;
- /**
- * String to search for.
- */
- query: string;
- /**
- * If true, search is case sensitive.
- */
- caseSensitive?: boolean | undefined;
- /**
- * If true, treats string parameter as regex.
- */
- isRegex?: boolean | undefined;
- }
- interface SetScriptSourceParameterType {
- /**
- * Id of the script to edit.
- */
- scriptId: Runtime.ScriptId;
- /**
- * New content of the script.
- */
- scriptSource: string;
- /**
- * If true the change will not actually be applied. Dry run may be used to get result description without actually modifying the code.
- */
- dryRun?: boolean | undefined;
- }
- interface RestartFrameParameterType {
- /**
- * Call frame identifier to evaluate on.
- */
- callFrameId: CallFrameId;
- }
- interface GetScriptSourceParameterType {
- /**
- * Id of the script to get source for.
- */
- scriptId: Runtime.ScriptId;
- }
- interface SetPauseOnExceptionsParameterType {
- /**
- * Pause on exceptions mode.
- */
- state: string;
- }
- interface EvaluateOnCallFrameParameterType {
- /**
- * Call frame identifier to evaluate on.
- */
- callFrameId: CallFrameId;
- /**
- * Expression to evaluate.
- */
- expression: string;
- /**
- * String object group name to put result into (allows rapid releasing resulting object handles using releaseObjectGroup).
- */
- objectGroup?: string | undefined;
- /**
- * Specifies whether command line API should be available to the evaluated expression, defaults to false.
- */
- includeCommandLineAPI?: boolean | undefined;
- /**
- * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state.
- */
- silent?: boolean | undefined;
- /**
- * Whether the result is expected to be a JSON object that should be sent by value.
- */
- returnByValue?: boolean | undefined;
- /**
- * Whether preview should be generated for the result.
- * @experimental
- */
- generatePreview?: boolean | undefined;
- /**
- * Whether to throw an exception if side effect cannot be ruled out during evaluation.
- */
- throwOnSideEffect?: boolean | undefined;
- }
- interface SetVariableValueParameterType {
- /**
- * 0-based number of scope as was listed in scope chain. Only 'local', 'closure' and 'catch' scope types are allowed. Other scopes could be manipulated manually.
- */
- scopeNumber: number;
- /**
- * Variable name.
- */
- variableName: string;
- /**
- * New variable value.
- */
- newValue: Runtime.CallArgument;
- /**
- * Id of callframe that holds variable.
- */
- callFrameId: CallFrameId;
- }
- interface SetReturnValueParameterType {
- /**
- * New return value.
- */
- newValue: Runtime.CallArgument;
- }
- interface SetAsyncCallStackDepthParameterType {
- /**
- * Maximum depth of async call stacks. Setting to 0 will effectively disable collecting async call stacks (default).
- */
- maxDepth: number;
- }
- interface SetBlackboxPatternsParameterType {
- /**
- * Array of regexps that will be used to check script url for blackbox state.
- */
- patterns: string[];
- }
- interface SetBlackboxedRangesParameterType {
- /**
- * Id of the script.
- */
- scriptId: Runtime.ScriptId;
- positions: ScriptPosition[];
- }
- interface EnableReturnType {
- /**
- * Unique identifier of the debugger.
- * @experimental
- */
- debuggerId: Runtime.UniqueDebuggerId;
- }
- interface SetBreakpointByUrlReturnType {
- /**
- * Id of the created breakpoint for further reference.
- */
- breakpointId: BreakpointId;
- /**
- * List of the locations this breakpoint resolved into upon addition.
- */
- locations: Location[];
- }
- interface SetBreakpointReturnType {
- /**
- * Id of the created breakpoint for further reference.
- */
- breakpointId: BreakpointId;
- /**
- * Location this breakpoint resolved into.
- */
- actualLocation: Location;
- }
- interface GetPossibleBreakpointsReturnType {
- /**
- * List of the possible breakpoint locations.
- */
- locations: BreakLocation[];
- }
- interface GetStackTraceReturnType {
- stackTrace: Runtime.StackTrace;
- }
- interface SearchInContentReturnType {
- /**
- * List of search matches.
- */
- result: SearchMatch[];
- }
- interface SetScriptSourceReturnType {
- /**
- * New stack trace in case editing has happened while VM was stopped.
- */
- callFrames?: CallFrame[] | undefined;
- /**
- * Whether current call stack was modified after applying the changes.
- */
- stackChanged?: boolean | undefined;
- /**
- * Async stack trace, if any.
- */
- asyncStackTrace?: Runtime.StackTrace | undefined;
- /**
- * Async stack trace, if any.
- * @experimental
- */
- asyncStackTraceId?: Runtime.StackTraceId | undefined;
- /**
- * Exception details if any.
- */
- exceptionDetails?: Runtime.ExceptionDetails | undefined;
- }
- interface RestartFrameReturnType {
- /**
- * New stack trace.
- */
- callFrames: CallFrame[];
- /**
- * Async stack trace, if any.
- */
- asyncStackTrace?: Runtime.StackTrace | undefined;
- /**
- * Async stack trace, if any.
- * @experimental
- */
- asyncStackTraceId?: Runtime.StackTraceId | undefined;
- }
- interface GetScriptSourceReturnType {
- /**
- * Script source.
- */
- scriptSource: string;
- }
- interface EvaluateOnCallFrameReturnType {
- /**
- * Object wrapper for the evaluation result.
- */
- result: Runtime.RemoteObject;
- /**
- * Exception details.
- */
- exceptionDetails?: Runtime.ExceptionDetails | undefined;
- }
- interface ScriptParsedEventDataType {
- /**
- * Identifier of the script parsed.
- */
- scriptId: Runtime.ScriptId;
- /**
- * URL or name of the script parsed (if any).
- */
- url: string;
- /**
- * Line offset of the script within the resource with given URL (for script tags).
- */
- startLine: number;
- /**
- * Column offset of the script within the resource with given URL.
- */
- startColumn: number;
- /**
- * Last line of the script.
- */
- endLine: number;
- /**
- * Length of the last line of the script.
- */
- endColumn: number;
- /**
- * Specifies script creation context.
- */
- executionContextId: Runtime.ExecutionContextId;
- /**
- * Content hash of the script.
- */
- hash: string;
- /**
- * Embedder-specific auxiliary data.
- */
- executionContextAuxData?: {} | undefined;
- /**
- * True, if this script is generated as a result of the live edit operation.
- * @experimental
- */
- isLiveEdit?: boolean | undefined;
- /**
- * URL of source map associated with script (if any).
- */
- sourceMapURL?: string | undefined;
- /**
- * True, if this script has sourceURL.
- */
- hasSourceURL?: boolean | undefined;
- /**
- * True, if this script is ES6 module.
- */
- isModule?: boolean | undefined;
- /**
- * This script length.
- */
- length?: number | undefined;
- /**
- * JavaScript top stack frame of where the script parsed event was triggered if available.
- * @experimental
- */
- stackTrace?: Runtime.StackTrace | undefined;
- }
- interface ScriptFailedToParseEventDataType {
- /**
- * Identifier of the script parsed.
- */
- scriptId: Runtime.ScriptId;
- /**
- * URL or name of the script parsed (if any).
- */
- url: string;
- /**
- * Line offset of the script within the resource with given URL (for script tags).
- */
- startLine: number;
- /**
- * Column offset of the script within the resource with given URL.
- */
- startColumn: number;
- /**
- * Last line of the script.
- */
- endLine: number;
- /**
- * Length of the last line of the script.
- */
- endColumn: number;
- /**
- * Specifies script creation context.
- */
- executionContextId: Runtime.ExecutionContextId;
- /**
- * Content hash of the script.
- */
- hash: string;
- /**
- * Embedder-specific auxiliary data.
- */
- executionContextAuxData?: {} | undefined;
- /**
- * URL of source map associated with script (if any).
- */
- sourceMapURL?: string | undefined;
- /**
- * True, if this script has sourceURL.
- */
- hasSourceURL?: boolean | undefined;
- /**
- * True, if this script is ES6 module.
- */
- isModule?: boolean | undefined;
- /**
- * This script length.
- */
- length?: number | undefined;
- /**
- * JavaScript top stack frame of where the script parsed event was triggered if available.
- * @experimental
- */
- stackTrace?: Runtime.StackTrace | undefined;
- }
- interface BreakpointResolvedEventDataType {
- /**
- * Breakpoint unique identifier.
- */
- breakpointId: BreakpointId;
- /**
- * Actual breakpoint location.
- */
- location: Location;
- }
- interface PausedEventDataType {
- /**
- * Call stack the virtual machine stopped on.
- */
- callFrames: CallFrame[];
- /**
- * Pause reason.
- */
- reason: string;
- /**
- * Object containing break-specific auxiliary properties.
- */
- data?: {} | undefined;
- /**
- * Hit breakpoints IDs
- */
- hitBreakpoints?: string[] | undefined;
- /**
- * Async stack trace, if any.
- */
- asyncStackTrace?: Runtime.StackTrace | undefined;
- /**
- * Async stack trace, if any.
- * @experimental
- */
- asyncStackTraceId?: Runtime.StackTraceId | undefined;
- /**
- * The just-scheduled async call will have this stack trace as its parent stack during async execution. This field is available only after a Debugger.stepInto call with the breakOnAsyncCall flag.
- * @experimental
- */
- asyncCallStackTraceId?: Runtime.StackTraceId | undefined;
- }
- }
- namespace Console {
- /**
- * Console message.
- */
- interface ConsoleMessage {
- /**
- * Message source.
- */
- source: string;
- /**
- * Message severity.
- */
- level: string;
- /**
- * Message text.
- */
- text: string;
- /**
- * URL of the message origin.
- */
- url?: string | undefined;
- /**
- * Line number in the resource that generated this message (1-based).
- */
- line?: number | undefined;
- /**
- * Column number in the resource that generated this message (1-based).
- */
- column?: number | undefined;
- }
- interface MessageAddedEventDataType {
- /**
- * Console message that has been added.
- */
- message: ConsoleMessage;
- }
- }
- namespace Profiler {
- /**
- * Profile node. Holds callsite information, execution statistics and child nodes.
- */
- interface ProfileNode {
- /**
- * Unique id of the node.
- */
- id: number;
- /**
- * Function location.
- */
- callFrame: Runtime.CallFrame;
- /**
- * Number of samples where this node was on top of the call stack.
- */
- hitCount?: number | undefined;
- /**
- * Child node ids.
- */
- children?: number[] | undefined;
- /**
- * The reason of being not optimized. The function may be deoptimized or marked as don't optimize.
- */
- deoptReason?: string | undefined;
- /**
- * An array of source position ticks.
- */
- positionTicks?: PositionTickInfo[] | undefined;
- }
- /**
- * Profile.
- */
- interface Profile {
- /**
- * The list of profile nodes. First item is the root node.
- */
- nodes: ProfileNode[];
- /**
- * Profiling start timestamp in microseconds.
- */
- startTime: number;
- /**
- * Profiling end timestamp in microseconds.
- */
- endTime: number;
- /**
- * Ids of samples top nodes.
- */
- samples?: number[] | undefined;
- /**
- * Time intervals between adjacent samples in microseconds. The first delta is relative to the profile startTime.
- */
- timeDeltas?: number[] | undefined;
- }
- /**
- * Specifies a number of samples attributed to a certain source position.
- */
- interface PositionTickInfo {
- /**
- * Source line number (1-based).
- */
- line: number;
- /**
- * Number of samples attributed to the source line.
- */
- ticks: number;
- }
- /**
- * Coverage data for a source range.
- */
- interface CoverageRange {
- /**
- * JavaScript script source offset for the range start.
- */
- startOffset: number;
- /**
- * JavaScript script source offset for the range end.
- */
- endOffset: number;
- /**
- * Collected execution count of the source range.
- */
- count: number;
- }
- /**
- * Coverage data for a JavaScript function.
- */
- interface FunctionCoverage {
- /**
- * JavaScript function name.
- */
- functionName: string;
- /**
- * Source ranges inside the function with coverage data.
- */
- ranges: CoverageRange[];
- /**
- * Whether coverage data for this function has block granularity.
- */
- isBlockCoverage: boolean;
- }
- /**
- * Coverage data for a JavaScript script.
- */
- interface ScriptCoverage {
- /**
- * JavaScript script id.
- */
- scriptId: Runtime.ScriptId;
- /**
- * JavaScript script name or url.
- */
- url: string;
- /**
- * Functions contained in the script that have coverage data.
- */
- functions: FunctionCoverage[];
- }
- /**
- * Describes a type collected during runtime.
- * @experimental
- */
- interface TypeObject {
- /**
- * Name of a type collected with type profiling.
- */
- name: string;
- }
- /**
- * Source offset and types for a parameter or return value.
- * @experimental
- */
- interface TypeProfileEntry {
- /**
- * Source offset of the parameter or end of function for return values.
- */
- offset: number;
- /**
- * The types for this parameter or return value.
- */
- types: TypeObject[];
- }
- /**
- * Type profile data collected during runtime for a JavaScript script.
- * @experimental
- */
- interface ScriptTypeProfile {
- /**
- * JavaScript script id.
- */
- scriptId: Runtime.ScriptId;
- /**
- * JavaScript script name or url.
- */
- url: string;
- /**
- * Type profile entries for parameters and return values of the functions in the script.
- */
- entries: TypeProfileEntry[];
- }
- interface SetSamplingIntervalParameterType {
- /**
- * New sampling interval in microseconds.
- */
- interval: number;
- }
- interface StartPreciseCoverageParameterType {
- /**
- * Collect accurate call counts beyond simple 'covered' or 'not covered'.
- */
- callCount?: boolean | undefined;
- /**
- * Collect block-based coverage.
- */
- detailed?: boolean | undefined;
- }
- interface StopReturnType {
- /**
- * Recorded profile.
- */
- profile: Profile;
- }
- interface TakePreciseCoverageReturnType {
- /**
- * Coverage data for the current isolate.
- */
- result: ScriptCoverage[];
- }
- interface GetBestEffortCoverageReturnType {
- /**
- * Coverage data for the current isolate.
- */
- result: ScriptCoverage[];
- }
- interface TakeTypeProfileReturnType {
- /**
- * Type profile for all scripts since startTypeProfile() was turned on.
- */
- result: ScriptTypeProfile[];
- }
- interface ConsoleProfileStartedEventDataType {
- id: string;
- /**
- * Location of console.profile().
- */
- location: Debugger.Location;
- /**
- * Profile title passed as an argument to console.profile().
- */
- title?: string | undefined;
- }
- interface ConsoleProfileFinishedEventDataType {
- id: string;
- /**
- * Location of console.profileEnd().
- */
- location: Debugger.Location;
- profile: Profile;
- /**
- * Profile title passed as an argument to console.profile().
- */
- title?: string | undefined;
- }
- }
- namespace HeapProfiler {
- /**
- * Heap snapshot object id.
- */
- type HeapSnapshotObjectId = string;
- /**
- * Sampling Heap Profile node. Holds callsite information, allocation statistics and child nodes.
- */
- interface SamplingHeapProfileNode {
- /**
- * Function location.
- */
- callFrame: Runtime.CallFrame;
- /**
- * Allocations size in bytes for the node excluding children.
- */
- selfSize: number;
- /**
- * Child nodes.
- */
- children: SamplingHeapProfileNode[];
- }
- /**
- * Profile.
- */
- interface SamplingHeapProfile {
- head: SamplingHeapProfileNode;
- }
- interface StartTrackingHeapObjectsParameterType {
- trackAllocations?: boolean | undefined;
- }
- interface StopTrackingHeapObjectsParameterType {
- /**
- * If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken when the tracking is stopped.
- */
- reportProgress?: boolean | undefined;
- }
- interface TakeHeapSnapshotParameterType {
- /**
- * If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken.
- */
- reportProgress?: boolean | undefined;
- }
- interface GetObjectByHeapObjectIdParameterType {
- objectId: HeapSnapshotObjectId;
- /**
- * Symbolic group name that can be used to release multiple objects.
- */
- objectGroup?: string | undefined;
- }
- interface AddInspectedHeapObjectParameterType {
- /**
- * Heap snapshot object id to be accessible by means of $x command line API.
- */
- heapObjectId: HeapSnapshotObjectId;
- }
- interface GetHeapObjectIdParameterType {
- /**
- * Identifier of the object to get heap object id for.
- */
- objectId: Runtime.RemoteObjectId;
- }
- interface StartSamplingParameterType {
- /**
- * Average sample interval in bytes. Poisson distribution is used for the intervals. The default value is 32768 bytes.
- */
- samplingInterval?: number | undefined;
- }
- interface GetObjectByHeapObjectIdReturnType {
- /**
- * Evaluation result.
- */
- result: Runtime.RemoteObject;
- }
- interface GetHeapObjectIdReturnType {
- /**
- * Id of the heap snapshot object corresponding to the passed remote object id.
- */
- heapSnapshotObjectId: HeapSnapshotObjectId;
- }
- interface StopSamplingReturnType {
- /**
- * Recorded sampling heap profile.
- */
- profile: SamplingHeapProfile;
- }
- interface GetSamplingProfileReturnType {
- /**
- * Return the sampling profile being collected.
- */
- profile: SamplingHeapProfile;
- }
- interface AddHeapSnapshotChunkEventDataType {
- chunk: string;
- }
- interface ReportHeapSnapshotProgressEventDataType {
- done: number;
- total: number;
- finished?: boolean | undefined;
- }
- interface LastSeenObjectIdEventDataType {
- lastSeenObjectId: number;
- timestamp: number;
- }
- interface HeapStatsUpdateEventDataType {
- /**
- * An array of triplets. Each triplet describes a fragment. The first integer is the fragment index, the second integer is a total count of objects for the fragment, the third integer is a total size of the objects for the fragment.
- */
- statsUpdate: number[];
- }
- }
- namespace NodeTracing {
- interface TraceConfig {
- /**
- * Controls how the trace buffer stores data.
- */
- recordMode?: string | undefined;
- /**
- * Included category filters.
- */
- includedCategories: string[];
- }
- interface StartParameterType {
- traceConfig: TraceConfig;
- }
- interface GetCategoriesReturnType {
- /**
- * A list of supported tracing categories.
- */
- categories: string[];
- }
- interface DataCollectedEventDataType {
- value: Array<{}>;
- }
- }
- namespace NodeWorker {
- type WorkerID = string;
- /**
- * Unique identifier of attached debugging session.
- */
- type SessionID = string;
- interface WorkerInfo {
- workerId: WorkerID;
- type: string;
- title: string;
- url: string;
- }
- interface SendMessageToWorkerParameterType {
- message: string;
- /**
- * Identifier of the session.
- */
- sessionId: SessionID;
- }
- interface EnableParameterType {
- /**
- * Whether new workers should be paused until the frontend sends `Runtime.runIfWaitingForDebugger`
- * message to run them.
- */
- waitForDebuggerOnStart: boolean;
- }
- interface DetachParameterType {
- sessionId: SessionID;
- }
- interface AttachedToWorkerEventDataType {
- /**
- * Identifier assigned to the session used to send/receive messages.
- */
- sessionId: SessionID;
- workerInfo: WorkerInfo;
- waitingForDebugger: boolean;
- }
- interface DetachedFromWorkerEventDataType {
- /**
- * Detached session identifier.
- */
- sessionId: SessionID;
- }
- interface ReceivedMessageFromWorkerEventDataType {
- /**
- * Identifier of a session which sends a message.
- */
- sessionId: SessionID;
- message: string;
- }
- }
- namespace NodeRuntime {
- interface NotifyWhenWaitingForDisconnectParameterType {
- enabled: boolean;
- }
- }
- /**
- * The `inspector.Session` is used for dispatching messages to the V8 inspector
- * back-end and receiving message responses and notifications.
- */
- class Session extends EventEmitter {
- /**
- * Create a new instance of the inspector.Session class.
- * The inspector session needs to be connected through session.connect() before the messages can be dispatched to the inspector backend.
- */
- constructor();
- /**
- * Connects a session to the inspector back-end.
- * @since v8.0.0
- */
- connect(): void;
- /**
- * Immediately close the session. All pending message callbacks will be called
- * with an error. `session.connect()` will need to be called to be able to send
- * messages again. Reconnected session will lose all inspector state, such as
- * enabled agents or configured breakpoints.
- * @since v8.0.0
- */
- disconnect(): void;
- /**
- * Posts a message to the inspector back-end. `callback` will be notified when
- * a response is received. `callback` is a function that accepts two optional
- * arguments: error and message-specific result.
- *
- * ```js
- * session.post('Runtime.evaluate', { expression: '2 + 2' },
- * (error, { result }) => console.log(result));
- * // Output: { type: 'number', value: 4, description: '4' }
- * ```
- *
- * The latest version of the V8 inspector protocol is published on the [Chrome DevTools Protocol Viewer](https://chromedevtools.github.io/devtools-protocol/v8/).
- *
- * Node.js inspector supports all the Chrome DevTools Protocol domains declared
- * by V8. Each Chrome DevTools Protocol domain provides an interface for interacting
- * with one of the runtime agents used to inspect the application state and listen
- * to the run-time events.
- *
- * ## Example usage
- *
- * Apart from the debugger, various V8 Profilers are available through the DevTools
- * protocol.
- * @since v8.0.0
- */
- post(method: string, params?: {}, callback?: (err: Error | null, params?: {}) => void): void;
- post(method: string, callback?: (err: Error | null, params?: {}) => void): void;
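- // The "Example usage" note in the JSDoc above mentions the V8 profilers; the commented-out sketch
- // below is a minimal, illustrative flow (not part of the original declarations) that chains
- // Profiler.enable, Profiler.start and Profiler.stop through `post`. The CommonJS require form and
- // the logged property are illustrative assumptions, not the module's only supported usage.
- //
- //   const inspector = require('inspector');
- //   const session = new inspector.Session();
- //   session.connect();
- //   session.post('Profiler.enable', () => {
- //     session.post('Profiler.start', () => {
- //       // ... run the code to be profiled ...
- //       session.post('Profiler.stop', (err, result) => {
- //         if (!err) console.log(result.profile.nodes.length); // number of recorded profile nodes
- //         session.disconnect();
- //       });
- //     });
- //   });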
- /**
- * Returns supported domains.
- */
- post(method: 'Schema.getDomains', callback?: (err: Error | null, params: Schema.GetDomainsReturnType) => void): void;
- /**
- * Evaluates expression on global object.
- */
- post(method: 'Runtime.evaluate', params?: Runtime.EvaluateParameterType, callback?: (err: Error | null, params: Runtime.EvaluateReturnType) => void): void;
- post(method: 'Runtime.evaluate', callback?: (err: Error | null, params: Runtime.EvaluateReturnType) => void): void;
- /**
- * Add handler to promise with given promise object id.
- */
- post(method: 'Runtime.awaitPromise', params?: Runtime.AwaitPromiseParameterType, callback?: (err: Error | null, params: Runtime.AwaitPromiseReturnType) => void): void;
- post(method: 'Runtime.awaitPromise', callback?: (err: Error | null, params: Runtime.AwaitPromiseReturnType) => void): void;
- /**
- * Calls function with given declaration on the given object. Object group of the result is inherited from the target object.
- */
- post(method: 'Runtime.callFunctionOn', params?: Runtime.CallFunctionOnParameterType, callback?: (err: Error | null, params: Runtime.CallFunctionOnReturnType) => void): void;
- post(method: 'Runtime.callFunctionOn', callback?: (err: Error | null, params: Runtime.CallFunctionOnReturnType) => void): void;
- /**
- * Returns properties of a given object. Object group of the result is inherited from the target object.
- */
- post(method: 'Runtime.getProperties', params?: Runtime.GetPropertiesParameterType, callback?: (err: Error | null, params: Runtime.GetPropertiesReturnType) => void): void;
- post(method: 'Runtime.getProperties', callback?: (err: Error | null, params: Runtime.GetPropertiesReturnType) => void): void;
- /**
- * Releases remote object with given id.
- */
- post(method: 'Runtime.releaseObject', params?: Runtime.ReleaseObjectParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Runtime.releaseObject', callback?: (err: Error | null) => void): void;
- /**
- * Releases all remote objects that belong to a given group.
- */
- post(method: 'Runtime.releaseObjectGroup', params?: Runtime.ReleaseObjectGroupParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Runtime.releaseObjectGroup', callback?: (err: Error | null) => void): void;
- /**
- * Tells inspected instance to run if it was waiting for debugger to attach.
- */
- post(method: 'Runtime.runIfWaitingForDebugger', callback?: (err: Error | null) => void): void;
- /**
- * Enables reporting of execution contexts creation by means of executionContextCreated event. When the reporting gets enabled the event will be sent immediately for each existing execution context.
- */
- post(method: 'Runtime.enable', callback?: (err: Error | null) => void): void;
- /**
- * Disables reporting of execution contexts creation.
- */
- post(method: 'Runtime.disable', callback?: (err: Error | null) => void): void;
- /**
- * Discards collected exceptions and console API calls.
- */
- post(method: 'Runtime.discardConsoleEntries', callback?: (err: Error | null) => void): void;
- /**
- * @experimental
- */
- post(method: 'Runtime.setCustomObjectFormatterEnabled', params?: Runtime.SetCustomObjectFormatterEnabledParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Runtime.setCustomObjectFormatterEnabled', callback?: (err: Error | null) => void): void;
- /**
- * Compiles expression.
- */
- post(method: 'Runtime.compileScript', params?: Runtime.CompileScriptParameterType, callback?: (err: Error | null, params: Runtime.CompileScriptReturnType) => void): void;
- post(method: 'Runtime.compileScript', callback?: (err: Error | null, params: Runtime.CompileScriptReturnType) => void): void;
- /**
- * Runs script with given id in a given context.
- */
- post(method: 'Runtime.runScript', params?: Runtime.RunScriptParameterType, callback?: (err: Error | null, params: Runtime.RunScriptReturnType) => void): void;
- post(method: 'Runtime.runScript', callback?: (err: Error | null, params: Runtime.RunScriptReturnType) => void): void;
- post(method: 'Runtime.queryObjects', params?: Runtime.QueryObjectsParameterType, callback?: (err: Error | null, params: Runtime.QueryObjectsReturnType) => void): void;
- post(method: 'Runtime.queryObjects', callback?: (err: Error | null, params: Runtime.QueryObjectsReturnType) => void): void;
- /**
- * Returns all let, const and class variables from global scope.
- */
- post(
- method: 'Runtime.globalLexicalScopeNames',
- params?: Runtime.GlobalLexicalScopeNamesParameterType,
- callback?: (err: Error | null, params: Runtime.GlobalLexicalScopeNamesReturnType) => void
- ): void;
- post(method: 'Runtime.globalLexicalScopeNames', callback?: (err: Error | null, params: Runtime.GlobalLexicalScopeNamesReturnType) => void): void;
- /**
- * Enables debugger for the given page. Clients should not assume that the debugging has been enabled until the result for this command is received.
- */
- post(method: 'Debugger.enable', callback?: (err: Error | null, params: Debugger.EnableReturnType) => void): void;
- /**
- * Disables debugger for given page.
- */
- post(method: 'Debugger.disable', callback?: (err: Error | null) => void): void;
- /**
- * Activates / deactivates all breakpoints on the page.
- */
- post(method: 'Debugger.setBreakpointsActive', params?: Debugger.SetBreakpointsActiveParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setBreakpointsActive', callback?: (err: Error | null) => void): void;
- /**
- * Makes page not interrupt on any pauses (breakpoint, exception, dom exception etc).
- */
- post(method: 'Debugger.setSkipAllPauses', params?: Debugger.SetSkipAllPausesParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setSkipAllPauses', callback?: (err: Error | null) => void): void;
- /**
- * Sets JavaScript breakpoint at given location specified either by URL or URL regex. Once this command is issued, all existing parsed scripts will have breakpoints resolved and returned in locations property. Further matching script parsing will result in subsequent breakpointResolved events issued. This logical breakpoint will survive page reloads.
- */
- post(method: 'Debugger.setBreakpointByUrl', params?: Debugger.SetBreakpointByUrlParameterType, callback?: (err: Error | null, params: Debugger.SetBreakpointByUrlReturnType) => void): void;
- post(method: 'Debugger.setBreakpointByUrl', callback?: (err: Error | null, params: Debugger.SetBreakpointByUrlReturnType) => void): void;
- /**
- * Sets JavaScript breakpoint at a given location.
- */
- post(method: 'Debugger.setBreakpoint', params?: Debugger.SetBreakpointParameterType, callback?: (err: Error | null, params: Debugger.SetBreakpointReturnType) => void): void;
- post(method: 'Debugger.setBreakpoint', callback?: (err: Error | null, params: Debugger.SetBreakpointReturnType) => void): void;
- /**
- * Removes JavaScript breakpoint.
- */
- post(method: 'Debugger.removeBreakpoint', params?: Debugger.RemoveBreakpointParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.removeBreakpoint', callback?: (err: Error | null) => void): void;
- /**
- * Returns possible locations for breakpoint. scriptId in start and end range locations should be the same.
- */
- post(
- method: 'Debugger.getPossibleBreakpoints',
- params?: Debugger.GetPossibleBreakpointsParameterType,
- callback?: (err: Error | null, params: Debugger.GetPossibleBreakpointsReturnType) => void
- ): void;
- post(method: 'Debugger.getPossibleBreakpoints', callback?: (err: Error | null, params: Debugger.GetPossibleBreakpointsReturnType) => void): void;
- /**
- * Continues execution until specific location is reached.
- */
- post(method: 'Debugger.continueToLocation', params?: Debugger.ContinueToLocationParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.continueToLocation', callback?: (err: Error | null) => void): void;
- /**
- * @experimental
- */
- post(method: 'Debugger.pauseOnAsyncCall', params?: Debugger.PauseOnAsyncCallParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.pauseOnAsyncCall', callback?: (err: Error | null) => void): void;
- /**
- * Steps over the statement.
- */
- post(method: 'Debugger.stepOver', callback?: (err: Error | null) => void): void;
- /**
- * Steps into the function call.
- */
- post(method: 'Debugger.stepInto', params?: Debugger.StepIntoParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.stepInto', callback?: (err: Error | null) => void): void;
- /**
- * Steps out of the function call.
- */
- post(method: 'Debugger.stepOut', callback?: (err: Error | null) => void): void;
- /**
- * Stops on the next JavaScript statement.
- */
- post(method: 'Debugger.pause', callback?: (err: Error | null) => void): void;
- /**
- * This method is deprecated - use Debugger.stepInto with breakOnAsyncCall and Debugger.pauseOnAsyncCall instead. Steps into the next scheduled async task if any is scheduled before the next pause. Returns success when the async task is actually scheduled, and an error if no task was scheduled or another scheduleStepIntoAsync was called.
- * @experimental
- */
- post(method: 'Debugger.scheduleStepIntoAsync', callback?: (err: Error | null) => void): void;
- /**
- * Resumes JavaScript execution.
- */
- post(method: 'Debugger.resume', callback?: (err: Error | null) => void): void;
- /**
- * Returns stack trace with given stackTraceId.
- * @experimental
- */
- post(method: 'Debugger.getStackTrace', params?: Debugger.GetStackTraceParameterType, callback?: (err: Error | null, params: Debugger.GetStackTraceReturnType) => void): void;
- post(method: 'Debugger.getStackTrace', callback?: (err: Error | null, params: Debugger.GetStackTraceReturnType) => void): void;
- /**
- * Searches for given string in script content.
- */
- post(method: 'Debugger.searchInContent', params?: Debugger.SearchInContentParameterType, callback?: (err: Error | null, params: Debugger.SearchInContentReturnType) => void): void;
- post(method: 'Debugger.searchInContent', callback?: (err: Error | null, params: Debugger.SearchInContentReturnType) => void): void;
- /**
- * Edits JavaScript source live.
- */
- post(method: 'Debugger.setScriptSource', params?: Debugger.SetScriptSourceParameterType, callback?: (err: Error | null, params: Debugger.SetScriptSourceReturnType) => void): void;
- post(method: 'Debugger.setScriptSource', callback?: (err: Error | null, params: Debugger.SetScriptSourceReturnType) => void): void;
- /**
- * Restarts particular call frame from the beginning.
- */
- post(method: 'Debugger.restartFrame', params?: Debugger.RestartFrameParameterType, callback?: (err: Error | null, params: Debugger.RestartFrameReturnType) => void): void;
- post(method: 'Debugger.restartFrame', callback?: (err: Error | null, params: Debugger.RestartFrameReturnType) => void): void;
- /**
- * Returns source for the script with given id.
- */
- post(method: 'Debugger.getScriptSource', params?: Debugger.GetScriptSourceParameterType, callback?: (err: Error | null, params: Debugger.GetScriptSourceReturnType) => void): void;
- post(method: 'Debugger.getScriptSource', callback?: (err: Error | null, params: Debugger.GetScriptSourceReturnType) => void): void;
- /**
- * Defines pause on exceptions state. Can be set to stop on all exceptions, uncaught exceptions or no exceptions. Initial pause on exceptions state is none.
- */
- post(method: 'Debugger.setPauseOnExceptions', params?: Debugger.SetPauseOnExceptionsParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setPauseOnExceptions', callback?: (err: Error | null) => void): void;
- /**
- * Evaluates expression on a given call frame.
- */
- post(method: 'Debugger.evaluateOnCallFrame', params?: Debugger.EvaluateOnCallFrameParameterType, callback?: (err: Error | null, params: Debugger.EvaluateOnCallFrameReturnType) => void): void;
- post(method: 'Debugger.evaluateOnCallFrame', callback?: (err: Error | null, params: Debugger.EvaluateOnCallFrameReturnType) => void): void;
- /**
- * Changes value of variable in a callframe. Object-based scopes are not supported and must be mutated manually.
- */
- post(method: 'Debugger.setVariableValue', params?: Debugger.SetVariableValueParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setVariableValue', callback?: (err: Error | null) => void): void;
- /**
- * Changes return value in top frame. Available only at return break position.
- * @experimental
- */
- post(method: 'Debugger.setReturnValue', params?: Debugger.SetReturnValueParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setReturnValue', callback?: (err: Error | null) => void): void;
- /**
- * Enables or disables async call stacks tracking.
- */
- post(method: 'Debugger.setAsyncCallStackDepth', params?: Debugger.SetAsyncCallStackDepthParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setAsyncCallStackDepth', callback?: (err: Error | null) => void): void;
- /**
- * Replace previous blackbox patterns with passed ones. Forces backend to skip stepping/pausing in scripts with url matching one of the patterns. VM will try to leave blackboxed script by performing 'step in' several times, finally resorting to 'step out' if unsuccessful.
- * @experimental
- */
- post(method: 'Debugger.setBlackboxPatterns', params?: Debugger.SetBlackboxPatternsParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setBlackboxPatterns', callback?: (err: Error | null) => void): void;
- /**
- * Makes backend skip steps in the script in blackboxed ranges. VM will try to leave blackboxed scripts by performing 'step in' several times, finally resorting to 'step out' if unsuccessful. Positions array contains positions where blackbox state is changed. First interval isn't blackboxed. Array should be sorted.
- * @experimental
- */
- post(method: 'Debugger.setBlackboxedRanges', params?: Debugger.SetBlackboxedRangesParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Debugger.setBlackboxedRanges', callback?: (err: Error | null) => void): void;
- /**
- * Enables console domain, sends the messages collected so far to the client by means of the messageAdded notification.
- */
- post(method: 'Console.enable', callback?: (err: Error | null) => void): void;
- /**
- * Disables console domain, prevents further console messages from being reported to the client.
- */
- post(method: 'Console.disable', callback?: (err: Error | null) => void): void;
- /**
- * Does nothing.
- */
- post(method: 'Console.clearMessages', callback?: (err: Error | null) => void): void;
- post(method: 'Profiler.enable', callback?: (err: Error | null) => void): void;
- post(method: 'Profiler.disable', callback?: (err: Error | null) => void): void;
- /**
- * Changes CPU profiler sampling interval. Must be called before CPU profiles recording started.
- */
- post(method: 'Profiler.setSamplingInterval', params?: Profiler.SetSamplingIntervalParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Profiler.setSamplingInterval', callback?: (err: Error | null) => void): void;
- post(method: 'Profiler.start', callback?: (err: Error | null) => void): void;
- post(method: 'Profiler.stop', callback?: (err: Error | null, params: Profiler.StopReturnType) => void): void;
- /**
- * Enable precise code coverage. Coverage data for JavaScript executed before enabling precise code coverage may be incomplete. Enabling prevents running optimized code and resets execution counters.
- */
- post(method: 'Profiler.startPreciseCoverage', params?: Profiler.StartPreciseCoverageParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'Profiler.startPreciseCoverage', callback?: (err: Error | null) => void): void;
- /**
- * Disable precise code coverage. Disabling releases unnecessary execution count records and allows executing optimized code.
- */
- post(method: 'Profiler.stopPreciseCoverage', callback?: (err: Error | null) => void): void;
- /**
- * Collect coverage data for the current isolate, and reset execution counters. Precise code coverage needs to have started.
- */
- post(method: 'Profiler.takePreciseCoverage', callback?: (err: Error | null, params: Profiler.TakePreciseCoverageReturnType) => void): void;
- /**
- * Collect coverage data for the current isolate. The coverage data may be incomplete due to garbage collection.
- */
- post(method: 'Profiler.getBestEffortCoverage', callback?: (err: Error | null, params: Profiler.GetBestEffortCoverageReturnType) => void): void;
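- // Hedged usage sketch (not part of the original declarations): the precise-coverage commands above
- // are typically chained as startPreciseCoverage -> run code -> takePreciseCoverage ->
- // stopPreciseCoverage. Parameter values are illustrative; `session` is a connected Session.
- //
- //   session.post('Profiler.enable', () => {
- //     session.post('Profiler.startPreciseCoverage', { callCount: true, detailed: true }, () => {
- //       // ... execute the code whose coverage should be measured ...
- //       session.post('Profiler.takePreciseCoverage', (err, coverage) => {
- //         if (!err) console.log(coverage.result.length); // ScriptCoverage entries for the isolate
- //         session.post('Profiler.stopPreciseCoverage', () => {});
- //       });
- //     });
- //   });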
- /**
- * Enable type profile.
- * @experimental
- */
- post(method: 'Profiler.startTypeProfile', callback?: (err: Error | null) => void): void;
- /**
- * Disable type profile. Disabling releases type profile data collected so far.
- * @experimental
- */
- post(method: 'Profiler.stopTypeProfile', callback?: (err: Error | null) => void): void;
- /**
- * Collect type profile.
- * @experimental
- */
- post(method: 'Profiler.takeTypeProfile', callback?: (err: Error | null, params: Profiler.TakeTypeProfileReturnType) => void): void;
- post(method: 'HeapProfiler.enable', callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.disable', callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.startTrackingHeapObjects', params?: HeapProfiler.StartTrackingHeapObjectsParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.startTrackingHeapObjects', callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.stopTrackingHeapObjects', params?: HeapProfiler.StopTrackingHeapObjectsParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.stopTrackingHeapObjects', callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.takeHeapSnapshot', params?: HeapProfiler.TakeHeapSnapshotParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.takeHeapSnapshot', callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.collectGarbage', callback?: (err: Error | null) => void): void;
- post(
- method: 'HeapProfiler.getObjectByHeapObjectId',
- params?: HeapProfiler.GetObjectByHeapObjectIdParameterType,
- callback?: (err: Error | null, params: HeapProfiler.GetObjectByHeapObjectIdReturnType) => void
- ): void;
- post(method: 'HeapProfiler.getObjectByHeapObjectId', callback?: (err: Error | null, params: HeapProfiler.GetObjectByHeapObjectIdReturnType) => void): void;
- /**
- * Enables console to refer to the node with the given id via $x (see Command Line API for more details on $x functions).
- */
- post(method: 'HeapProfiler.addInspectedHeapObject', params?: HeapProfiler.AddInspectedHeapObjectParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.addInspectedHeapObject', callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.getHeapObjectId', params?: HeapProfiler.GetHeapObjectIdParameterType, callback?: (err: Error | null, params: HeapProfiler.GetHeapObjectIdReturnType) => void): void;
- post(method: 'HeapProfiler.getHeapObjectId', callback?: (err: Error | null, params: HeapProfiler.GetHeapObjectIdReturnType) => void): void;
- post(method: 'HeapProfiler.startSampling', params?: HeapProfiler.StartSamplingParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.startSampling', callback?: (err: Error | null) => void): void;
- post(method: 'HeapProfiler.stopSampling', callback?: (err: Error | null, params: HeapProfiler.StopSamplingReturnType) => void): void;
- post(method: 'HeapProfiler.getSamplingProfile', callback?: (err: Error | null, params: HeapProfiler.GetSamplingProfileReturnType) => void): void;
- /**
- * Gets supported tracing categories.
- */
- post(method: 'NodeTracing.getCategories', callback?: (err: Error | null, params: NodeTracing.GetCategoriesReturnType) => void): void;
- /**
- * Start trace events collection.
- */
- post(method: 'NodeTracing.start', params?: NodeTracing.StartParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'NodeTracing.start', callback?: (err: Error | null) => void): void;
- /**
- * Stop trace events collection. Remaining collected events will be sent as a sequence of
- * dataCollected events followed by tracingComplete event.
- */
- post(method: 'NodeTracing.stop', callback?: (err: Error | null) => void): void;
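- // Hedged sketch (not part of the original declarations): NodeTracing.start expects a TraceConfig.
- // To avoid hard-coding category names, this illustration feeds back a category reported by
- // NodeTracing.getCategories; real code would pick the categories it actually needs.
- //
- //   session.post('NodeTracing.getCategories', (err, res) => {
- //     if (err) return;
- //     session.post('NodeTracing.start', { traceConfig: { includedCategories: res.categories.slice(0, 1) } }, () => {
- //       // ... collect trace events via 'NodeTracing.dataCollected' notifications ...
- //       session.post('NodeTracing.stop', () => {});
- //     });
- //   });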
- /**
- * Sends protocol message over session with given id.
- */
- post(method: 'NodeWorker.sendMessageToWorker', params?: NodeWorker.SendMessageToWorkerParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'NodeWorker.sendMessageToWorker', callback?: (err: Error | null) => void): void;
- /**
- * Instructs the inspector to attach to running workers. Will also attach to new workers
- * as they start.
- */
- post(method: 'NodeWorker.enable', params?: NodeWorker.EnableParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'NodeWorker.enable', callback?: (err: Error | null) => void): void;
- /**
- * Detaches from all running workers and disables attaching to new workers as they are started.
- */
- post(method: 'NodeWorker.disable', callback?: (err: Error | null) => void): void;
- /**
- * Detaches from the worker with the given sessionId.
- */
- post(method: 'NodeWorker.detach', params?: NodeWorker.DetachParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'NodeWorker.detach', callback?: (err: Error | null) => void): void;
- /**
- * Enable the `NodeRuntime.waitingForDisconnect` notification.
- */
- post(method: 'NodeRuntime.notifyWhenWaitingForDisconnect', params?: NodeRuntime.NotifyWhenWaitingForDisconnectParameterType, callback?: (err: Error | null) => void): void;
- post(method: 'NodeRuntime.notifyWhenWaitingForDisconnect', callback?: (err: Error | null) => void): void;
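- /**
- * Usage sketch for the NodeWorker overloads above (illustrative only): assumes a connected
- * `Session` named `session`; the forwarded `Runtime.enable` message is just an example payload.
- *
- *     session.on('NodeWorker.attachedToWorker', (msg) => {
- *         const { sessionId } = msg.params;
- *         session.post('NodeWorker.sendMessageToWorker', {
- *             sessionId,
- *             message: JSON.stringify({ id: 1, method: 'Runtime.enable' }),
- *         });
- *     });
- *     session.on('NodeWorker.receivedMessageFromWorker', (msg) => console.log(msg.params.message));
- *     session.post('NodeWorker.enable', { waitForDebuggerOnStart: false });
- */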
- // Events
- addListener(event: string, listener: (...args: any[]) => void): this;
- /**
- * Emitted when any notification from the V8 Inspector is received.
- */
- addListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this;
- /**
- * Issued when new execution context is created.
- */
- addListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification<Runtime.ExecutionContextCreatedEventDataType>) => void): this;
- /**
- * Issued when execution context is destroyed.
- */
- addListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification<Runtime.ExecutionContextDestroyedEventDataType>) => void): this;
- /**
- * Issued when all executionContexts were cleared in the browser.
- */
- addListener(event: 'Runtime.executionContextsCleared', listener: () => void): this;
- /**
- * Issued when exception was thrown and unhandled.
- */
- addListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification<Runtime.ExceptionThrownEventDataType>) => void): this;
- /**
- * Issued when unhandled exception was revoked.
- */
- addListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification<Runtime.ExceptionRevokedEventDataType>) => void): this;
- /**
- * Issued when console API was called.
- */
- addListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification<Runtime.ConsoleAPICalledEventDataType>) => void): this;
- /**
- * Issued when object should be inspected (for example, as a result of inspect() command line API call).
- */
- addListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification<Runtime.InspectRequestedEventDataType>) => void): this;
- /**
- * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger.
- */
- addListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification<Debugger.ScriptParsedEventDataType>) => void): this;
- /**
- * Fired when virtual machine fails to parse the script.
- */
- addListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification<Debugger.ScriptFailedToParseEventDataType>) => void): this;
- /**
- * Fired when breakpoint is resolved to an actual script and location.
- */
- addListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification<Debugger.BreakpointResolvedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria.
- */
- addListener(event: 'Debugger.paused', listener: (message: InspectorNotification<Debugger.PausedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine resumed execution.
- */
- addListener(event: 'Debugger.resumed', listener: () => void): this;
- /**
- * Issued when new console message is added.
- */
- addListener(event: 'Console.messageAdded', listener: (message: InspectorNotification<Console.MessageAddedEventDataType>) => void): this;
- /**
- * Sent when new profile recording is started using console.profile() call.
- */
- addListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification<Profiler.ConsoleProfileStartedEventDataType>) => void): this;
- addListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification<Profiler.ConsoleProfileFinishedEventDataType>) => void): this;
- addListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification<HeapProfiler.AddHeapSnapshotChunkEventDataType>) => void): this;
- addListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this;
- addListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification<HeapProfiler.ReportHeapSnapshotProgressEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend regularly sends a current value for the last seen object id and corresponding timestamp. If there were changes in the heap since the last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event.
- */
- addListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification<HeapProfiler.LastSeenObjectIdEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend may send an update for one or more fragments.
- */
- addListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification<HeapProfiler.HeapStatsUpdateEventDataType>) => void): this;
- /**
- * Contains a bucket of collected trace events.
- */
- addListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification<NodeTracing.DataCollectedEventDataType>) => void): this;
- /**
- * Signals that tracing is stopped and there are no trace buffers pending flush; all data were
- * delivered via dataCollected events.
- */
- addListener(event: 'NodeTracing.tracingComplete', listener: () => void): this;
- /**
- * Issued when attached to a worker.
- */
- addListener(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification<NodeWorker.AttachedToWorkerEventDataType>) => void): this;
- /**
- * Issued when detached from the worker.
- */
- addListener(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification<NodeWorker.DetachedFromWorkerEventDataType>) => void): this;
- /**
- * Notifies about a new protocol message received from the session
- * (session ID is provided in attachedToWorker notification).
- */
- addListener(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification<NodeWorker.ReceivedMessageFromWorkerEventDataType>) => void): this;
- /**
- * This event is fired instead of `Runtime.executionContextDestroyed` when
- * `NodeRuntime.notifyWhenWaitingForDisconnect` is enabled.
- * It is fired when the Node process has finished all code execution and is
- * waiting for all frontends to disconnect.
- */
- addListener(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this;
- emit(event: string | symbol, ...args: any[]): boolean;
- emit(event: 'inspectorNotification', message: InspectorNotification<{}>): boolean;
- emit(event: 'Runtime.executionContextCreated', message: InspectorNotification<Runtime.ExecutionContextCreatedEventDataType>): boolean;
- emit(event: 'Runtime.executionContextDestroyed', message: InspectorNotification<Runtime.ExecutionContextDestroyedEventDataType>): boolean;
- emit(event: 'Runtime.executionContextsCleared'): boolean;
- emit(event: 'Runtime.exceptionThrown', message: InspectorNotification<Runtime.ExceptionThrownEventDataType>): boolean;
- emit(event: 'Runtime.exceptionRevoked', message: InspectorNotification<Runtime.ExceptionRevokedEventDataType>): boolean;
- emit(event: 'Runtime.consoleAPICalled', message: InspectorNotification<Runtime.ConsoleAPICalledEventDataType>): boolean;
- emit(event: 'Runtime.inspectRequested', message: InspectorNotification<Runtime.InspectRequestedEventDataType>): boolean;
- emit(event: 'Debugger.scriptParsed', message: InspectorNotification<Debugger.ScriptParsedEventDataType>): boolean;
- emit(event: 'Debugger.scriptFailedToParse', message: InspectorNotification<Debugger.ScriptFailedToParseEventDataType>): boolean;
- emit(event: 'Debugger.breakpointResolved', message: InspectorNotification<Debugger.BreakpointResolvedEventDataType>): boolean;
- emit(event: 'Debugger.paused', message: InspectorNotification<Debugger.PausedEventDataType>): boolean;
- emit(event: 'Debugger.resumed'): boolean;
- emit(event: 'Console.messageAdded', message: InspectorNotification<Console.MessageAddedEventDataType>): boolean;
- emit(event: 'Profiler.consoleProfileStarted', message: InspectorNotification<Profiler.ConsoleProfileStartedEventDataType>): boolean;
- emit(event: 'Profiler.consoleProfileFinished', message: InspectorNotification<Profiler.ConsoleProfileFinishedEventDataType>): boolean;
- emit(event: 'HeapProfiler.addHeapSnapshotChunk', message: InspectorNotification<HeapProfiler.AddHeapSnapshotChunkEventDataType>): boolean;
- emit(event: 'HeapProfiler.resetProfiles'): boolean;
- emit(event: 'HeapProfiler.reportHeapSnapshotProgress', message: InspectorNotification<HeapProfiler.ReportHeapSnapshotProgressEventDataType>): boolean;
- emit(event: 'HeapProfiler.lastSeenObjectId', message: InspectorNotification<HeapProfiler.LastSeenObjectIdEventDataType>): boolean;
- emit(event: 'HeapProfiler.heapStatsUpdate', message: InspectorNotification<HeapProfiler.HeapStatsUpdateEventDataType>): boolean;
- emit(event: 'NodeTracing.dataCollected', message: InspectorNotification<NodeTracing.DataCollectedEventDataType>): boolean;
- emit(event: 'NodeTracing.tracingComplete'): boolean;
- emit(event: 'NodeWorker.attachedToWorker', message: InspectorNotification<NodeWorker.AttachedToWorkerEventDataType>): boolean;
- emit(event: 'NodeWorker.detachedFromWorker', message: InspectorNotification<NodeWorker.DetachedFromWorkerEventDataType>): boolean;
- emit(event: 'NodeWorker.receivedMessageFromWorker', message: InspectorNotification<NodeWorker.ReceivedMessageFromWorkerEventDataType>): boolean;
- emit(event: 'NodeRuntime.waitingForDisconnect'): boolean;
- on(event: string, listener: (...args: any[]) => void): this;
- /**
- * Emitted when any notification from the V8 Inspector is received.
- */
- on(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this;
- /**
- * Issued when new execution context is created.
- */
- on(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification<Runtime.ExecutionContextCreatedEventDataType>) => void): this;
- /**
- * Issued when execution context is destroyed.
- */
- on(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification<Runtime.ExecutionContextDestroyedEventDataType>) => void): this;
- /**
- * Issued when all executionContexts were cleared in the browser.
- */
- on(event: 'Runtime.executionContextsCleared', listener: () => void): this;
- /**
- * Issued when exception was thrown and unhandled.
- */
- on(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification<Runtime.ExceptionThrownEventDataType>) => void): this;
- /**
- * Issued when unhandled exception was revoked.
- */
- on(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification<Runtime.ExceptionRevokedEventDataType>) => void): this;
- /**
- * Issued when console API was called.
- */
- on(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification<Runtime.ConsoleAPICalledEventDataType>) => void): this;
- /**
- * Issued when object should be inspected (for example, as a result of inspect() command line API call).
- */
- on(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification<Runtime.InspectRequestedEventDataType>) => void): this;
- /**
- * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger.
- */
- on(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification<Debugger.ScriptParsedEventDataType>) => void): this;
- /**
- * Fired when virtual machine fails to parse the script.
- */
- on(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification<Debugger.ScriptFailedToParseEventDataType>) => void): this;
- /**
- * Fired when breakpoint is resolved to an actual script and location.
- */
- on(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification<Debugger.BreakpointResolvedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria.
- */
- on(event: 'Debugger.paused', listener: (message: InspectorNotification<Debugger.PausedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine resumed execution.
- */
- on(event: 'Debugger.resumed', listener: () => void): this;
- /**
- * Issued when new console message is added.
- */
- on(event: 'Console.messageAdded', listener: (message: InspectorNotification<Console.MessageAddedEventDataType>) => void): this;
- /**
- * Sent when new profile recording is started using console.profile() call.
- */
- on(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification<Profiler.ConsoleProfileStartedEventDataType>) => void): this;
- on(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification<Profiler.ConsoleProfileFinishedEventDataType>) => void): this;
- on(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification<HeapProfiler.AddHeapSnapshotChunkEventDataType>) => void): this;
- on(event: 'HeapProfiler.resetProfiles', listener: () => void): this;
- on(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification<HeapProfiler.ReportHeapSnapshotProgressEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend regularly sends a current value for the last seen object id and corresponding timestamp. If there were changes in the heap since the last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event.
- */
- on(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification<HeapProfiler.LastSeenObjectIdEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend may send an update for one or more fragments.
- */
- on(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification<HeapProfiler.HeapStatsUpdateEventDataType>) => void): this;
- /**
- * Contains a bucket of collected trace events.
- */
- on(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification<NodeTracing.DataCollectedEventDataType>) => void): this;
- /**
- * Signals that tracing is stopped and there are no trace buffers pending flush; all data were
- * delivered via dataCollected events.
- */
- on(event: 'NodeTracing.tracingComplete', listener: () => void): this;
- /**
- * Issued when attached to a worker.
- */
- on(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification<NodeWorker.AttachedToWorkerEventDataType>) => void): this;
- /**
- * Issued when detached from the worker.
- */
- on(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification<NodeWorker.DetachedFromWorkerEventDataType>) => void): this;
- /**
- * Notifies about a new protocol message received from the session
- * (session ID is provided in attachedToWorker notification).
- */
- on(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification<NodeWorker.ReceivedMessageFromWorkerEventDataType>) => void): this;
- /**
- * This event is fired instead of `Runtime.executionContextDestroyed` when
- * `NodeRuntime.notifyWhenWaitingForDisconnect` is enabled.
- * It is fired when the Node process has finished all code execution and is
- * waiting for all frontends to disconnect.
- */
- on(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this;
- once(event: string, listener: (...args: any[]) => void): this;
- /**
- * Emitted when any notification from the V8 Inspector is received.
- */
- once(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this;
- /**
- * Issued when new execution context is created.
- */
- once(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification<Runtime.ExecutionContextCreatedEventDataType>) => void): this;
- /**
- * Issued when execution context is destroyed.
- */
- once(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification<Runtime.ExecutionContextDestroyedEventDataType>) => void): this;
- /**
- * Issued when all executionContexts were cleared in the browser.
- */
- once(event: 'Runtime.executionContextsCleared', listener: () => void): this;
- /**
- * Issued when exception was thrown and unhandled.
- */
- once(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification<Runtime.ExceptionThrownEventDataType>) => void): this;
- /**
- * Issued when unhandled exception was revoked.
- */
- once(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification<Runtime.ExceptionRevokedEventDataType>) => void): this;
- /**
- * Issued when console API was called.
- */
- once(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification<Runtime.ConsoleAPICalledEventDataType>) => void): this;
- /**
- * Issued when object should be inspected (for example, as a result of inspect() command line API call).
- */
- once(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification<Runtime.InspectRequestedEventDataType>) => void): this;
- /**
- * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger.
- */
- once(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification<Debugger.ScriptParsedEventDataType>) => void): this;
- /**
- * Fired when virtual machine fails to parse the script.
- */
- once(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification<Debugger.ScriptFailedToParseEventDataType>) => void): this;
- /**
- * Fired when breakpoint is resolved to an actual script and location.
- */
- once(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification<Debugger.BreakpointResolvedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria.
- */
- once(event: 'Debugger.paused', listener: (message: InspectorNotification<Debugger.PausedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine resumed execution.
- */
- once(event: 'Debugger.resumed', listener: () => void): this;
- /**
- * Issued when new console message is added.
- */
- once(event: 'Console.messageAdded', listener: (message: InspectorNotification<Console.MessageAddedEventDataType>) => void): this;
- /**
- * Sent when new profile recording is started using console.profile() call.
- */
- once(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification<Profiler.ConsoleProfileStartedEventDataType>) => void): this;
- once(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification<Profiler.ConsoleProfileFinishedEventDataType>) => void): this;
- once(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification<HeapProfiler.AddHeapSnapshotChunkEventDataType>) => void): this;
- once(event: 'HeapProfiler.resetProfiles', listener: () => void): this;
- once(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification<HeapProfiler.ReportHeapSnapshotProgressEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend regularly sends a current value for the last seen object id and corresponding timestamp. If there were changes in the heap since the last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event.
- */
- once(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification<HeapProfiler.LastSeenObjectIdEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend may send an update for one or more fragments.
- */
- once(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification<HeapProfiler.HeapStatsUpdateEventDataType>) => void): this;
- /**
- * Contains a bucket of collected trace events.
- */
- once(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification<NodeTracing.DataCollectedEventDataType>) => void): this;
- /**
- * Signals that tracing is stopped and there are no trace buffers pending flush; all data were
- * delivered via dataCollected events.
- */
- once(event: 'NodeTracing.tracingComplete', listener: () => void): this;
- /**
- * Issued when attached to a worker.
- */
- once(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification<NodeWorker.AttachedToWorkerEventDataType>) => void): this;
- /**
- * Issued when detached from the worker.
- */
- once(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification<NodeWorker.DetachedFromWorkerEventDataType>) => void): this;
- /**
- * Notifies about a new protocol message received from the session
- * (session ID is provided in attachedToWorker notification).
- */
- once(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification<NodeWorker.ReceivedMessageFromWorkerEventDataType>) => void): this;
- /**
- * This event is fired instead of `Runtime.executionContextDestroyed` when
- * `NodeRuntime.notifyWhenWaitingForDisconnect` is enabled.
- * It is fired when the Node process has finished all code execution and is
- * waiting for all frontends to disconnect.
- */
- once(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this;
- prependListener(event: string, listener: (...args: any[]) => void): this;
- /**
- * Emitted when any notification from the V8 Inspector is received.
- */
- prependListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this;
- /**
- * Issued when new execution context is created.
- */
- prependListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification<Runtime.ExecutionContextCreatedEventDataType>) => void): this;
- /**
- * Issued when execution context is destroyed.
- */
- prependListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification<Runtime.ExecutionContextDestroyedEventDataType>) => void): this;
- /**
- * Issued when all executionContexts were cleared in the browser.
- */
- prependListener(event: 'Runtime.executionContextsCleared', listener: () => void): this;
- /**
- * Issued when exception was thrown and unhandled.
- */
- prependListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification<Runtime.ExceptionThrownEventDataType>) => void): this;
- /**
- * Issued when unhandled exception was revoked.
- */
- prependListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification<Runtime.ExceptionRevokedEventDataType>) => void): this;
- /**
- * Issued when console API was called.
- */
- prependListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification<Runtime.ConsoleAPICalledEventDataType>) => void): this;
- /**
- * Issued when object should be inspected (for example, as a result of inspect() command line API call).
- */
- prependListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification<Runtime.InspectRequestedEventDataType>) => void): this;
- /**
- * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger.
- */
- prependListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification<Debugger.ScriptParsedEventDataType>) => void): this;
- /**
- * Fired when virtual machine fails to parse the script.
- */
- prependListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification<Debugger.ScriptFailedToParseEventDataType>) => void): this;
- /**
- * Fired when breakpoint is resolved to an actual script and location.
- */
- prependListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification<Debugger.BreakpointResolvedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria.
- */
- prependListener(event: 'Debugger.paused', listener: (message: InspectorNotification<Debugger.PausedEventDataType>) => void): this;
- /**
- * Fired when the virtual machine resumed execution.
- */
- prependListener(event: 'Debugger.resumed', listener: () => void): this;
- /**
- * Issued when new console message is added.
- */
- prependListener(event: 'Console.messageAdded', listener: (message: InspectorNotification<Console.MessageAddedEventDataType>) => void): this;
- /**
- * Sent when new profile recording is started using console.profile() call.
- */
- prependListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification<Profiler.ConsoleProfileStartedEventDataType>) => void): this;
- prependListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification<Profiler.ConsoleProfileFinishedEventDataType>) => void): this;
- prependListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification<HeapProfiler.AddHeapSnapshotChunkEventDataType>) => void): this;
- prependListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this;
- prependListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification<HeapProfiler.ReportHeapSnapshotProgressEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend regularly sends a current value for the last seen object id and corresponding timestamp. If there were changes in the heap since the last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event.
- */
- prependListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification<HeapProfiler.LastSeenObjectIdEventDataType>) => void): this;
- /**
- * If heap objects tracking has been started then the backend may send an update for one or more fragments.
- */
- prependListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification<HeapProfiler.HeapStatsUpdateEventDataType>) => void): this;
- /**
- * Contains a bucket of collected trace events.
- */
- prependListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification