diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md
deleted file mode 100644
index 575ca6eafac58cdc19957be75379e78499abb160..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
Deewaar in Hindi torrent download: How to watch the classic Bollywood movie online
-
If you are a fan of Bollywood movies, you have probably heard of Deewaar, one of the most iconic films in Indian cinema history. Released in 1975, Deewaar is a crime drama that explores the themes of brotherhood, loyalty, corruption, and social injustice. It stars Amitabh Bachchan and Shashi Kapoor as two brothers who take different paths in life, one becoming a gangster and the other a police officer. The movie was a huge commercial and critical success, earning several awards and accolades. It also influenced many filmmakers and actors in India and abroad, such as Quentin Tarantino, Danny Boyle, Rajkumar Hirani, and Shah Rukh Khan.
But how can you watch this masterpiece online if you don't have access to a DVD or a streaming service that offers it? One option that many people resort to is downloading Deewaar in Hindi torrent from various websites. However, this method is not only illegal but also risky, as it can expose you to malware, viruses, legal troubles, and poor quality videos. In this article, we will tell you why you should avoid using torrent sites to watch Deewaar online, and point you to some better alternatives that are safe and legal. We will also give you some tips and tricks for finding Deewaar in Hindi online easily and quickly.
-
What is Deewaar and why is it a must-watch movie?
-
Before we dive into the details of how to watch Deewaar online, let's first understand what makes this movie so special and why you should watch it if you haven't already. Here are some of the reasons why Deewaar is a must-watch movie for any Bollywood lover.
-
The plot and the themes of Deewaar
-
The story of Deewaar revolves around two brothers, Vijay (Amitabh Bachchan) and Ravi (Shashi Kapoor), who grow up in poverty after their father (Satyendra Kapoor) is framed for a crime he didn't commit by a corrupt businessman (Iftekhar). Vijay becomes bitter and disillusioned with society, and joins a gang led by Samant (Madan Puri), while Ravi becomes an honest and upright police officer. The brothers clash with each other over their conflicting ideologies and loyalties, leading to a dramatic confrontation that tests their bond.
-
The movie explores various themes such as family, friendship, morality, justice, violence, class struggle, and urban decay. It also reflects the socio-political context of India in the 1970s, when the country was facing economic crisis, political unrest, labor strikes, and corruption scandals. The movie portrays the plight of the common man who is oppressed by the system and has to resort to crime or rebellion to survive. It also questions the role of law enforcement and its effectiveness in dealing with crime and corruption.
-
The cast and the crew of Deewaar
-
Deewaar boasts an impressive cast and crew who delivered stellar performances and technical excellence. Amitabh Bachchan and Shashi Kapoor are brilliant as the two brothers who share a deep love but also a bitter rivalry. They showcase their acting range by portraying complex emotions such as anger, pain, guilt, pride, and remorse. Their chemistry is palpable and their dialogues are memorable. The movie also features other talented actors such as Nirupa Roy as Sumitra Devi, the mother of Vijay and Ravi; Parveen Babi as Anita, Vijay's love interest; Neetu Singh as Veera, Ravi's love interest; Iftekhar as Deshmukh; Madan Puri as Samant; Sudhir as Jaichand; Jagdish Raj as Jaggi; Alankar Joshi as young Vijay; Raju Shrestha as young Ravi; Manmohan Krishna as DCP Narang; Yunus Parvez as Rahim Chacha; Raj Kishore as Darpan; Shetty as Shetty; Mac Mohan as Mac; Viju Khote as Viju; Mohan Sherry as Peter; Satyendra Kapoor as Anand Verma; Kamal Kapoor as Mr Agarwal; Ramesh Deo as Sub-Inspector Shinde; and Murad as the Police Commissioner.
-
The movie was directed by Yash Chopra, one of the most celebrated filmmakers in Indian cinema history. He was known for his versatility and his ability to create engaging stories across different genres such as romance, drama, thriller, action, and musicals. He was also known for his collaborations with Amitabh Bachchan in hit movies such as Kabhi Kabhie (1976), Trishul (1978), Kaala Patthar (1979), Silsila (1981), and Veer-Zaara (2004). He won six National Film Awards and 11 Filmfare Awards for his work.
-
The movie was written by Salim-Javed, the celebrated screenwriting duo of Salim Khan and Javed Akhtar, whose taut script and hard-hitting dialogues remain a big part of why Deewaar endures.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md
deleted file mode 100644
index e3c4ac0bdbdd5da477d37f95298750241c266a07..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
How to Download PowerPoint Presentations for EADO 2022
-
If you are planning to attend the 19th EADO Congress in Stockholm, Sweden, on May 10-13, 2022, you might be interested in downloading some PowerPoint presentations to prepare for the event. The EADO Congress is a major international meeting that brings together experts and researchers in the field of dermato-oncology, the study and treatment of skin cancers. The congress will feature keynote lectures, symposia, workshops, oral and poster presentations, and networking opportunities.
There are two ways to download PowerPoint presentations for EADO 2022:
-
-
From the official website of the congress: https://eado2022.com/. Here you can find the scientific program, the abstract submission guidelines, the registration information, and the sponsors and exhibitors. You can also access some of the previous congresses' presentations by clicking on the "Past Congresses" tab and selecting the year of your interest.
-
From Microsoft PowerPoint: If you have a Microsoft 365 subscription, you can use PowerPoint to create your own presentations or download templates from the online library. You can also use PowerPoint on the web for free by signing in with a Microsoft account. To download PowerPoint or access it online, visit https://www.microsoft.com/en-ww/microsoft-365/powerpoint. You can search for "EADO" or "dermato-oncology" in the template gallery to find relevant designs.
-
-
We hope this article helps you download PowerPoint presentations for EADO 2022. We look forward to seeing you at the congress!
-
Why attend EADO 2022?
-
-
EADO 2022 is a great opportunity to learn from the leading experts in dermato-oncology, share your research and clinical experience, and network with colleagues from around the world. You will be able to update your knowledge on the latest advances and challenges in the diagnosis, prevention, and treatment of skin cancers, including melanoma, non-melanoma skin cancer, cutaneous lymphoma, and rare tumors. You will also be able to participate in interactive sessions, workshops, and debates on topics such as immunotherapy, targeted therapy, surgery, radiotherapy, dermatopathology, dermoscopy, and more.
-
How to prepare for EADO 2022?
-
To make the most of your attendance at EADO 2022, we recommend that you:
-
-
Register early to secure your place and benefit from the early bird rates. You can register online at https://eado2022.com/registration/.
-
Submit your abstract before the deadline of January 15, 2022. You can submit your abstract online at https://eado2022.com/abstracts/. You can choose between oral or poster presentation formats. The best abstracts will be awarded prizes and published in the Journal of the European Academy of Dermatology and Venereology.
-
Book your accommodation and travel arrangements in advance. You can find information on the congress venue, hotels, transportation, and visa requirements at https://eado2022.com/general-information/.
-
Download the EADO 2022 app to access the congress program, speakers' bios, abstracts, exhibitors' list, floor plans, and more. You can also use the app to create your personal agenda, rate sessions, ask questions, and interact with other attendees. The app will be available for download a few weeks before the congress.
-
-
We hope you enjoy EADO 2022 and have a productive and rewarding experience!
GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest]: A Powerful Tool to Protect Your PC from Malware
-
Malware is a serious threat to your computer and your privacy. It can infect your system in various ways, such as email attachments, downloads, pop-ups, fake updates, etc. Malware can damage your files, slow down your PC, steal your personal information, monitor your online activities, and even lock your system until you pay a ransom.
-
That's why you need a reliable anti-malware solution that can detect and remove malware from your PC effectively and efficiently. One such solution is GridinSoft Anti-Malware, an impressive application that has been developed specifically for the automatic removal of viruses, bots, spyware, keyloggers, trojans, scareware, rootkits, and other malicious software.
In this article, we will show you how to download, install, activate, and use GridinSoft Anti-Malware with crack license keys 2020 [latest] to protect your PC from malware. We will also answer some frequently asked questions about GridinSoft Anti-Malware.
-
What is GridinSoft Anti-Malware?
-
GridinSoft Anti-Malware is an excellent anti-malware solution that has been designed to provide high-speed system scanning process without slowing down your PC. It has a user-friendly and simple interface that makes it easy to use for both beginners and experts.
-
Features and benefits of GridinSoft Anti-Malware
-
Some of the features and benefits of GridinSoft Anti-Malware are:
-
-
It can automatically delete viruses, bots, spyware, keyloggers, trojans, scareware, rootkits, and other malicious software. However, there are some drawbacks and risks of using crack license keys for GridinSoft Anti-Malware, such as:
-
-
You may violate the terms and conditions of GridinSoft Anti-Malware and face legal consequences.
-
You may expose your PC to malware or viruses that may be hidden in the crack license keys file or the source website.
-
You may not receive any technical support or customer service from GridinSoft Anti-Malware.
-
-
Therefore, you should use crack license keys for GridinSoft Anti-Malware at your own risk and discretion. We do not recommend or endorse the use of crack license keys for GridinSoft Anti-Malware or any other software.
-
How to use GridinSoft Anti-Malware to scan and remove malware from your PC?
-
Now that you have activated GridinSoft Anti-Malware with crack license keys, you can use it to scan and remove malware from your PC. Here are the steps to do so:
-
Types of scans available in GridinSoft Anti-Malware
-
GridinSoft Anti-Malware offers four types of scans for your convenience and preference. They are:
-
-
-
Standard scan: This is the default and recommended scan mode that scans your system memory, startup items, registry, and system drive for malware. It takes a few minutes to complete and provides a comprehensive overview of your system status.
-
Quick scan: This is a faster scan mode that scans only the most critical areas of your system for malware. It takes a few seconds to complete and provides a brief summary of your system status.
-
Full scan: This is a thorough scan mode that scans all the drives and folders on your PC for malware. It takes a long time to complete and provides a detailed report of your system status.
-
Removable scan: This is a special scan mode that scans only the removable devices such as USB flash drives, external hard drives, memory cards, etc. that are connected to your PC for malware. It takes a variable time to complete depending on the size and number of the devices and provides a specific report of their status.
-
-
How to start and customize a scan in GridinSoft Anti-Malware?
-
To start and customize a scan in GridinSoft Anti-Malware, you need to follow these steps:
-
-
Open GridinSoft Anti-Malware and click on the "Scan" button at the top left corner of the main window.
-
Select the type of scan that you want to perform from the four options: standard scan, quick scan, full scan, or removable scan.
-
If you want to customize the scan settings, click on the "Settings" button at the bottom right corner of the scan window. You can change the options such as scan priority, heuristic rules, file types, file size, etc.
-
Click on the "Start Scan" button to begin the scanning process.
-
-
Wait for GridinSoft Anti-Malware to finish scanning your PC and display the results.
-
How to view and manage scan results in GridinSoft Anti-Malware?
-
To view and manage scan results in GridinSoft Anti-Malware, you need to follow these steps:
-
-
After the scanning process is completed, GridinSoft Anti-Malware will show you a summary of the results, such as the number of scanned items, detected threats, removed threats, etc.
-
If you want to see more details about the results, click on the "View Results" button at the bottom right corner of the summary window. You will see a list of all the detected threats with their names, locations, types, and statuses.
-
If you want to remove all the detected threats from your PC, click on the "Fix Now" button at the bottom right corner of the results window. GridinSoft Anti-Malware will automatically delete all the threats from your PC.
-
If you want to remove only some of the detected threats from your PC, uncheck the boxes next to the threats that you want to keep on your PC. Then, click on the "Fix Selected" button at the bottom right corner of the results window. GridinSoft Anti-Malware will delete only the selected threats from your PC.
-
If you want to ignore some of the detected threats from your PC, check the boxes next to the threats that you want to ignore. Then, click on the "Ignore Selected" button at the bottom right corner of the results window. GridinSoft Anti-Malware will add the selected threats to the ignore list and will not scan them again in the future.
-
If you want to restore some of the removed threats to your PC, click on the "Restore" button at the bottom left corner of the results window. You will see a list of all the removed threats with their names, locations, types, and statuses. Check the boxes next to the threats that you want to restore. Then, click on the "Restore Selected" button at the bottom right corner of the restore window. GridinSoft Anti-Malware will restore the selected threats to their original locations on your PC.
-
-
Congratulations! You have successfully used GridinSoft Anti-Malware to scan and remove malware from your PC.
-
How to reset browser settings with GridinSoft Anti-Malware?
-
Sometimes, malware can alter your browser settings, such as changing your homepage, search engine, new tab page, extensions, etc. This can affect your browsing experience and expose you to more malware or phishing sites. To fix this problem, you can use GridinSoft Anti-Malware to reset your browser settings to default. Here are the steps to do so:
-
Why you need to reset browser settings with GridinSoft Anti-Malware?
-
Some of the reasons why you need to reset browser settings with GridinSoft Anti-Malware are:
-
-
You can restore your browser settings to their original state and get rid of any unwanted changes made by malware.
-
You can prevent malware from redirecting you to malicious or suspicious websites that may harm your PC or steal your data.
-
You can improve your browser performance and speed by removing any unnecessary or harmful extensions or plugins.
-
You can enhance your browser security and privacy by clearing any cookies, cache, history, or other data that may be used by malware or hackers.
-
-
How to reset browser settings with GridinSoft Anti-Malware?
-
To reset browser settings with GridinSoft Anti-Malware, you need to follow these steps:
-
-
Open GridinSoft Anti-Malware and click on the "Tools" button at the top right corner of the main window.
-
Select "Reset Browser Settings" from the drop-down menu and click on it.
-
Select the browsers that you want to reset from the list of available browsers. You can choose one or more browsers depending on your preference.
-
Check or uncheck the options that you want to reset for each browser. You can choose to reset homepage, search engine, new tab page, extensions, cookies, cache, history, etc.
-
Click on the "Reset" button at the bottom right corner of the reset window and wait for GridinSoft Anti-Malware to complete the resetting process.
-
-
Congratulations! You have successfully reset your browser settings with GridinSoft Anti-Malware. Now you can enjoy a safer and smoother browsing experience.
-
Conclusion
-
GridinSoft Anti-Malware is a powerful tool to protect your PC from malware. It can scan and remove malware from your PC effectively and efficiently. It can also reset your browser settings to default if they have been altered by malware. You can download, install, activate, and use GridinSoft Anti-Malware with crack license keys 2020 [latest] to enjoy its full features and benefits. However, you should also be aware of the disadvantages and risks of using crack license keys for GridinSoft Anti-Malware, such as violating the terms and conditions of the program, exposing your PC to malware or viruses, and not receiving any technical support or customer service. Therefore, you should use crack license keys for GridinSoft Anti-Malware at your own risk and discretion.
-
We hope that this article has helped you understand how to use GridinSoft Anti-Malware with crack license keys 2020 [latest] to protect your PC from malware. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about GridinSoft Anti-Malware:
-
Q: Is GridinSoft Anti-Malware safe to use?
-
A: Yes, GridinSoft Anti-Malware is safe to use as long as you download it from the official website and activate it with the official license keys. However, if you use crack license keys for GridinSoft Anti-Malware, you may expose your PC to malware or viruses that may be hidden in the crack license keys file or the source website.
-
Q: How much does GridinSoft Anti-Malware cost?
-
A: GridinSoft Anti-Malware offers a free trial version for 15 days, and a lifetime license for $29.95. You can also get discounts and offers if you buy multiple licenses or subscribe to their newsletter.
-
Q: How can I update GridinSoft Anti-Malware?
-
A: GridinSoft Anti-Malware can update its database automatically if you enable the option in the settings. You can also update it manually by clicking on the "Update" button at the top right corner of the main window.
-
Q: How can I contact GridinSoft Anti-Malware support?
-
A: You can contact GridinSoft Anti-Malware support via email, phone, or online chat. You can find their contact details on their official website at https://gridinsoft.com/support/.
-
Q: What are the system requirements for GridinSoft Anti-Malware?
-
A: The system requirements for GridinSoft Anti-Malware are:
-
-
Operating system: Windows XP/Vista/7/8/10
-
Processor: 800 MHz CPU or higher
-
Memory: 256 MB RAM or higher
-
Disk space: 50 MB free disk space or higher
-
Internet connection: Required for activation and updates
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Downloadwindowsxpprofessionalx64editionsp3sataedition !!TOP!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Downloadwindowsxpprofessionalx64editionsp3sataedition !!TOP!!.md
deleted file mode 100644
index 82bd6aeb5f2561c974d1c0f0a90b39b93f19d2a7..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Downloadwindowsxpprofessionalx64editionsp3sataedition !!TOP!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-April 25, 2021 - Download the latest full version of Windows XP Professional SP3 ISO for free. XP Professional SP3 has all preinstalled drivers for SATA drives. It has advanced security features, including support for data encryption. In addition, you can perform a secure system restore, making XP Professional SP3 the perfect choice for laptops and PCs. Download. Microsoft Office 2007 SP3 - Download the latest full version of the free Office 2007 suite from Microsoft. Office 2007 includes all the standard features you need to work efficiently with documents, spreadsheets, and presentations.
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bitcoin Pop A Modern Crypto Twist on the Classic Bubble Shooter Game!.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bitcoin Pop A Modern Crypto Twist on the Classic Bubble Shooter Game!.md
deleted file mode 100644
index dc42f30254dad3b2b9d5558237cc4ea99edc165b..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bitcoin Pop A Modern Crypto Twist on the Classic Bubble Shooter Game!.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
Download Bitcoin Pop: A Fun and Rewarding Crypto Game
-
Do you love playing bubble shooter games? Do you want to earn some crypto while having fun? If you answered yes to both questions, then you should download Bitcoin Pop, a free app that lets you play a timeless bubble shooter game with sweet crypto rewards.
-
How to play Bitcoin Pop and earn crypto rewards
-
Bitcoin Pop is a simple and addictive game that puts your hand-eye coordination and puzzle solving skills to the test. The goal is to aim, match, and pop like-colored crypto bubbles to collect the required number of sodas. Each level gets progressively harder, but aim and precision is the key to earn crypto.
As you play, you will earn Bling Points that can be exchanged for Bitcoin or USD via PayPal. You will need a valid Coinbase or PayPal account to cash out your rewards. You can also earn extra Bling Points by watching ads, inviting friends, or completing surveys.
-
How to download Bitcoin Pop on your device
-
Bitcoin Pop is available for both Android and iOS devices. You can download it from the Google Play Store or the App Store. The only requirement is that you register and login before playing. No tricks or hoops to jump through to receive your crypto - just download, register, play and start collecting crypto.
-
The pros and cons of Bitcoin Pop
-
Like any app, Bitcoin Pop has its advantages and disadvantages. Here are some of them:
-
-
| Pros | Cons |
| --- | --- |
| Fun and easy gameplay | Low payout rate |
| Modern crypto-themed graphics | High battery consumption |
| No international transaction fees or red tape | Limited use of Bitcoin as a payment method |
| User anonymity and transparency | Volatility and risk of Bitcoin |
| Independence from a central authority | No government regulations or protection |
-
-
Conclusion: Is Bitcoin Pop worth playing?
-
Bitcoin Pop is a great game for anyone who enjoys bubble shooter games and wants to earn some crypto in their spare time. It is not a get-rich-quick scheme, but rather a fun and rewarding way to learn more about Bitcoin and the crypto world. If you are looking for a casual and entertaining game that also gives you some exposure to cryptocurrency, then you should definitely download Bitcoin Pop and give it a try.
-
FAQs
-
-
What is Bitcoin?
-
Bitcoin is a digital currency that operates on a decentralized network of computers. It is not controlled by any central authority or intermediary. It can be used to send and receive payments online without intermediaries or fees.
-
How do I get a Coinbase or PayPal account?
-
You can sign up for a Coinbase account at coinbase.com. You will need to verify your identity and link your bank account or debit card. You can sign up for a PayPal account at paypal.com. You will need to provide your email address and link your bank account or credit card.
-
How do I exchange my Bling Points for Bitcoin or USD?
-
You can exchange your Bling Points for Bitcoin or USD in the app. Tap on the "Cash Out" button and choose your preferred option. You will need to enter your Coinbase email address or PayPal email address to receive your payment.
-
-
How long does it take to receive my payment?
-
It usually takes 24 hours to process your payment request. However, it may take longer depending on the network congestion or other factors.
-
Is Bitcoin Pop safe and legit?
Bitcoin Pop is a safe and legit app that has been verified by Google Play Protect and App Store Review. It has over 1 million downloads and a 4.5-star rating on both platforms. It is also backed by Bling, a reputable company that has been featured on Forbes, CNN, and The Wall Street Journal. You can trust that Bitcoin Pop will pay you your rewards and protect your privacy.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Attack on Titan AOT Mobile Fan Game v3.0 APK Offline An Immersive and Action-Packed Adventure.md b/spaces/1phancelerku/anime-remove-background/Attack on Titan AOT Mobile Fan Game v3.0 APK Offline An Immersive and Action-Packed Adventure.md
deleted file mode 100644
index 1772a30d14287cd065b06da98935b8fb01b7d036..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Attack on Titan AOT Mobile Fan Game v3.0 APK Offline An Immersive and Action-Packed Adventure.md
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
Attack on Titan AOT Mobile Fan Game V3.0 APK Offline
-
If you are a fan of the anime and manga series Attack on Titan, you might be interested in playing a mobile game based on it. However, most of the official games are online-only and require a stable internet connection. That's why some fans have created their own fan-made games that can be played offline. One of them is Attack on Titan AOT Mobile Fan Game V3.0 APK Offline, which is a free and fun game that you can download and install on your Android device.
-
What is Attack on Titan AOT Mobile Fan Game V3.0 APK Offline?
-
A fan-made game based on the popular anime and manga series
-
Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is a game created by Julhiecio, a fan of the series who wanted to make a game that captures the essence of the original story. The game is set in a world where humanity lives inside walls to protect themselves from giant humanoid creatures called Titans, who devour humans for no apparent reason. The game follows the adventures of Eren Yeager, Mikasa Ackerman, Armin Arlert, and other members of the Survey Corps, who fight against the Titans using special equipment called Vertical Maneuvering Equipment (VME), which allows them to move around using grappling hooks and blades.
-
Offline multiplayer mode
One of the main features of Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is that it can be played offline with up to four players using a local Wi-Fi network. This means that you don't need an internet connection to enjoy the game with your friends or family. You can choose to cooperate or compete with each other in various modes, such as survival, capture the flag, or deathmatch.
-
Large map with various locations
-
The game also features a large map that is based on the anime and manga series, with various locations that you can explore and interact with. You can visit places like Shiganshina District, Trost District, Forest of Giant Trees, Utgard Castle, and more. You can also find resources like gas, blades, food, and water that you can use to replenish your equipment and health.
-
Customizable characters and weapons
-
Another feature of Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is that you can customize your character and weapons according to your preferences. You can choose from different hairstyles, outfits, accessories, and skins for your character, as well as different types of blades, guns, and bombs for your weapons. You can also unlock new items by completing missions or collecting coins.
-
Smooth graphics and animations
-
The game also boasts smooth graphics and animations that make the game look realistic and immersive. The game uses 3D models and textures that are faithful to the original series, as well as dynamic lighting and shadows that create a realistic atmosphere. The game also uses fluid animations that make the movement and combat of the characters smooth and responsive.
-
Easy controls and interface
-
The game also has easy controls and interface that make the game easy to play and navigate. The game uses touch-screen controls that are intuitive and simple to use. You can move your character using a virtual joystick, aim and shoot using buttons, and switch between weapons using icons. You can also access menus and options using a hamburger button. The game also has a clear and simple interface that shows your health, gas, blades, and coins, as well as a mini-map and a mission log.
-
How to download and install Attack on Titan AOT Mobile Fan Game V3.0 APK Offline?
-
Download the APK file from a trusted source
-
To download and install Attack on Titan AOT Mobile Fan Game V3.0 APK Offline, you need to get the APK file from a trusted source. You can find the link to the official website of the game developer in the description below. Alternatively, you can search for the game on Google Play Store or other third-party websites that offer APK files. However, be careful of fake or malicious links that may harm your device or steal your data.
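As an extra precaution before installing (this is not from the original guide, just a minimal sketch), you can compare the downloaded file's SHA-256 checksum against one published by the developer, assuming such a checksum is provided; the file name used below is a placeholder:

```python
import hashlib

# Compute the SHA-256 checksum of a downloaded APK so it can be compared
# against a checksum published by the developer (if one is provided).
def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("aot-mobile-fan-game-v3.0.apk"))  # placeholder file name
```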
-
Enable unknown sources on your device
-
Before you can install the APK file, you need to enable unknown sources on your device. This is because the game is not available on the official app store and you need to allow your device to install apps from other sources. To do this, go to your device settings and look for security or privacy options. Then, find and enable the option that says unknown sources or allow installation of apps from unknown sources.
-
Install the APK file and launch the game
-
After you have enabled unknown sources, you can install the APK file by tapping on it and following the instructions. It may take a few minutes for the installation to complete. Once it is done, you can launch the game by tapping on its icon on your home screen or app drawer. You can also create a shortcut for the game on your desktop for easier access.
-
-
Tips and tricks for playing Attack on Titan AOT Mobile Fan Game V3.0 APK Offline
-
Learn the basics of movement and combat
-
The first thing you need to do when playing Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is to learn the basics of movement and combat. The game has a tutorial mode that teaches you how to use the VME, how to attack and dodge Titans, how to reload and change weapons, and how to use items. You can also practice your skills in training mode or in offline mode with bots.
-
Explore the map and find resources
-
The next thing you need to do is to explore the map and find resources that can help you survive and fight better. The map has various locations that have different advantages and disadvantages. For example, some places have more gas stations or supply depots, while others have more Titans or enemies. You can also find hidden items or secrets that can give you extra coins or bonuses.
-
Upgrade your equipment and skills
-
The third thing you need to do is to upgrade your equipment and skills as you progress in the game. You can use coins that you earn from missions or collect from the map to buy new items or upgrade existing ones. You can also use skill points that you gain from leveling up to improve your attributes or unlock new abilities. You can access the shop and the skill tree from the main menu or from checkpoints in the map.
-
Team up with other players or play solo
-
The last thing you need to do is to decide whether you want to team up with other players or play solo in offline mode. Both options have their pros and cons, depending on your preference and play style. If you team up with other players, you can cooperate and communicate with them using voice chat or text chat, as well as share resources and items. However, you may also encounter problems such as lag, disconnects, trolls, or cheaters. If you play solo, you can enjoy the game at your own pace and without any distractions or interruptions. However, you may also face more challenges and difficulties, especially against stronger Titans or enemies.
-
Conclusion
-
Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is a fan-made game that lets you experience the thrill and excitement of the anime and manga series on your mobile device. The game has offline multiplayer mode, large map with various locations, customizable characters and weapons, smooth graphics and animations, easy controls and interface, and more features that make it fun and enjoyable. The game is free to download and install, but you need to follow some steps to get it safely and securely. The game also has some tips and tricks that can help you play better and have more fun.
-
If you are looking for a game that is based on Attack on Titan and can be played offline with your friends or alone, then Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is a great choice for you.
FAQs
-
Q: Is Attack on Titan AOT Mobile Fan Game V3.0 APK Offline an official game?
A: No, it is not an official game. It is a fan-made game created by Julhiecio, a fan of the series who wanted to make a game that captures the essence of the original story.
-
Q: How can I play Attack on Titan AOT Mobile Fan Game V3.0 APK Offline with my friends?
A: You can play the game with your friends using a local Wi-Fi network. You can choose to cooperate or compete with each other in various modes, such as survival, capture the flag, or deathmatch.
-
Q: What are the requirements to play Attack on Titan AOT Mobile Fan Game V3.0 APK Offline?
A: You need an Android device that has at least 2 GB of RAM and 500 MB of free storage space. You also need to enable unknown sources on your device to install the APK file.
-
Q: Where can I get more information about Attack on Titan AOT Mobile Fan Game V3.0 APK Offline?
A: You can get more information about the game from the official website of the game developer, which is linked in the description below. You can also follow the game developer on social media platforms, such as Facebook, Twitter, Instagram, and YouTube.
-
Q: How can I support the game developer of Attack on Titan AOT Mobile Fan Game V3.0 APK Offline?
A: You can support the game developer by giving feedback, suggestions, or bug reports on the official website or social media platforms. You can also donate to the game developer via PayPal or Patreon.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Stumble Guys in Your Browser - No APK Required.md b/spaces/1phancelerku/anime-remove-background/Enjoy Stumble Guys in Your Browser - No APK Required.md
deleted file mode 100644
index b9c6a503781ec4189ddd1cadbb1eb42a02acfdc3..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Stumble Guys in Your Browser - No APK Required.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
How to Download Stumble Guys Without APK
-
If you are looking for a fun and addictive game to play with your friends or strangers online, you might want to check out Stumble Guys. It is a massive multiplayer party knockout game that will make you laugh, scream, and stumble your way to victory. But how can you download Stumble Guys without APK? In this article, we will explain what Stumble Guys is, what an APK file is, and how you can download Stumble Guys without APK on your device.
-
What is Stumble Guys?
-
A fun and chaotic multiplayer party game
-
Stumble Guys is a game that was inspired by popular TV shows like Wipeout and Takeshi's Castle. The game involves racing through obstacle courses against up to 32 players online. You have to run, jump, dash, slide, and dodge your way to the finish line while avoiding being eliminated by other players or the environment. The game features 17 unique obstacle courses that are randomly selected each round, so you never know what to expect. The game also has colorful, whacky graphics and hilarious sound effects that add to the fun.
Stumble Guys was originally released as a mobile game for Android devices in August 2020. Since then, it has gained millions of downloads and positive reviews from players. In October 2021, the game was also released on Steam for Windows PC users. The game supports cross-play between Android and PC users, so you can play with anyone regardless of their device. The game also has a party mode that allows you to invite your friends and create private matches. You can also customize your Stumble Guy with different outfits and emotes that you can unlock by playing the game or purchasing them from the store.
-
What is an APK file?
-
A package file format for Android apps
-
An APK file stands for Android Package Kit. It is a file format that Android uses to distribute and install apps. An APK file contains all the code, resources, assets, certificates, and manifest file that an app needs to run properly on an Android device. An APK file can be downloaded from various sources, such as Google Play Store, third-party websites, or directly from the app developer.
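As a purely illustrative aside (not part of the original article): an APK is just a ZIP archive, so you can peek at what it bundles with a few lines of Python. The path "app.apk" below is a placeholder, not a real file from this game.

```python
import zipfile

# An APK is a ZIP archive: list the files packaged inside it.
# "app.apk" is a placeholder path used only for illustration.
with zipfile.ZipFile("app.apk") as apk:
    for name in apk.namelist():
        print(name)  # e.g. AndroidManifest.xml, classes.dex, resources.arsc, META-INF/...
```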
-
The pros and cons of using APK files
-
There are some advantages and disadvantages of using APK files to install apps on your Android device. Some of the pros are:
-
-
You can access apps that are not available in your region or on Google Play Store.
-
You can install older versions of apps that may have features or compatibility that you prefer.
-
You can update apps faster than waiting for the official update from Google Play Store.
-
-
Some of the cons are:
-
-
You may expose your device to malware or viruses that may harm your data or system.
-
You may violate the terms of service or privacy policy of the app developer or Google Play Store.
-
You may encounter compatibility or performance issues with your device or other apps.
-
-
How to download Stumble Guys without APK
-
Download from Google Play Store or Steam
-
The easiest and safest way to download Stumble Guys without APK is to get it from the official sources, such as Google Play Store or Steam. Here are the steps to do so:
-
Download and install the game from your Steam library.
-
Launch the game and enjoy.
-
-
-
-
-
Use an Android emulator on PC or Mac
-
If you don't have an Android device or a PC that can run Steam, you can still play Stumble Guys without APK by using an Android emulator. An Android emulator is software that simulates an Android device on your PC or Mac. You can use it to run Android apps and games on your computer. There are many Android emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. Here are the general steps to use an Android emulator to play Stumble Guys:
-
-
Download and install an Android emulator of your choice from its official website.
-
Launch the emulator and sign in with your Google account. Then open Google Play Store inside the emulator, search for Stumble Guys, install it, and play it just as you would on a phone.
In conclusion, Stumble Guys is a fun and chaotic multiplayer party game that you can play with up to 32 players online. You can download Stumble Guys without APK by getting it from Google Play Store or Steam, or by using an Android emulator on your PC or Mac. By doing so, you can avoid the risks of using APK files and enjoy the game safely and smoothly.
-
-
If you are ready to join the fun and stumble your way to victory, download Stumble Guys today and invite your friends to play with you. You will have a blast competing with other players in hilarious obstacle courses. Don't forget to customize your Stumble Guy with cool outfits and emotes. Have fun and good luck!
-
FAQs
-
Is Stumble Guys free to play?
-
Yes, Stumble Guys is free to play on Android devices. However, you can purchase in-game items such as outfits, emotes, coins, and gems with real money. On PC, you have to buy the game from Steam for $4.99.
-
Can I play Stumble Guys with my friends?
-
Yes, you can play Stumble Guys with your friends by using the party mode. You can invite up to 32 friends to join your private match. You can also chat with them using voice or text messages.
-
How many players can join a Stumble Guys match?
-
A Stumble Guys match can have up to 32 players online. The match consists of multiple rounds of obstacle courses that eliminate players until one winner remains.
-
What are the system requirements for Stumble Guys?
-
The minimum system requirements for Stumble Guys are:
-
| Platform | Requirements |
| --- | --- |
| Android | Android 5.0 or higher; 2 GB of RAM or more; 100 MB of free storage space or more |
| PC | Windows 7 or higher (64-bit); dual-core CPU 2.4 GHz or faster; NVIDIA GeForce 8600/9600GT or equivalent GPU; 4 GB of RAM or more; 1 GB of free storage space or more |
-
-
-
-
-
How can I customize my Stumble Guy?
-
You can customize your Stumble Guy by changing its outfit and emote. You can unlock new outfits and emotes by playing the game, completing missions, or buying them from the store. You can also mix and match different parts of the outfits to create your own unique look.
-
-
\ No newline at end of file
diff --git a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130/Questions ede8818b3a0e447f80145905690eb3f6/FizzBuzz 70828a5e5e6846a48686f66bb9ccc8b6.md b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130/Questions ede8818b3a0e447f80145905690eb3f6/FizzBuzz 70828a5e5e6846a48686f66bb9ccc8b6.md
deleted file mode 100644
index 2e1e81a219fd857c6d3f7aba932800910bea88af..0000000000000000000000000000000000000000
--- a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130/Questions ede8818b3a0e447f80145905690eb3f6/FizzBuzz 70828a5e5e6846a48686f66bb9ccc8b6.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# FizzBuzz
-
-Difficulty: Easy
-Skills: Algorithms, Front end
-
-
-
-# Description
-
-Write a function that, given a positive integer `n`, returns the numbers from 1 to `n` in order, replacing every multiple of 3 with "Fizz", every multiple of 5 with "Buzz", and every multiple of both with "FizzBuzz".
-
-# Sample Inputs
-
-Give some valid inputs the candidate can expect to test their solution with.
-
-- ...
-- ...
-
-# Expected Outputs
-
-For each sample input above, list the expected output.
-
-- ...
-- ...
-
-# Solutions
-
-Provide possible solutions in common languages to this problem.
-
-### Javascript
-
-```jsx
-function solution(n) {
-  // Numbers 1..n, replacing multiples of 3 with "Fizz", of 5 with "Buzz", of both with "FizzBuzz".
-  return Array.from({ length: n }, (_, i) => i + 1).map((i) =>
-    i % 15 === 0 ? "FizzBuzz" : i % 3 === 0 ? "Fizz" : i % 5 === 0 ? "Buzz" : String(i)
-  );
-}
-```
-
-### Python
-
-```python
-def solution(n):
-    # Numbers 1..n, replacing multiples of 3 with "Fizz", of 5 with "Buzz", of both with "FizzBuzz".
-    return ["FizzBuzz" if i % 15 == 0 else "Fizz" if i % 3 == 0 else "Buzz" if i % 5 == 0 else str(i)
-            for i in range(1, n + 1)]
-```
\ No newline at end of file
diff --git a/spaces/AIFILMS/ControlNet-Video/style.css b/spaces/AIFILMS/ControlNet-Video/style.css
deleted file mode 100644
index 98c1607dba4c5e2055c5bc59197a9c995389a3fa..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/ControlNet-Video/style.css
+++ /dev/null
@@ -1,105 +0,0 @@
-#col-container {max-width: 820px; margin-left: auto; margin-right: auto;}
-#duplicate-container{
- display: flex;
- justify-content: space-between;
- align-items: center;
- line-height: 1em;
- flex-direction: row-reverse;
- font-size:1em;
-}
-a, a:hover, a:visited {
- text-decoration-line: underline;
- font-weight: 600;
- color: #1f2937 !important;
-}
-
-.dark a, .dark a:hover, .dark a:visited {
- color: #f3f4f6 !important;
-}
-
-.label-wrap {
- margin-bottom: 12px;
-}
-
-.footer {
- margin-bottom: 45px;
- margin-top: 10px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
-}
-
-.footer>p {
- font-size: .8rem!important;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(26px);
- background: white;
-}
-.dark .footer {
- border-color: #303030;
-}
-.dark .footer>p {
- background: #0b0f19;
-}
-
-div#may-like-container > p {
- font-size: .8em;
- margin-bottom: 4px;
-}
-
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-
-#share-btn-container {
- display: flex;
- padding-left: 0.5rem !important;
- padding-right: 0.5rem !important;
- background-color: #000000;
- justify-content: center;
- align-items: center;
- border-radius: 9999px !important;
- max-width: 13rem;
-}
-
-#share-btn-container:hover {
- background-color: #060606;
-}
-
-#share-btn {
- all: initial;
- color: #ffffff;
- font-weight: 600;
- cursor:pointer;
- font-family: 'IBM Plex Sans', sans-serif;
- margin-left: 0.5rem !important;
- padding-top: 0.5rem !important;
- padding-bottom: 0.5rem !important;
- right:0;
-}
-
-#share-btn * {
- all: unset;
-}
-
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-
-#share-btn-container .wrap {
- display: none !important;
-}
-
-#share-btn-container.hidden {
- display: none!important;
-}
\ No newline at end of file
diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/upfirdn2d.py b/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/upfirdn2d.py
deleted file mode 100644
index d509eb5e11e8cd01468dded5e5b53f5326057706..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op/upfirdn2d.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from collections import abc
-
-import torch
-from torch.nn import functional as F
-
-
-def upfirdn2d(inputs, kernel, up=1, down=1, pad=(0, 0)):
- if not isinstance(up, abc.Iterable):
- up = (up, up)
-
- if not isinstance(down, abc.Iterable):
- down = (down, down)
-
- if len(pad) == 2:
- pad = (pad[0], pad[1], pad[0], pad[1])
-
- return upfirdn2d_native(inputs, kernel, *up, *down, *pad)
-
-
-def upfirdn2d_native(
- inputs, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
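-    # Pure-PyTorch upfirdn2d: upsample by zero-insertion, pad/crop, filter with a flipped 2D FIR kernel, then downsample by striding.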
- _, channel, in_h, in_w = inputs.shape
- inputs = inputs.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = inputs.shape
- kernel_h, kernel_w = kernel.shape
-
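-    # Upsample by zero-insertion: append (up_x - 1) zeros after each pixel along W and (up_y - 1) along H.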
- out = inputs.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
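-    # Convolve with the spatially flipped kernel (true convolution), then downsample by strided slicing.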
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x
-
- return out.view(-1, channel, out_h, out_w)
\ No newline at end of file
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/phind.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/phind.py
deleted file mode 100644
index 70525d51d849c43bd1cf29c7f9b18f22bff1e982..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/phind.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import sys
-import json
-import datetime
-import urllib.parse
-
-from curl_cffi import requests
-
-config = json.loads(sys.argv[1])
-prompt = config['messages'][-1]['content']
-
-skill = 'expert' if config['model'] == 'gpt-4' else 'intermediate'
-
-json_data = json.dumps({
- 'question': prompt,
- 'options': {
- 'skill': skill,
- 'date': datetime.datetime.now().strftime('%d/%m/%Y'),
- 'language': 'en',
- 'detailed': True,
- 'creative': True,
- 'customLinks': []}}, separators=(',', ':'))
-
-headers = {
- 'Content-Type': 'application/json',
- 'Pragma': 'no-cache',
- 'Accept': '*/*',
- 'Sec-Fetch-Site': 'same-origin',
- 'Accept-Language': 'en-GB,en;q=0.9',
- 'Cache-Control': 'no-cache',
- 'Sec-Fetch-Mode': 'cors',
- 'Content-Length': str(len(json_data)),
- 'Origin': 'https://www.phind.com',
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15',
- 'Referer': f'https://www.phind.com/search?q={urllib.parse.quote(prompt)}&source=searchbox',
- 'Connection': 'keep-alive',
- 'Host': 'www.phind.com',
- 'Sec-Fetch-Dest': 'empty'
-}
-
-
-def output(chunk):
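-    # Streaming callback: drop Phind metadata frames and strip the SSE 'data:' framing before printing each chunk.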
- try:
- if b'PHIND_METADATA' in chunk:
- return
-
- if chunk == b'data: \r\ndata: \r\ndata: \r\n\r\n':
- chunk = b'data: \n\r\n\r\n'
-
- chunk = chunk.decode()
-
- chunk = chunk.replace('data: \r\n\r\ndata: ', 'data: \n')
- chunk = chunk.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\r\n\r\n')
- chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '')
-
- print(chunk, flush=True, end = '')
-
- except json.decoder.JSONDecodeError:
- pass
-
-while True:
- try:
- response = requests.post('https://www.phind.com/api/infer/answer',
- headers=headers, data=json_data, content_callback=output, timeout=999999, impersonate='safari15_5')
-
- exit(0)
-
- except Exception as e:
- print('an error occured, retrying... |', e, flush=True)
- continue
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/WebSearch.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/WebSearch.ts
deleted file mode 100644
index 7416f01f1a2c7ea9b94f525aec473312bc3deefd..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/WebSearch.ts
+++ /dev/null
@@ -1,36 +0,0 @@
-import type { Conversation } from "./Conversation";
-import type { Timestamps } from "./Timestamps";
-
-export interface WebSearch extends Timestamps {
- prompt: string;
-
- searchQuery: string;
- results: string[];
- knowledgeGraph: string;
- answerBox: string;
- summary: string;
-
- messages: WebSearchMessage[];
-}
-
-export type WebSearchMessageUpdate = {
- type: "update";
- message: string;
- args?: string[];
-};
-
-export type WebSearchMessageError = {
- type: "error";
- message: string;
- args?: string[];
-};
-
-export type WebSearchMessageResult = {
- type: "result";
- id: string;
-};
-
-export type WebSearchMessage =
- | WebSearchMessageUpdate
- | WebSearchMessageResult
- | WebSearchMessageError;
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Perspective.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Perspective.d.ts
deleted file mode 100644
index e0f9f7343c198dfc5010747c4415fec67945c7e5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Perspective.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import { ContainerPerspective } from '../../../plugins/perspectiveimage';
-export default ContainerPerspective;
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/bin/gen_mask_dataset_hydra.py b/spaces/AlexWang/lama/bin/gen_mask_dataset_hydra.py
deleted file mode 100644
index 4f4fdea52315f24f83fbd802e51a1815097d0fcb..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/gen_mask_dataset_hydra.py
+++ /dev/null
@@ -1,124 +0,0 @@
-#!/usr/bin/env python3
-
-import glob
-import os
-import shutil
-import traceback
-import hydra
-from omegaconf import OmegaConf
-
-import PIL.Image as Image
-import numpy as np
-from joblib import Parallel, delayed
-
-from saicinpainting.evaluation.masks.mask import SegmentationMask, propose_random_square_crop
-from saicinpainting.evaluation.utils import load_yaml, SmallMode
-from saicinpainting.training.data.masks import MixedMaskGenerator
-
-
-class MakeManyMasksWrapper:
- def __init__(self, impl, variants_n=2):
- self.impl = impl
- self.variants_n = variants_n
-
- def get_masks(self, img):
- img = np.transpose(np.array(img), (2, 0, 1))
- return [self.impl(img)[0] for _ in range(self.variants_n)]
-
-
-def process_images(src_images, indir, outdir, config):
- if config.generator_kind == 'segmentation':
- mask_generator = SegmentationMask(**config.mask_generator_kwargs)
- elif config.generator_kind == 'random':
- mask_generator_kwargs = OmegaConf.to_container(config.mask_generator_kwargs, resolve=True)
- variants_n = mask_generator_kwargs.pop('variants_n', 2)
- mask_generator = MakeManyMasksWrapper(MixedMaskGenerator(**mask_generator_kwargs),
- variants_n=variants_n)
- else:
- raise ValueError(f'Unexpected generator kind: {config.generator_kind}')
-
- max_tamper_area = config.get('max_tamper_area', 1)
-
- for infile in src_images:
- try:
- file_relpath = infile[len(indir):]
- img_outpath = os.path.join(outdir, file_relpath)
- os.makedirs(os.path.dirname(img_outpath), exist_ok=True)
-
- image = Image.open(infile).convert('RGB')
-
- # scale input image to output resolution and filter smaller images
- if min(image.size) < config.cropping.out_min_size:
- handle_small_mode = SmallMode(config.cropping.handle_small_mode)
- if handle_small_mode == SmallMode.DROP:
- continue
- elif handle_small_mode == SmallMode.UPSCALE:
- factor = config.cropping.out_min_size / min(image.size)
- out_size = (np.array(image.size) * factor).round().astype('uint32')
- image = image.resize(out_size, resample=Image.BICUBIC)
- else:
- factor = config.cropping.out_min_size / min(image.size)
- out_size = (np.array(image.size) * factor).round().astype('uint32')
- image = image.resize(out_size, resample=Image.BICUBIC)
-
- # generate and select masks
- src_masks = mask_generator.get_masks(image)
-
- filtered_image_mask_pairs = []
- for cur_mask in src_masks:
- if config.cropping.out_square_crop:
- (crop_left,
- crop_top,
- crop_right,
- crop_bottom) = propose_random_square_crop(cur_mask,
- min_overlap=config.cropping.crop_min_overlap)
- cur_mask = cur_mask[crop_top:crop_bottom, crop_left:crop_right]
- cur_image = image.copy().crop((crop_left, crop_top, crop_right, crop_bottom))
- else:
- cur_image = image
-
- if len(np.unique(cur_mask)) == 0 or cur_mask.mean() > max_tamper_area:
- continue
-
- filtered_image_mask_pairs.append((cur_image, cur_mask))
-
- mask_indices = np.random.choice(len(filtered_image_mask_pairs),
- size=min(len(filtered_image_mask_pairs), config.max_masks_per_image),
- replace=False)
-
- # crop masks; save masks together with input image
- mask_basename = os.path.join(outdir, os.path.splitext(file_relpath)[0])
- for i, idx in enumerate(mask_indices):
- cur_image, cur_mask = filtered_image_mask_pairs[idx]
- cur_basename = mask_basename + f'_crop{i:03d}'
- Image.fromarray(np.clip(cur_mask * 255, 0, 255).astype('uint8'),
- mode='L').save(cur_basename + f'_mask{i:03d}.png')
- cur_image.save(cur_basename + '.png')
- except KeyboardInterrupt:
- return
- except Exception as ex:
- print(f'Could not make masks for {infile} due to {ex}:\n{traceback.format_exc()}')
-
-
-@hydra.main(config_path='../configs/data_gen/whydra', config_name='random_medium_256.yaml')
-def main(config: OmegaConf):
- if not config.indir.endswith('/'):
- config.indir += '/'
-
- os.makedirs(config.outdir, exist_ok=True)
-
- in_files = list(glob.glob(os.path.join(config.indir, '**', f'*.{config.location.extension}'),
- recursive=True))
- if config.n_jobs == 0:
- process_images(in_files, config.indir, config.outdir, config)
- else:
- in_files_n = len(in_files)
- chunk_size = in_files_n // config.n_jobs + (1 if in_files_n % config.n_jobs > 0 else 0)
- Parallel(n_jobs=config.n_jobs)(
- delayed(process_images)(in_files[start:start+chunk_size], config.indir, config.outdir, config)
- for start in range(0, len(in_files), chunk_size)
- )
-
-
-if __name__ == '__main__':
- main()
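The `main` entry point above splits the input file list into near-equal chunks before dispatching them to joblib workers. A standalone sketch of that ceil-division chunking (names are illustrative):

```python
def chunk_evenly(items, n_jobs):
    """Split items into n_jobs chunks of (near-)equal size using ceil division."""
    chunk_size = len(items) // n_jobs + (1 if len(items) % n_jobs > 0 else 0)
    return [items[start:start + chunk_size] for start in range(0, len(items), chunk_size)]

print(chunk_evenly(list(range(10)), 3))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```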
diff --git a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataloader.py b/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataloader.py
deleted file mode 100644
index 039b9ec3645b2a4626ff47c221e372f32a6ad339..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataloader.py
+++ /dev/null
@@ -1,425 +0,0 @@
-import torch
-import torch.multiprocessing as multiprocessing
-from torch._C import _set_worker_signal_handlers, \
- _remove_worker_pids, _error_if_any_worker_fails
-try:
- from torch._C import _set_worker_pids
-except:
- from torch._C import _update_worker_pids as _set_worker_pids
-from .sampler import SequentialSampler, RandomSampler, BatchSampler
-import signal
-import collections
-import re
-import sys
-import threading
-import traceback
-from torch._six import string_classes, int_classes
-import numpy as np
-
-if sys.version_info[0] == 2:
- import Queue as queue
-else:
- import queue
-
-
-class ExceptionWrapper(object):
- r"Wraps an exception plus traceback to communicate across threads"
-
- def __init__(self, exc_info):
- self.exc_type = exc_info[0]
- self.exc_msg = "".join(traceback.format_exception(*exc_info))
-
-
-_use_shared_memory = False
-"""Whether to use shared memory in default_collate"""
-
-
-def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id):
- global _use_shared_memory
- _use_shared_memory = True
-
-    # Initialize C side signal handlers for SIGBUS and SIGSEGV. Python signal
- # module's handlers are executed after Python returns from C low-level
- # handlers, likely when the same fatal signal happened again already.
- # https://docs.python.org/3/library/signal.html Sec. 18.8.1.1
- _set_worker_signal_handlers()
-
- torch.set_num_threads(1)
- torch.manual_seed(seed)
- np.random.seed(seed)
-
- if init_fn is not None:
- init_fn(worker_id)
-
- while True:
- r = index_queue.get()
- if r is None:
- break
- idx, batch_indices = r
- try:
- samples = collate_fn([dataset[i] for i in batch_indices])
- except Exception:
- data_queue.put((idx, ExceptionWrapper(sys.exc_info())))
- else:
- data_queue.put((idx, samples))
-
-
-def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id):
- if pin_memory:
- torch.cuda.set_device(device_id)
-
- while True:
- try:
- r = in_queue.get()
- except Exception:
- if done_event.is_set():
- return
- raise
- if r is None:
- break
- if isinstance(r[1], ExceptionWrapper):
- out_queue.put(r)
- continue
- idx, batch = r
- try:
- if pin_memory:
- batch = pin_memory_batch(batch)
- except Exception:
- out_queue.put((idx, ExceptionWrapper(sys.exc_info())))
- else:
- out_queue.put((idx, batch))
-
-numpy_type_map = {
- 'float64': torch.DoubleTensor,
- 'float32': torch.FloatTensor,
- 'float16': torch.HalfTensor,
- 'int64': torch.LongTensor,
- 'int32': torch.IntTensor,
- 'int16': torch.ShortTensor,
- 'int8': torch.CharTensor,
- 'uint8': torch.ByteTensor,
-}
-
-
-def default_collate(batch):
- "Puts each data field into a tensor with outer dimension batch size"
-
- error_msg = "batch must contain tensors, numbers, dicts or lists; found {}"
- elem_type = type(batch[0])
- if torch.is_tensor(batch[0]):
- out = None
- if _use_shared_memory:
- # If we're in a background process, concatenate directly into a
- # shared memory tensor to avoid an extra copy
- numel = sum([x.numel() for x in batch])
- storage = batch[0].storage()._new_shared(numel)
- out = batch[0].new(storage)
- return torch.stack(batch, 0, out=out)
- elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
- and elem_type.__name__ != 'string_':
- elem = batch[0]
- if elem_type.__name__ == 'ndarray':
- # array of string classes and object
- if re.search('[SaUO]', elem.dtype.str) is not None:
- raise TypeError(error_msg.format(elem.dtype))
-
- return torch.stack([torch.from_numpy(b) for b in batch], 0)
- if elem.shape == (): # scalars
- py_type = float if elem.dtype.name.startswith('float') else int
- return numpy_type_map[elem.dtype.name](list(map(py_type, batch)))
- elif isinstance(batch[0], int_classes):
- return torch.LongTensor(batch)
- elif isinstance(batch[0], float):
- return torch.DoubleTensor(batch)
- elif isinstance(batch[0], string_classes):
- return batch
- elif isinstance(batch[0], collections.Mapping):
- return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
- elif isinstance(batch[0], collections.Sequence):
- transposed = zip(*batch)
- return [default_collate(samples) for samples in transposed]
-
- raise TypeError((error_msg.format(type(batch[0]))))
-
-
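For readers following the branches above, here is a simplified, standalone sketch of the same recursive collate rule (not the original function): tensors are stacked along a new batch dimension, mappings are collated per key, and sequences are transposed.

```python
import torch

def tiny_collate(batch):
    """Simplified stand-in: stack tensors, recurse into dicts, transpose sequences."""
    elem = batch[0]
    if torch.is_tensor(elem):
        return torch.stack(batch, 0)
    if isinstance(elem, dict):
        return {key: tiny_collate([d[key] for d in batch]) for key in elem}
    if isinstance(elem, (list, tuple)):
        return [tiny_collate(samples) for samples in zip(*batch)]
    return torch.tensor(batch)

batch = [{'img': torch.zeros(3, 4), 'label': 1}, {'img': torch.ones(3, 4), 'label': 0}]
out = tiny_collate(batch)
print(out['img'].shape, out['label'])  # torch.Size([2, 3, 4]) tensor([1, 0])
```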
-def pin_memory_batch(batch):
- if torch.is_tensor(batch):
- return batch.pin_memory()
- elif isinstance(batch, string_classes):
- return batch
- elif isinstance(batch, collections.Mapping):
- return {k: pin_memory_batch(sample) for k, sample in batch.items()}
- elif isinstance(batch, collections.Sequence):
- return [pin_memory_batch(sample) for sample in batch]
- else:
- return batch
-
-
-_SIGCHLD_handler_set = False
-"""Whether SIGCHLD handler is set for DataLoader worker failures. Only one
-handler needs to be set for all DataLoaders in a process."""
-
-
-def _set_SIGCHLD_handler():
- # Windows doesn't support SIGCHLD handler
- if sys.platform == 'win32':
- return
- # can't set signal in child threads
- if not isinstance(threading.current_thread(), threading._MainThread):
- return
- global _SIGCHLD_handler_set
- if _SIGCHLD_handler_set:
- return
- previous_handler = signal.getsignal(signal.SIGCHLD)
- if not callable(previous_handler):
- previous_handler = None
-
- def handler(signum, frame):
-        # The following call uses `waitid` with WNOHANG from C side. Therefore,
- # Python can still get and update the process status successfully.
- _error_if_any_worker_fails()
- if previous_handler is not None:
- previous_handler(signum, frame)
-
- signal.signal(signal.SIGCHLD, handler)
- _SIGCHLD_handler_set = True
-
-
-class DataLoaderIter(object):
- "Iterates once over the DataLoader's dataset, as specified by the sampler"
-
- def __init__(self, loader):
- self.dataset = loader.dataset
- self.collate_fn = loader.collate_fn
- self.batch_sampler = loader.batch_sampler
- self.num_workers = loader.num_workers
- self.pin_memory = loader.pin_memory and torch.cuda.is_available()
- self.timeout = loader.timeout
- self.done_event = threading.Event()
-
- self.sample_iter = iter(self.batch_sampler)
-
- if self.num_workers > 0:
- self.worker_init_fn = loader.worker_init_fn
- self.index_queue = multiprocessing.SimpleQueue()
- self.worker_result_queue = multiprocessing.SimpleQueue()
- self.batches_outstanding = 0
- self.worker_pids_set = False
- self.shutdown = False
- self.send_idx = 0
- self.rcvd_idx = 0
- self.reorder_dict = {}
-
- base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0]
- self.workers = [
- multiprocessing.Process(
- target=_worker_loop,
- args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn,
- base_seed + i, self.worker_init_fn, i))
- for i in range(self.num_workers)]
-
- if self.pin_memory or self.timeout > 0:
- self.data_queue = queue.Queue()
- if self.pin_memory:
- maybe_device_id = torch.cuda.current_device()
- else:
- # do not initialize cuda context if not necessary
- maybe_device_id = None
- self.worker_manager_thread = threading.Thread(
- target=_worker_manager_loop,
- args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory,
- maybe_device_id))
- self.worker_manager_thread.daemon = True
- self.worker_manager_thread.start()
- else:
- self.data_queue = self.worker_result_queue
-
- for w in self.workers:
- w.daemon = True # ensure that the worker exits on process exit
- w.start()
-
- _set_worker_pids(id(self), tuple(w.pid for w in self.workers))
- _set_SIGCHLD_handler()
- self.worker_pids_set = True
-
- # prime the prefetch loop
- for _ in range(2 * self.num_workers):
- self._put_indices()
-
- def __len__(self):
- return len(self.batch_sampler)
-
- def _get_batch(self):
- if self.timeout > 0:
- try:
- return self.data_queue.get(timeout=self.timeout)
- except queue.Empty:
- raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout))
- else:
- return self.data_queue.get()
-
- def __next__(self):
- if self.num_workers == 0: # same-process loading
- indices = next(self.sample_iter) # may raise StopIteration
- batch = self.collate_fn([self.dataset[i] for i in indices])
- if self.pin_memory:
- batch = pin_memory_batch(batch)
- return batch
-
- # check if the next sample has already been generated
- if self.rcvd_idx in self.reorder_dict:
- batch = self.reorder_dict.pop(self.rcvd_idx)
- return self._process_next_batch(batch)
-
- if self.batches_outstanding == 0:
- self._shutdown_workers()
- raise StopIteration
-
- while True:
- assert (not self.shutdown and self.batches_outstanding > 0)
- idx, batch = self._get_batch()
- self.batches_outstanding -= 1
- if idx != self.rcvd_idx:
- # store out-of-order samples
- self.reorder_dict[idx] = batch
- continue
- return self._process_next_batch(batch)
-
- next = __next__ # Python 2 compatibility
-
- def __iter__(self):
- return self
-
- def _put_indices(self):
- assert self.batches_outstanding < 2 * self.num_workers
- indices = next(self.sample_iter, None)
- if indices is None:
- return
- self.index_queue.put((self.send_idx, indices))
- self.batches_outstanding += 1
- self.send_idx += 1
-
- def _process_next_batch(self, batch):
- self.rcvd_idx += 1
- self._put_indices()
- if isinstance(batch, ExceptionWrapper):
- raise batch.exc_type(batch.exc_msg)
- return batch
-
- def __getstate__(self):
- # TODO: add limited pickling support for sharing an iterator
- # across multiple threads for HOGWILD.
- # Probably the best way to do this is by moving the sample pushing
- # to a separate thread and then just sharing the data queue
- # but signalling the end is tricky without a non-blocking API
- raise NotImplementedError("DataLoaderIterator cannot be pickled")
-
- def _shutdown_workers(self):
- try:
- if not self.shutdown:
- self.shutdown = True
- self.done_event.set()
- # if worker_manager_thread is waiting to put
- while not self.data_queue.empty():
- self.data_queue.get()
- for _ in self.workers:
- self.index_queue.put(None)
- # done_event should be sufficient to exit worker_manager_thread,
- # but be safe here and put another None
- self.worker_result_queue.put(None)
- finally:
- # removes pids no matter what
- if self.worker_pids_set:
- _remove_worker_pids(id(self))
- self.worker_pids_set = False
-
- def __del__(self):
- if self.num_workers > 0:
- self._shutdown_workers()
-
-
-class DataLoader(object):
- """
- Data loader. Combines a dataset and a sampler, and provides
- single- or multi-process iterators over the dataset.
-
- Arguments:
- dataset (Dataset): dataset from which to load the data.
- batch_size (int, optional): how many samples per batch to load
- (default: 1).
- shuffle (bool, optional): set to ``True`` to have the data reshuffled
- at every epoch (default: False).
- sampler (Sampler, optional): defines the strategy to draw samples from
- the dataset. If specified, ``shuffle`` must be False.
- batch_sampler (Sampler, optional): like sampler, but returns a batch of
- indices at a time. Mutually exclusive with batch_size, shuffle,
- sampler, and drop_last.
- num_workers (int, optional): how many subprocesses to use for data
- loading. 0 means that the data will be loaded in the main process.
- (default: 0)
- collate_fn (callable, optional): merges a list of samples to form a mini-batch.
- pin_memory (bool, optional): If ``True``, the data loader will copy tensors
- into CUDA pinned memory before returning them.
- drop_last (bool, optional): set to ``True`` to drop the last incomplete batch,
- if the dataset size is not divisible by the batch size. If ``False`` and
- the size of dataset is not divisible by the batch size, then the last batch
- will be smaller. (default: False)
- timeout (numeric, optional): if positive, the timeout value for collecting a batch
- from workers. Should always be non-negative. (default: 0)
- worker_init_fn (callable, optional): If not None, this will be called on each
- worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as
- input, after seeding and before data loading. (default: None)
-
- .. note:: By default, each worker will have its PyTorch seed set to
- ``base_seed + worker_id``, where ``base_seed`` is a long generated
- by main process using its RNG. You may use ``torch.initial_seed()`` to access
- this value in :attr:`worker_init_fn`, which can be used to set other seeds
- (e.g. NumPy) before data loading.
-
-    .. warning:: If the ``spawn`` start method is used, :attr:`worker_init_fn` cannot be an
- unpicklable object, e.g., a lambda function.
- """
-
- def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None,
- num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False,
- timeout=0, worker_init_fn=None):
- self.dataset = dataset
- self.batch_size = batch_size
- self.num_workers = num_workers
- self.collate_fn = collate_fn
- self.pin_memory = pin_memory
- self.drop_last = drop_last
- self.timeout = timeout
- self.worker_init_fn = worker_init_fn
-
- if timeout < 0:
- raise ValueError('timeout option should be non-negative')
-
- if batch_sampler is not None:
- if batch_size > 1 or shuffle or sampler is not None or drop_last:
- raise ValueError('batch_sampler is mutually exclusive with '
- 'batch_size, shuffle, sampler, and drop_last')
-
- if sampler is not None and shuffle:
- raise ValueError('sampler is mutually exclusive with shuffle')
-
- if self.num_workers < 0:
- raise ValueError('num_workers cannot be negative; '
- 'use num_workers=0 to disable multiprocessing.')
-
- if batch_sampler is None:
- if sampler is None:
- if shuffle:
- sampler = RandomSampler(dataset)
- else:
- sampler = SequentialSampler(dataset)
- batch_sampler = BatchSampler(sampler, batch_size, drop_last)
-
- self.sampler = sampler
- self.batch_sampler = batch_sampler
-
- def __iter__(self):
- return DataLoaderIter(self)
-
- def __len__(self):
- return len(self.batch_sampler)
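This deleted class mirrors the classic `torch.utils.data.DataLoader` interface, so a minimal usage sketch of the documented arguments can be shown with the upstream loader to keep the example runnable:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(8).float().unsqueeze(1), torch.arange(8))
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=0)
for features, labels in loader:
    print(features.shape, labels.shape)  # torch.Size([4, 1]) torch.Size([4])
```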
diff --git a/spaces/Ananthap4/itineraryGenerator/README.md b/spaces/Ananthap4/itineraryGenerator/README.md
deleted file mode 100644
index 120b83a1637c1717a041b371d56c6c4eceacecee..0000000000000000000000000000000000000000
--- a/spaces/Ananthap4/itineraryGenerator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ItineraryGenerator
-emoji: 🌍
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddpm_parallel.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddpm_parallel.py
deleted file mode 100644
index a92e175877d24057e49bf405e88185fd4297e6d2..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddpm_parallel.py
+++ /dev/null
@@ -1,604 +0,0 @@
-# Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/ermongroup/ddim
-
-import math
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
-from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput
-class DDPMParallelSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: torch.FloatTensor
- pred_original_sample: Optional[torch.FloatTensor] = None
-
-
-# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
-def betas_for_alpha_bar(
- num_diffusion_timesteps,
- max_beta=0.999,
- alpha_transform_type="cosine",
-):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
- alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar.
- Choose from `cosine` or `exp`
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
- if alpha_transform_type == "cosine":
-
- def alpha_bar_fn(t):
- return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
-
- elif alpha_transform_type == "exp":
-
- def alpha_bar_fn(t):
- return math.exp(t * -12.0)
-
- else:
- raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}")
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
- return torch.tensor(betas, dtype=torch.float32)
-
-
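To illustrate the cosine schedule above: each beta comes from the ratio of `alpha_bar` at consecutive timesteps, clipped at `max_beta`. A standalone numeric sketch:

```python
import math

def alpha_bar(t):
    # the cosine alpha_bar used by betas_for_alpha_bar above
    return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2

num_steps, max_beta = 10, 0.999
betas = [
    min(1 - alpha_bar((i + 1) / num_steps) / alpha_bar(i / num_steps), max_beta)
    for i in range(num_steps)
]
print([round(b, 4) for b in betas])  # betas grow from a few percent towards max_beta near t = 1
```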
-class DDPMParallelScheduler(SchedulerMixin, ConfigMixin):
- """
-    Denoising diffusion probabilistic models (DDPMs) explore the connections between denoising score matching and
- Langevin dynamics sampling.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/abs/2006.11239
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, `squaredcos_cap_v2` or `sigmoid`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- variance_type (`str`):
- options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`,
- `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
- clip_sample (`bool`, default `True`):
- option to clip predicted sample for numerical stability.
- clip_sample_range (`float`, default `1.0`):
- the maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- thresholding (`bool`, default `False`):
- whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487).
- Note that the thresholding method is unsuitable for latent-space diffusion models (such as
- stable-diffusion).
- dynamic_thresholding_ratio (`float`, default `0.995`):
- the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen
- (https://arxiv.org/abs/2205.11487). Valid only when `thresholding=True`.
- sample_max_value (`float`, default `1.0`):
- the threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- timestep_spacing (`str`, default `"leading"`):
- The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample
- Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information.
- steps_offset (`int`, default `0`):
- an offset added to the inference steps. You can use a combination of `offset=1` and
- `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in
- stable diffusion.
- """
-
- _compatibles = [e.name for e in KarrasDiffusionSchedulers]
- order = 1
- _is_ode_scheduler = False
-
- @register_to_config
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.__init__
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- variance_type: str = "fixed_small",
- clip_sample: bool = True,
- prediction_type: str = "epsilon",
- thresholding: bool = False,
- dynamic_thresholding_ratio: float = 0.995,
- clip_sample_range: float = 1.0,
- sample_max_value: float = 1.0,
- timestep_spacing: str = "leading",
- steps_offset: int = 0,
- ):
- if trained_betas is not None:
- self.betas = torch.tensor(trained_betas, dtype=torch.float32)
- elif beta_schedule == "linear":
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = (
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
- )
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- elif beta_schedule == "sigmoid":
- # GeoDiff sigmoid schedule
- betas = torch.linspace(-6, 6, num_train_timesteps)
- self.betas = torch.sigmoid(betas) * (beta_end - beta_start) + beta_start
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
- self.one = torch.tensor(1.0)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # setable values
- self.custom_timesteps = False
- self.num_inference_steps = None
- self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy())
-
- self.variance_type = variance_type
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.scale_model_input
- def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`torch.FloatTensor`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- return sample
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.set_timesteps
- def set_timesteps(
- self,
- num_inference_steps: Optional[int] = None,
- device: Union[str, torch.device] = None,
- timesteps: Optional[List[int]] = None,
- ):
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`Optional[int]`):
- the number of diffusion steps used when generating samples with a pre-trained model. If passed, then
- `timesteps` must be `None`.
- device (`str` or `torch.device`, optional):
- the device to which the timesteps are moved to.
-            timesteps (`List[int]`, optional):
- custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
- timestep spacing strategy of equal spacing between timesteps is used. If passed, `num_inference_steps`
- must be `None`.
-
- """
- if num_inference_steps is not None and timesteps is not None:
- raise ValueError("Can only pass one of `num_inference_steps` or `custom_timesteps`.")
-
- if timesteps is not None:
- for i in range(1, len(timesteps)):
- if timesteps[i] >= timesteps[i - 1]:
- raise ValueError("`custom_timesteps` must be in descending order.")
-
- if timesteps[0] >= self.config.num_train_timesteps:
- raise ValueError(
- f"`timesteps` must start before `self.config.train_timesteps`:"
- f" {self.config.num_train_timesteps}."
- )
-
- timesteps = np.array(timesteps, dtype=np.int64)
- self.custom_timesteps = True
- else:
- if num_inference_steps > self.config.num_train_timesteps:
- raise ValueError(
- f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:"
- f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle"
- f" maximal {self.config.num_train_timesteps} timesteps."
- )
-
- self.num_inference_steps = num_inference_steps
- self.custom_timesteps = False
-
- # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
- if self.config.timestep_spacing == "linspace":
- timesteps = (
- np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps)
- .round()[::-1]
- .copy()
- .astype(np.int64)
- )
- elif self.config.timestep_spacing == "leading":
- step_ratio = self.config.num_train_timesteps // self.num_inference_steps
- # creates integer timesteps by multiplying by ratio
- # casting to int to avoid issues when num_inference_step is power of 3
- timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64)
- timesteps += self.config.steps_offset
- elif self.config.timestep_spacing == "trailing":
- step_ratio = self.config.num_train_timesteps / self.num_inference_steps
- # creates integer timesteps by multiplying by ratio
- # casting to int to avoid issues when num_inference_step is power of 3
- timesteps = np.round(np.arange(self.config.num_train_timesteps, 0, -step_ratio)).astype(np.int64)
- timesteps -= 1
- else:
- raise ValueError(
- f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', 'leading' or 'trailing'."
- )
-
- self.timesteps = torch.from_numpy(timesteps).to(device)
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._get_variance
- def _get_variance(self, t, predicted_variance=None, variance_type=None):
- prev_t = self.previous_timestep(t)
-
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_t] if prev_t >= 0 else self.one
- current_beta_t = 1 - alpha_prod_t / alpha_prod_t_prev
-
- # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf)
- # and sample from it to get previous sample
- # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample
- variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * current_beta_t
-
- # we always take the log of variance, so clamp it to ensure it's not 0
- variance = torch.clamp(variance, min=1e-20)
-
- if variance_type is None:
- variance_type = self.config.variance_type
-
- # hacks - were probably added for training stability
- if variance_type == "fixed_small":
- variance = variance
- # for rl-diffuser https://arxiv.org/abs/2205.09991
- elif variance_type == "fixed_small_log":
- variance = torch.log(variance)
- variance = torch.exp(0.5 * variance)
- elif variance_type == "fixed_large":
- variance = current_beta_t
- elif variance_type == "fixed_large_log":
- # Glide max_log
- variance = torch.log(current_beta_t)
- elif variance_type == "learned":
- return predicted_variance
- elif variance_type == "learned_range":
- min_log = torch.log(variance)
- max_log = torch.log(current_beta_t)
- frac = (predicted_variance + 1) / 2
- variance = frac * max_log + (1 - frac) * min_log
-
- return variance
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample
- def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor:
- """
- "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the
- prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by
- s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing
- pixels from saturation at each step. We find that dynamic thresholding results in significantly better
- photorealism as well as better image-text alignment, especially when using very large guidance weights."
-
- https://arxiv.org/abs/2205.11487
- """
- dtype = sample.dtype
- batch_size, channels, height, width = sample.shape
-
- if dtype not in (torch.float32, torch.float64):
- sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half
-
- # Flatten sample for doing quantile calculation along each image
- sample = sample.reshape(batch_size, channels * height * width)
-
- abs_sample = sample.abs() # "a certain percentile absolute pixel value"
-
- s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1)
- s = torch.clamp(
- s, min=1, max=self.config.sample_max_value
- ) # When clamped to min=1, equivalent to standard clipping to [-1, 1]
-
- s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0
- sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s"
-
- sample = sample.reshape(batch_size, channels, height, width)
- sample = sample.to(dtype)
-
- return sample
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- generator=None,
- return_dict: bool = True,
- ) -> Union[DDPMParallelSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- generator: random number generator.
- return_dict (`bool`): option for returning tuple rather than DDPMParallelSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.DDPMParallelSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.DDPMParallelSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is the sample tensor.
-
- """
- t = timestep
-
- prev_t = self.previous_timestep(t)
-
- if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type in ["learned", "learned_range"]:
- model_output, predicted_variance = torch.split(model_output, sample.shape[1], dim=1)
- else:
- predicted_variance = None
-
- # 1. compute alphas, betas
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_t] if prev_t >= 0 else self.one
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
- current_alpha_t = alpha_prod_t / alpha_prod_t_prev
- current_beta_t = 1 - current_alpha_t
-
- # 2. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
- if self.config.prediction_type == "epsilon":
- pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
- elif self.config.prediction_type == "sample":
- pred_original_sample = model_output
- elif self.config.prediction_type == "v_prediction":
- pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` or"
- " `v_prediction` for the DDPMScheduler."
- )
-
- # 3. Clip or threshold "predicted x_0"
- if self.config.thresholding:
- pred_original_sample = self._threshold_sample(pred_original_sample)
- elif self.config.clip_sample:
- pred_original_sample = pred_original_sample.clamp(
- -self.config.clip_sample_range, self.config.clip_sample_range
- )
-
- # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * current_beta_t) / beta_prod_t
- current_sample_coeff = current_alpha_t ** (0.5) * beta_prod_t_prev / beta_prod_t
-
- # 5. Compute predicted previous sample µ_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
-
- # 6. Add noise
- variance = 0
- if t > 0:
- device = model_output.device
- variance_noise = randn_tensor(
- model_output.shape, generator=generator, device=device, dtype=model_output.dtype
- )
- if self.variance_type == "fixed_small_log":
- variance = self._get_variance(t, predicted_variance=predicted_variance) * variance_noise
- elif self.variance_type == "learned_range":
- variance = self._get_variance(t, predicted_variance=predicted_variance)
- variance = torch.exp(0.5 * variance) * variance_noise
- else:
- variance = (self._get_variance(t, predicted_variance=predicted_variance) ** 0.5) * variance_noise
-
- pred_prev_sample = pred_prev_sample + variance
-
- if not return_dict:
- return (pred_prev_sample,)
-
- return DDPMParallelSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
-
- def batch_step_no_noise(
- self,
- model_output: torch.FloatTensor,
- timesteps: List[int],
- sample: torch.FloatTensor,
- ) -> torch.FloatTensor:
- """
- Batched version of the `step` function, to be able to reverse the SDE for multiple samples/timesteps at once.
- Also, does not add any noise to the predicted sample, which is necessary for parallel sampling where the noise
- is pre-sampled by the pipeline.
-
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timesteps (`List[int]`):
- current discrete timesteps in the diffusion chain. This is now a list of integers.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
-
- Returns:
- `torch.FloatTensor`: sample tensor at previous timestep.
- """
- t = timesteps
- num_inference_steps = self.num_inference_steps if self.num_inference_steps else self.config.num_train_timesteps
- prev_t = t - self.config.num_train_timesteps // num_inference_steps
-
- t = t.view(-1, *([1] * (model_output.ndim - 1)))
- prev_t = prev_t.view(-1, *([1] * (model_output.ndim - 1)))
-
- if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type in ["learned", "learned_range"]:
- model_output, predicted_variance = torch.split(model_output, sample.shape[1], dim=1)
- else:
- pass
-
- # 1. compute alphas, betas
- self.alphas_cumprod = self.alphas_cumprod.to(model_output.device)
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[torch.clip(prev_t, min=0)]
- alpha_prod_t_prev[prev_t < 0] = torch.tensor(1.0)
-
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
- current_alpha_t = alpha_prod_t / alpha_prod_t_prev
- current_beta_t = 1 - current_alpha_t
-
- # 2. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
- if self.config.prediction_type == "epsilon":
- pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
- elif self.config.prediction_type == "sample":
- pred_original_sample = model_output
- elif self.config.prediction_type == "v_prediction":
- pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` or"
- " `v_prediction` for the DDPMParallelScheduler."
- )
-
- # 3. Clip or threshold "predicted x_0"
- if self.config.thresholding:
- pred_original_sample = self._threshold_sample(pred_original_sample)
- elif self.config.clip_sample:
- pred_original_sample = pred_original_sample.clamp(
- -self.config.clip_sample_range, self.config.clip_sample_range
- )
-
- # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * current_beta_t) / beta_prod_t
- current_sample_coeff = current_alpha_t ** (0.5) * beta_prod_t_prev / beta_prod_t
-
- # 5. Compute predicted previous sample µ_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
-
- return pred_prev_sample
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.IntTensor,
- ) -> torch.FloatTensor:
- # Make sure alphas_cumprod and timestep have same device and dtype as original_samples
- alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
- timesteps = timesteps.to(original_samples.device)
-
- sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
- sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
- noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
- return noisy_samples
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.get_velocity
- def get_velocity(
- self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor
- ) -> torch.FloatTensor:
- # Make sure alphas_cumprod and timestep have same device and dtype as sample
- alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype)
- timesteps = timesteps.to(sample.device)
-
- sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- while len(sqrt_alpha_prod.shape) < len(sample.shape):
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
- sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape):
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
- velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample
- return velocity
-
- def __len__(self):
- return self.config.num_train_timesteps
-
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.previous_timestep
- def previous_timestep(self, timestep):
- if self.custom_timesteps:
- index = (self.timesteps == timestep).nonzero(as_tuple=True)[0][0]
- if index == self.timesteps.shape[0] - 1:
- prev_t = torch.tensor(-1)
- else:
- prev_t = self.timesteps[index + 1]
- else:
- num_inference_steps = (
- self.num_inference_steps if self.num_inference_steps else self.config.num_train_timesteps
- )
- prev_t = timestep - self.config.num_train_timesteps // num_inference_steps
-
- return prev_t
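The `add_noise` method above is the closed-form forward process q(x_t | x_0) = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise. A tiny standalone check of that formula and its broadcasting (shapes and schedule values are illustrative):

```python
import torch

betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

x0 = torch.randn(2, 3, 8, 8)
noise = torch.randn_like(x0)
t = torch.tensor([10, 500])

# same broadcast pattern as add_noise: gather per-sample coefficients, then
# unsqueeze them up to the rank of the sample tensor
sqrt_ac = alphas_cumprod[t].sqrt().view(-1, 1, 1, 1)
sqrt_om = (1 - alphas_cumprod[t]).sqrt().view(-1, 1, 1, 1)
noisy = sqrt_ac * x0 + sqrt_om * noise
print(noisy.shape)  # torch.Size([2, 3, 8, 8])
```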
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/deepfashion.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/deepfashion.py
deleted file mode 100644
index 1125376091f2d4ee6843ae4f2156b3b0453be369..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/deepfashion.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from .builder import DATASETS
-from .coco import CocoDataset
-
-
-@DATASETS.register_module()
-class DeepFashionDataset(CocoDataset):
-
- CLASSES = ('top', 'skirt', 'leggings', 'dress', 'outer', 'pants', 'bag',
- 'neckwear', 'headwear', 'eyeglass', 'belt', 'footwear', 'hair',
- 'skin', 'face')
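Because the class registers itself with mmdet's `DATASETS` registry, a training config can reference it by name. A hedged sketch of such a config fragment (all paths are placeholders, not taken from this repo):

```python
# Hypothetical mmdet config fragment; annotation and image paths are placeholders.
dataset_type = 'DeepFashionDataset'
data = dict(
    train=dict(
        type=dataset_type,
        ann_file='data/DeepFashion/annotations/train.json',
        img_prefix='data/DeepFashion/images/',
    ),
)
```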
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context_59.py
deleted file mode 100644
index f9e831bcd1043ed9feba88bc28ab69d87287ca98..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context_59.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3plus_r50-d8.py',
- '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=59),
- auxiliary_head=dict(num_classes=59),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index d914f93c023a6384e0e856b8608280cef589d5c6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './pspnet_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://resnet18_v1c',
- backbone=dict(depth=18),
- decode_head=dict(
- in_channels=512,
- channels=128,
- ),
- auxiliary_head=dict(in_channels=256, channels=64))
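Both segmentation configs above rely on `_base_` inheritance: the child file only overrides the fields that differ from its base. Loading one with mmcv resolves the merge; a sketch, with the path relative to an assumed mmsegmentation checkout:

```python
from mmcv import Config  # in newer stacks, Config lives in mmengine instead

# The path is a placeholder; fromfile merges the _base_ configs with the overrides above.
cfg = Config.fromfile('configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py')
print(cfg.model.backbone.depth)        # 18, from the override
print(cfg.model.decode_head.channels)  # 128, from the override
```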
diff --git a/spaces/AnimalEquality/chatbot/lv_recipe_chatbot/ingredient_vision.py b/spaces/AnimalEquality/chatbot/lv_recipe_chatbot/ingredient_vision.py
deleted file mode 100644
index 0fb366ab7dce9c5b256b2d5326a519f706da7f32..0000000000000000000000000000000000000000
--- a/spaces/AnimalEquality/chatbot/lv_recipe_chatbot/ingredient_vision.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# AUTOGENERATED! DO NOT EDIT! File to edit: ../nbs/03_ingredient_vision.ipynb.
-
-# %% auto 0
-__all__ = ['SAMPLE_IMG_DIR', 'format_image', 'BlipImageCaptioning', 'BlipVQA', 'VeganIngredientFinder']
-
-# %% ../nbs/03_ingredient_vision.ipynb 3
-import imghdr
-import os
-import time
-from pathlib import Path
-
-import numpy as np
-import torch
-from PIL import Image
-from transformers import (
- BlipForConditionalGeneration,
- BlipForQuestionAnswering,
- BlipProcessor,
- pipeline,
-)
-
-import constants
-
-# %% ../nbs/03_ingredient_vision.ipynb 7
-# fmt: off
-def format_image(
- image: str # Image file path
-):
- # fmt: on
- img = Image.open(image)
- width, height = img.size
- ratio = min(512 / width, 512 / height)
- width_new, height_new = (round(width * ratio), round(height * ratio))
- width_new = int(np.round(width_new / 64.0)) * 64
- height_new = int(np.round(height_new / 64.0)) * 64
- img = img.resize((width_new, height_new))
- img = img.convert("RGB")
- return img
-
-# %% ../nbs/03_ingredient_vision.ipynb 8
-class BlipImageCaptioning:
- """
- Useful when you want to know what is inside the photo.
- """
-
- # fmt: off
- def __init__(self,
- device: str
- ): # pytorch hardware identifier to run model on options: "cpu, cuda_0, cuda_1 ..., cuda_n"
- # fmt: on
- self.device = device
- self.torch_dtype = torch.float16 if "cuda" in device else torch.float32
- self.processor = BlipProcessor.from_pretrained(
- "Salesforce/blip-image-captioning-base"
- )
- self.model = BlipForConditionalGeneration.from_pretrained(
- "Salesforce/blip-image-captioning-base", torch_dtype=self.torch_dtype
- ).to(self.device)
-
- def inference(self,
- image: Image
- ) -> str: # Caption for the image
- inputs = self.processor(image, return_tensors="pt").to(
- self.device, self.torch_dtype
- )
- out = self.model.generate(**inputs, max_new_tokens=50)
- captions = self.processor.decode(out[0], skip_special_tokens=True)
- return captions
-
-# %% ../nbs/03_ingredient_vision.ipynb 10
-class BlipVQA:
- # fmt: off
- """
- BLIP Visual Question Answering
- Useful when you need an answer for a question based on an image.
- Examples:
- what is the background color of this image, how many cats are in this figure, what is in this figure?
- """
- # fmt: on
- def __init__(self, device: str):
- self.torch_dtype = torch.float16 if "cuda" in device else torch.float32
- self.device = device
- self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
- self.model = BlipForQuestionAnswering.from_pretrained(
- "Salesforce/blip-vqa-base", torch_dtype=self.torch_dtype
- ).to(self.device)
-
- # fmt: off
- def inference(self,
- image: Image,
- question: str
- ) -> str: # Answer to the query on the image
- # fmt: on
- image = image.convert("RGB")
- inputs = self.processor(image, question, return_tensors="pt").to(
- self.device, self.torch_dtype
- )
- out = self.model.generate(**inputs, max_new_tokens=100)
- answer = self.processor.decode(out[0], skip_special_tokens=True)
- return answer
-
-# %% ../nbs/03_ingredient_vision.ipynb 12
-SAMPLE_IMG_DIR = Path(f"{constants.ROOT_DIR}/assets/images/vegan_ingredients")
-
-# %% ../nbs/03_ingredient_vision.ipynb 19
-class VeganIngredientFinder:
- def __init__(self):
- self.vqa = BlipVQA("cpu")
-
- # fmt: off
- def list_ingredients(self,
- img: str # Image file path
- ) -> str:
- #fmt: on
- img = format_image(img)
- answer = self.vqa.inference(
- img, f"What are three of the vegetables seen in the image if any?"
- )
- answer += "\n" + self.vqa.inference(
- img, f"What are three of the fruits seen in the image if any?"
- )
- answer += "\n" + self.vqa.inference(
- img, f"What grains and starches are in the image if any?"
- )
- if (
- "yes"
- in self.vqa.inference(
- img, f"Is there plant-based milk in the image?"
- ).lower()
- ):
- answer += "\n" + "plant-based milk"
- return answer
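Assuming the module and its `constants.ROOT_DIR` are importable, a usage sketch of the finder on one of the bundled sample images might look like the following; the file name is a placeholder:

```python
# Usage sketch; "tomatoes.jpg" is a hypothetical file under the sample directory.
finder = VeganIngredientFinder()
sample = SAMPLE_IMG_DIR / "tomatoes.jpg"
print(finder.list_ingredients(str(sample)))
```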
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/optflow.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/optflow.py
deleted file mode 100644
index 84160f8d6ef9fceb5a2f89e7481593109fc1905d..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/optflow.py
+++ /dev/null
@@ -1,254 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-
-import cv2
-import numpy as np
-
-from annotator.uniformer.mmcv.arraymisc import dequantize, quantize
-from annotator.uniformer.mmcv.image import imread, imwrite
-from annotator.uniformer.mmcv.utils import is_str
-
-
-def flowread(flow_or_path, quantize=False, concat_axis=0, *args, **kwargs):
- """Read an optical flow map.
-
- Args:
- flow_or_path (ndarray or str): A flow map or filepath.
- quantize (bool): whether to read quantized pair, if set to True,
- remaining args will be passed to :func:`dequantize_flow`.
- concat_axis (int): The axis that dx and dy are concatenated,
- can be either 0 or 1. Ignored if quantize is False.
-
- Returns:
- ndarray: Optical flow represented as a (h, w, 2) numpy array
- """
- if isinstance(flow_or_path, np.ndarray):
- if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2):
- raise ValueError(f'Invalid flow with shape {flow_or_path.shape}')
- return flow_or_path
- elif not is_str(flow_or_path):
- raise TypeError(f'"flow_or_path" must be a filename or numpy array, '
- f'not {type(flow_or_path)}')
-
- if not quantize:
- with open(flow_or_path, 'rb') as f:
- try:
- header = f.read(4).decode('utf-8')
- except Exception:
- raise IOError(f'Invalid flow file: {flow_or_path}')
- else:
- if header != 'PIEH':
- raise IOError(f'Invalid flow file: {flow_or_path}, '
- 'header does not contain PIEH')
-
- w = np.fromfile(f, np.int32, 1).squeeze()
- h = np.fromfile(f, np.int32, 1).squeeze()
- flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2))
- else:
- assert concat_axis in [0, 1]
- cat_flow = imread(flow_or_path, flag='unchanged')
- if cat_flow.ndim != 2:
- raise IOError(
- f'{flow_or_path} is not a valid quantized flow file, '
- f'its dimension is {cat_flow.ndim}.')
- assert cat_flow.shape[concat_axis] % 2 == 0
- dx, dy = np.split(cat_flow, 2, axis=concat_axis)
- flow = dequantize_flow(dx, dy, *args, **kwargs)
-
- return flow.astype(np.float32)
-
-
-def flowwrite(flow, filename, quantize=False, concat_axis=0, *args, **kwargs):
- """Write optical flow to file.
-
- If the flow is not quantized, it will be saved as a .flo file losslessly,
- otherwise a jpeg image which is lossy but of much smaller size. (dx and dy
- will be concatenated horizontally into a single image if quantize is True.)
-
- Args:
- flow (ndarray): (h, w, 2) array of optical flow.
- filename (str): Output filepath.
- quantize (bool): Whether to quantize the flow and save it to 2 jpeg
- images. If set to True, remaining args will be passed to
- :func:`quantize_flow`.
- concat_axis (int): The axis that dx and dy are concatenated,
- can be either 0 or 1. Ignored if quantize is False.
- """
- if not quantize:
- with open(filename, 'wb') as f:
- f.write('PIEH'.encode('utf-8'))
- np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f)
- flow = flow.astype(np.float32)
- flow.tofile(f)
- f.flush()
- else:
- assert concat_axis in [0, 1]
- dx, dy = quantize_flow(flow, *args, **kwargs)
- dxdy = np.concatenate((dx, dy), axis=concat_axis)
- imwrite(dxdy, filename)
-
-
-def quantize_flow(flow, max_val=0.02, norm=True):
- """Quantize flow to [0, 255].
-
- After this step, the size of flow will be much smaller, and can be
- dumped as jpeg images.
-
- Args:
- flow (ndarray): (h, w, 2) array of optical flow.
- max_val (float): Maximum value of flow, values beyond
- [-max_val, max_val] will be truncated.
- norm (bool): Whether to divide flow values by image width/height.
-
- Returns:
- tuple[ndarray]: Quantized dx and dy.
- """
- h, w, _ = flow.shape
- dx = flow[..., 0]
- dy = flow[..., 1]
- if norm:
- dx = dx / w # avoid inplace operations
- dy = dy / h
- # use 255 levels instead of 256 to make sure 0 is 0 after dequantization.
- flow_comps = [
- quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy]
- ]
- return tuple(flow_comps)
-
-
-def dequantize_flow(dx, dy, max_val=0.02, denorm=True):
- """Recover from quantized flow.
-
- Args:
- dx (ndarray): Quantized dx.
- dy (ndarray): Quantized dy.
- max_val (float): Maximum value used when quantizing.
- denorm (bool): Whether to multiply flow values with width/height.
-
- Returns:
- ndarray: Dequantized flow.
- """
- assert dx.shape == dy.shape
- assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1)
-
- dx, dy = [dequantize(d, -max_val, max_val, 255) for d in [dx, dy]]
-
- if denorm:
-        dx *= dx.shape[1]
-        dy *= dy.shape[0]  # dx and dy share the same shape (asserted above)
- flow = np.dstack((dx, dy))
- return flow
-
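A small worked round trip through the two functions above. It assumes they are importable as packaged (e.g. from `mmcv.video`); with `norm=True` the per-pixel reconstruction error after `denorm` is at most one quantization step, i.e. `2 * max_val / 255` in normalized units times the image width (or height).

```python
import numpy as np
# assumption: these helpers are exposed by the packaged module, e.g. mmcv.video
from mmcv.video import dequantize_flow, quantize_flow

h, w = 4, 8
rng = np.random.default_rng(0)
# Small flow values so that, after division by w (or h), they stay in [-0.02, 0.02].
flow = rng.uniform(-0.05, 0.05, size=(h, w, 2)).astype(np.float32)

dx, dy = quantize_flow(flow, max_val=0.02, norm=True)   # two uint8 maps
assert dx.dtype == np.uint8 and dx.shape == (h, w)

restored = dequantize_flow(dx, dy, max_val=0.02, denorm=True)
# One quantization step (2 * max_val / 255) in normalized units, scaled back by
# the larger of w and h, bounds the per-pixel error.
assert np.abs(restored - flow).max() <= 2 * 0.02 / 255 * max(h, w) + 1e-6
```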
-
-def flow_warp(img, flow, filling_value=0, interpolate_mode='nearest'):
- """Use flow to warp img.
-
- Args:
- img (ndarray, float or uint8): Image to be warped.
- flow (ndarray, float): Optical Flow.
- filling_value (int): The missing pixels will be set with filling_value.
- interpolate_mode (str): bilinear -> Bilinear Interpolation;
- nearest -> Nearest Neighbor.
-
- Returns:
-        ndarray: Warped image with the same shape as img.
- """
- warnings.warn('This function is just for prototyping and cannot '
- 'guarantee the computational efficiency.')
- assert flow.ndim == 3, 'Flow must be in 3D arrays.'
- height = flow.shape[0]
- width = flow.shape[1]
- channels = img.shape[2]
-
- output = np.ones(
- (height, width, channels), dtype=img.dtype) * filling_value
-
-    grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2)
-    # grid[..., 0]/[..., 1] are row/column indices, while flow[..., 0]/[..., 1]
-    # are the horizontal/vertical displacements, so dx/dy below are the
-    # sampling row and column coordinates respectively.
-    dx = grid[:, :, 0] + flow[:, :, 1]
-    dy = grid[:, :, 1] + flow[:, :, 0]
- sx = np.floor(dx).astype(int)
- sy = np.floor(dy).astype(int)
- valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1)
-
- if interpolate_mode == 'nearest':
- output[valid, :] = img[dx[valid].round().astype(int),
- dy[valid].round().astype(int), :]
- elif interpolate_mode == 'bilinear':
-        # dirty workaround for integer positions
- eps_ = 1e-6
- dx, dy = dx + eps_, dy + eps_
- left_top_ = img[np.floor(dx[valid]).astype(int),
- np.floor(dy[valid]).astype(int), :] * (
- np.ceil(dx[valid]) - dx[valid])[:, None] * (
- np.ceil(dy[valid]) - dy[valid])[:, None]
- left_down_ = img[np.ceil(dx[valid]).astype(int),
- np.floor(dy[valid]).astype(int), :] * (
- dx[valid] - np.floor(dx[valid]))[:, None] * (
- np.ceil(dy[valid]) - dy[valid])[:, None]
- right_top_ = img[np.floor(dx[valid]).astype(int),
- np.ceil(dy[valid]).astype(int), :] * (
- np.ceil(dx[valid]) - dx[valid])[:, None] * (
- dy[valid] - np.floor(dy[valid]))[:, None]
- right_down_ = img[np.ceil(dx[valid]).astype(int),
- np.ceil(dy[valid]).astype(int), :] * (
- dx[valid] - np.floor(dx[valid]))[:, None] * (
- dy[valid] - np.floor(dy[valid]))[:, None]
- output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_
- else:
- raise NotImplementedError(
- 'We only support interpolation modes of nearest and bilinear, '
- f'but got {interpolate_mode}.')
- return output.astype(img.dtype)
-
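A minimal usage sketch of `flow_warp` with a constant flow field; it assumes the function is importable as packaged (e.g. via `mmcv`). A flow of (dx=1, dy=0) makes every output pixel sample the pixel one column to its right, so inside the valid region the result is the input shifted left by one column.

```python
import numpy as np
# assumption: flow_warp is importable from the packaged module, e.g. mmcv
from mmcv import flow_warp

img = np.arange(5 * 6 * 3, dtype=np.float32).reshape(5, 6, 3)

# Constant flow: dx = 1 (horizontal), dy = 0 (vertical).
flow = np.zeros((5, 6, 2), dtype=np.float32)
flow[..., 0] = 1.0

warped = flow_warp(img, flow, filling_value=0, interpolate_mode='nearest')
assert warped.shape == img.shape
# Inside the valid region the result equals the input shifted left by one column;
# the last row/column fall outside the valid mask and keep the filling value.
assert np.array_equal(warped[:4, :4], img[:4, 1:5])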
-
-def flow_from_bytes(content):
- """Read dense optical flow from bytes.
-
- .. note::
-        This optical flow loader works for the FlyingChairs, FlyingThings3D,
-        Sintel and FlyingChairsOcc datasets, but cannot load data from
-        ChairsSDHom.
-
- Args:
- content (bytes): Optical flow bytes got from files or other streams.
-
- Returns:
- ndarray: Loaded optical flow with the shape (H, W, 2).
- """
-
- # header in first 4 bytes
- header = content[:4]
- if header.decode('utf-8') != 'PIEH':
- raise Exception('Flow file header does not contain PIEH')
- # width in second 4 bytes
- width = np.frombuffer(content[4:], np.int32, 1).squeeze()
- # height in third 4 bytes
- height = np.frombuffer(content[8:], np.int32, 1).squeeze()
- # after first 12 bytes, all bytes are flow
- flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape(
- (height, width, 2))
-
- return flow
-
-
-def sparse_flow_from_bytes(content):
- """Read the optical flow in KITTI datasets from bytes.
-
-    This function is modified from the loader for the `KITTI datasets
-    `_ in RAFT.
-
- Args:
- content (bytes): Optical flow bytes got from files or other streams.
-
- Returns:
- Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2)
- and flow valid mask with the shape (H, W).
- """ # nopa
-
- content = np.frombuffer(content, np.uint8)
- flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
- flow = flow[:, :, ::-1].astype(np.float32)
- # flow shape (H, W, 2) valid shape (H, W)
- flow, valid = flow[:, :, :2], flow[:, :, 2]
- flow = (flow - 2**15) / 64.0
- return flow, valid
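The KITTI format read above stores (u, v, valid) as a 16-bit PNG with the flow scaled by 64 and offset by 2**15. A self-contained sketch of encoding a tiny flow map with OpenCV and decoding it again; it assumes `sparse_flow_from_bytes` is importable as packaged (e.g. from `mmcv.video`).

```python
import cv2
import numpy as np
# assumption: exposed by the packaged module, e.g. mmcv.video
from mmcv.video import sparse_flow_from_bytes

h, w = 2, 3
u = np.full((h, w), 1.5, dtype=np.float32)     # horizontal flow
v = np.full((h, w), -0.25, dtype=np.float32)   # vertical flow
valid_gt = np.ones((h, w), dtype=np.uint16)

# Channels in (u, v, valid) order, scaled/offset as in KITTI; cv2.imencode
# expects BGR, so the channel order is reversed before encoding.
rgb = np.stack([u * 64 + 2**15, v * 64 + 2**15, valid_gt], axis=-1).astype(np.uint16)
ok, buf = cv2.imencode('.png', np.ascontiguousarray(rgb[:, :, ::-1]))
assert ok

flow, valid = sparse_flow_from_bytes(buf.tobytes())
assert np.allclose(flow[..., 0], u) and np.allclose(flow[..., 1], v)
assert np.array_equal(valid, valid_gt)
```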
diff --git a/spaces/AsakuraMizu/moe-tts/export_model.py b/spaces/AsakuraMizu/moe-tts/export_model.py
deleted file mode 100644
index 52d3b3d083df7bf027b46d9c63e399b2da3f0e0a..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/export_model.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import torch
-
-if __name__ == '__main__':
- model_path = "saved_model/18/model.pth"
- output_path = "saved_model/18/model1.pth"
- checkpoint_dict = torch.load(model_path, map_location='cpu')
- checkpoint_dict_new = {}
- for k, v in checkpoint_dict.items():
- if k == "optimizer":
- print("remove optimizer")
- continue
- checkpoint_dict_new[k] = v
- torch.save(checkpoint_dict_new, output_path)
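The script above copies every entry of a training checkpoint except the "optimizer" state, which is usually the bulk of its size. A minimal sketch of verifying the slimmed file, reusing the paths hard-coded above and assuming the original checkpoint actually contains an "optimizer" entry:

```python
import torch

# Paths mirror the ones hard-coded in the script above; adjust to your checkpoint.
full = torch.load("saved_model/18/model.pth", map_location="cpu")
slim = torch.load("saved_model/18/model1.pth", map_location="cpu")

assert "optimizer" not in slim
# Every other entry (e.g. model weights, iteration counter) is carried over unchanged.
assert set(slim) == set(full) - {"optimizer"}
```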
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/pager.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/pager.py
deleted file mode 100644
index a3f7aa62af1ee2690e1e17ee41f3c368953625b8..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/pager.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from abc import ABC, abstractmethod
-from typing import Any
-
-
-class Pager(ABC):
- """Base class for a pager."""
-
- @abstractmethod
- def show(self, content: str) -> None:
- """Show content in pager.
-
- Args:
- content (str): Content to be displayed.
- """
-
-
-class SystemPager(Pager):
- """Uses the pager installed on the system."""
-
- def _pager(self, content: str) -> Any: # pragma: no cover
- return __import__("pydoc").pager(content)
-
- def show(self, content: str) -> None:
- """Use the same pager used by pydoc."""
- self._pager(content)
-
-
-if __name__ == "__main__": # pragma: no cover
- from .__main__ import make_test_card
- from .console import Console
-
- console = Console()
- with console.pager(styles=True):
- console.print(make_test_card())
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/deploy/torchscript_mask_rcnn.cpp b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/deploy/torchscript_mask_rcnn.cpp
deleted file mode 100644
index b40f13b81f601788847992e6627b448d62a287e2..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/deploy/torchscript_mask_rcnn.cpp
+++ /dev/null
@@ -1,187 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-// @lint-ignore-every CLANGTIDY
-// This is an example code that demonstrates how to run inference
-// with a torchscript format Mask R-CNN model exported by ./export_model.py
-// using export method=tracing, caffe2_tracing & scripting.
-
-#include <c10/cuda/CUDAStream.h>
-#include <torch/csrc/autograd/grad_mode.h>
-#include <torch/script.h>
-
-#include <opencv2/opencv.hpp>
-#include <chrono>
-#include <iostream>
-#include <string>
-
-// only needed for export_method=tracing
-#include <torchvision/vision.h> // @oss-only
-// @fb-only: #include
-
-using namespace std;
-
-c10::IValue get_caffe2_tracing_inputs(cv::Mat& img, c10::Device device) {
- const int height = img.rows;
- const int width = img.cols;
- // FPN models require divisibility of 32.
- // Tracing mode does padding inside the graph, but caffe2_tracing does not.
- assert(height % 32 == 0 && width % 32 == 0);
- const int channels = 3;
-
- auto input =
- torch::from_blob(img.data, {1, height, width, channels}, torch::kUInt8);
- // NHWC to NCHW
- input = input.to(device, torch::kFloat).permute({0, 3, 1, 2}).contiguous();
-
-  std::array<float, 3> im_info_data{height * 1.0f, width * 1.0f, 1.0f};
- auto im_info =
- torch::from_blob(im_info_data.data(), {1, 3}).clone().to(device);
- return std::make_tuple(input, im_info);
-}
-
-c10::IValue get_tracing_inputs(cv::Mat& img, c10::Device device) {
- const int height = img.rows;
- const int width = img.cols;
- const int channels = 3;
-
- auto input =
- torch::from_blob(img.data, {height, width, channels}, torch::kUInt8);
- // HWC to CHW
- input = input.to(device, torch::kFloat).permute({2, 0, 1}).contiguous();
- return input;
-}
-
-// create a Tuple[Dict[str, Tensor]] which is the input type of scripted model
-c10::IValue get_scripting_inputs(cv::Mat& img, c10::Device device) {
- const int height = img.rows;
- const int width = img.cols;
- const int channels = 3;
-
- auto img_tensor =
- torch::from_blob(img.data, {height, width, channels}, torch::kUInt8);
- // HWC to CHW
- img_tensor =
- img_tensor.to(device, torch::kFloat).permute({2, 0, 1}).contiguous();
-  auto dic = c10::Dict<std::string, torch::Tensor>();
- dic.insert("image", img_tensor);
- return std::make_tuple(dic);
-}
-
-c10::IValue
-get_inputs(std::string export_method, cv::Mat& img, c10::Device device) {
- // Given an image, create inputs in the format required by the model.
- if (export_method == "tracing")
- return get_tracing_inputs(img, device);
- if (export_method == "caffe2_tracing")
- return get_caffe2_tracing_inputs(img, device);
- if (export_method == "scripting")
- return get_scripting_inputs(img, device);
- abort();
-}
-
-struct MaskRCNNOutputs {
- at::Tensor pred_boxes, pred_classes, pred_masks, scores;
- int num_instances() const {
- return pred_boxes.sizes()[0];
- }
-};
-
-MaskRCNNOutputs get_outputs(std::string export_method, c10::IValue outputs) {
- // Given outputs of the model, extract tensors from it to turn into a
- // common MaskRCNNOutputs format.
- if (export_method == "tracing") {
- auto out_tuple = outputs.toTuple()->elements();
- // They are ordered alphabetically by their field name in Instances
- return MaskRCNNOutputs{
- out_tuple[0].toTensor(),
- out_tuple[1].toTensor(),
- out_tuple[2].toTensor(),
- out_tuple[3].toTensor()};
- }
- if (export_method == "caffe2_tracing") {
- auto out_tuple = outputs.toTuple()->elements();
- // A legacy order used by caffe2 models
- return MaskRCNNOutputs{
- out_tuple[0].toTensor(),
- out_tuple[2].toTensor(),
- out_tuple[3].toTensor(),
- out_tuple[1].toTensor()};
- }
- if (export_method == "scripting") {
- // With the ScriptableAdapter defined in export_model.py, the output is
- // List[Dict[str, Any]].
- auto out_dict = outputs.toList().get(0).toGenericDict();
- return MaskRCNNOutputs{
- out_dict.at("pred_boxes").toTensor(),
- out_dict.at("pred_classes").toTensor(),
- out_dict.at("pred_masks").toTensor(),
- out_dict.at("scores").toTensor()};
- }
- abort();
-}
-
-int main(int argc, const char* argv[]) {
- if (argc != 4) {
- cerr << R"xx(
-Usage:
- ./torchscript_mask_rcnn model.ts input.jpg EXPORT_METHOD
-
- EXPORT_METHOD can be "tracing", "caffe2_tracing" or "scripting".
-)xx";
- return 1;
- }
- std::string image_file = argv[2];
- std::string export_method = argv[3];
- assert(
- export_method == "caffe2_tracing" || export_method == "tracing" ||
- export_method == "scripting");
-
- torch::jit::getBailoutDepth() = 1;
- torch::autograd::AutoGradMode guard(false);
- auto module = torch::jit::load(argv[1]);
-
- assert(module.buffers().size() > 0);
- // Assume that the entire model is on the same device.
- // We just put input to this device.
- auto device = (*begin(module.buffers())).device();
-
- cv::Mat input_img = cv::imread(image_file, cv::IMREAD_COLOR);
- auto inputs = get_inputs(export_method, input_img, device);
-
- // Run the network
- auto output = module.forward({inputs});
- if (device.is_cuda())
- c10::cuda::getCurrentCUDAStream().synchronize();
-
- // run 3 more times to benchmark
- int N_benchmark = 3, N_warmup = 1;
- auto start_time = chrono::high_resolution_clock::now();
- for (int i = 0; i < N_benchmark + N_warmup; ++i) {
- if (i == N_warmup)
- start_time = chrono::high_resolution_clock::now();
- output = module.forward({inputs});
- if (device.is_cuda())
- c10::cuda::getCurrentCUDAStream().synchronize();
- }
- auto end_time = chrono::high_resolution_clock::now();
-  auto ms = chrono::duration_cast<chrono::microseconds>(end_time - start_time)
- .count();
- cout << "Latency (should vary with different inputs): "
- << ms * 1.0 / 1e6 / N_benchmark << " seconds" << endl;
-
- // Parse Mask R-CNN outputs
- auto rcnn_outputs = get_outputs(export_method, output);
- cout << "Number of detected objects: " << rcnn_outputs.num_instances()
- << endl;
-
- cout << "pred_boxes: " << rcnn_outputs.pred_boxes.toString() << " "
- << rcnn_outputs.pred_boxes.sizes() << endl;
- cout << "scores: " << rcnn_outputs.scores.toString() << " "
- << rcnn_outputs.scores.sizes() << endl;
- cout << "pred_classes: " << rcnn_outputs.pred_classes.toString() << " "
- << rcnn_outputs.pred_classes.sizes() << endl;
- cout << "pred_masks: " << rcnn_outputs.pred_masks.toString() << " "
- << rcnn_outputs.pred_masks.sizes() << endl;
-
- cout << rcnn_outputs.pred_boxes << endl;
- return 0;
-}
diff --git a/spaces/AyameYODAYO/xijinpingx/style.css b/spaces/AyameYODAYO/xijinpingx/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/AyameYODAYO/xijinpingx/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/Aziizzz/ChestXrayClassification/app.py b/spaces/Aziizzz/ChestXrayClassification/app.py
deleted file mode 100644
index 2baa13215528e479982c79e0d5c37720bec6ff92..0000000000000000000000000000000000000000
--- a/spaces/Aziizzz/ChestXrayClassification/app.py
+++ /dev/null
@@ -1,107 +0,0 @@
-### 1. Imports and class names setup ###
-import gradio as gr
-import os
-import torch
-
-from timeit import default_timer as timer
-from typing import Tuple, Dict
-import torchvision
-
-from torch import nn
-
-
-def create_effnetb2_model(num_classes: int = 1,
- seed: int = 42):
- """Creates an EfficientNetB2 feature extractor model and transforms.
-
- Args:
- num_classes (int, optional): number of classes in the classifier head.
- Defaults to 3.
- seed (int, optional): random seed value. Defaults to 42.
-
- Returns:
- model (torch.nn.Module): EffNetB2 feature extractor model.
- transforms (torchvision.transforms): EffNetB2 image transforms.
- """
- # Create EffNetB2 pretrained weights, transforms and model
- weights = torchvision.models.AlexNet_Weights.DEFAULT
- transforms = weights.transforms()
- model = torchvision.models.alexnet(weights=weights)
-
- # Freeze all layers in base model
- for param in model.parameters():
- param.requires_grad = False
-
- # Change classifier head with random seed for reproducibility
- torch.manual_seed(seed)
- model.classifier = nn.Sequential(
- nn.Dropout(p=0.2,),
- nn.Linear(in_features=9216, out_features=1),
- )
-
- return model, transforms
-
-
-# Setup class names
-class_names = ["Normal", "Pneumonia"]
-
-### 2. Model and transforms preparation ###
-
-# Create EffNetB2 model
-effnetb2, effnetb2_transforms = create_effnetb2_model(
- num_classes=1, # len(class_names) would also work
-)
-
-# Load saved weights
-effnetb2.load_state_dict(
- torch.load(
- f="alexnet_pretrained.pth",
- map_location=torch.device("cpu"), # load to CPU
- )
-)
-
-
-def predict(img) -> Tuple[Dict, float]:
- """Transforms and performs a prediction on img and returns prediction and time taken.
- """
- # Start the timer
- start_time = timer()
-
- # Transform the target image and add a batch dimension
- img = effnetb2_transforms(img).unsqueeze(0)
-
- # Put model into evaluation mode and turn on inference mode
- effnetb2.eval()
- with torch.inference_mode():
- # Pass the transformed image through the model and turn the prediction logits into prediction probabilities
- pred_probs = torch.sigmoid(effnetb2(img)).squeeze()
-
- # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter)
- pred_labels_and_probs = {
- 'Normal': 1-pred_probs.item(), 'Pneumonia': pred_probs.item()}
-
- # Calculate the prediction time
- pred_time = round(timer() - start_time, 5)
-
- # Return the prediction dictionary and prediction time
- return pred_labels_and_probs, pred_time
-
-
-example_list = [[f"examples/example{i+1}.jpg"] for i in range(3)]
-# Create title, description and article strings
-title = "ChestXray Classification"
-description = "An Alexnet computer vision model to classify images of Xray Chest images as Normal or Pneumonia."
-article = "Created at (https://github.com/azizche/chest_xray_Classification)."
-
-# Create the Gradio demo
-demo = gr.Interface(fn=predict, # mapping function from input to output
- inputs=gr.Image(type="pil"), # what are the inputs?
- outputs=[gr.Label(num_top_classes=2, label="Predictions"), # what are the outputs?
- gr.Number(label="Prediction time (s)")], # our fn has two outputs, therefore we have two outputs
- examples=example_list,
- title=title,
- description=description,
- article=article)
-
-# Launch the demo!
-demo.launch()
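Outside of the Gradio UI, the `predict` function above can also be called directly on a PIL image. A minimal sketch, assuming one of the bundled example images exists at the hypothetical path below:

```python
from PIL import Image

img = Image.open("examples/example1.jpg")  # hypothetical path from example_list
probs, seconds = predict(img)
print(probs)    # e.g. {'Normal': 0.83, 'Pneumonia': 0.17}
print(f"inference took {seconds} s")
```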
diff --git a/spaces/BenjaminB/pyscript-demo/index.html b/spaces/BenjaminB/pyscript-demo/index.html
deleted file mode 100644
index 31de657aa95126585748527b5b8d65f0b6e632e7..0000000000000000000000000000000000000000
--- a/spaces/BenjaminB/pyscript-demo/index.html
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
-
-
- PyScript Test
-
-
-
- - scikit-learn
- - tabulate
-
-
-
-
-
-
-
-
-
-
Define your own sklearn classifier and evaluate it on the toy dataset. An example is shown below:
-
- from sklearn.linear_model import LogisticRegression
-clf = LogisticRegression(random_state=0)
-evaluate(clf)
-
- Try to achieve a test accuracy of 0.85 or better! Get some inspiration for possible classifiers here.
-
- Enter your code below, then press Shift+Enter:
-
- from statistics import mean
- from sklearn.datasets import make_classification
- from sklearn.model_selection import cross_validate
- import tabulate
-
- X, y = make_classification(n_samples=1000, n_informative=10, random_state=0)
-
- def evaluate(clf):
- cv_result = cross_validate(clf, X, y, scoring='accuracy', cv=5)
- time_fit = sum(cv_result['fit_time'])
- time_score = sum(cv_result['score_time'])
-
- print(f"Mean test accuracy: {mean(cv_result['test_score']):.3f}")
- print(f"Total training time: {time_fit:.1f} seconds")
- print(f"Total time for scoring: {time_score:.1f} seconds")
-
- show_result = {'split': [1, 2, 3, 4, 5], 'accuracy': cv_result['test_score']}
- print("Accuracy for each cross validation split:")
- return tabulate.tabulate(show_result, tablefmt='html', headers='keys', floatfmt='.3')
-
-
-
-
-
diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis Youtube Apk.md b/spaces/Benson/text-generation/Examples/Descargar Gratis Youtube Apk.md
deleted file mode 100644
index 75245fb51478393fdf6668a3e2e092e10d87ca76..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Gratis Youtube Apk.md
+++ /dev/null
@@ -1,62 +0,0 @@
-
-
How to download YouTube videos with Yandex APK
-
YouTube is one of the most popular video-sharing platforms in the world, where you can watch millions of videos for free. However, sometimes you may want to download YouTube videos to your Android device for offline viewing, especially when you have a limited or unstable internet connection. In this article, we will show you how to download YouTube videos with Yandex APK, a powerful and versatile browser app that can help you save your favorite videos quickly and easily.
Yandex APK is an Android application that gives you access to Yandex Browser, a fast and secure web browser developed by Yandex, a Russian internet company. Yandex Browser has many features that make it stand out from other browsers, such as:
-
Features of Yandex APK
-
-
Protect mode: This feature protects you from malicious websites, phishing and malware by blocking unwanted ads and trackers.
-
Turbo mode: This feature speeds up your browsing experience by compressing web pages and saving your mobile data.
-
Zen mode: This feature personalizes your content recommendations based on your interests and preferences.
-
SmartBox: This feature lets you search the web and access your favorite apps from the address bar.
-
Download manager: This feature lets you manage your downloads easily and efficiently.
-
-
How to install Yandex APK on your Android device
-
To install Yandex APK on your Android device, you need to follow these steps:
-
-
Download the Yandex APK file from a trusted source, such as [APKCombo]( 2 ) or [JalanTikus]( 1 ).
-
Open the File Manager app on your Android device and locate the downloaded APK file.
-
Tap the APK file and allow the installation of unknown apps in your settings.
-
Follow the on-screen instructions to complete the installation process.
-
-
-
How to download YouTube videos with Yandex APK
-
Once you have installed Yandex APK on your Android device, you can start downloading YouTube videos with it. These are the steps to follow:
-
Step 1: Open Yandex Browser on your Android device
-
Open the Yandex Browser app on your Android device and make sure you have a stable internet connection.
-
Step 2: Go to YouTube and find the video you want to download
-
In the address bar, type youtube.com and press enter. You will be redirected to the YouTube website. You can also use the SmartBox feature to search for YouTube videos directly from the address bar. Find the video you want to download and tap it to play it.
-
-
Step 3: Tap the download icon at the bottom of the video player
-
As soon as you start playing a YouTube video, you will see a download icon at the bottom of the video player. Tap the download icon and a pop-up window with different options will appear.
-
Step 4: Choose the format and quality of the video
-
In the pop-up window, you can choose the format and quality of the video you want to download. You can choose between MP4, 3GP, WEBM and M4A formats, and from 144p to 1080p quality. You can also see the file size and the estimated download time for each option. Choose the option that suits your needs and tap the download button.
-
Step 5: Wait for the download to finish and enjoy your video offline
-
-
Benefits of downloading YouTube videos with Yandex APK
-
Downloading YouTube videos with Yandex APK has many benefits, such as:
-
Saving mobile data and storage space
-
By downloading YouTube videos with Yandex APK, you can save your mobile data and storage space. You can use the Turbo mode feature to compress web pages and reduce data consumption. You can also choose the video format and quality that fits your device's capacity. You can delete or move your downloaded videos whenever you want.
-
Watching videos anytime, anywhere, without an internet connection
-
By downloading YouTube videos with Yandex APK, you can watch videos anytime and anywhere without an internet connection. You don't have to worry about buffering, loading or interruptions. You can watch your favorite videos offline on your device's screen or on a bigger screen with a Chromecast or a smart TV.
-
Sharing videos with your friends and family easily
-
By downloading YouTube videos with Yandex APK, you can share videos with your friends and family easily. You can send your downloaded videos via Bluetooth, Wi-Fi Direct or other apps. You can also upload them to cloud services or social media platforms. You can share your videos with whomever you want without any trouble.
-
Conclusion
-
In conclusion, Yandex APK is a great app that lets you download YouTube videos easily and conveniently. It has many features that make it a powerful and versatile browser app that can improve your browsing experience. It is fast, secure and personalized. It is also easy to install and use. If you want to download YouTube videos with Yandex APK, just follow the steps we have shown in this article and enjoy your videos offline.
-
Frequently asked questions
-
-
Q: Is Yandex APK safe to use?
-
-
Q: Is Yandex APK free to use?
-
A: Yes, Yandex APK is free to use. You don't have to pay anything to download or use it. However, you may see some ads or sponsored content in the app, which help support its development and maintenance.
-
Q: Can I download YouTube videos with Yandex APK on other devices?
-
A: Yes, you can download YouTube videos with Yandex APK on devices other than Android. You can also use it on Windows, Mac, Linux, iOS and smart TV devices. You just need to download the appropriate version of Yandex Browser for your device from its official website or app store.
-
Q: Can I download YouTube videos with Yandex APK in other languages?
-
A: Yes, you can download YouTube videos with Yandex APK in languages other than English. You can change the app's language from the settings menu. You can also change YouTube's language from its settings menu.
-
Q: Can I download YouTube videos with Yandex APK in high resolution?
-
A: Yes, you can download YouTube videos with Yandex APK in high resolution, up to 1080p quality. However, this may depend on the availability of the video source and on your device's performance and storage space. You can also use the Turbo mode feature to reduce the file size and download time of high-resolution videos.
-
-
I hope you found this article useful and informative. If you have any questions or comments, feel free to leave a comment below. Thank you for reading and happy downloading!
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/stop-generating/+server.ts b/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/stop-generating/+server.ts
deleted file mode 100644
index b27c0ccf2aaafda990d853d34e1f5432c8ad5eaf..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/stop-generating/+server.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { collections } from "$lib/server/database";
-import { error } from "@sveltejs/kit";
-import { ObjectId } from "mongodb";
-
-/**
- * Ideally, we'd be able to detect the client-side abort, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
- */
-export async function POST({ params, locals }) {
- const conversationId = new ObjectId(params.id);
-
- const conversation = await collections.conversations.findOne({
- _id: conversationId,
- sessionId: locals.sessionId,
- });
-
- if (!conversation) {
- throw error(404, "Conversation not found");
- }
-
- await collections.abortedGenerations.updateOne(
- { conversationId },
- { $set: { updatedAt: new Date() }, $setOnInsert: { createdAt: new Date() } },
- { upsert: true }
- );
-
- return new Response();
-}
diff --git a/spaces/Blessin/movie-poster-generator/README.md b/spaces/Blessin/movie-poster-generator/README.md
deleted file mode 100644
index 538d2bdd64260d80a0eb2c385908fc7e07403034..0000000000000000000000000000000000000000
--- a/spaces/Blessin/movie-poster-generator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Movie Poster Generator
-emoji: 🐨
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.py b/spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.py
deleted file mode 100644
index 39e3aee271c6f94ab0d54207a02e1962fdc20a24..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.py
+++ /dev/null
@@ -1,145 +0,0 @@
-# -*- coding: utf-8 -*-
-import pytest
-from pybind11_tests import operators as m
-from pybind11_tests import ConstructorStats
-
-
-def test_operator_overloading():
- v1 = m.Vector2(1, 2)
-    v2 = m.Vector2(3, -1)
- v3 = m.Vector2(1, 2) # Same value as v1, but different instance.
- assert v1 is not v3
-
- assert str(v1) == "[1.000000, 2.000000]"
- assert str(v2) == "[3.000000, -1.000000]"
-
- assert str(-v2) == "[-3.000000, 1.000000]"
-
- assert str(v1 + v2) == "[4.000000, 1.000000]"
- assert str(v1 - v2) == "[-2.000000, 3.000000]"
- assert str(v1 - 8) == "[-7.000000, -6.000000]"
- assert str(v1 + 8) == "[9.000000, 10.000000]"
- assert str(v1 * 8) == "[8.000000, 16.000000]"
- assert str(v1 / 8) == "[0.125000, 0.250000]"
- assert str(8 - v1) == "[7.000000, 6.000000]"
- assert str(8 + v1) == "[9.000000, 10.000000]"
- assert str(8 * v1) == "[8.000000, 16.000000]"
- assert str(8 / v1) == "[8.000000, 4.000000]"
- assert str(v1 * v2) == "[3.000000, -2.000000]"
- assert str(v2 / v1) == "[3.000000, -0.500000]"
-
- assert v1 == v3
- assert v1 != v2
- assert hash(v1) == 4
- # TODO(eric.cousineau): Make this work.
- # assert abs(v1) == "abs(Vector2)"
-
- v1 += 2 * v2
- assert str(v1) == "[7.000000, 0.000000]"
- v1 -= v2
- assert str(v1) == "[4.000000, 1.000000]"
- v1 *= 2
- assert str(v1) == "[8.000000, 2.000000]"
- v1 /= 16
- assert str(v1) == "[0.500000, 0.125000]"
- v1 *= v2
- assert str(v1) == "[1.500000, -0.125000]"
- v2 /= v1
- assert str(v2) == "[2.000000, 8.000000]"
-
- cstats = ConstructorStats.get(m.Vector2)
- assert cstats.alive() == 3
- del v1
- assert cstats.alive() == 2
- del v2
- assert cstats.alive() == 1
- del v3
- assert cstats.alive() == 0
- assert cstats.values() == [
- '[1.000000, 2.000000]',
- '[3.000000, -1.000000]',
- '[1.000000, 2.000000]',
- '[-3.000000, 1.000000]',
- '[4.000000, 1.000000]',
- '[-2.000000, 3.000000]',
- '[-7.000000, -6.000000]',
- '[9.000000, 10.000000]',
- '[8.000000, 16.000000]',
- '[0.125000, 0.250000]',
- '[7.000000, 6.000000]',
- '[9.000000, 10.000000]',
- '[8.000000, 16.000000]',
- '[8.000000, 4.000000]',
- '[3.000000, -2.000000]',
- '[3.000000, -0.500000]',
- '[6.000000, -2.000000]',
- ]
- assert cstats.default_constructions == 0
- assert cstats.copy_constructions == 0
- assert cstats.move_constructions >= 10
- assert cstats.copy_assignments == 0
- assert cstats.move_assignments == 0
-
-
-def test_operators_notimplemented():
- """#393: need to return NotSupported to ensure correct arithmetic operator behavior"""
-
- c1, c2 = m.C1(), m.C2()
- assert c1 + c1 == 11
- assert c2 + c2 == 22
- assert c2 + c1 == 21
- assert c1 + c2 == 12
-
-
-def test_nested():
- """#328: first member in a class can't be used in operators"""
-
- a = m.NestA()
- b = m.NestB()
- c = m.NestC()
-
- a += 10
- assert m.get_NestA(a) == 13
- b.a += 100
- assert m.get_NestA(b.a) == 103
- c.b.a += 1000
- assert m.get_NestA(c.b.a) == 1003
- b -= 1
- assert m.get_NestB(b) == 3
- c.b -= 3
- assert m.get_NestB(c.b) == 1
- c *= 7
- assert m.get_NestC(c) == 35
-
- abase = a.as_base()
- assert abase.value == -2
- a.as_base().value += 44
- assert abase.value == 42
- assert c.b.a.as_base().value == -2
- c.b.a.as_base().value += 44
- assert c.b.a.as_base().value == 42
-
- del c
- pytest.gc_collect()
- del a # Shouldn't delete while abase is still alive
- pytest.gc_collect()
-
- assert abase.value == 42
- del abase, b
- pytest.gc_collect()
-
-
-def test_overriding_eq_reset_hash():
-
- assert m.Comparable(15) is not m.Comparable(15)
- assert m.Comparable(15) == m.Comparable(15)
-
- with pytest.raises(TypeError):
- hash(m.Comparable(15)) # TypeError: unhashable type: 'm.Comparable'
-
- for hashable in (m.Hashable, m.Hashable2):
- assert hashable(15) is not hashable(15)
- assert hashable(15) == hashable(15)
-
- assert hash(hashable(15)) == 15
- assert hash(hashable(15)) == hash(hashable(15))
diff --git a/spaces/CVPR/LIVE/thrust/thrust/equal.h b/spaces/CVPR/LIVE/thrust/thrust/equal.h
deleted file mode 100644
index bc6db501573534cf5c78f51d9dd3becffb7e2180..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/equal.h
+++ /dev/null
@@ -1,238 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file equal.h
- * \brief Equality between ranges
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup reductions
- * \{
- * \addtogroup comparisons
- * \ingroup reductions
- * \{
- */
-
-
-/*! \p equal returns \c true if the two ranges [first1, last1)
- * and [first2, first2 + (last1 - first1)) are identical when
- * compared element-by-element, and otherwise returns \c false.
- *
- * This version of \p equal returns \c true if and only if for every
- * iterator \c i in [first1, last1), *i == *(first2 + (i - first1)).
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first sequence.
- * \param last1 The end of the first sequence.
- * \param first2 The beginning of the second sequence.
- * \return \c true, if the sequences are equal; \c false, otherwise.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is a model of Equality Comparable,
- * and \p InputIterator1's \c value_type can be compared for equality with \c InputIterator2's \c value_type.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is a model of Equality Comparable,
- * and \p InputIterator2's \c value_type can be compared for equality with \c InputIterator1's \c value_type.
- *
- * The following code snippet demonstrates how to use \p equal to test
- * two ranges for equality using the \p thrust::host execution policy:
- *
- * \code
- *  #include <thrust/equal.h>
- *  #include <thrust/execution_policy.h>
- * ...
- * int A1[7] = {3, 1, 4, 1, 5, 9, 3};
- * int A2[7] = {3, 1, 4, 2, 8, 5, 7};
- * ...
- * bool result = thrust::equal(thrust::host, A1, A1 + 7, A2);
- *
- * // result == false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal.html
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2>
-__host__ __device__
-bool equal(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2);
-
-
-/*! \p equal returns \c true if the two ranges [first1, last1)
- * and [first2, first2 + (last1 - first1)) are identical when
- * compared element-by-element, and otherwise returns \c false.
- *
- * This version of \p equal returns \c true if and only if for every
- * iterator \c i in [first1, last1), *i == *(first2 + (i - first1)).
- *
- * \param first1 The beginning of the first sequence.
- * \param last1 The end of the first sequence.
- * \param first2 The beginning of the second sequence.
- * \return \c true, if the sequences are equal; \c false, otherwise.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is a model of Equality Comparable,
- * and \p InputIterator1's \c value_type can be compared for equality with \c InputIterator2's \c value_type.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is a model of Equality Comparable,
- * and \p InputIterator2's \c value_type can be compared for equality with \c InputIterator1's \c value_type.
- *
- * The following code snippet demonstrates how to use \p equal to test
- * two ranges for equality.
- *
- * \code
- *  #include <thrust/equal.h>
- * ...
- * int A1[7] = {3, 1, 4, 1, 5, 9, 3};
- * int A2[7] = {3, 1, 4, 2, 8, 5, 7};
- * ...
- * bool result = thrust::equal(A1, A1 + 7, A2);
- *
- * // result == false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal.html
- */
-template<typename InputIterator1, typename InputIterator2>
-bool equal(InputIterator1 first1, InputIterator1 last1,
- InputIterator2 first2);
-
-
-/*! \p equal returns \c true if the two ranges [first1, last1)
- * and [first2, first2 + (last1 - first1)) are identical when
- * compared element-by-element, and otherwise returns \c false.
- *
- * This version of \p equal returns \c true if and only if for every
- * iterator \c i in [first1, last1),
- * binary_pred(*i, *(first2 + (i - first1))) is \c true.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the first sequence.
- * \param last1 The end of the first sequence.
- * \param first2 The beginning of the second sequence.
- * \param binary_pred Binary predicate used to test element equality.
- * \return \c true, if the sequences are equal; \c false, otherwise.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is convertible to \p BinaryPredicate's \c first_argument_type.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is convertible to \p BinaryPredicate's \c second_argument_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * The following code snippet demonstrates how to use \p equal to compare the
- * elements in two ranges modulo 2 using the \p thrust::host execution policy.
- *
- * \code
- *  #include <thrust/equal.h>
- *  #include <thrust/execution_policy.h>
- * ...
- *
- * struct compare_modulo_two
- * {
- * __host__ __device__
- * bool operator()(int x, int y) const
- * {
- * return (x % 2) == (y % 2);
- * }
- * };
- * ...
- * int x[6] = {0, 2, 4, 6, 8, 10};
- * int y[6] = {1, 3, 5, 7, 9, 11};
- *
- * bool result = thrust::equal(x, x + 6, y, compare_modulo_two());
- *
- * // result is false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal.html
- */
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename BinaryPredicate>
-__host__ __device__
-bool equal(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2, BinaryPredicate binary_pred);
-
-
-/*! \p equal returns \c true if the two ranges [first1, last1)
- * and [first2, first2 + (last1 - first1)) are identical when
- * compared element-by-element, and otherwise returns \c false.
- *
- * This version of \p equal returns \c true if and only if for every
- * iterator \c i in [first1, last1),
- * binary_pred(*i, *(first2 + (i - first1))) is \c true.
- *
- * \param first1 The beginning of the first sequence.
- * \param last1 The end of the first sequence.
- * \param first2 The beginning of the second sequence.
- * \param binary_pred Binary predicate used to test element equality.
- * \return \c true, if the sequences are equal; \c false, otherwise.
- *
- * \tparam InputIterator1 is a model of Input Iterator,
- * and \p InputIterator1's \c value_type is convertible to \p BinaryPredicate's \c first_argument_type.
- * \tparam InputIterator2 is a model of Input Iterator,
- * and \p InputIterator2's \c value_type is convertible to \p BinaryPredicate's \c second_argument_type.
- * \tparam BinaryPredicate is a model of Binary Predicate.
- *
- * The following code snippet demonstrates how to use \p equal to compare the
- * elements in two ranges modulo 2.
- *
- * \code
- *  #include <thrust/equal.h>
- *
- * struct compare_modulo_two
- * {
- * __host__ __device__
- * bool operator()(int x, int y) const
- * {
- * return (x % 2) == (y % 2);
- * }
- * };
- * ...
- * int x[6] = {0, 2, 4, 6, 8, 10};
- * int y[6] = {1, 3, 5, 7, 9, 11};
- *
- * bool result = thrust::equal(x, x + 5, y, compare_modulo_two());
- *
- * // result is true
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal.html
- */
-template<typename InputIterator1, typename InputIterator2, typename BinaryPredicate>
-bool equal(InputIterator1 first1, InputIterator1 last1,
- InputIterator2 first2, BinaryPredicate binary_pred);
-
-
-/*! \} // end comparisons
- * \} // end reductions
- */
-
-} // end namespace thrust
-
-#include <thrust/detail/equal.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/scan.h
deleted file mode 100644
index 4d38e648437322d078d49c9412ab9532b7cc8b69..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/scan.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits scan
-#include <thrust/system/detail/sequential/scan.h>
-
diff --git a/spaces/CVPR/Text2Human/Text2Human/data/parsing_generation_segm_attr_dataset.py b/spaces/CVPR/Text2Human/Text2Human/data/parsing_generation_segm_attr_dataset.py
deleted file mode 100644
index 3a9d50c2fe21e0bb327334c64148ff79efd9dcad..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/Text2Human/data/parsing_generation_segm_attr_dataset.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import os
-import os.path
-
-import numpy as np
-import torch
-import torch.utils.data as data
-from PIL import Image
-
-
-class ParsingGenerationDeepFashionAttrSegmDataset(data.Dataset):
-
- def __init__(self, segm_dir, pose_dir, ann_file, downsample_factor=2):
- self._densepose_path = pose_dir
- self._segm_path = segm_dir
- self._image_fnames = []
- self.attrs = []
-
- self.downsample_factor = downsample_factor
-
- # training, ground-truth available
- assert os.path.exists(ann_file)
- for row in open(os.path.join(ann_file), 'r'):
- annotations = row.split()
- self._image_fnames.append(annotations[0])
- self.attrs.append([int(i) for i in annotations[1:]])
-
- def _open_file(self, path_prefix, fname):
- return open(os.path.join(path_prefix, fname), 'rb')
-
- def _load_densepose(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- fname = f'{fname[:-4]}_densepose.png'
- with self._open_file(self._densepose_path, fname) as f:
- densepose = Image.open(f)
- if self.downsample_factor != 1:
- width, height = densepose.size
- width = width // self.downsample_factor
- height = height // self.downsample_factor
- densepose = densepose.resize(
- size=(width, height), resample=Image.NEAREST)
- # channel-wise IUV order, [3, H, W]
- densepose = np.array(densepose)[:, :, 2:].transpose(2, 0, 1)
- return densepose.astype(np.float32)
-
- def _load_segm(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- fname = f'{fname[:-4]}_segm.png'
- with self._open_file(self._segm_path, fname) as f:
- segm = Image.open(f)
- if self.downsample_factor != 1:
- width, height = segm.size
- width = width // self.downsample_factor
- height = height // self.downsample_factor
- segm = segm.resize(
- size=(width, height), resample=Image.NEAREST)
- segm = np.array(segm)
- return segm.astype(np.float32)
-
- def __getitem__(self, index):
- pose = self._load_densepose(index)
- segm = self._load_segm(index)
- attr = self.attrs[index]
-
- pose = torch.from_numpy(pose)
- segm = torch.LongTensor(segm)
- attr = torch.LongTensor(attr)
-
- pose = pose / 12. - 1
-
- return_dict = {
- 'densepose': pose,
- 'segm': segm,
- 'attr': attr,
- 'img_name': self._image_fnames[index]
- }
-
- return return_dict
-
- def __len__(self):
- return len(self._image_fnames)
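A minimal sketch of how this dataset is typically consumed with a PyTorch DataLoader. The directory and annotation paths below are placeholders, not the ones used in the original project, and the class above is assumed to be importable from this module.

```python
from torch.utils.data import DataLoader

# Placeholder paths: a densepose dir, a segmentation dir, and an annotation
# file whose rows are "<image_name> <attr_0> <attr_1> ...".
dataset = ParsingGenerationDeepFashionAttrSegmDataset(
    segm_dir="data/segm",
    pose_dir="data/densepose",
    ann_file="data/train_ann.txt",
    downsample_factor=2,
)

loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
batch = next(iter(loader))
# batch['densepose']: (B, 3, H, W) float IUV maps
# batch['segm']:      (B, H, W) long segmentation labels
# batch['attr']:      (B, num_attrs) long attribute labels
print(batch['densepose'].shape, batch['segm'].shape, batch['attr'].shape)
```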
diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/coder/pseudo_bbox_coder.py b/spaces/CVPR/WALT/mmdet/core/bbox/coder/pseudo_bbox_coder.py
deleted file mode 100644
index 1c8346f4ae2c7db9719a70c7dc0244e088a9965b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/bbox/coder/pseudo_bbox_coder.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from ..builder import BBOX_CODERS
-from .base_bbox_coder import BaseBBoxCoder
-
-
-@BBOX_CODERS.register_module()
-class PseudoBBoxCoder(BaseBBoxCoder):
- """Pseudo bounding box coder."""
-
- def __init__(self, **kwargs):
- super(BaseBBoxCoder, self).__init__(**kwargs)
-
- def encode(self, bboxes, gt_bboxes):
- """torch.Tensor: return the given ``bboxes``"""
- return gt_bboxes
-
- def decode(self, bboxes, pred_bboxes):
- """torch.Tensor: return the given ``pred_bboxes``"""
- return pred_bboxes
diff --git a/spaces/CVPR/WALT/mmdet/core/evaluation/__init__.py b/spaces/CVPR/WALT/mmdet/core/evaluation/__init__.py
deleted file mode 100644
index d11ef15b9db95166b4427ad4d08debbd0630a741..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/evaluation/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from .class_names import (cityscapes_classes, coco_classes, dataset_aliases,
- get_classes, imagenet_det_classes,
- imagenet_vid_classes, voc_classes)
-from .eval_hooks import DistEvalHook, EvalHook
-from .mean_ap import average_precision, eval_map, print_map_summary
-from .recall import (eval_recalls, plot_iou_recall, plot_num_recall,
- print_recall_summary)
-
-__all__ = [
- 'voc_classes', 'imagenet_det_classes', 'imagenet_vid_classes',
- 'coco_classes', 'cityscapes_classes', 'dataset_aliases', 'get_classes',
- 'DistEvalHook', 'EvalHook', 'average_precision', 'eval_map',
- 'print_map_summary', 'eval_recalls', 'print_recall_summary',
- 'plot_num_recall', 'plot_iou_recall'
-]
diff --git a/spaces/CVPR/WALT/mmdet/models/losses/accuracy.py b/spaces/CVPR/WALT/mmdet/models/losses/accuracy.py
deleted file mode 100644
index 789a2240a491289c5801b6690116e8ca657d004f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/losses/accuracy.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import mmcv
-import torch.nn as nn
-
-
-@mmcv.jit(coderize=True)
-def accuracy(pred, target, topk=1, thresh=None):
- """Calculate accuracy according to the prediction and target.
-
- Args:
- pred (torch.Tensor): The model prediction, shape (N, num_class)
- target (torch.Tensor): The target of each prediction, shape (N, )
- topk (int | tuple[int], optional): If the predictions in ``topk``
- matches the target, the predictions will be regarded as
- correct ones. Defaults to 1.
- thresh (float, optional): If not None, predictions with scores under
- this threshold are considered incorrect. Default to None.
-
- Returns:
- float | tuple[float]: If the input ``topk`` is a single integer,
- the function will return a single float as accuracy. If
- ``topk`` is a tuple containing multiple integers, the
- function will return a tuple containing accuracies of
- each ``topk`` number.
- """
- assert isinstance(topk, (int, tuple))
- if isinstance(topk, int):
- topk = (topk, )
- return_single = True
- else:
- return_single = False
-
- maxk = max(topk)
- if pred.size(0) == 0:
- accu = [pred.new_tensor(0.) for i in range(len(topk))]
- return accu[0] if return_single else accu
- assert pred.ndim == 2 and target.ndim == 1
- assert pred.size(0) == target.size(0)
- assert maxk <= pred.size(1), \
- f'maxk {maxk} exceeds pred dimension {pred.size(1)}'
- pred_value, pred_label = pred.topk(maxk, dim=1)
- pred_label = pred_label.t() # transpose to shape (maxk, N)
- correct = pred_label.eq(target.view(1, -1).expand_as(pred_label))
- if thresh is not None:
- # Only prediction values larger than thresh are counted as correct
- correct = correct & (pred_value > thresh).t()
- res = []
- for k in topk:
- correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
- res.append(correct_k.mul_(100.0 / pred.size(0)))
- return res[0] if return_single else res
-
-
-class Accuracy(nn.Module):
-
- def __init__(self, topk=(1, ), thresh=None):
- """Module to calculate the accuracy.
-
- Args:
- topk (tuple, optional): The criterion used to calculate the
- accuracy. Defaults to (1,).
- thresh (float, optional): If not None, predictions with scores
- under this threshold are considered incorrect. Default to None.
- """
- super().__init__()
- self.topk = topk
- self.thresh = thresh
-
- def forward(self, pred, target):
- """Forward function to calculate accuracy.
-
- Args:
- pred (torch.Tensor): Prediction of models.
- target (torch.Tensor): Target for each prediction.
-
- Returns:
- tuple[float]: The accuracies under different topk criterions.
- """
- return accuracy(pred, target, self.topk, self.thresh)
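A small, self-contained sketch of how `accuracy` / `Accuracy` behave for top-k and thresholded predictions. The numbers are worked out by hand for illustration, and the imports assume the module is packaged as in mmdet.

```python
import torch
# assumption: exported by the packaged module, e.g. mmdet.models.losses
from mmdet.models.losses import Accuracy, accuracy

# 3 samples, 4 classes.
pred = torch.tensor([[0.1, 0.6, 0.2, 0.1],    # top-1 = class 1
                     [0.8, 0.1, 0.05, 0.05],  # top-1 = class 0
                     [0.3, 0.2, 0.4, 0.1]])   # top-1 = class 2, top-2 = {2, 0}
target = torch.tensor([1, 0, 0])

top1, top2 = accuracy(pred, target, topk=(1, 2))
# top-1: 2 of 3 correct -> ~66.67;  top-2: all 3 correct -> 100.0
print(top1.item(), top2.item())

metric = Accuracy(topk=1, thresh=0.5)
# With thresh=0.5 the third sample's best score (0.4) is discarded, so again
# only 2 of 3 predictions count as correct.
print(metric(pred, target))
```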
diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/__init__.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/__init__.py
deleted file mode 100644
index 1a3e515e40ffa26f83381342952ea9a0e1ccc235..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/roi_heads/__init__.py
+++ /dev/null
@@ -1,43 +0,0 @@
-'''
-from .base_roi_head import BaseRoIHead
-from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead,
- SCNetBBoxHead, Shared2FCBBoxHead,
- Shared4Conv1FCBBoxHead)
-from .cascade_roi_head import CascadeRoIHead
-from .double_roi_head import DoubleHeadRoIHead
-from .dynamic_roi_head import DynamicRoIHead
-from .grid_roi_head import GridRoIHead
-from .htc_roi_head import HybridTaskCascadeRoIHead
-from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead,
- FusedSemanticHead, GlobalContextHead, GridHead,
- HTCMaskHead, MaskIoUHead, MaskPointHead,
- SCNetMaskHead, SCNetSemanticHead)
-from .mask_scoring_roi_head import MaskScoringRoIHead
-from .pisa_roi_head import PISARoIHead
-from .point_rend_roi_head import PointRendRoIHead
-from .roi_extractors import SingleRoIExtractor
-from .scnet_roi_head import SCNetRoIHead
-from .shared_heads import ResLayer
-from .sparse_roi_head import SparseRoIHead
-from .standard_roi_head import StandardRoIHead
-from .trident_roi_head import TridentRoIHead
-
-__all__ = [
- 'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead',
- 'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead',
- 'ConvFCBBoxHead', 'Shared2FCBBoxHead', 'StandardRoIHead',
- 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'FCNMaskHead',
- 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', 'MaskIoUHead',
- 'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead',
- 'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead',
- 'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead',
- 'FeatureRelayHead', 'GlobalContextHead'
-]
-'''
-from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead,
- SCNetBBoxHead, Shared2FCBBoxHead,
- Shared4Conv1FCBBoxHead)
-from .standard_roi_head import StandardRoIHead
-from .roi_extractors import SingleRoIExtractor
-from .mask_heads import FCNMaskHead
-__all__ = ['BBoxHead','StandardRoIHead','SingleRoIExtractor','Shared2FCBBoxHead','FCNMaskHead']
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/WebSocket.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/WebSocket.js
deleted file mode 100644
index 73aed11bd67a22eb298eb3a481a1f87384a5a5e8..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/WebSocket.js
+++ /dev/null
@@ -1,134 +0,0 @@
-import Client from "./Client.js";
-import { Config, Version } from './index.js'
-import { sleep } from '../model/index.js'
-import { redAdapter } from '../model/red/index.js'
-// import { satoriAdapter } from '../model/satori/index.js'
-
-let sendSocketList = []
-let allSocketList = []
-
-async function createWebSocket(data) {
- if (typeof data.close != 'undefined' && typeof data.closed == 'undefined') {
- data.closed = data.close
- delete data.close
- }
- const client = new Client(data)
- setAllSocketList(client)
- if (data.address == 'ws_address') return
- if (data.closed) return
- sendSocketList = sendSocketList.filter(i => i.name != data.name)
- switch (Number(data.type)) {
- case 1:
- if (!await checkVersion(data)) return
- client.createWs()
- sendSocketList.push(client)
- break;
- case 2:
- if (!await checkVersion(data)) return
- client.createServer()
- sendSocketList.push(client)
- break
- case 3:
- client.createGSUidWs()
- sendSocketList.push(client)
- break
- case 4:
- if (Version.isTrss) return
- // client.createQQNT()
- redAdapter.connect(client)
- break
- case 5:
- if (!await checkVersion(data)) return
- client.createHttp()
- break
- case 6:
- if (!await checkVersion(data)) return
- client.createHttpPost()
- sendSocketList.push(client)
- break
- default:
- return;
- }
-}
-
-function setAllSocketList(data) {
- allSocketList = allSocketList.filter(i => i.name != data.name)
- allSocketList.push(data)
-}
-
-async function checkVersion(data) {
- if (Version.isTrss) {
- if (!data.uin) {
-            logger.warn(`[ws-plugin] ${data.name} is missing the 'uin' config option; please delete this connection and add it again with #ws`)
- return false
- } else {
- let log = false
- for (let i = 0; i < 20; i++) {
- if (Version.protocol.some(i => i == Bot[data.uin]?.version?.name)) {
- return true
- }
- if (!log) {
-                    logger.warn(`[ws-plugin] ${data.name} the current protocol client is not yet supported or not connected; will re-check for up to 20 seconds`)
- log = true
- }
- await sleep(1000)
- }
-            logger.warn(`[ws-plugin] ${data.name} the current protocol client is not yet supported or not connected ${data.uin}`)
- return false
- }
- }
- return true
-}
-
-function modifyWebSocket(target) {
- // if (Version.isTrss) return
- switch (target.type) {
- case 'add':
- case 'open':
- if (target.data.type == 4) {
- const client = new Client(target.data)
- setAllSocketList(client)
- redAdapter.connect(client)
- } else {
- createWebSocket(target.data)
- }
- break;
- case 'del':
- case 'close':
- for (const i of allSocketList) {
- if (i.name == target.data.name) {
- i.close()
- break
- }
- }
- break
- default:
- return;
- }
-}
-
-function clearWebSocket() {
- for (const i of allSocketList) {
- i.close()
- }
-}
-
-
-function initWebSocket() {
- // if (Version.isTrss) return
- for (const i of Config.servers) {
- createWebSocket(i)
- }
-}
-
-
-export {
- initWebSocket,
- clearWebSocket,
- modifyWebSocket,
- allSocketList,
- setAllSocketList,
- sendSocketList,
- createWebSocket
-}
-
diff --git a/spaces/ClinBAY/Safeterm_Demo/send_email_request.py b/spaces/ClinBAY/Safeterm_Demo/send_email_request.py
deleted file mode 100644
index 6ba495f716a89ecb86f4603203c10e850b16dee3..0000000000000000000000000000000000000000
--- a/spaces/ClinBAY/Safeterm_Demo/send_email_request.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import os
-from dotenv import load_dotenv
-import msal
-import requests
-# import json
-
-
-def send_email(subject, email, name, organization, meddra_license, agree_terms, save_data) -> None:
- """
-    Send an API-key request email containing the user's settings.
-
-    @param subject: email subject line
-    @param email: contact email address
-    @param name: requester's full name
-    @param organization: requester's organization
-    @param meddra_license: whether the user holds a valid MedDRA license
-    @param agree_terms: whether the user agrees to the Safeterm terms of use
-    @param save_data: whether the user consents to data storage
-    @return: None
- """
-
- body = f"""
- Request for API Key - Safeterm
-
- Settings:
- - Free Demo (30 days, 50 terms limit)
- - Version: 26.0
- - Language: English
-
- Contact Information:
- - Email: {email}
- - Full Name: {name}
- - Organization: {organization}
-
- Terms of use:
- - Valid medDRA License: {meddra_license}
- - Agrees to Safeterm terms: {agree_terms}
- - Consent to data storage: {save_data}
- """
-
- load_dotenv()
-
- client_id = os.getenv("CLIENT_ID")
- client_secret = os.getenv("CLIENT_SECRET")
- tenant_id = os.getenv("TENANT_ID")
- authority = f"https://login.microsoftonline.com/{tenant_id}"
- sender = os.getenv("MAIL_SENDER")
- receiver = os.getenv("MAIL_RECIPIENT")
- cc_receiver = os.getenv("CC_RECIPIENT")
-
- app = msal.ConfidentialClientApplication(
- client_id=client_id,
- client_credential=client_secret,
- authority=authority)
-
- scopes = ["https://graph.microsoft.com/.default"]
-
- result = app.acquire_token_silent(scopes, account=None)
-
- if not result:
- print("No suitable token exists in cache. Let's get a new one from Azure Active Directory.")
- result = app.acquire_token_for_client(scopes=scopes)
-
- if "access_token" in result:
- endpoint = f'https://graph.microsoft.com/v1.0/users/{sender}/sendMail'
- email_msg = {
- 'Message': {
- 'Subject': subject,
- 'Body': {
- 'ContentType': 'Text',
- 'Content': body
- },
- 'ToRecipients': [{'EmailAddress': {'Address': receiver}}],
- 'CcRecipients': [{'EmailAddress': {'Address': cc_receiver}}] # Added CcRecipients here
- },
- 'SaveToSentItems': 'true'
- }
-
- r = requests.post(endpoint, headers={'Authorization': 'Bearer ' + result['access_token']}, json=email_msg)
-
- if r.ok:
- print('Sent email successfully')
- else:
- print(r.json())
- else:
- print(result.get("error"))
- print(result.get("error_description"))
- print(result.get("correlation_id"))
-
-# Sample usage (illustrative values only)
-# send_email("Safeterm API key request", "user@example.com", "Jane Doe", "Example Org", True, True, True)
diff --git a/spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/base.py b/spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/base.py
deleted file mode 100644
index aade8b8e246ebff982a82a399e8314586c6ce2c8..0000000000000000000000000000000000000000
--- a/spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/base.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from langchain.chains import LLMChain
-from langchain.memory import ConversationBufferMemory
-from chains.output_format.templates import output_format_chat_prompt
-
-
-def chain_output_format(llm) -> LLMChain:
- # memory
- html_memory = ConversationBufferMemory(
- input_key="html_content", memory_key="chat_history"
- )
-
- # chain
- return LLMChain(
- llm=llm,
- prompt=output_format_chat_prompt,
- verbose=True,
- output_key="output_format",
- memory=html_memory,
- )
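For context, a minimal usage sketch of chain_output_format follows. It is a guess at how the chain is driven elsewhere in this Space: the ChatOpenAI model and the "html_content" input value are assumptions based on the memory configuration above, not something this file guarantees.

    from langchain.chat_models import ChatOpenAI  # any LangChain-compatible LLM should work here
    from chains.output_format.base import chain_output_format

    llm = ChatOpenAI(temperature=0)
    chain = chain_output_format(llm)
    # The prompt template is assumed to take the scraped page HTML as "html_content".
    result = chain.run(html_content="<html><body><ul><li>item 1</li></ul></body></html>")
    print(result)  # suggested output format for the scraper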
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/ROIAlign.h b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/ROIAlign.h
deleted file mode 100644
index 3907deab2a750a9f83f0f3ef38fee279c1445c61..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/ROIAlign.h
+++ /dev/null
@@ -1,46 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-#pragma once
-
-#include "cpu/vision.h"
-
-#ifdef WITH_CUDA
-#include "cuda/vision.h"
-#endif
-
-// Interface for Python
-at::Tensor ROIAlign_forward(const at::Tensor& input,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio) {
- if (input.type().is_cuda()) {
-#ifdef WITH_CUDA
- return ROIAlign_forward_cuda(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
-#else
- AT_ERROR("Not compiled with GPU support");
-#endif
- }
- return ROIAlign_forward_cpu(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
-}
-
-at::Tensor ROIAlign_backward(const at::Tensor& grad,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int batch_size,
- const int channels,
- const int height,
- const int width,
- const int sampling_ratio) {
- if (grad.type().is_cuda()) {
-#ifdef WITH_CUDA
- return ROIAlign_backward_cuda(grad, rois, spatial_scale, pooled_height, pooled_width, batch_size, channels, height, width, sampling_ratio);
-#else
- AT_ERROR("Not compiled with GPU support");
-#endif
- }
- AT_ERROR("Not implemented on the CPU");
-}
-
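For reference, a hedged sketch of how this forward entry point is typically reached from Python in this repository; the _C module name and the roi_align_forward binding are assumptions about how the extension is compiled and registered, not something this header guarantees.

    import torch
    from maskrcnn_benchmark import _C  # assumed compiled extension exposing ROIAlign_forward

    features = torch.randn(1, 256, 50, 50, device="cuda")
    # rois are (batch_index, x1, y1, x2, y2) in input-image coordinates
    rois = torch.tensor([[0.0, 10.0, 10.0, 100.0, 120.0]], device="cuda")

    # spatial_scale=0.25, pooled 7x7, sampling_ratio=2
    output = _C.roi_align_forward(features, rois, 0.25, 7, 7, 2)
    print(output.shape)  # (num_rois, channels, pooled_height, pooled_width)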
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/etree.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/etree.py
deleted file mode 100644
index 9d4a65c36014c8381306968c69432f50f0c0b886..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/etree.py
+++ /dev/null
@@ -1,478 +0,0 @@
-"""Shim module exporting the same ElementTree API for lxml and
-xml.etree backends.
-
-When lxml is installed, it is automatically preferred over the built-in
-xml.etree module.
-On Python 2.7, the cElementTree module is preferred over the pure-python
-ElementTree module.
-
-Besides exporting a unified interface, this also defines extra functions
-and subclasses of the built-in ElementTree classes to add features that are
-only available in lxml, like OrderedDict for attributes, pretty_print and
-iterwalk.
-"""
-from fontTools.misc.textTools import tostr
-
-
-XML_DECLARATION = """<?xml version='1.0' encoding='%s'?>"""
-
-__all__ = [
- # public symbols
- "Comment",
- "dump",
- "Element",
- "ElementTree",
- "fromstring",
- "fromstringlist",
- "iselement",
- "iterparse",
- "parse",
- "ParseError",
- "PI",
- "ProcessingInstruction",
- "QName",
- "SubElement",
- "tostring",
- "tostringlist",
- "TreeBuilder",
- "XML",
- "XMLParser",
- "register_namespace",
-]
-
-try:
- from lxml.etree import *
-
- _have_lxml = True
-except ImportError:
- try:
- from xml.etree.cElementTree import *
-
- # the cElementTree version of XML function doesn't support
- # the optional 'parser' keyword argument
- from xml.etree.ElementTree import XML
- except ImportError: # pragma: no cover
- from xml.etree.ElementTree import *
- _have_lxml = False
-
- import sys
-
- # dict is always ordered in python >= 3.6 and on pypy
- PY36 = sys.version_info >= (3, 6)
- try:
- import __pypy__
- except ImportError:
- __pypy__ = None
- _dict_is_ordered = bool(PY36 or __pypy__)
- del PY36, __pypy__
-
- if _dict_is_ordered:
- _Attrib = dict
- else:
- from collections import OrderedDict as _Attrib
-
- if isinstance(Element, type):
- _Element = Element
- else:
- # in py27, cElementTree.Element cannot be subclassed, so
- # we need to import the pure-python class
- from xml.etree.ElementTree import Element as _Element
-
- class Element(_Element):
- """Element subclass that keeps the order of attributes."""
-
- def __init__(self, tag, attrib=_Attrib(), **extra):
- super(Element, self).__init__(tag)
- self.attrib = _Attrib()
- if attrib:
- self.attrib.update(attrib)
- if extra:
- self.attrib.update(extra)
-
- def SubElement(parent, tag, attrib=_Attrib(), **extra):
- """Must override SubElement as well otherwise _elementtree.SubElement
- fails if 'parent' is a subclass of Element object.
- """
- element = parent.__class__(tag, attrib, **extra)
- parent.append(element)
- return element
-
- def _iterwalk(element, events, tag):
- include = tag is None or element.tag == tag
- if include and "start" in events:
- yield ("start", element)
- for e in element:
- for item in _iterwalk(e, events, tag):
- yield item
- if include:
- yield ("end", element)
-
- def iterwalk(element_or_tree, events=("end",), tag=None):
- """A tree walker that generates events from an existing tree as
- if it was parsing XML data with iterparse().
- Drop-in replacement for lxml.etree.iterwalk.
- """
- if iselement(element_or_tree):
- element = element_or_tree
- else:
- element = element_or_tree.getroot()
- if tag == "*":
- tag = None
- for item in _iterwalk(element, events, tag):
- yield item
-
- _ElementTree = ElementTree
-
- class ElementTree(_ElementTree):
- """ElementTree subclass that adds 'pretty_print' and 'doctype'
- arguments to the 'write' method.
-        Currently these are only supported for the default XML serialization
-        'method'; the "html" and "text" methods are delegated to the base
-        class.
- """
-
- def write(
- self,
- file_or_filename,
- encoding=None,
- xml_declaration=False,
- method=None,
- doctype=None,
- pretty_print=False,
- ):
- if method and method != "xml":
- # delegate to super-class
- super(ElementTree, self).write(
- file_or_filename,
- encoding=encoding,
- xml_declaration=xml_declaration,
- method=method,
- )
- return
-
- if encoding is not None and encoding.lower() == "unicode":
- if xml_declaration:
- raise ValueError(
- "Serialisation to unicode must not request an XML declaration"
- )
- write_declaration = False
- encoding = "unicode"
- elif xml_declaration is None:
- # by default, write an XML declaration only for non-standard encodings
- write_declaration = encoding is not None and encoding.upper() not in (
- "ASCII",
- "UTF-8",
- "UTF8",
- "US-ASCII",
- )
- else:
- write_declaration = xml_declaration
-
- if encoding is None:
- encoding = "ASCII"
-
- if pretty_print:
- # NOTE this will modify the tree in-place
- _indent(self._root)
-
- with _get_writer(file_or_filename, encoding) as write:
- if write_declaration:
- write(XML_DECLARATION % encoding.upper())
- if pretty_print:
- write("\n")
- if doctype:
- write(_tounicode(doctype))
- if pretty_print:
- write("\n")
-
- qnames, namespaces = _namespaces(self._root)
- _serialize_xml(write, self._root, qnames, namespaces)
-
- import io
-
- def tostring(
- element,
- encoding=None,
- xml_declaration=None,
- method=None,
- doctype=None,
- pretty_print=False,
- ):
- """Custom 'tostring' function that uses our ElementTree subclass, with
- pretty_print support.
- """
- stream = io.StringIO() if encoding == "unicode" else io.BytesIO()
- ElementTree(element).write(
- stream,
- encoding=encoding,
- xml_declaration=xml_declaration,
- method=method,
- doctype=doctype,
- pretty_print=pretty_print,
- )
- return stream.getvalue()
-
- # serialization support
-
- import re
-
- # Valid XML strings can include any Unicode character, excluding control
- # characters, the surrogate blocks, FFFE, and FFFF:
- # Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
- # Here we reversed the pattern to match only the invalid characters.
- # For the 'narrow' python builds supporting only UCS-2, which represent
- # characters beyond BMP as UTF-16 surrogate pairs, we need to pass through
- # the surrogate block. I haven't found a more elegant solution...
- UCS2 = sys.maxunicode < 0x10FFFF
- if UCS2:
- _invalid_xml_string = re.compile(
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uFFFE-\uFFFF]"
- )
- else:
- _invalid_xml_string = re.compile(
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uD800-\uDFFF\uFFFE-\uFFFF]"
- )
-
- def _tounicode(s):
- """Test if a string is valid user input and decode it to unicode string
- using ASCII encoding if it's a bytes string.
- Reject all bytes/unicode input that contains non-XML characters.
- Reject all bytes input that contains non-ASCII characters.
- """
- try:
- s = tostr(s, encoding="ascii", errors="strict")
- except UnicodeDecodeError:
- raise ValueError(
- "Bytes strings can only contain ASCII characters. "
- "Use unicode strings for non-ASCII characters."
- )
- except AttributeError:
- _raise_serialization_error(s)
- if s and _invalid_xml_string.search(s):
- raise ValueError(
- "All strings must be XML compatible: Unicode or ASCII, "
- "no NULL bytes or control characters"
- )
- return s
-
- import contextlib
-
- @contextlib.contextmanager
- def _get_writer(file_or_filename, encoding):
-        # returns a text write method and releases all resources after use
- try:
- write = file_or_filename.write
- except AttributeError:
- # file_or_filename is a file name
- f = open(
- file_or_filename,
- "w",
- encoding="utf-8" if encoding == "unicode" else encoding,
- errors="xmlcharrefreplace",
- )
- with f:
- yield f.write
- else:
- # file_or_filename is a file-like object
- # encoding determines if it is a text or binary writer
- if encoding == "unicode":
- # use a text writer as is
- yield write
- else:
- # wrap a binary writer with TextIOWrapper
- detach_buffer = False
- if isinstance(file_or_filename, io.BufferedIOBase):
- buf = file_or_filename
- elif isinstance(file_or_filename, io.RawIOBase):
- buf = io.BufferedWriter(file_or_filename)
- detach_buffer = True
- else:
- # This is to handle passed objects that aren't in the
- # IOBase hierarchy, but just have a write method
- buf = io.BufferedIOBase()
- buf.writable = lambda: True
- buf.write = write
- try:
-                    # TextIOWrapper uses these methods to determine
- # if BOM (for UTF-16, etc) should be added
- buf.seekable = file_or_filename.seekable
- buf.tell = file_or_filename.tell
- except AttributeError:
- pass
- wrapper = io.TextIOWrapper(
- buf,
- encoding=encoding,
- errors="xmlcharrefreplace",
- newline="\n",
- )
- try:
- yield wrapper.write
- finally:
- # Keep the original file open when the TextIOWrapper and
- # the BufferedWriter are destroyed
- wrapper.detach()
- if detach_buffer:
- buf.detach()
-
- from xml.etree.ElementTree import _namespace_map
-
- def _namespaces(elem):
- # identify namespaces used in this tree
-
- # maps qnames to *encoded* prefix:local names
- qnames = {None: None}
-
- # maps uri:s to prefixes
- namespaces = {}
-
- def add_qname(qname):
- # calculate serialized qname representation
- try:
- qname = _tounicode(qname)
- if qname[:1] == "{":
- uri, tag = qname[1:].rsplit("}", 1)
- prefix = namespaces.get(uri)
- if prefix is None:
- prefix = _namespace_map.get(uri)
- if prefix is None:
- prefix = "ns%d" % len(namespaces)
- else:
- prefix = _tounicode(prefix)
- if prefix != "xml":
- namespaces[uri] = prefix
- if prefix:
- qnames[qname] = "%s:%s" % (prefix, tag)
- else:
- qnames[qname] = tag # default element
- else:
- qnames[qname] = qname
- except TypeError:
- _raise_serialization_error(qname)
-
- # populate qname and namespaces table
- for elem in elem.iter():
- tag = elem.tag
- if isinstance(tag, QName):
- if tag.text not in qnames:
- add_qname(tag.text)
- elif isinstance(tag, str):
- if tag not in qnames:
- add_qname(tag)
- elif tag is not None and tag is not Comment and tag is not PI:
- _raise_serialization_error(tag)
- for key, value in elem.items():
- if isinstance(key, QName):
- key = key.text
- if key not in qnames:
- add_qname(key)
- if isinstance(value, QName) and value.text not in qnames:
- add_qname(value.text)
- text = elem.text
- if isinstance(text, QName) and text.text not in qnames:
- add_qname(text.text)
- return qnames, namespaces
-
- def _serialize_xml(write, elem, qnames, namespaces, **kwargs):
- tag = elem.tag
- text = elem.text
- if tag is Comment:
-            write("<!--%s-->" % _tounicode(text))
- elif tag is ProcessingInstruction:
-            write("<?%s?>" % _tounicode(text))
- else:
- tag = qnames[_tounicode(tag) if tag is not None else None]
- if tag is None:
- if text:
- write(_escape_cdata(text))
- for e in elem:
- _serialize_xml(write, e, qnames, None)
- else:
- write("<" + tag)
- if namespaces:
- for uri, prefix in sorted(
- namespaces.items(), key=lambda x: x[1]
- ): # sort on prefix
- if prefix:
- prefix = ":" + prefix
- write(' xmlns%s="%s"' % (prefix, _escape_attrib(uri)))
- attrs = elem.attrib
- if attrs:
- # try to keep existing attrib order
- if len(attrs) <= 1 or type(attrs) is _Attrib:
- items = attrs.items()
- else:
- # if plain dict, use lexical order
- items = sorted(attrs.items())
- for k, v in items:
- if isinstance(k, QName):
- k = _tounicode(k.text)
- else:
- k = _tounicode(k)
- if isinstance(v, QName):
- v = qnames[_tounicode(v.text)]
- else:
- v = _escape_attrib(v)
- write(' %s="%s"' % (qnames[k], v))
- if text is not None or len(elem):
- write(">")
- if text:
- write(_escape_cdata(text))
- for e in elem:
- _serialize_xml(write, e, qnames, None)
-                write("</" + tag + ">")
- else:
- write("/>")
- if elem.tail:
- write(_escape_cdata(elem.tail))
-
- def _raise_serialization_error(text):
- raise TypeError("cannot serialize %r (type %s)" % (text, type(text).__name__))
-
- def _escape_cdata(text):
- # escape character data
- try:
- text = _tounicode(text)
- # it's worth avoiding do-nothing calls for short strings
-            if "&" in text:
-                text = text.replace("&", "&amp;")
-            if "<" in text:
-                text = text.replace("<", "&lt;")
-            if ">" in text:
-                text = text.replace(">", "&gt;")
- return text
- except (TypeError, AttributeError):
- _raise_serialization_error(text)
-
- def _escape_attrib(text):
- # escape attribute value
- try:
- text = _tounicode(text)
-            if "&" in text:
-                text = text.replace("&", "&amp;")
-            if "<" in text:
-                text = text.replace("<", "&lt;")
-            if ">" in text:
-                text = text.replace(">", "&gt;")
-            if '"' in text:
-                text = text.replace('"', "&quot;")
-            if "\n" in text:
-                text = text.replace("\n", "&#10;")
- return text
- except (TypeError, AttributeError):
- _raise_serialization_error(text)
-
- def _indent(elem, level=0):
- # From http://effbot.org/zone/element-lib.htm#prettyprint
- i = "\n" + level * " "
- if len(elem):
- if not elem.text or not elem.text.strip():
- elem.text = i + " "
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- for elem in elem:
- _indent(elem, level + 1)
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- else:
- if level and (not elem.tail or not elem.tail.strip()):
- elem.tail = i
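As a quick illustration of what this shim adds on top of the standard library, the sketch below builds a tiny tree and serializes it with pretty_print, which works on both the lxml and the pure-Python backend; the element names and attributes are made up for the example.

    from fontTools.misc import etree

    root = etree.Element("svg", {"width": "10", "height": "10"})
    etree.SubElement(root, "rect", {"x": "0", "y": "0"})
    # pretty_print is the extra keyword this module layers on top of xml.etree
    print(etree.tostring(root, encoding="unicode", pretty_print=True))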
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/featureVars.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/featureVars.py
deleted file mode 100644
index f0403d76e40a67812193c9d821def6ab1c0adaaf..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/featureVars.py
+++ /dev/null
@@ -1,605 +0,0 @@
-"""Module to build FeatureVariation tables:
-https://docs.microsoft.com/en-us/typography/opentype/spec/chapter2#featurevariations-table
-
-NOTE: The API is experimental and subject to change.
-"""
-from fontTools.misc.dictTools import hashdict
-from fontTools.misc.intTools import bit_count
-from fontTools.ttLib import newTable
-from fontTools.ttLib.tables import otTables as ot
-from fontTools.ttLib.ttVisitor import TTVisitor
-from fontTools.otlLib.builder import buildLookup, buildSingleSubstSubtable
-from collections import OrderedDict
-
-from .errors import VarLibError, VarLibValidationError
-
-
-def addFeatureVariations(font, conditionalSubstitutions, featureTag="rvrn"):
- """Add conditional substitutions to a Variable Font.
-
- The `conditionalSubstitutions` argument is a list of (Region, Substitutions)
- tuples.
-
- A Region is a list of Boxes. A Box is a dict mapping axisTags to
- (minValue, maxValue) tuples. Irrelevant axes may be omitted and they are
-    interpreted as extending to the end of the axis in each direction.  A Box represents
- an orthogonal 'rectangular' subset of an N-dimensional design space.
- A Region represents a more complex subset of an N-dimensional design space,
- ie. the union of all the Boxes in the Region.
- For efficiency, Boxes within a Region should ideally not overlap, but
- functionality is not compromised if they do.
-
- The minimum and maximum values are expressed in normalized coordinates.
-
- A Substitution is a dict mapping source glyph names to substitute glyph names.
-
- Example:
-
- # >>> f = TTFont(srcPath)
- # >>> condSubst = [
- # ... # A list of (Region, Substitution) tuples.
- # ... ([{"wdth": (0.5, 1.0)}], {"cent": "cent.rvrn"}),
- # ... ([{"wght": (0.5, 1.0)}], {"dollar": "dollar.rvrn"}),
- # ... ]
- # >>> addFeatureVariations(f, condSubst)
- # >>> f.save(dstPath)
- """
-
- processLast = featureTag != "rvrn"
-
- _checkSubstitutionGlyphsExist(
- glyphNames=set(font.getGlyphOrder()),
- substitutions=conditionalSubstitutions,
- )
-
- substitutions = overlayFeatureVariations(conditionalSubstitutions)
-
- # turn substitution dicts into tuples of tuples, so they are hashable
- conditionalSubstitutions, allSubstitutions = makeSubstitutionsHashable(
- substitutions
- )
- if "GSUB" not in font:
- font["GSUB"] = buildGSUB()
-
- # setup lookups
- lookupMap = buildSubstitutionLookups(
- font["GSUB"].table, allSubstitutions, processLast
- )
-
- # addFeatureVariationsRaw takes a list of
- # ( {condition}, [ lookup indices ] )
- # so rearrange our lookups to match
- conditionsAndLookups = []
- for conditionSet, substitutions in conditionalSubstitutions:
- conditionsAndLookups.append(
- (conditionSet, [lookupMap[s] for s in substitutions])
- )
-
- addFeatureVariationsRaw(font, font["GSUB"].table, conditionsAndLookups, featureTag)
-
-
-def _checkSubstitutionGlyphsExist(glyphNames, substitutions):
- referencedGlyphNames = set()
- for _, substitution in substitutions:
- referencedGlyphNames |= substitution.keys()
- referencedGlyphNames |= set(substitution.values())
- missing = referencedGlyphNames - glyphNames
- if missing:
- raise VarLibValidationError(
- "Missing glyphs are referenced in conditional substitution rules:"
- f" {', '.join(missing)}"
- )
-
-
-def overlayFeatureVariations(conditionalSubstitutions):
- """Compute overlaps between all conditional substitutions.
-
- The `conditionalSubstitutions` argument is a list of (Region, Substitutions)
- tuples.
-
- A Region is a list of Boxes. A Box is a dict mapping axisTags to
- (minValue, maxValue) tuples. Irrelevant axes may be omitted and they are
-    interpreted as extending to the end of the axis in each direction. A Box represents
- an orthogonal 'rectangular' subset of an N-dimensional design space.
- A Region represents a more complex subset of an N-dimensional design space,
- ie. the union of all the Boxes in the Region.
- For efficiency, Boxes within a Region should ideally not overlap, but
- functionality is not compromised if they do.
-
- The minimum and maximum values are expressed in normalized coordinates.
-
- A Substitution is a dict mapping source glyph names to substitute glyph names.
-
-    The returned data is in a similar but different format. Overlaps of distinct
- substitution Boxes (*not* Regions) are explicitly listed as distinct rules,
-    and rules with the same Box are merged. The more specific rules appear earlier
- in the resulting list. Moreover, instead of just a dictionary of substitutions,
- a list of dictionaries is returned for substitutions corresponding to each
- unique space, with each dictionary being identical to one of the input
- substitution dictionaries. These dictionaries are not merged to allow data
- sharing when they are converted into font tables.
-
- Example::
-
- >>> condSubst = [
- ... # A list of (Region, Substitution) tuples.
- ... ([{"wght": (0.5, 1.0)}], {"dollar": "dollar.rvrn"}),
- ... ([{"wght": (0.5, 1.0)}], {"dollar": "dollar.rvrn"}),
- ... ([{"wdth": (0.5, 1.0)}], {"cent": "cent.rvrn"}),
- ... ([{"wght": (0.5, 1.0), "wdth": (-1, 1.0)}], {"dollar": "dollar.rvrn"}),
- ... ]
- >>> from pprint import pprint
- >>> pprint(overlayFeatureVariations(condSubst))
- [({'wdth': (0.5, 1.0), 'wght': (0.5, 1.0)},
- [{'dollar': 'dollar.rvrn'}, {'cent': 'cent.rvrn'}]),
- ({'wdth': (0.5, 1.0)}, [{'cent': 'cent.rvrn'}]),
- ({'wght': (0.5, 1.0)}, [{'dollar': 'dollar.rvrn'}])]
-
- """
-
-    # Merge same-substitutions rules, as this results in fewer lookups.
- merged = OrderedDict()
- for value, key in conditionalSubstitutions:
- key = hashdict(key)
- if key in merged:
- merged[key].extend(value)
- else:
- merged[key] = value
- conditionalSubstitutions = [(v, dict(k)) for k, v in merged.items()]
- del merged
-
- # Merge same-region rules, as this is cheaper.
- # Also convert boxes to hashdict()
- #
- # Reversing is such that earlier entries win in case of conflicting substitution
- # rules for the same region.
- merged = OrderedDict()
- for key, value in reversed(conditionalSubstitutions):
- key = tuple(
- sorted(
- (hashdict(cleanupBox(k)) for k in key),
- key=lambda d: tuple(sorted(d.items())),
- )
- )
- if key in merged:
- merged[key].update(value)
- else:
- merged[key] = dict(value)
- conditionalSubstitutions = list(reversed(merged.items()))
- del merged
-
- # Overlay
- #
- # Rank is the bit-set of the index of all contributing layers.
- initMapInit = ((hashdict(), 0),) # Initializer representing the entire space
- boxMap = OrderedDict(initMapInit) # Map from Box to Rank
- for i, (currRegion, _) in enumerate(conditionalSubstitutions):
- newMap = OrderedDict(initMapInit)
- currRank = 1 << i
- for box, rank in boxMap.items():
- for currBox in currRegion:
- intersection, remainder = overlayBox(currBox, box)
- if intersection is not None:
- intersection = hashdict(intersection)
- newMap[intersection] = newMap.get(intersection, 0) | rank | currRank
- if remainder is not None:
- remainder = hashdict(remainder)
- newMap[remainder] = newMap.get(remainder, 0) | rank
- boxMap = newMap
-
- # Generate output
- items = []
- for box, rank in sorted(
- boxMap.items(), key=(lambda BoxAndRank: -bit_count(BoxAndRank[1]))
- ):
- # Skip any box that doesn't have any substitution.
- if rank == 0:
- continue
- substsList = []
- i = 0
- while rank:
- if rank & 1:
- substsList.append(conditionalSubstitutions[i][1])
- rank >>= 1
- i += 1
- items.append((dict(box), substsList))
- return items
-
-
-#
-# Terminology:
-#
-# A 'Box' is a dict representing an orthogonal "rectangular" bit of N-dimensional space.
-# The keys in the dict are axis tags, the values are (minValue, maxValue) tuples.
-# Missing dimensions (keys) are substituted by the default min and max values
-# from the corresponding axes.
-#
-
-
-def overlayBox(top, bot):
- """Overlays ``top`` box on top of ``bot`` box.
-
- Returns two items:
-
- * Box for intersection of ``top`` and ``bot``, or None if they don't intersect.
- * Box for remainder of ``bot``. Remainder box might not be exact (since the
- remainder might not be a simple box), but is inclusive of the exact
- remainder.
- """
-
- # Intersection
- intersection = {}
- intersection.update(top)
- intersection.update(bot)
- for axisTag in set(top) & set(bot):
- min1, max1 = top[axisTag]
- min2, max2 = bot[axisTag]
- minimum = max(min1, min2)
- maximum = min(max1, max2)
- if not minimum < maximum:
- return None, bot # Do not intersect
- intersection[axisTag] = minimum, maximum
-
- # Remainder
- #
-    # The remainder is empty if every axis range of bot lies within that of the
-    # intersection.
-    #
-    # The remainder is shrunk if every axis range of bot, except for exactly one,
-    # lies within that of the intersection, and that one axis extrudes out of the
-    # intersection on only one side.
-    #
-    # Otherwise bot is returned in full as the remainder, since the true remainder
-    # is not representable as a single box.
-
- remainder = dict(bot)
- extruding = False
- fullyInside = True
- for axisTag in top:
- if axisTag in bot:
- continue
- extruding = True
- fullyInside = False
- break
- for axisTag in bot:
- if axisTag not in top:
- continue # Axis range lies fully within
- min1, max1 = intersection[axisTag]
- min2, max2 = bot[axisTag]
- if min1 <= min2 and max2 <= max1:
- continue # Axis range lies fully within
-
- # Bot's range doesn't fully lie within that of top's for this axis.
-        # We know they intersect, so it cannot lie fully outside it either; so they
- # overlap.
-
- # If we have had an overlapping axis before, remainder is not
- # representable as a box, so return full bottom and go home.
- if extruding:
- return intersection, bot
- extruding = True
- fullyInside = False
-
- # Otherwise, cut remainder on this axis and continue.
- if min1 <= min2:
- # Right side survives.
- minimum = max(max1, min2)
- maximum = max2
- elif max2 <= max1:
- # Left side survives.
- minimum = min2
- maximum = min(min1, max2)
- else:
- # Remainder leaks out from both sides. Can't cut either.
- return intersection, bot
-
- remainder[axisTag] = minimum, maximum
-
- if fullyInside:
- # bot is fully within intersection. Remainder is empty.
- return intersection, None
-
- return intersection, remainder
-
-
-def cleanupBox(box):
- """Return a sparse copy of `box`, without redundant (default) values.
-
- >>> cleanupBox({})
- {}
- >>> cleanupBox({'wdth': (0.0, 1.0)})
- {'wdth': (0.0, 1.0)}
- >>> cleanupBox({'wdth': (-1.0, 1.0)})
- {}
-
- """
- return {tag: limit for tag, limit in box.items() if limit != (-1.0, 1.0)}
-
-
-#
-# Low level implementation
-#
-
-
-def addFeatureVariationsRaw(font, table, conditionalSubstitutions, featureTag="rvrn"):
- """Low level implementation of addFeatureVariations that directly
- models the possibilities of the FeatureVariations table."""
-
- processLast = featureTag != "rvrn"
-
- #
- # if there is no feature:
- # make empty feature
- # sort features, get feature index
- # add feature to all scripts
- # make lookups
- # add feature variations
- #
- if table.Version < 0x00010001:
- table.Version = 0x00010001 # allow table.FeatureVariations
-
- table.FeatureVariations = None # delete any existing FeatureVariations
-
- varFeatureIndices = []
- for index, feature in enumerate(table.FeatureList.FeatureRecord):
- if feature.FeatureTag == featureTag:
- varFeatureIndices.append(index)
-
- if not varFeatureIndices:
- varFeature = buildFeatureRecord(featureTag, [])
- table.FeatureList.FeatureRecord.append(varFeature)
- table.FeatureList.FeatureCount = len(table.FeatureList.FeatureRecord)
-
- sortFeatureList(table)
- varFeatureIndex = table.FeatureList.FeatureRecord.index(varFeature)
-
- for scriptRecord in table.ScriptList.ScriptRecord:
- if scriptRecord.Script.DefaultLangSys is None:
- raise VarLibError(
- "Feature variations require that the script "
- f"'{scriptRecord.ScriptTag}' defines a default language system."
- )
- langSystems = [lsr.LangSys for lsr in scriptRecord.Script.LangSysRecord]
- for langSys in [scriptRecord.Script.DefaultLangSys] + langSystems:
- langSys.FeatureIndex.append(varFeatureIndex)
- langSys.FeatureCount = len(langSys.FeatureIndex)
-
- varFeatureIndices = [varFeatureIndex]
-
- axisIndices = {
- axis.axisTag: axisIndex for axisIndex, axis in enumerate(font["fvar"].axes)
- }
-
- featureVariationRecords = []
- for conditionSet, lookupIndices in conditionalSubstitutions:
- conditionTable = []
- for axisTag, (minValue, maxValue) in sorted(conditionSet.items()):
- if minValue > maxValue:
- raise VarLibValidationError(
- "A condition set has a minimum value above the maximum value."
- )
- ct = buildConditionTable(axisIndices[axisTag], minValue, maxValue)
- conditionTable.append(ct)
- records = []
- for varFeatureIndex in varFeatureIndices:
- existingLookupIndices = table.FeatureList.FeatureRecord[
- varFeatureIndex
- ].Feature.LookupListIndex
- combinedLookupIndices = (
- existingLookupIndices + lookupIndices
- if processLast
- else lookupIndices + existingLookupIndices
- )
-
- records.append(
- buildFeatureTableSubstitutionRecord(
- varFeatureIndex, combinedLookupIndices
- )
- )
- featureVariationRecords.append(
- buildFeatureVariationRecord(conditionTable, records)
- )
-
- table.FeatureVariations = buildFeatureVariations(featureVariationRecords)
-
-
-#
-# Building GSUB/FeatureVariations internals
-#
-
-
-def buildGSUB():
- """Build a GSUB table from scratch."""
- fontTable = newTable("GSUB")
- gsub = fontTable.table = ot.GSUB()
- gsub.Version = 0x00010001 # allow gsub.FeatureVariations
-
- gsub.ScriptList = ot.ScriptList()
- gsub.ScriptList.ScriptRecord = []
- gsub.FeatureList = ot.FeatureList()
- gsub.FeatureList.FeatureRecord = []
- gsub.LookupList = ot.LookupList()
- gsub.LookupList.Lookup = []
-
- srec = ot.ScriptRecord()
- srec.ScriptTag = "DFLT"
- srec.Script = ot.Script()
- srec.Script.DefaultLangSys = None
- srec.Script.LangSysRecord = []
- srec.Script.LangSysCount = 0
-
- langrec = ot.LangSysRecord()
- langrec.LangSys = ot.LangSys()
- langrec.LangSys.ReqFeatureIndex = 0xFFFF
- langrec.LangSys.FeatureIndex = []
- srec.Script.DefaultLangSys = langrec.LangSys
-
- gsub.ScriptList.ScriptRecord.append(srec)
- gsub.ScriptList.ScriptCount = 1
- gsub.FeatureVariations = None
-
- return fontTable
-
-
-def makeSubstitutionsHashable(conditionalSubstitutions):
-    """Turn all the substitution dictionaries into sorted tuples of tuples so
-    they are hashable, allowing duplicates to be detected and redundant data
-    to be skipped."""
- allSubstitutions = set()
- condSubst = []
- for conditionSet, substitutionMaps in conditionalSubstitutions:
- substitutions = []
- for substitutionMap in substitutionMaps:
- subst = tuple(sorted(substitutionMap.items()))
- substitutions.append(subst)
- allSubstitutions.add(subst)
- condSubst.append((conditionSet, substitutions))
- return condSubst, sorted(allSubstitutions)
-
-
-class ShifterVisitor(TTVisitor):
- def __init__(self, shift):
- self.shift = shift
-
-
-@ShifterVisitor.register_attr(ot.Feature, "LookupListIndex") # GSUB/GPOS
-def visit(visitor, obj, attr, value):
- shift = visitor.shift
- value = [l + shift for l in value]
- setattr(obj, attr, value)
-
-
-@ShifterVisitor.register_attr(
- (ot.SubstLookupRecord, ot.PosLookupRecord), "LookupListIndex"
-)
-def visit(visitor, obj, attr, value):
- setattr(obj, attr, visitor.shift + value)
-
-
-def buildSubstitutionLookups(gsub, allSubstitutions, processLast=False):
- """Build the lookups for the glyph substitutions, return a dict mapping
- the substitution to lookup indices."""
-
- # Insert lookups at the beginning of the lookup vector
- # https://github.com/googlefonts/fontmake/issues/950
-
- firstIndex = len(gsub.LookupList.Lookup) if processLast else 0
- lookupMap = {}
- for i, substitutionMap in enumerate(allSubstitutions):
- lookupMap[substitutionMap] = firstIndex + i
-
- if not processLast:
- # Shift all lookup indices in gsub by len(allSubstitutions)
- shift = len(allSubstitutions)
- visitor = ShifterVisitor(shift)
- visitor.visit(gsub.FeatureList.FeatureRecord)
- visitor.visit(gsub.LookupList.Lookup)
-
- for i, subst in enumerate(allSubstitutions):
- substMap = dict(subst)
- lookup = buildLookup([buildSingleSubstSubtable(substMap)])
- if processLast:
- gsub.LookupList.Lookup.append(lookup)
- else:
- gsub.LookupList.Lookup.insert(i, lookup)
- assert gsub.LookupList.Lookup[lookupMap[subst]] is lookup
- gsub.LookupList.LookupCount = len(gsub.LookupList.Lookup)
- return lookupMap
-
-
-def buildFeatureVariations(featureVariationRecords):
- """Build the FeatureVariations subtable."""
- fv = ot.FeatureVariations()
- fv.Version = 0x00010000
- fv.FeatureVariationRecord = featureVariationRecords
- fv.FeatureVariationCount = len(featureVariationRecords)
- return fv
-
-
-def buildFeatureRecord(featureTag, lookupListIndices):
- """Build a FeatureRecord."""
- fr = ot.FeatureRecord()
- fr.FeatureTag = featureTag
- fr.Feature = ot.Feature()
- fr.Feature.LookupListIndex = lookupListIndices
- fr.Feature.populateDefaults()
- return fr
-
-
-def buildFeatureVariationRecord(conditionTable, substitutionRecords):
- """Build a FeatureVariationRecord."""
- fvr = ot.FeatureVariationRecord()
- fvr.ConditionSet = ot.ConditionSet()
- fvr.ConditionSet.ConditionTable = conditionTable
- fvr.ConditionSet.ConditionCount = len(conditionTable)
- fvr.FeatureTableSubstitution = ot.FeatureTableSubstitution()
- fvr.FeatureTableSubstitution.Version = 0x00010000
- fvr.FeatureTableSubstitution.SubstitutionRecord = substitutionRecords
- fvr.FeatureTableSubstitution.SubstitutionCount = len(substitutionRecords)
- return fvr
-
-
-def buildFeatureTableSubstitutionRecord(featureIndex, lookupListIndices):
- """Build a FeatureTableSubstitutionRecord."""
- ftsr = ot.FeatureTableSubstitutionRecord()
- ftsr.FeatureIndex = featureIndex
- ftsr.Feature = ot.Feature()
- ftsr.Feature.LookupListIndex = lookupListIndices
- ftsr.Feature.LookupCount = len(lookupListIndices)
- return ftsr
-
-
-def buildConditionTable(axisIndex, filterRangeMinValue, filterRangeMaxValue):
- """Build a ConditionTable."""
- ct = ot.ConditionTable()
- ct.Format = 1
- ct.AxisIndex = axisIndex
- ct.FilterRangeMinValue = filterRangeMinValue
- ct.FilterRangeMaxValue = filterRangeMaxValue
- return ct
-
-
-def sortFeatureList(table):
- """Sort the feature list by feature tag, and remap the feature indices
- elsewhere. This is needed after the feature list has been modified.
- """
- # decorate, sort, undecorate, because we need to make an index remapping table
- tagIndexFea = [
- (fea.FeatureTag, index, fea)
- for index, fea in enumerate(table.FeatureList.FeatureRecord)
- ]
- tagIndexFea.sort()
- table.FeatureList.FeatureRecord = [fea for tag, index, fea in tagIndexFea]
- featureRemap = dict(
- zip([index for tag, index, fea in tagIndexFea], range(len(tagIndexFea)))
- )
-
- # Remap the feature indices
- remapFeatures(table, featureRemap)
-
-
-def remapFeatures(table, featureRemap):
- """Go through the scripts list, and remap feature indices."""
- for scriptIndex, script in enumerate(table.ScriptList.ScriptRecord):
- defaultLangSys = script.Script.DefaultLangSys
- if defaultLangSys is not None:
- _remapLangSys(defaultLangSys, featureRemap)
- for langSysRecordIndex, langSysRec in enumerate(script.Script.LangSysRecord):
- langSys = langSysRec.LangSys
- _remapLangSys(langSys, featureRemap)
-
- if hasattr(table, "FeatureVariations") and table.FeatureVariations is not None:
- for fvr in table.FeatureVariations.FeatureVariationRecord:
- for ftsr in fvr.FeatureTableSubstitution.SubstitutionRecord:
- ftsr.FeatureIndex = featureRemap[ftsr.FeatureIndex]
-
-
-def _remapLangSys(langSys, featureRemap):
- if langSys.ReqFeatureIndex != 0xFFFF:
- langSys.ReqFeatureIndex = featureRemap[langSys.ReqFeatureIndex]
- langSys.FeatureIndex = [featureRemap[index] for index in langSys.FeatureIndex]
-
-
-if __name__ == "__main__":
- import doctest, sys
-
- sys.exit(doctest.testmod().failed)
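To make the Box overlay semantics described above concrete, here is a small sketch; the axis tags and ranges are arbitrary illustrative values on the normalized design space.

    from fontTools.varLib.featureVars import overlayBox

    top = {"wght": (0.5, 1.0)}
    bot = {"wght": (0.0, 1.0), "wdth": (0.5, 1.0)}

    intersection, remainder = overlayBox(top, bot)
    # intersection covers where both boxes apply:
    #   {'wght': (0.5, 1.0), 'wdth': (0.5, 1.0)}
    # remainder is the part of bot left over, cut on the single overlapping axis:
    #   {'wght': (0.0, 0.5), 'wdth': (0.5, 1.0)}
    print(intersection, remainder)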
diff --git a/spaces/DeepFloyd/IF/model.py b/spaces/DeepFloyd/IF/model.py
deleted file mode 100644
index c45d94901d18d6144382e0ee3a11c8eb2a9c822f..0000000000000000000000000000000000000000
--- a/spaces/DeepFloyd/IF/model.py
+++ /dev/null
@@ -1,313 +0,0 @@
-from __future__ import annotations
-
-import gc
-import json
-import tempfile
-from typing import Generator
-
-import numpy as np
-import PIL.Image
-import torch
-from diffusers import DiffusionPipeline, StableDiffusionUpscalePipeline
-from diffusers.pipelines.deepfloyd_if import (fast27_timesteps,
- smart27_timesteps,
- smart50_timesteps,
- smart100_timesteps,
- smart185_timesteps)
-
-from settings import (DISABLE_AUTOMATIC_CPU_OFFLOAD, DISABLE_SD_X4_UPSCALER,
- HF_TOKEN, MAX_NUM_IMAGES, MAX_NUM_STEPS, MAX_SEED,
- RUN_GARBAGE_COLLECTION)
-
-
-class Model:
- def __init__(self):
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.pipe = None
- self.super_res_1_pipe = None
- self.super_res_2_pipe = None
- self.watermark_image = None
-
- if torch.cuda.is_available():
- self.load_weights()
- self.watermark_image = PIL.Image.fromarray(
- self.pipe.watermarker.watermark_image.to(
- torch.uint8).cpu().numpy(),
- mode='RGBA')
-
- def load_weights(self) -> None:
- self.pipe = DiffusionPipeline.from_pretrained(
- 'DeepFloyd/IF-I-XL-v1.0',
- torch_dtype=torch.float16,
- variant='fp16',
- use_safetensors=True,
- use_auth_token=HF_TOKEN)
- self.super_res_1_pipe = DiffusionPipeline.from_pretrained(
- 'DeepFloyd/IF-II-L-v1.0',
- text_encoder=None,
- torch_dtype=torch.float16,
- variant='fp16',
- use_safetensors=True,
- use_auth_token=HF_TOKEN)
-
- if not DISABLE_SD_X4_UPSCALER:
- self.super_res_2_pipe = StableDiffusionUpscalePipeline.from_pretrained(
- 'stabilityai/stable-diffusion-x4-upscaler',
- torch_dtype=torch.float16)
-
- if DISABLE_AUTOMATIC_CPU_OFFLOAD:
- self.pipe.to(self.device)
- self.super_res_1_pipe.to(self.device)
-
- self.pipe.unet.to(memory_format=torch.channels_last)
- self.pipe.unet = torch.compile(self.pipe.unet, mode="reduce-overhead", fullgraph=True)
-
- if not DISABLE_SD_X4_UPSCALER:
- self.super_res_2_pipe.to(self.device)
- else:
- self.pipe.enable_model_cpu_offload()
- self.super_res_1_pipe.enable_model_cpu_offload()
- if not DISABLE_SD_X4_UPSCALER:
- self.super_res_2_pipe.enable_model_cpu_offload()
-
- def apply_watermark_to_sd_x4_upscaler_results(
- self, images: list[PIL.Image.Image]) -> None:
- w, h = images[0].size
-
- stability_x4_upscaler_sample_size = 128
-
- coef = min(h / stability_x4_upscaler_sample_size,
- w / stability_x4_upscaler_sample_size)
- img_h, img_w = (int(h / coef), int(w / coef)) if coef < 1 else (h, w)
-
- S1, S2 = 1024**2, img_w * img_h
- K = (S2 / S1)**0.5
- watermark_size = int(K * 62)
- watermark_x = img_w - int(14 * K)
- watermark_y = img_h - int(14 * K)
-
- watermark_image = self.watermark_image.copy().resize(
- (watermark_size, watermark_size),
- PIL.Image.Resampling.BICUBIC,
- reducing_gap=None)
-
- for image in images:
- image.paste(watermark_image,
- box=(
- watermark_x - watermark_size,
- watermark_y - watermark_size,
- watermark_x,
- watermark_y,
- ),
- mask=watermark_image.split()[-1])
-
- @staticmethod
- def to_pil_images(images: torch.Tensor) -> list[PIL.Image.Image]:
- images = (images / 2 + 0.5).clamp(0, 1)
- images = images.cpu().permute(0, 2, 3, 1).float().numpy()
- images = np.round(images * 255).astype(np.uint8)
- return [PIL.Image.fromarray(image) for image in images]
-
- @staticmethod
- def check_seed(seed: int) -> None:
- if not 0 <= seed <= MAX_SEED:
- raise ValueError
-
- @staticmethod
- def check_num_images(num_images: int) -> None:
- if not 1 <= num_images <= MAX_NUM_IMAGES:
- raise ValueError
-
- @staticmethod
- def check_num_inference_steps(num_steps: int) -> None:
- if not 1 <= num_steps <= MAX_NUM_STEPS:
- raise ValueError
-
- @staticmethod
- def get_custom_timesteps(name: str) -> list[int] | None:
- if name == 'none':
- timesteps = None
- elif name == 'fast27':
- timesteps = fast27_timesteps
- elif name == 'smart27':
- timesteps = smart27_timesteps
- elif name == 'smart50':
- timesteps = smart50_timesteps
- elif name == 'smart100':
- timesteps = smart100_timesteps
- elif name == 'smart185':
- timesteps = smart185_timesteps
- else:
- raise ValueError
- return timesteps
-
- @staticmethod
- def run_garbage_collection():
- gc.collect()
- torch.cuda.empty_cache()
-
- def run_stage1(
- self,
- prompt: str,
- negative_prompt: str = '',
- seed: int = 0,
- num_images: int = 1,
- guidance_scale_1: float = 7.0,
- custom_timesteps_1: str = 'smart100',
- num_inference_steps_1: int = 100,
- ) -> tuple[list[PIL.Image.Image], str, str]:
- self.check_seed(seed)
- self.check_num_images(num_images)
- self.check_num_inference_steps(num_inference_steps_1)
-
- if RUN_GARBAGE_COLLECTION:
- self.run_garbage_collection()
-
- generator = torch.Generator(device=self.device).manual_seed(seed)
-
- prompt_embeds, negative_embeds = self.pipe.encode_prompt(
- prompt=prompt, negative_prompt=negative_prompt)
-
- timesteps = self.get_custom_timesteps(custom_timesteps_1)
-
- images = self.pipe(prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_embeds,
- num_images_per_prompt=num_images,
- guidance_scale=guidance_scale_1,
- timesteps=timesteps,
- num_inference_steps=num_inference_steps_1,
- generator=generator,
- output_type='pt').images
- pil_images = self.to_pil_images(images)
- self.pipe.watermarker.apply_watermark(
- pil_images, self.pipe.unet.config.sample_size)
-
- stage1_params = {
- 'prompt': prompt,
- 'negative_prompt': negative_prompt,
- 'seed': seed,
- 'num_images': num_images,
- 'guidance_scale_1': guidance_scale_1,
- 'custom_timesteps_1': custom_timesteps_1,
- 'num_inference_steps_1': num_inference_steps_1,
- }
- with tempfile.NamedTemporaryFile(mode='w', delete=False) as param_file:
- param_file.write(json.dumps(stage1_params))
- stage1_result = {
- 'prompt_embeds': prompt_embeds,
- 'negative_embeds': negative_embeds,
- 'images': images,
- 'pil_images': pil_images,
- }
- with tempfile.NamedTemporaryFile(delete=False) as result_file:
- torch.save(stage1_result, result_file.name)
- return pil_images, param_file.name, result_file.name
-
- def run_stage2(
- self,
- stage1_result_path: str,
- stage2_index: int,
- seed_2: int = 0,
- guidance_scale_2: float = 4.0,
- custom_timesteps_2: str = 'smart50',
- num_inference_steps_2: int = 50,
- disable_watermark: bool = False,
- ) -> PIL.Image.Image:
- self.check_seed(seed_2)
- self.check_num_inference_steps(num_inference_steps_2)
-
- if RUN_GARBAGE_COLLECTION:
- self.run_garbage_collection()
-
- generator = torch.Generator(device=self.device).manual_seed(seed_2)
-
- stage1_result = torch.load(stage1_result_path)
- prompt_embeds = stage1_result['prompt_embeds']
- negative_embeds = stage1_result['negative_embeds']
- images = stage1_result['images']
- images = images[[stage2_index]]
-
- timesteps = self.get_custom_timesteps(custom_timesteps_2)
-
- out = self.super_res_1_pipe(image=images,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_embeds,
- num_images_per_prompt=1,
- guidance_scale=guidance_scale_2,
- timesteps=timesteps,
- num_inference_steps=num_inference_steps_2,
- generator=generator,
- output_type='pt',
- noise_level=250).images
- pil_images = self.to_pil_images(out)
-
- if disable_watermark:
- return pil_images[0]
-
- self.super_res_1_pipe.watermarker.apply_watermark(
- pil_images, self.super_res_1_pipe.unet.config.sample_size)
- return pil_images[0]
-
- def run_stage3(
- self,
- image: PIL.Image.Image,
- prompt: str = '',
- negative_prompt: str = '',
- seed_3: int = 0,
- guidance_scale_3: float = 9.0,
- num_inference_steps_3: int = 75,
- ) -> PIL.Image.Image:
- self.check_seed(seed_3)
- self.check_num_inference_steps(num_inference_steps_3)
-
- if RUN_GARBAGE_COLLECTION:
- self.run_garbage_collection()
-
- generator = torch.Generator(device=self.device).manual_seed(seed_3)
- out = self.super_res_2_pipe(image=image,
- prompt=prompt,
- negative_prompt=negative_prompt,
- num_images_per_prompt=1,
- guidance_scale=guidance_scale_3,
- num_inference_steps=num_inference_steps_3,
- generator=generator,
- noise_level=100).images
- self.apply_watermark_to_sd_x4_upscaler_results(out)
- return out[0]
-
- def run_stage2_3(
- self,
- stage1_result_path: str,
- stage2_index: int,
- seed_2: int = 0,
- guidance_scale_2: float = 4.0,
- custom_timesteps_2: str = 'smart50',
- num_inference_steps_2: int = 50,
- prompt: str = '',
- negative_prompt: str = '',
- seed_3: int = 0,
- guidance_scale_3: float = 9.0,
- num_inference_steps_3: int = 75,
-    ) -> Generator[PIL.Image.Image, None, None]:
- self.check_seed(seed_3)
- self.check_num_inference_steps(num_inference_steps_3)
-
- out_image = self.run_stage2(
- stage1_result_path=stage1_result_path,
- stage2_index=stage2_index,
- seed_2=seed_2,
- guidance_scale_2=guidance_scale_2,
- custom_timesteps_2=custom_timesteps_2,
- num_inference_steps_2=num_inference_steps_2,
- disable_watermark=True)
- temp_image = out_image.copy()
- self.super_res_1_pipe.watermarker.apply_watermark(
- [temp_image], self.super_res_1_pipe.unet.config.sample_size)
- yield temp_image
- yield self.run_stage3(image=out_image,
- prompt=prompt,
- negative_prompt=negative_prompt,
- seed_3=seed_3,
- guidance_scale_3=guidance_scale_3,
- num_inference_steps_3=num_inference_steps_3)
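A rough end-to-end usage sketch of this class, assuming a CUDA machine and that HF_TOKEN in settings.py grants access to the gated DeepFloyd IF weights; the prompt and output file name are placeholders.

    from model import Model

    model = Model()  # loads IF-I-XL, IF-II-L and (optionally) the SD x4 upscaler

    images, params_path, result_path = model.run_stage1(
        prompt="a watercolor painting of a lighthouse at dawn",
        seed=0,
        num_images=1,
    )

    # Upscale the first 64px result to 256px, then to 1024px with the x4 upscaler.
    for image in model.run_stage2_3(stage1_result_path=result_path, stage2_index=0):
        image.save("output.png")  # the second yield overwrites with the 1024px image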
diff --git a/spaces/DylanWolf/h2ogpt-api/app.py b/spaces/DylanWolf/h2ogpt-api/app.py
deleted file mode 100644
index 74bb72f37e6a44d46edfae83a21489be634f140e..0000000000000000000000000000000000000000
--- a/spaces/DylanWolf/h2ogpt-api/app.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import os
-
-os.system("git clone https://github.com/oobabooga/text-generation-webui.git")
-
-os.chdir("text-generation-webui")
-
-os.system("pip install -r requirements.txt")
-
-with open("input.txt", "w") as f:
- f.write("N\n")
-
-os.system("./start_linux.sh < input.txt")
\ No newline at end of file
diff --git a/spaces/ECCV2022/bytetrack/yolox/data/data_prefetcher.py b/spaces/ECCV2022/bytetrack/yolox/data/data_prefetcher.py
deleted file mode 100644
index 0f5d2b5eeec2b552f381239a16117a5c98255041..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/data/data_prefetcher.py
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import torch
-import torch.distributed as dist
-
-from yolox.utils import synchronize
-
-import random
-
-
-class DataPrefetcher:
- """
-    DataPrefetcher is inspired by the code in the following file:
- https://github.com/NVIDIA/apex/blob/master/examples/imagenet/main_amp.py
-    It can speed up your PyTorch dataloader. For more information, please check
- https://github.com/NVIDIA/apex/issues/304#issuecomment-493562789.
- """
-
- def __init__(self, loader):
- self.loader = iter(loader)
- self.stream = torch.cuda.Stream()
- self.input_cuda = self._input_cuda_for_image
- self.record_stream = DataPrefetcher._record_stream_for_image
- self.preload()
-
- def preload(self):
- try:
- self.next_input, self.next_target, _, _ = next(self.loader)
- except StopIteration:
- self.next_input = None
- self.next_target = None
- return
-
- with torch.cuda.stream(self.stream):
- self.input_cuda()
- self.next_target = self.next_target.cuda(non_blocking=True)
-
- def next(self):
- torch.cuda.current_stream().wait_stream(self.stream)
- input = self.next_input
- target = self.next_target
- if input is not None:
- self.record_stream(input)
- if target is not None:
- target.record_stream(torch.cuda.current_stream())
- self.preload()
- return input, target
-
- def _input_cuda_for_image(self):
- self.next_input = self.next_input.cuda(non_blocking=True)
-
- @staticmethod
- def _record_stream_for_image(input):
- input.record_stream(torch.cuda.current_stream())
-
-
-def random_resize(data_loader, exp, epoch, rank, is_distributed):
- tensor = torch.LongTensor(1).cuda()
- if is_distributed:
- synchronize()
-
- if rank == 0:
- if epoch > exp.max_epoch - 10:
- size = exp.input_size
- else:
- size = random.randint(*exp.random_size)
- size = int(32 * size)
- tensor.fill_(size)
-
- if is_distributed:
- synchronize()
- dist.broadcast(tensor, 0)
-
- input_size = data_loader.change_input_dim(multiple=tensor.item(), random_range=None)
- return input_size
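A minimal training-loop sketch for DataPrefetcher, assuming a CUDA device and a DataLoader whose batches unpack as (inputs, targets, _, _), which is what preload() expects; the dataset below is synthetic.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from yolox.data.data_prefetcher import DataPrefetcher

    dataset = TensorDataset(torch.randn(64, 3, 32, 32), torch.zeros(64, 5),
                            torch.zeros(64), torch.zeros(64))
    loader = DataLoader(dataset, batch_size=8)

    prefetcher = DataPrefetcher(loader)
    inputs, targets = prefetcher.next()
    while inputs is not None:
        # forward/backward would go here; the next batch is already being
        # copied to the GPU on a side stream
        inputs, targets = prefetcher.next()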
diff --git a/spaces/ECCV2022/bytetrack/yolox/deepsort_tracker/track.py b/spaces/ECCV2022/bytetrack/yolox/deepsort_tracker/track.py
deleted file mode 100644
index 6867441e016e80224fda6ecf3e0c7e8072be4e57..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/deepsort_tracker/track.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# vim: expandtab:ts=4:sw=4
-
-
-class TrackState:
- """
- Enumeration type for the single target track state. Newly created tracks are
- classified as `tentative` until enough evidence has been collected. Then,
- the track state is changed to `confirmed`. Tracks that are no longer alive
- are classified as `deleted` to mark them for removal from the set of active
- tracks.
- """
-
- Tentative = 1
- Confirmed = 2
- Deleted = 3
-
-
-class Track:
- """
- A single target track with state space `(x, y, a, h)` and associated
- velocities, where `(x, y)` is the center of the bounding box, `a` is the
- aspect ratio and `h` is the height.
- Parameters
- ----------
- mean : ndarray
- Mean vector of the initial state distribution.
- covariance : ndarray
- Covariance matrix of the initial state distribution.
- track_id : int
- A unique track identifier.
- n_init : int
- Number of consecutive detections before the track is confirmed. The
- track state is set to `Deleted` if a miss occurs within the first
- `n_init` frames.
- max_age : int
- The maximum number of consecutive misses before the track state is
- set to `Deleted`.
- feature : Optional[ndarray]
- Feature vector of the detection this track originates from. If not None,
- this feature is added to the `features` cache.
- Attributes
- ----------
- mean : ndarray
- Mean vector of the initial state distribution.
- covariance : ndarray
- Covariance matrix of the initial state distribution.
- track_id : int
- A unique track identifier.
- hits : int
- Total number of measurement updates.
- age : int
-        Total number of frames since first occurrence.
- time_since_update : int
- Total number of frames since last measurement update.
- state : TrackState
- The current track state.
- features : List[ndarray]
- A cache of features. On each measurement update, the associated feature
- vector is added to this list.
- """
-
- def __init__(self, mean, covariance, track_id, class_id, n_init, max_age,
- feature=None):
- self.mean = mean
- self.covariance = covariance
- self.track_id = track_id
- self.class_id = class_id
- self.hits = 1
- self.age = 1
- self.time_since_update = 0
-
- self.state = TrackState.Tentative
- self.features = []
- if feature is not None:
- self.features.append(feature)
-
- self._n_init = n_init
- self._max_age = max_age
-
- def to_tlwh(self):
- """Get current position in bounding box format `(top left x, top left y,
- width, height)`.
- Returns
- -------
- ndarray
- The bounding box.
- """
- ret = self.mean[:4].copy()
- ret[2] *= ret[3]
- ret[:2] -= ret[2:] / 2
- return ret
-
- def to_tlbr(self):
-        """Get current position in bounding box format `(min x, min y, max x,
- max y)`.
- Returns
- -------
- ndarray
- The bounding box.
- """
- ret = self.to_tlwh()
- ret[2:] = ret[:2] + ret[2:]
- return ret
-
- def increment_age(self):
- self.age += 1
- self.time_since_update += 1
-
- def predict(self, kf):
- """Propagate the state distribution to the current time step using a
- Kalman filter prediction step.
- Parameters
- ----------
- kf : kalman_filter.KalmanFilter
- The Kalman filter.
- """
- self.mean, self.covariance = kf.predict(self.mean, self.covariance)
- self.increment_age()
-
- def update(self, kf, detection):
- """Perform Kalman filter measurement update step and update the feature
- cache.
- Parameters
- ----------
- kf : kalman_filter.KalmanFilter
- The Kalman filter.
- detection : Detection
- The associated detection.
- """
- self.mean, self.covariance = kf.update(
- self.mean, self.covariance, detection.to_xyah())
- self.features.append(detection.feature)
-
- self.hits += 1
- self.time_since_update = 0
- if self.state == TrackState.Tentative and self.hits >= self._n_init:
- self.state = TrackState.Confirmed
-
- def mark_missed(self):
- """Mark this track as missed (no association at the current time step).
- """
- if self.state == TrackState.Tentative:
- self.state = TrackState.Deleted
- elif self.time_since_update > self._max_age:
- self.state = TrackState.Deleted
-
- def is_tentative(self):
- """Returns True if this track is tentative (unconfirmed).
- """
- return self.state == TrackState.Tentative
-
- def is_confirmed(self):
- """Returns True if this track is confirmed."""
- return self.state == TrackState.Confirmed
-
- def is_deleted(self):
- """Returns True if this track is dead and should be deleted."""
- return self.state == TrackState.Deleted
\ No newline at end of file
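To illustrate the lifecycle described above, a small sketch of a Track going from tentative to deleted; the mean and covariance values are made-up placeholders for what the DeepSORT Kalman filter would normally produce.

    import numpy as np
    from yolox.deepsort_tracker.track import Track, TrackState

    # state is (x, y, a, h) plus velocities; covariance is the matching 8x8 matrix
    mean = np.array([100.0, 50.0, 0.5, 80.0, 0.0, 0.0, 0.0, 0.0])
    covariance = np.eye(8)

    track = Track(mean, covariance, track_id=1, class_id=0, n_init=3, max_age=30)
    print(track.is_tentative())  # True: fewer than n_init updates so far
    print(track.to_tlwh())       # (top-left x, top-left y, width, height)

    track.mark_missed()          # a miss while still tentative deletes the track
    print(track.state == TrackState.Deleted)  # True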
diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
deleted file mode 100644
index 48757e2b0156b2c1513b615d2a17e5aee5172ae7..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
+++ /dev/null
@@ -1,46 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-/*!
-* Copyright (c) Facebook, Inc. and its affiliates.
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
-*/
-
-#include <vector>
-
-#include <ATen/ATen.h>
-#include <ATen/cuda/CUDAContext.h>
-
-
-at::Tensor
-ms_deform_attn_cpu_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step)
-{
-    AT_ERROR("Not implemented on the CPU");
-}
-
-std::vector<at::Tensor>
-ms_deform_attn_cpu_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step)
-{
-    AT_ERROR("Not implemented on the CPU");
-}
-
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index 06f2b79f5e5c6f2049bf8220c29ae20c3f82d524..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import numpy as np
-import parselmouth
-
-from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour (fill the unvoiced frames).
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
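A short usage sketch for the predictor above, assuming praat-parselmouth is installed; the 220 Hz test tone and one-second duration are only illustrative.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                          # one second of audio
wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)       # 220 Hz test tone

predictor = PMF0Predictor(hop_length=512, f0_min=50, f0_max=1100, sampling_rate=sr)
f0, uv = predictor.compute_f0_uv(wav)           # f0: interpolated contour, uv: voiced mask
print(f0.shape, float(np.median(f0[uv > 0])))   # median should land near 220 Hz
```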
diff --git a/spaces/FahadAlam/Question-Generator/README.md b/spaces/FahadAlam/Question-Generator/README.md
deleted file mode 100644
index c891df263dae41fb49a39acd193a8f55ef9532dd..0000000000000000000000000000000000000000
--- a/spaces/FahadAlam/Question-Generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Question Generator
-emoji: 🌍
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Faridmaruf/rvc-genshin-v2/config.py b/spaces/Faridmaruf/rvc-genshin-v2/config.py
deleted file mode 100644
index 2fda460b186b86923e757618c2f4f6fc0c45d8cf..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/rvc-genshin-v2/config.py
+++ /dev/null
@@ -1,117 +0,0 @@
-import argparse
-import sys
-import torch
-from multiprocessing import cpu_count
-
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- (
- self.python_cmd,
- self.listen_port,
- self.colab,
- self.noparallel,
- self.noautoopen,
- self.api
- ) = self.arg_parse()
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def arg_parse() -> tuple:
- exe = sys.executable or "python"
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- parser.add_argument("--api", action="store_true", help="Launch with api")
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- cmd_opts.api
- )
-
- # has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
- # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("Found GPU", self.gpu_name, ", force to fp32")
- self.is_half = False
- else:
- print("Found GPU", self.gpu_name)
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- elif self.has_mps():
- print("No supported Nvidia GPU found, use MPS instead")
- self.device = "mps"
- self.is_half = False
- else:
- print("No supported Nvidia GPU found, use CPU instead")
- self.device = "cpu"
- self.is_half = False
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # settings for 6 GB of GPU memory
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # settings for 5 GB of GPU memory
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
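A minimal sketch of how this Config is typically instantiated; argparse reads sys.argv, so it assumes the script is launched with only the flags defined above (the entry-point name is illustrative).

```python
if __name__ == "__main__":
    config = Config()
    print("device:", config.device, "| half precision:", config.is_half)
    print("n_cpu:", config.n_cpu, "| gpu_mem (GB):", config.gpu_mem)
    print("x_pad / x_query / x_center / x_max:",
          config.x_pad, config.x_query, config.x_center, config.x_max)
```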
diff --git "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" "b/spaces/Fengbinbin/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py"
deleted file mode 100644
index cd162563cb949acae49f20ef2a0949f9b5f154af..0000000000000000000000000000000000000000
--- "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py"
+++ /dev/null
@@ -1,316 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import input_clipping
-
-def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import os, copy
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
- from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
- msg = '正常'
- inputs_array = []
- inputs_show_user_array = []
- history_array = []
- sys_prompt_array = []
- report_part_1 = []
-
- assert len(file_manifest) <= 512, "源文件太多(超过512个), 请缩减输入文件的数量。或者,您也可以选择删除此行警告,并修改代码拆分file_manifest列表,从而实现分批次处理。"
- ############################## <Step 1: analyze each file individually, multi-threaded> ##################################
- for index, fp in enumerate(file_manifest):
- # read the file
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- prefix = "接下来请你逐文件分析下面的工程" if index==0 else ""
- i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}'
- # queue up the request contents
- inputs_array.append(i_say)
- inputs_show_user_array.append(i_say_show_user)
- history_array.append([])
- sys_prompt_array.append("你是一个程序架构分析师,正在分析一个源代码项目。你的回答必须简单明了。")
-
- # all files read: spawn one request thread per source file and send it to ChatGPT for analysis
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array = inputs_array,
- inputs_show_user_array = inputs_show_user_array,
- history_array = history_array,
- sys_prompt_array = sys_prompt_array,
- llm_kwargs = llm_kwargs,
- chatbot = chatbot,
- show_user_at_complete = True
- )
-
- # all files analyzed: write the results to a file and prepare the project-wide summary
- report_part_1 = copy.deepcopy(gpt_response_collection)
- history_to_return = report_part_1
- res = write_results_to_file(report_part_1)
- chatbot.append(("完成?", "逐个文件分析已完成。" + res + "\n\n正在开始汇总。"))
- yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面
-
- ############################## <Step 2: consolidate, single-threaded, grouped + iterative processing> ##################################
- batchsize = 16 # process 16 files per group
- report_part_2 = []
- previous_iteration_files = []
- last_iteration_result = ""
- while True:
- if len(file_manifest) == 0: break
- this_iteration_file_manifest = file_manifest[:batchsize]
- this_iteration_gpt_response_collection = gpt_response_collection[:batchsize*2]
- file_rel_path = [os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)]
- # replace the verbose "please give an overview of the following program file" prompt with the leaner "filename: {all_file[index]}"
- for index, content in enumerate(this_iteration_gpt_response_collection):
- if index%2==0: this_iteration_gpt_response_collection[index] = f"{file_rel_path[index//2]}" # 只保留文件名节省token
- previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
- previous_iteration_files_string = ', '.join(previous_iteration_files)
- current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
- i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。'
- inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
- this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
- this_iteration_history.append(last_iteration_result)
- # trim the input to fit the token limit
- inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
- result = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
- history=this_iteration_history_feed, # 迭代之前的分析
- sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
- report_part_2.extend([i_say, result])
- last_iteration_result = result
-
- file_manifest = file_manifest[batchsize:]
- gpt_response_collection = gpt_response_collection[batchsize*2:]
-
- ############################## ##################################
- history_to_return.extend(report_part_2)
- res = write_results_to_file(history_to_return)
- chatbot.append(("完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面
-
-
-@CatchException
-def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob
- file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
- [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]+ \
- [f for f in glob.glob('./request_llm/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
- project_folder = './'
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-@CatchException
-def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] #+ \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-@CatchException
-def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.java', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.jar', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.sh', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何java文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.ts', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.tsx', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.js', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.vue', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.less', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.sass', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.wxml', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.wxss', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.css', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.jsx', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何前端相关文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.go', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/go.mod', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/go.sum', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/go.work', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.lua', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.toml', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.cs', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.csproj', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
-@CatchException
-def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- txt_pattern = plugin_kwargs.get("advanced_arg")
- txt_pattern = txt_pattern.replace(",", ",")
- # patterns to include (e.g.: *.c, *.cpp, *.py, config.toml)
- pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")]
- if not pattern_include: pattern_include = ["*"] # empty input matches everything
- # file suffixes to exclude (e.g.: ^*.c, ^*.cpp, ^*.py)
- pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
- pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # never parse archive files
- # file names to exclude (e.g.: ^README.md)
- pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
- # build the exclusion regular expression
- pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
- pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''
-
- history.clear()
- import glob, os, re
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- # if an archive was uploaded, locate the extracted folder first so the archive itself is not parsed
- maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
- if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'):
- extract_folder_path = maybe_dir[0]
- else:
- extract_folder_path = project_folder
- # collect the uploaded non-archive files and the extracted files that match the given patterns
- file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \
- os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
\ No newline at end of file
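The include/exclude pattern handling in 解析任意code项目 above is the least obvious part of this file, so here is a standalone re-run of the same parsing on an illustrative advanced_arg value (the input string and file paths are made up for the demonstration). It also shows the quirk the code keeps: include patterns are comma-separated, while ^-prefixed excludes are split on spaces.

```python
import re

txt_pattern = "*.py, *.toml, ^*.md, ^README.md".replace("，", ",")

pattern_include = [p.lstrip(" ,").rstrip(" ,") for p in txt_pattern.split(",")
                   if p != "" and not p.strip().startswith("^")]
pattern_except_suffix = [p.lstrip(" ^*.,").rstrip(" ,") for p in txt_pattern.split(" ")
                         if p != "" and p.strip().startswith("^*.")]
pattern_except_suffix += ["zip", "rar", "7z", "tar", "gz"]          # never parse archives
pattern_except_name = [p.lstrip(" ^*,").rstrip(" ,").replace(".", r"\.")
                       for p in txt_pattern.split(" ")
                       if p != "" and p.strip().startswith("^")
                       and not p.strip().startswith("^*.")]

pattern_except = r"/[^/]+\.(" + "|".join(pattern_except_suffix) + r")$"
if pattern_except_name:
    pattern_except += r"|/(" + "|".join(pattern_except_name) + r")$"

print(pattern_include)                                          # ['*.py', '*.toml']
print(bool(re.search(pattern_except, "repo/docs/guide.md")))    # True  -> excluded
print(bool(re.search(pattern_except, "repo/src/main.py")))      # False -> kept
```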
diff --git a/spaces/Fernando22/freegpt-webui/client/js/theme-toggler.js b/spaces/Fernando22/freegpt-webui/client/js/theme-toggler.js
deleted file mode 100644
index 67e1a9501b70d54ab8a717f34983c012328e74a0..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/client/js/theme-toggler.js
+++ /dev/null
@@ -1,22 +0,0 @@
-var switch_theme_toggler = document.getElementById("theme-toggler");
-
-switch_theme_toggler.addEventListener("change", toggleTheme);
-
-function setTheme(themeName) {
- localStorage.setItem("theme", themeName);
- document.documentElement.className = themeName;
-}
-
-function toggleTheme() {
- var currentTheme = localStorage.getItem("theme");
- var newTheme = currentTheme === "theme-dark" ? "theme-light" : "theme-dark";
-
- setTheme(newTheme);
- switch_theme_toggler.checked = newTheme === "theme-dark";
-}
-
-(function () {
- var currentTheme = localStorage.getItem("theme") || "theme-dark";
- setTheme(currentTheme);
- switch_theme_toggler.checked = currentTheme === "theme-dark";
-})();
diff --git a/spaces/FourthBrainGenAI/GenerAd-AI/README.md b/spaces/FourthBrainGenAI/GenerAd-AI/README.md
deleted file mode 100644
index 221be040770a46171b3c51ef6f5eb54004394feb..0000000000000000000000000000000000000000
--- a/spaces/FourthBrainGenAI/GenerAd-AI/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: GenerAd AI
-emoji: 🔥
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: bigscience-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/monotonic_align/core.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/monotonic_align/core.py
deleted file mode 100644
index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
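A usage sketch for the alignment kernel above on a tiny random grid (the sizes are only illustrative); it assumes the usual VITS convention that the frame axis t_y is at least as long as the text axis t_x, and that all arrays are C-contiguous as the numba signature requires.

```python
import numpy as np

b, t_y, t_x = 1, 6, 4
values = np.random.randn(b, t_y, t_x).astype(np.float32)    # per-cell log-likelihoods
paths = np.zeros((b, t_y, t_x), dtype=np.int32)              # output alignment mask
t_ys = np.array([t_y], dtype=np.int32)
t_xs = np.array([t_x], dtype=np.int32)

maximum_path_jit(paths, values, t_ys, t_xs)                  # fills `paths` in place
print(paths[0])                                              # exactly one 1 per frame row
assert (paths[0].sum(axis=1) == 1).all()
```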
diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/Makefile b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/Makefile
deleted file mode 100644
index 67e4d4dedb0353540206d98305f76006806fcca4..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/Makefile
+++ /dev/null
@@ -1,54 +0,0 @@
-#
-# Makefile
-# Jiayuan Mao, 2019-01-09 13:59
-#
-
-SRC_DIR = csrc
-INC_DIR = csrc
-OBJ_DIR = build/obj
-TARGET = libpatchmatch.so
-
-LIB_TARGET = $(TARGET)
-INCLUDE_DIR = -I $(SRC_DIR) -I $(INC_DIR)
-
-CXX = $(ENVIRONMENT_OPTIONS) g++
-CXXFLAGS = -std=c++14
-CXXFLAGS += -Ofast -ffast-math -w
-# CXXFLAGS += -g
-CXXFLAGS += $(shell pkg-config --cflags opencv) -fPIC
-CXXFLAGS += $(INCLUDE_DIR)
-LDFLAGS = $(shell pkg-config --cflags --libs opencv) -shared -fPIC
-
-
-CXXSOURCES = $(shell find $(SRC_DIR)/ -name "*.cpp")
-OBJS = $(addprefix $(OBJ_DIR)/,$(CXXSOURCES:.cpp=.o))
-DEPFILES = $(OBJS:.o=.d)
-
-.PHONY: all clean rebuild test
-
-all: $(LIB_TARGET)
-
-$(OBJ_DIR)/%.o: %.cpp
- @echo "[CC] $< ..."
- @$(CXX) -c $< $(CXXFLAGS) -o $@
-
-$(OBJ_DIR)/%.d: %.cpp
- @mkdir -pv $(dir $@)
- @echo "[dep] $< ..."
- @$(CXX) $(INCLUDE_DIR) $(CXXFLAGS) -MM -MT "$(OBJ_DIR)/$(<:.cpp=.o) $(OBJ_DIR)/$(<:.cpp=.d)" "$<" > "$@"
-
-sinclude $(DEPFILES)
-
-$(LIB_TARGET): $(OBJS)
- @echo "[link] $(LIB_TARGET) ..."
- @$(CXX) $(OBJS) -o $@ $(CXXFLAGS) $(LDFLAGS)
-
-clean:
- rm -rf $(OBJ_DIR) $(LIB_TARGET)
-
-rebuild:
- +@make clean
- +@make
-
-# vim:ft=make
-#
diff --git a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/dist_util.py b/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/dist_util.py
deleted file mode 100644
index f665604d6baaf5df6008f131c86cf0779c8b208a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/dist_util.py
+++ /dev/null
@@ -1,82 +0,0 @@
-"""
-Helpers for distributed training.
-"""
-
-import io
-import os
-import socket
-
-import blobfile as bf
-from mpi4py import MPI
-import torch as th
-import torch.distributed as dist
-
-# Change this to reflect your cluster layout.
-# The GPU for a given rank is (rank % GPUS_PER_NODE).
-GPUS_PER_NODE = 8
-
-SETUP_RETRY_COUNT = 3
-
-
-def setup_dist():
- """
- Setup a distributed process group.
- """
- if dist.is_initialized():
- return
-
- comm = MPI.COMM_WORLD
- backend = "gloo" if not th.cuda.is_available() else "nccl"
-
- if backend == "gloo":
- hostname = "localhost"
- else:
- hostname = socket.gethostbyname(socket.getfqdn())
- os.environ["MASTER_ADDR"] = comm.bcast(hostname, root=0)
- os.environ["RANK"] = str(comm.rank)
- os.environ["WORLD_SIZE"] = str(comm.size)
-
- port = comm.bcast(_find_free_port(), root=0)
- os.environ["MASTER_PORT"] = str(port)
- dist.init_process_group(backend=backend, init_method="env://")
-
-
-def dev():
- """
- Get the device to use for torch.distributed.
- """
- if th.cuda.is_available():
- return th.device(f"cuda:{MPI.COMM_WORLD.Get_rank() % GPUS_PER_NODE}")
- return th.device("cpu")
-
-
-def load_state_dict(path, **kwargs):
- """
- Load a PyTorch file without redundant fetches across MPI ranks.
- """
- if MPI.COMM_WORLD.Get_rank() == 0:
- with bf.BlobFile(path, "rb") as f:
- data = f.read()
- else:
- data = None
- data = MPI.COMM_WORLD.bcast(data)
- return th.load(io.BytesIO(data), **kwargs)
-
-
-def sync_params(params):
- """
- Synchronize a sequence of Tensors across ranks from rank 0.
- """
- for p in params:
- with th.no_grad():
- dist.broadcast(p, 0)
-
-
-def _find_free_port():
- try:
- s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- s.bind(("", 0))
- s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
- return s.getsockname()[1]
- finally:
- s.close()
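A hedged sketch of how these helpers are usually combined in an entry point; the checkpoint path is illustrative, and the script is expected to be launched under MPI (e.g. mpiexec -n 8 python script.py) so that every rank runs the same code.

```python
import torch.distributed as dist

def main():
    setup_dist()                                        # one process group across MPI ranks
    device = dev()                                      # cuda:(rank % GPUS_PER_NODE) or cpu
    state = load_state_dict("checkpoints/model.pt",     # rank 0 reads, others receive via bcast
                            map_location=device)
    sync_params(state.values())                         # broadcast rank-0 tensors to all ranks
    print(f"rank {dist.get_rank()}: {len(state)} tensors ready on {device}")

if __name__ == "__main__":
    main()
```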
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py
deleted file mode 100644
index 36f1d62eba62bb9c3266864cd4250caedea95a21..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py
+++ /dev/null
@@ -1,52 +0,0 @@
-_base_ = './sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py'
-num_proposals = 300
-model = dict(
- rpn_head=dict(num_proposals=num_proposals),
- test_cfg=dict(
- _delete_=True, rpn=None, rcnn=dict(max_per_img=num_proposals)))
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-# augmentation strategy originates from DETR.
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(
- type='AutoAugment',
- policies=[[
- dict(
- type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
- (736, 1333), (768, 1333), (800, 1333)],
- multiscale_mode='value',
- keep_ratio=True)
- ],
- [
- dict(
- type='Resize',
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(
- type='RandomCrop',
- crop_type='absolute_range',
- crop_size=(384, 600),
- allow_negative_crop=True),
- dict(
- type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
- (576, 1333), (608, 1333), (640, 1333),
- (672, 1333), (704, 1333), (736, 1333),
- (768, 1333), (800, 1333)],
- multiscale_mode='value',
- override=True,
- keep_ratio=True)
- ]]),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
-]
-data = dict(train=dict(pipeline=train_pipeline))
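A brief sketch of how a config like the one above is typically loaded and inspected, assuming mmcv 1.x and the standard MMDetection config tree on disk; the path is simply the file shown here.

```python
from mmcv import Config

cfg = Config.fromfile(
    "configs/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py"
)
print(cfg.model.rpn_head.num_proposals)        # 300
print(cfg.model.test_cfg.rcnn.max_per_img)     # 300
for step in cfg.data.train.pipeline:           # the train_pipeline assembled above
    print(step["type"])
```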
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/__init__.py
deleted file mode 100644
index ce2930f62a0091e06b37575b96db2ae51ca7908e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import mmcv
-
-from .version import __version__, short_version
-
-
-def digit_version(version_str):
- digit_version = []
- for x in version_str.split('.'):
- if x.isdigit():
- digit_version.append(int(x))
- elif x.find('rc') != -1:
- patch_version = x.split('rc')
- digit_version.append(int(patch_version[0]) - 1)
- digit_version.append(int(patch_version[1]))
- return digit_version
-
-
-mmcv_minimum_version = '1.2.4'
-mmcv_maximum_version = '1.4.0'
-mmcv_version = digit_version(mmcv.__version__)
-
-
-assert (mmcv_version >= digit_version(mmcv_minimum_version)
- and mmcv_version <= digit_version(mmcv_maximum_version)), \
- f'MMCV=={mmcv.__version__} is used but incompatible. ' \
- f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.'
-
-__all__ = ['__version__', 'short_version']
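A quick sketch of what digit_version returns for the strings the version assertion above compares; the release-candidate handling is the subtle part, since 'rc' versions must sort below the corresponding final release.

```python
print(digit_version("1.2.4"))      # [1, 2, 4]
print(digit_version("1.4.0"))      # [1, 4, 0]
print(digit_version("1.3.0rc1"))   # [1, 3, -1, 1] -> sorts before [1, 3, 0]
assert digit_version("1.3.0rc1") < digit_version("1.3.0")
```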
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/align_resize.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/align_resize.py
deleted file mode 100644
index 3819df8d6b78d88eb53aa1323387a7425dbd8a86..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/align_resize.py
+++ /dev/null
@@ -1,218 +0,0 @@
-import mmcv
-import numpy as np
-
-from mmseg.datasets.builder import PIPELINES
-
-@PIPELINES.register_module()
-class AlignResize(object):
- """Resize images & seg. Align
- """
-
- def __init__(self,
- img_scale=None,
- multiscale_mode='range',
- ratio_range=None,
- keep_ratio=True,
- size_divisor=32):
- if img_scale is None:
- self.img_scale = None
- else:
- if isinstance(img_scale, list):
- self.img_scale = img_scale
- else:
- self.img_scale = [img_scale]
- assert mmcv.is_list_of(self.img_scale, tuple)
-
- if ratio_range is not None:
- # mode 1: given img_scale=None and a range of image ratio
- # mode 2: given a scale and a range of image ratio
- assert self.img_scale is None or len(self.img_scale) == 1
- else:
- # mode 3 and 4: given multiple scales or a range of scales
- assert multiscale_mode in ['value', 'range']
-
- self.multiscale_mode = multiscale_mode
- self.ratio_range = ratio_range
- self.keep_ratio = keep_ratio
- self.size_divisor = size_divisor
-
- @staticmethod
- def random_select(img_scales):
- """Randomly select an img_scale from given candidates.
- Args:
- img_scales (list[tuple]): Images scales for selection.
- Returns:
- (tuple, int): Returns a tuple ``(img_scale, scale_idx)``,
- where ``img_scale`` is the selected image scale and
- ``scale_idx`` is the selected index in the given candidates.
- """
-
- assert mmcv.is_list_of(img_scales, tuple)
- scale_idx = np.random.randint(len(img_scales))
- img_scale = img_scales[scale_idx]
- return img_scale, scale_idx
-
- @staticmethod
- def random_sample(img_scales):
- """Randomly sample an img_scale when ``multiscale_mode=='range'``.
- Args:
- img_scales (list[tuple]): Images scale range for sampling.
- There must be two tuples in img_scales, which specify the lower
- and upper bound of image scales.
- Returns:
- (tuple, None): Returns a tuple ``(img_scale, None)``, where
- ``img_scale`` is sampled scale and None is just a placeholder
- to be consistent with :func:`random_select`.
- """
-
- assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2
- img_scale_long = [max(s) for s in img_scales]
- img_scale_short = [min(s) for s in img_scales]
- long_edge = np.random.randint(
- min(img_scale_long),
- max(img_scale_long) + 1)
- short_edge = np.random.randint(
- min(img_scale_short),
- max(img_scale_short) + 1)
- img_scale = (long_edge, short_edge)
- return img_scale, None
-
- @staticmethod
- def random_sample_ratio(img_scale, ratio_range):
- """Randomly sample an img_scale when ``ratio_range`` is specified.
- A ratio will be randomly sampled from the range specified by
- ``ratio_range``. Then it would be multiplied with ``img_scale`` to
- generate sampled scale.
- Args:
- img_scale (tuple): Images scale base to multiply with ratio.
- ratio_range (tuple[float]): The minimum and maximum ratio to scale
- the ``img_scale``.
- Returns:
- (tuple, None): Returns a tuple ``(scale, None)``, where
- ``scale`` is sampled ratio multiplied with ``img_scale`` and
- None is just a placeholder to be consistent with
- :func:`random_select`.
- """
-
- assert isinstance(img_scale, tuple) and len(img_scale) == 2
- min_ratio, max_ratio = ratio_range
- assert min_ratio <= max_ratio
- ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio
- scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio)
- return scale, None
-
- def _random_scale(self, results):
- """Randomly sample an img_scale according to ``ratio_range`` and
- ``multiscale_mode``.
- If ``ratio_range`` is specified, a ratio will be sampled and be
- multiplied with ``img_scale``.
- If multiple scales are specified by ``img_scale``, a scale will be
- sampled according to ``multiscale_mode``.
- Otherwise, single scale will be used.
- Args:
- results (dict): Result dict from :obj:`dataset`.
- Returns:
- dict: Two new keys 'scale' and 'scale_idx' are added into
- ``results``, which would be used by subsequent pipelines.
- """
-
- if self.ratio_range is not None:
- if self.img_scale is None:
- h, w = results['img'].shape[:2]
- scale, scale_idx = self.random_sample_ratio((w, h),
- self.ratio_range)
- else:
- scale, scale_idx = self.random_sample_ratio(
- self.img_scale[0], self.ratio_range)
- elif len(self.img_scale) == 1:
- scale, scale_idx = self.img_scale[0], 0
- elif self.multiscale_mode == 'range':
- scale, scale_idx = self.random_sample(self.img_scale)
- elif self.multiscale_mode == 'value':
- scale, scale_idx = self.random_select(self.img_scale)
- else:
- raise NotImplementedError
-
- results['scale'] = scale
- results['scale_idx'] = scale_idx
-
- def _align(self, img, size_divisor, interpolation=None):
- align_h = int(np.ceil(img.shape[0] / size_divisor)) * size_divisor
- align_w = int(np.ceil(img.shape[1] / size_divisor)) * size_divisor
- if interpolation is None:
- img = mmcv.imresize(img, (align_w, align_h))
- else:
- img = mmcv.imresize(img, (align_w, align_h), interpolation=interpolation)
- return img
-
- def _resize_img(self, results):
- """Resize images with ``results['scale']``."""
- if self.keep_ratio:
- img, scale_factor = mmcv.imrescale(
- results['img'], results['scale'], return_scale=True)
- #### align ####
- img = self._align(img, self.size_divisor)
- # the w_scale and h_scale has minor difference
- # a real fix should be done in the mmcv.imrescale in the future
- new_h, new_w = img.shape[:2]
- h, w = results['img'].shape[:2]
- w_scale = new_w / w
- h_scale = new_h / h
- else:
- img, w_scale, h_scale = mmcv.imresize(
- results['img'], results['scale'], return_scale=True)
-
- h, w = img.shape[:2]
- assert int(np.ceil(h / self.size_divisor)) * self.size_divisor == h and \
- int(np.ceil(w / self.size_divisor)) * self.size_divisor == w, \
- "img size not align. h:{} w:{}".format(h,w)
- scale_factor = np.array([w_scale, h_scale, w_scale, h_scale],
- dtype=np.float32)
- results['img'] = img
- results['img_shape'] = img.shape
- results['pad_shape'] = img.shape # in case that there is no padding
- results['scale_factor'] = scale_factor
- results['keep_ratio'] = self.keep_ratio
-
- def _resize_seg(self, results):
- """Resize semantic segmentation map with ``results['scale']``."""
- for key in results.get('seg_fields', []):
- if self.keep_ratio:
- gt_seg = mmcv.imrescale(
- results[key], results['scale'], interpolation='nearest')
- gt_seg = self._align(gt_seg, self.size_divisor, interpolation='nearest')
- else:
- gt_seg = mmcv.imresize(
- results[key], results['scale'], interpolation='nearest')
- h, w = gt_seg.shape[:2]
- assert int(np.ceil(h / self.size_divisor)) * self.size_divisor == h and \
- int(np.ceil(w / self.size_divisor)) * self.size_divisor == w, \
- "gt_seg size not align. h:{} w:{}".format(h, w)
- results[key] = gt_seg
-
- def __call__(self, results):
- """Call function to resize images, bounding boxes, masks, semantic
- segmentation map.
- Args:
- results (dict): Result dict from loading pipeline.
- Returns:
- dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor',
- 'keep_ratio' keys are added into result dict.
- """
-
- if 'scale' not in results:
- self._random_scale(results)
- self._resize_img(results)
- self._resize_seg(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += (f'(img_scale={self.img_scale}, '
- f'multiscale_mode={self.multiscale_mode}, '
- f'ratio_range={self.ratio_range}, '
- f'keep_ratio={self.keep_ratio})')
- return repr_str
\ No newline at end of file
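To make the alignment rule above concrete, a small sketch of the rounding that _align applies: after rescaling, both sides are rounded up to the nearest multiple of size_divisor (32 by default), which is exactly what the asserts in _resize_img and _resize_seg verify.

```python
import math

def aligned_shape(h, w, size_divisor=32):
    # round each side up to the nearest multiple of size_divisor
    return (math.ceil(h / size_divisor) * size_divisor,
            math.ceil(w / size_divisor) * size_divisor)

print(aligned_shape(500, 333))   # (512, 352)
print(aligned_shape(512, 352))   # (512, 352) -- already aligned, unchanged
```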
diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/rotation_conversions.py b/spaces/Grezz/generate_human_motion/VQ-Trans/utils/rotation_conversions.py
deleted file mode 100644
index 1006e8a3117b231a7a456d5b826e76347fe0bfd4..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/rotation_conversions.py
+++ /dev/null
@@ -1,532 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
-# Check PYTORCH3D_LICENCE before use
-
-import functools
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-
-
-"""
-The transformation matrices returned from the functions in this file assume
-the points on which the transformation will be applied are column vectors.
-i.e. the R matrix is structured as
- R = [
- [Rxx, Rxy, Rxz],
- [Ryx, Ryy, Ryz],
- [Rzx, Rzy, Rzz],
- ] # (3, 3)
-This matrix can be applied to column vectors by post multiplication
-by the points e.g.
- points = [[0], [1], [2]] # (3 x 1) xyz coordinates of a point
- transformed_points = R * points
-To apply the same matrix to points which are row vectors, the R matrix
-can be transposed and pre multiplied by the points:
-e.g.
- points = [[0, 1, 2]] # (1 x 3) xyz coordinates of a point
- transformed_points = points * R.transpose(1, 0)
-"""
-
-
-def quaternion_to_matrix(quaternions):
- """
- Convert rotations given as quaternions to rotation matrices.
- Args:
- quaternions: quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- r, i, j, k = torch.unbind(quaternions, -1)
- two_s = 2.0 / (quaternions * quaternions).sum(-1)
-
- o = torch.stack(
- (
- 1 - two_s * (j * j + k * k),
- two_s * (i * j - k * r),
- two_s * (i * k + j * r),
- two_s * (i * j + k * r),
- 1 - two_s * (i * i + k * k),
- two_s * (j * k - i * r),
- two_s * (i * k - j * r),
- two_s * (j * k + i * r),
- 1 - two_s * (i * i + j * j),
- ),
- -1,
- )
- return o.reshape(quaternions.shape[:-1] + (3, 3))
-
-
-def _copysign(a, b):
- """
- Return a tensor where each element has the absolute value taken from the,
- corresponding element of a, with sign taken from the corresponding
- element of b. This is like the standard copysign floating-point operation,
- but is not careful about negative 0 and NaN.
- Args:
- a: source tensor.
- b: tensor whose signs will be used, of the same shape as a.
- Returns:
- Tensor of the same shape as a with the signs of b.
- """
- signs_differ = (a < 0) != (b < 0)
- return torch.where(signs_differ, -a, a)
-
-
-def _sqrt_positive_part(x):
- """
- Returns torch.sqrt(torch.max(0, x))
- but with a zero subgradient where x is 0.
- """
- ret = torch.zeros_like(x)
- positive_mask = x > 0
- ret[positive_mask] = torch.sqrt(x[positive_mask])
- return ret
-
-
-def matrix_to_quaternion(matrix):
- """
- Convert rotations given as rotation matrices to quaternions.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- Returns:
- quaternions with real part first, as tensor of shape (..., 4).
- """
- if matrix.size(-1) != 3 or matrix.size(-2) != 3:
- raise ValueError(f"Invalid rotation matrix shape f{matrix.shape}.")
- m00 = matrix[..., 0, 0]
- m11 = matrix[..., 1, 1]
- m22 = matrix[..., 2, 2]
- o0 = 0.5 * _sqrt_positive_part(1 + m00 + m11 + m22)
- x = 0.5 * _sqrt_positive_part(1 + m00 - m11 - m22)
- y = 0.5 * _sqrt_positive_part(1 - m00 + m11 - m22)
- z = 0.5 * _sqrt_positive_part(1 - m00 - m11 + m22)
- o1 = _copysign(x, matrix[..., 2, 1] - matrix[..., 1, 2])
- o2 = _copysign(y, matrix[..., 0, 2] - matrix[..., 2, 0])
- o3 = _copysign(z, matrix[..., 1, 0] - matrix[..., 0, 1])
- return torch.stack((o0, o1, o2, o3), -1)
-
-
-def _axis_angle_rotation(axis: str, angle):
- """
- Return the rotation matrices for one of the rotations about an axis
- of which Euler angles describe, for each value of the angle given.
- Args:
- axis: Axis label "X" or "Y" or "Z".
- angle: any shape tensor of Euler angles in radians
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
-
- cos = torch.cos(angle)
- sin = torch.sin(angle)
- one = torch.ones_like(angle)
- zero = torch.zeros_like(angle)
-
- if axis == "X":
- R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos)
- if axis == "Y":
- R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos)
- if axis == "Z":
- R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one)
-
- return torch.stack(R_flat, -1).reshape(angle.shape + (3, 3))
-
-
-def euler_angles_to_matrix(euler_angles, convention: str):
- """
- Convert rotations given as Euler angles in radians to rotation matrices.
- Args:
- euler_angles: Euler angles in radians as tensor of shape (..., 3).
- convention: Convention string of three uppercase letters from
- {"X", "Y", and "Z"}.
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- if euler_angles.dim() == 0 or euler_angles.shape[-1] != 3:
- raise ValueError("Invalid input euler angles.")
- if len(convention) != 3:
- raise ValueError("Convention must have 3 letters.")
- if convention[1] in (convention[0], convention[2]):
- raise ValueError(f"Invalid convention {convention}.")
- for letter in convention:
- if letter not in ("X", "Y", "Z"):
- raise ValueError(f"Invalid letter {letter} in convention string.")
- matrices = map(_axis_angle_rotation, convention, torch.unbind(euler_angles, -1))
- return functools.reduce(torch.matmul, matrices)
-
-
-def _angle_from_tan(
- axis: str, other_axis: str, data, horizontal: bool, tait_bryan: bool
-):
- """
- Extract the first or third Euler angle from the two members of
- the matrix which are positive constant times its sine and cosine.
- Args:
- axis: Axis label "X" or "Y" or "Z" for the angle we are finding.
- other_axis: Axis label "X" or "Y" or "Z" for the middle axis in the
- convention.
- data: Rotation matrices as tensor of shape (..., 3, 3).
- horizontal: Whether we are looking for the angle for the third axis,
- which means the relevant entries are in the same row of the
- rotation matrix. If not, they are in the same column.
- tait_bryan: Whether the first and third axes in the convention differ.
- Returns:
- Euler Angles in radians for each matrix in data as a tensor
- of shape (...).
- """
-
- i1, i2 = {"X": (2, 1), "Y": (0, 2), "Z": (1, 0)}[axis]
- if horizontal:
- i2, i1 = i1, i2
- even = (axis + other_axis) in ["XY", "YZ", "ZX"]
- if horizontal == even:
- return torch.atan2(data[..., i1], data[..., i2])
- if tait_bryan:
- return torch.atan2(-data[..., i2], data[..., i1])
- return torch.atan2(data[..., i2], -data[..., i1])
-
-
-def _index_from_letter(letter: str):
- if letter == "X":
- return 0
- if letter == "Y":
- return 1
- if letter == "Z":
- return 2
-
-
-def matrix_to_euler_angles(matrix, convention: str):
- """
- Convert rotations given as rotation matrices to Euler angles in radians.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- convention: Convention string of three uppercase letters.
- Returns:
- Euler angles in radians as tensor of shape (..., 3).
- """
- if len(convention) != 3:
- raise ValueError("Convention must have 3 letters.")
- if convention[1] in (convention[0], convention[2]):
- raise ValueError(f"Invalid convention {convention}.")
- for letter in convention:
- if letter not in ("X", "Y", "Z"):
- raise ValueError(f"Invalid letter {letter} in convention string.")
- if matrix.size(-1) != 3 or matrix.size(-2) != 3:
- raise ValueError(f"Invalid rotation matrix shape f{matrix.shape}.")
- i0 = _index_from_letter(convention[0])
- i2 = _index_from_letter(convention[2])
- tait_bryan = i0 != i2
- if tait_bryan:
- central_angle = torch.asin(
- matrix[..., i0, i2] * (-1.0 if i0 - i2 in [-1, 2] else 1.0)
- )
- else:
- central_angle = torch.acos(matrix[..., i0, i0])
-
- o = (
- _angle_from_tan(
- convention[0], convention[1], matrix[..., i2], False, tait_bryan
- ),
- central_angle,
- _angle_from_tan(
- convention[2], convention[1], matrix[..., i0, :], True, tait_bryan
- ),
- )
- return torch.stack(o, -1)
-
-
-def random_quaternions(
- n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate random quaternions representing rotations,
- i.e. versors with nonnegative real part.
- Args:
- n: Number of quaternions in a batch to return.
- dtype: Type to return.
- device: Desired device of returned tensor. Default:
- uses the current device for the default tensor type.
- requires_grad: Whether the resulting tensor should have the gradient
- flag set.
- Returns:
- Quaternions as tensor of shape (N, 4).
- """
- o = torch.randn((n, 4), dtype=dtype, device=device, requires_grad=requires_grad)
- s = (o * o).sum(1)
- o = o / _copysign(torch.sqrt(s), o[:, 0])[:, None]
- return o
-
-
-def random_rotations(
- n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate random rotations as 3x3 rotation matrices.
- Args:
- n: Number of rotation matrices in a batch to return.
- dtype: Type to return.
- device: Device of returned tensor. Default: if None,
- uses the current device for the default tensor type.
- requires_grad: Whether the resulting tensor should have the gradient
- flag set.
- Returns:
- Rotation matrices as tensor of shape (n, 3, 3).
- """
- quaternions = random_quaternions(
- n, dtype=dtype, device=device, requires_grad=requires_grad
- )
- return quaternion_to_matrix(quaternions)
-
-
-def random_rotation(
- dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate a single random 3x3 rotation matrix.
- Args:
- dtype: Type to return
- device: Device of returned tensor. Default: if None,
- uses the current device for the default tensor type
- requires_grad: Whether the resulting tensor should have the gradient
- flag set
- Returns:
- Rotation matrix as tensor of shape (3, 3).
- """
- return random_rotations(1, dtype, device, requires_grad)[0]
-
-
-def standardize_quaternion(quaternions):
- """
- Convert a unit quaternion to a standard form: one in which the real
- part is non negative.
- Args:
- quaternions: Quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Standardized quaternions as tensor of shape (..., 4).
- """
- return torch.where(quaternions[..., 0:1] < 0, -quaternions, quaternions)
-
-
-def quaternion_raw_multiply(a, b):
- """
- Multiply two quaternions.
- Usual torch rules for broadcasting apply.
- Args:
- a: Quaternions as tensor of shape (..., 4), real part first.
- b: Quaternions as tensor of shape (..., 4), real part first.
- Returns:
- The product of a and b, a tensor of quaternions shape (..., 4).
- """
- aw, ax, ay, az = torch.unbind(a, -1)
- bw, bx, by, bz = torch.unbind(b, -1)
- ow = aw * bw - ax * bx - ay * by - az * bz
- ox = aw * bx + ax * bw + ay * bz - az * by
- oy = aw * by - ax * bz + ay * bw + az * bx
- oz = aw * bz + ax * by - ay * bx + az * bw
- return torch.stack((ow, ox, oy, oz), -1)
-
-
-def quaternion_multiply(a, b):
- """
- Multiply two quaternions representing rotations, returning the quaternion
- representing their composition, i.e. the versor with nonnegative real part.
- Usual torch rules for broadcasting apply.
- Args:
- a: Quaternions as tensor of shape (..., 4), real part first.
- b: Quaternions as tensor of shape (..., 4), real part first.
- Returns:
- The product of a and b, a tensor of quaternions of shape (..., 4).
- """
- ab = quaternion_raw_multiply(a, b)
- return standardize_quaternion(ab)
-
-
-def quaternion_invert(quaternion):
- """
- Given a quaternion representing rotation, get the quaternion representing
- its inverse.
- Args:
- quaternion: Quaternions as tensor of shape (..., 4), with real part
- first, which must be versors (unit quaternions).
- Returns:
- The inverse, a tensor of quaternions of shape (..., 4).
- """
-
- return quaternion * quaternion.new_tensor([1, -1, -1, -1])
-
-
-def quaternion_apply(quaternion, point):
- """
- Apply the rotation given by a quaternion to a 3D point.
- Usual torch rules for broadcasting apply.
- Args:
- quaternion: Tensor of quaternions, real part first, of shape (..., 4).
- point: Tensor of 3D points of shape (..., 3).
- Returns:
- Tensor of rotated points of shape (..., 3).
- """
- if point.size(-1) != 3:
- raise ValueError(f"Points are not in 3D, f{point.shape}.")
- real_parts = point.new_zeros(point.shape[:-1] + (1,))
- point_as_quaternion = torch.cat((real_parts, point), -1)
- out = quaternion_raw_multiply(
- quaternion_raw_multiply(quaternion, point_as_quaternion),
- quaternion_invert(quaternion),
- )
- return out[..., 1:]
-
-
-def axis_angle_to_matrix(axis_angle):
- """
- Convert rotations given as axis/angle to rotation matrices.
- Args:
- axis_angle: Rotations given as a vector in axis angle form,
- as a tensor of shape (..., 3), where the magnitude is
- the angle turned anticlockwise in radians around the
- vector's direction.
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- return quaternion_to_matrix(axis_angle_to_quaternion(axis_angle))
-
-
-def matrix_to_axis_angle(matrix):
- """
- Convert rotations given as rotation matrices to axis/angle.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- Returns:
- Rotations given as a vector in axis angle form, as a tensor
- of shape (..., 3), where the magnitude is the angle
- turned anticlockwise in radians around the vector's
- direction.
- """
- return quaternion_to_axis_angle(matrix_to_quaternion(matrix))
-
-
-def axis_angle_to_quaternion(axis_angle):
- """
- Convert rotations given as axis/angle to quaternions.
- Args:
- axis_angle: Rotations given as a vector in axis angle form,
- as a tensor of shape (..., 3), where the magnitude is
- the angle turned anticlockwise in radians around the
- vector's direction.
- Returns:
- quaternions with real part first, as tensor of shape (..., 4).
- """
- angles = torch.norm(axis_angle, p=2, dim=-1, keepdim=True)
- half_angles = 0.5 * angles
- eps = 1e-6
- small_angles = angles.abs() < eps
- sin_half_angles_over_angles = torch.empty_like(angles)
- sin_half_angles_over_angles[~small_angles] = (
- torch.sin(half_angles[~small_angles]) / angles[~small_angles]
- )
- # for x small, sin(x/2) is about x/2 - (x/2)^3/6
- # so sin(x/2)/x is about 1/2 - (x*x)/48
- sin_half_angles_over_angles[small_angles] = (
- 0.5 - (angles[small_angles] * angles[small_angles]) / 48
- )
- quaternions = torch.cat(
- [torch.cos(half_angles), axis_angle * sin_half_angles_over_angles], dim=-1
- )
- return quaternions
-
-
-def quaternion_to_axis_angle(quaternions):
- """
- Convert rotations given as quaternions to axis/angle.
- Args:
- quaternions: quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Rotations given as a vector in axis angle form, as a tensor
- of shape (..., 3), where the magnitude is the angle
- turned anticlockwise in radians around the vector's
- direction.
- """
- norms = torch.norm(quaternions[..., 1:], p=2, dim=-1, keepdim=True)
- half_angles = torch.atan2(norms, quaternions[..., :1])
- angles = 2 * half_angles
- eps = 1e-6
- small_angles = angles.abs() < eps
- sin_half_angles_over_angles = torch.empty_like(angles)
- sin_half_angles_over_angles[~small_angles] = (
- torch.sin(half_angles[~small_angles]) / angles[~small_angles]
- )
- # for x small, sin(x/2) is about x/2 - (x/2)^3/6
- # so sin(x/2)/x is about 1/2 - (x*x)/48
- sin_half_angles_over_angles[small_angles] = (
- 0.5 - (angles[small_angles] * angles[small_angles]) / 48
- )
- return quaternions[..., 1:] / sin_half_angles_over_angles
-
-
-def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor:
- """
- Converts 6D rotation representation by Zhou et al. [1] to rotation matrix
- using Gram--Schmidt orthogonalisation per Section B of [1].
- Args:
- d6: 6D rotation representation, of size (*, 6)
- Returns:
- batch of rotation matrices of size (*, 3, 3)
- [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
- On the Continuity of Rotation Representations in Neural Networks.
- IEEE Conference on Computer Vision and Pattern Recognition, 2019.
- Retrieved from http://arxiv.org/abs/1812.07035
- """
-
- a1, a2 = d6[..., :3], d6[..., 3:]
- b1 = F.normalize(a1, dim=-1)
- b2 = a2 - (b1 * a2).sum(-1, keepdim=True) * b1
- b2 = F.normalize(b2, dim=-1)
- b3 = torch.cross(b1, b2, dim=-1)
- return torch.stack((b1, b2, b3), dim=-2)
-
-
-def matrix_to_rotation_6d(matrix: torch.Tensor) -> torch.Tensor:
- """
- Converts rotation matrices to 6D rotation representation by Zhou et al. [1]
- by dropping the last row. Note that 6D representation is not unique.
- Args:
- matrix: batch of rotation matrices of size (*, 3, 3)
- Returns:
- 6D rotation representation, of size (*, 6)
- [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
- On the Continuity of Rotation Representations in Neural Networks.
- IEEE Conference on Computer Vision and Pattern Recognition, 2019.
- Retrieved from http://arxiv.org/abs/1812.07035
- """
- return matrix[..., :2, :].clone().reshape(*matrix.size()[:-2], 6)
-
-def canonicalize_smplh(poses, trans = None):
- bs, nframes, njoints = poses.shape[:3]
-
- global_orient = poses[:, :, 0]
-
- # first global rotations
- rot2d = matrix_to_axis_angle(global_orient[:, 0])
- #rot2d[:, :2] = 0 # Remove the rotation along the vertical axis
- rot2d = axis_angle_to_matrix(rot2d)
-
- # Rotate the global rotation to eliminate Z rotations
- global_orient = torch.einsum("ikj,imkl->imjl", rot2d, global_orient)
-
- # Construct canonicalized version of x
- xc = torch.cat((global_orient[:, :, None], poses[:, :, 1:]), dim=2)
-
- if trans is not None:
- vel = trans[:, 1:] - trans[:, :-1]
- # Turn the translation as well
- vel = torch.einsum("ikj,ilk->ilj", rot2d, vel)
- trans = torch.cat((torch.zeros(bs, 1, 3, device=vel.device),
- torch.cumsum(vel, 1)), 1)
- return xc, trans
- else:
- return xc
-
-
\ No newline at end of file
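A quick illustration of the 6D representation used above: any non-degenerate 6-vector maps to a proper rotation matrix via the Gram-Schmidt step, and matrix → 6D → matrix is exact because the encoding is just the first two rows. A sketch, assuming rotation_6d_to_matrix and matrix_to_rotation_6d from the file above are in scope:

```python
import torch

d6 = torch.randn(8, 6)                     # arbitrary (almost surely non-degenerate) inputs
R = rotation_6d_to_matrix(d6)              # (8, 3, 3)

eye = torch.eye(3).expand(8, 3, 3)
assert torch.allclose(R @ R.transpose(-1, -2), eye, atol=1e-5)  # orthonormal rows
assert torch.allclose(torch.det(R), torch.ones(8), atol=1e-5)   # determinant +1

# matrix -> 6D -> matrix recovers the rotation exactly (up to float error)
assert torch.allclose(rotation_6d_to_matrix(matrix_to_rotation_6d(R)), R, atol=1e-5)
```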
diff --git a/spaces/Grezz/generate_human_motion/pyrender/tests/unit/test_scenes.py b/spaces/Grezz/generate_human_motion/pyrender/tests/unit/test_scenes.py
deleted file mode 100644
index d85dd714cb5d842ea12dee4140adfd7db55c9c01..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/pyrender/tests/unit/test_scenes.py
+++ /dev/null
@@ -1,235 +0,0 @@
-import numpy as np
-import pytest
-import trimesh
-
-from pyrender import (Mesh, PerspectiveCamera, DirectionalLight,
- SpotLight, PointLight, Scene, Node, OrthographicCamera)
-
-
-def test_scenes():
-
- # Basics
- s = Scene()
- assert np.allclose(s.bg_color, np.ones(4))
- assert np.allclose(s.ambient_light, np.zeros(3))
- assert len(s.nodes) == 0
- assert s.name is None
- s.name = 'asdf'
- s.bg_color = None
- s.ambient_light = None
- assert np.allclose(s.bg_color, np.ones(4))
- assert np.allclose(s.ambient_light, np.zeros(3))
-
- assert s.nodes == set()
- assert s.cameras == set()
- assert s.lights == set()
- assert s.point_lights == set()
- assert s.spot_lights == set()
- assert s.directional_lights == set()
- assert s.meshes == set()
- assert s.camera_nodes == set()
- assert s.light_nodes == set()
- assert s.point_light_nodes == set()
- assert s.spot_light_nodes == set()
- assert s.directional_light_nodes == set()
- assert s.mesh_nodes == set()
- assert s.main_camera_node is None
- assert np.all(s.bounds == 0)
- assert np.all(s.centroid == 0)
- assert np.all(s.extents == 0)
- assert np.all(s.scale == 0)
-
- # From trimesh scene
- tms = trimesh.load('tests/data/WaterBottle.glb')
- s = Scene.from_trimesh_scene(tms)
- assert len(s.meshes) == 1
- assert len(s.mesh_nodes) == 1
-
- # Test bg color formatting
- s = Scene(bg_color=[0, 1.0, 0])
- assert np.allclose(s.bg_color, np.array([0.0, 1.0, 0.0, 1.0]))
-
- # Test constructor for nodes
- n1 = Node()
- n2 = Node()
- n3 = Node()
- nodes = [n1, n2, n3]
- s = Scene(nodes=nodes)
- n1.children.append(n2)
- s = Scene(nodes=nodes)
- n3.children.append(n2)
- with pytest.raises(ValueError):
- s = Scene(nodes=nodes)
- n3.children = []
- n2.children.append(n3)
- n3.children.append(n2)
- with pytest.raises(ValueError):
- s = Scene(nodes=nodes)
-
- # Test node accessors
- n1 = Node()
- n2 = Node()
- n3 = Node()
- nodes = [n1, n2]
- s = Scene(nodes=nodes)
- assert s.has_node(n1)
- assert s.has_node(n2)
- assert not s.has_node(n3)
-
- # Test node poses
- for n in nodes:
- assert np.allclose(s.get_pose(n), np.eye(4))
- with pytest.raises(ValueError):
- s.get_pose(n3)
- with pytest.raises(ValueError):
- s.set_pose(n3, np.eye(4))
- tf = np.eye(4)
- tf[:3,3] = np.ones(3)
- s.set_pose(n1, tf)
- assert np.allclose(s.get_pose(n1), tf)
- assert np.allclose(s.get_pose(n2), np.eye(4))
-
- nodes = [n1, n2, n3]
- tf2 = np.eye(4)
- tf2[:3,:3] = np.diag([-1,-1,1])
- n1.children.append(n2)
- n1.matrix = tf
- n2.matrix = tf2
- s = Scene(nodes=nodes)
- assert np.allclose(s.get_pose(n1), tf)
- assert np.allclose(s.get_pose(n2), tf.dot(tf2))
- assert np.allclose(s.get_pose(n3), np.eye(4))
-
- n1 = Node()
- n2 = Node()
- n3 = Node()
- n1.children.append(n2)
- s = Scene()
- s.add_node(n1)
- with pytest.raises(ValueError):
- s.add_node(n2)
- s.set_pose(n1, tf)
- assert np.allclose(s.get_pose(n1), tf)
- assert np.allclose(s.get_pose(n2), tf)
- s.set_pose(n2, tf2)
- assert np.allclose(s.get_pose(n2), tf.dot(tf2))
-
- # Test node removal
- n1 = Node()
- n2 = Node()
- n3 = Node()
- n1.children.append(n2)
- n2.children.append(n3)
- s = Scene(nodes=[n1, n2, n3])
- s.remove_node(n2)
- assert len(s.nodes) == 1
- assert n1 in s.nodes
- assert len(n1.children) == 0
- assert len(n2.children) == 1
- s.add_node(n2, parent_node=n1)
- assert len(n1.children) == 1
- n1.matrix = tf
- n3.matrix = tf2
- assert np.allclose(s.get_pose(n3), tf.dot(tf2))
-
- # Now test ADD function
- s = Scene()
- m = Mesh([], name='m')
- cp = PerspectiveCamera(yfov=2.0)
- co = OrthographicCamera(xmag=1.0, ymag=1.0)
- dl = DirectionalLight()
- pl = PointLight()
- sl = SpotLight()
-
- n1 = s.add(m, name='mn')
- assert n1.mesh == m
- assert len(s.nodes) == 1
- assert len(s.mesh_nodes) == 1
- assert n1 in s.mesh_nodes
- assert len(s.meshes) == 1
- assert m in s.meshes
- assert len(s.get_nodes(node=n2)) == 0
- n2 = s.add(m, pose=tf)
- assert len(s.nodes) == len(s.mesh_nodes) == 2
- assert len(s.meshes) == 1
- assert len(s.get_nodes(node=n1)) == 1
- assert len(s.get_nodes(node=n1, name='mn')) == 1
- assert len(s.get_nodes(name='mn')) == 1
- assert len(s.get_nodes(obj=m)) == 2
- assert len(s.get_nodes(obj=m, obj_name='m')) == 2
- assert len(s.get_nodes(obj=co)) == 0
- nsl = s.add(sl, name='sln')
- npl = s.add(pl, parent_name='sln')
- assert nsl.children[0] == npl
- ndl = s.add(dl, parent_node=npl)
- assert npl.children[0] == ndl
- nco = s.add(co)
- ncp = s.add(cp)
-
- assert len(s.light_nodes) == len(s.lights) == 3
- assert len(s.point_light_nodes) == len(s.point_lights) == 1
- assert npl in s.point_light_nodes
- assert len(s.spot_light_nodes) == len(s.spot_lights) == 1
- assert nsl in s.spot_light_nodes
- assert len(s.directional_light_nodes) == len(s.directional_lights) == 1
- assert ndl in s.directional_light_nodes
- assert len(s.cameras) == len(s.camera_nodes) == 2
- assert s.main_camera_node == nco
- s.main_camera_node = ncp
- s.remove_node(ncp)
- assert len(s.cameras) == len(s.camera_nodes) == 1
- assert s.main_camera_node == nco
- s.remove_node(n2)
- assert len(s.meshes) == 1
- s.remove_node(n1)
- assert len(s.meshes) == 0
- s.remove_node(nsl)
- assert len(s.lights) == 0
- s.remove_node(nco)
- assert s.main_camera_node is None
-
- s.add_node(n1)
- s.clear()
- assert len(s.nodes) == 0
-
- # Trigger final errors
- with pytest.raises(ValueError):
- s.main_camera_node = None
- with pytest.raises(ValueError):
- s.main_camera_node = ncp
- with pytest.raises(ValueError):
- s.add(m, parent_node=n1)
- with pytest.raises(ValueError):
- s.add(m, name='asdf')
- s.add(m, name='asdf')
- s.add(m, parent_name='asdf')
- with pytest.raises(ValueError):
- s.add(m, parent_name='asfd')
- with pytest.raises(TypeError):
- s.add(None)
-
- s.clear()
- # Test bounds
- m1 = Mesh.from_trimesh(trimesh.creation.box())
- m2 = Mesh.from_trimesh(trimesh.creation.box())
- m3 = Mesh.from_trimesh(trimesh.creation.box())
- n1 = Node(mesh=m1)
- n2 = Node(mesh=m2, translation=[1.0, 0.0, 0.0])
- n3 = Node(mesh=m3, translation=[0.5, 0.0, 1.0])
- s.add_node(n1)
- s.add_node(n2)
- s.add_node(n3)
- assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [1.5, 0.5, 1.5]])
- s.clear()
- s.add_node(n1)
- s.add_node(n2, parent_node=n1)
- s.add_node(n3, parent_node=n2)
- assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [2.0, 0.5, 1.5]])
- tf = np.eye(4)
- tf[:3,3] = np.ones(3)
- s.set_pose(n3, tf)
- assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [2.5, 1.5, 1.5]])
- s.remove_node(n2)
- assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [0.5, 0.5, 0.5]])
- s.clear()
- assert np.allclose(s.bounds, 0.0)
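The pose bookkeeping exercised by the test above reduces to composing 4x4 homogeneous transforms down the node tree: a child's world pose is the parent's world matrix times the child's local matrix. A numpy-only sketch of that composition, using the same tf/tf2 pair as the test:

```python
import numpy as np

tf = np.eye(4)                     # parent local pose: translate by (1, 1, 1)
tf[:3, 3] = np.ones(3)

tf2 = np.eye(4)                    # child local pose: 180-degree rotation about Z
tf2[:3, :3] = np.diag([-1, -1, 1])

world_child = tf.dot(tf2)          # what Scene.get_pose returns for the child node

assert np.allclose(world_child[:3, 3], [1, 1, 1])              # inherits parent translation
assert np.allclose(world_child[:3, :3], np.diag([-1, -1, 1]))  # keeps its own rotation

# A point at the child's local origin lands at the parent's translation in world space
assert np.allclose(world_child @ np.array([0.0, 0.0, 0.0, 1.0]), [1, 1, 1, 1])
```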
diff --git a/spaces/HALLA/HALL-E/README.md b/spaces/HALLA/HALL-E/README.md
deleted file mode 100644
index 0d1b68701b204ab989b2ff1f812f20b0ce3e4c01..0000000000000000000000000000000000000000
--- a/spaces/HALLA/HALL-E/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: HALL E
-emoji: 👀
-colorFrom: green
-colorTo: yellow
-sdk: static
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py
deleted file mode 100644
index 27792ebda842057e33fed3dc53dd9d8a594d0483..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py
+++ /dev/null
@@ -1,637 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-from enum import Enum, auto
-import math
-import numpy as np
-from typing import Tuple, List, Optional, Dict
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import autograd
-
-from fairseq import checkpoint_utils, utils
-from fairseq.dataclass import FairseqDataclass
-from fairseq.models import BaseFairseqModel, register_model
-from fairseq.modules import (
- SamePad,
- TransposeLast,
-)
-
-
-class SegmentationType(Enum):
- NONE = auto()
- RANDOM = auto()
- UNIFORM_RANDOM = auto()
- UNIFORM_RANDOM_JOIN = auto()
- JOIN = auto()
-
-
-@dataclass
-class SegmentationConfig(FairseqDataclass):
- type: SegmentationType = SegmentationType.NONE
- subsample_rate: float = 0.25
- mean_pool: bool = True
- mean_pool_join: bool = False
- remove_zeros: bool = False
-
-
-@dataclass
-class Wav2vec_UConfig(FairseqDataclass):
-
- discriminator_kernel: int = 3
- discriminator_dilation: int = 1
- discriminator_dim: int = 256
- discriminator_causal: bool = True
- discriminator_linear_emb: bool = False
- discriminator_depth: int = 1
- discriminator_max_pool: bool = False
- discriminator_act_after_linear: bool = False
- discriminator_dropout: float = 0.0
- discriminator_spectral_norm: bool = False
- discriminator_weight_norm: bool = False
-
- generator_kernel: int = 4
- generator_dilation: int = 1
- generator_stride: int = 1
- generator_bias: bool = False
- generator_dropout: float = 0.0
-
- blank_weight: float = 0
- blank_mode: str = "add"
- blank_is_sil: bool = False
- no_softmax: bool = False
-
- smoothness_weight: float = 0.0
- smoothing: float = 0.0
- smoothing_one_sided: bool = False
- gradient_penalty: float = 0.0
- probabilistic_grad_penalty_slicing: bool = False
- code_penalty: float = 0.0
- gumbel: bool = False
- hard_gumbel: bool = True
- temp: Tuple[float, float, float] = (2, 0.1, 0.99995)
- input_dim: int = 128
-
- segmentation: SegmentationConfig = SegmentationConfig()
-
-
-class Segmenter(nn.Module):
- cfg: SegmentationConfig
-
- def __init__(self, cfg: SegmentationConfig):
- super().__init__()
- self.cfg = cfg
- self.subsample_rate = cfg.subsample_rate
-
- def pre_segment(self, dense_x, dense_padding_mask):
- return dense_x, dense_padding_mask
-
- def logit_segment(self, logits, padding_mask):
- return logits, padding_mask
-
-
-class RandomSegmenter(Segmenter):
- def pre_segment(self, dense_x, dense_padding_mask):
- target_num = math.ceil(dense_x.size(1) * self.subsample_rate)
- ones = torch.ones(dense_x.shape[:-1], device=dense_x.device)
- indices, _ = ones.multinomial(target_num).sort(dim=-1)
- indices_ld = indices.unsqueeze(-1).expand(-1, -1, dense_x.size(-1))
- dense_x = dense_x.gather(1, indices_ld)
- dense_padding_mask = dense_padding_mask.gather(1, index=indices)
- return dense_x, dense_padding_mask
-
-
-class UniformRandomSegmenter(Segmenter):
- def pre_segment(self, dense_x, dense_padding_mask):
- bsz, tsz, fsz = dense_x.shape
-
- target_num = math.ceil(tsz * self.subsample_rate)
-
- rem = tsz % target_num
-
- if rem > 0:
- dense_x = F.pad(dense_x, [0, 0, 0, target_num - rem])
- dense_padding_mask = F.pad(
- dense_padding_mask, [0, target_num - rem], value=True
- )
-
- dense_x = dense_x.view(bsz, target_num, -1, fsz)
- dense_padding_mask = dense_padding_mask.view(bsz, target_num, -1)
-
- if self.cfg.mean_pool:
- dense_x = dense_x.mean(dim=-2)
- dense_padding_mask = dense_padding_mask.all(dim=-1)
- else:
- ones = torch.ones((bsz, dense_x.size(2)), device=dense_x.device)
- indices = ones.multinomial(1)
- indices = indices.unsqueeze(-1).expand(-1, target_num, -1)
- indices_ld = indices.unsqueeze(-1).expand(-1, -1, -1, fsz)
- dense_x = dense_x.gather(2, indices_ld).reshape(bsz, -1, fsz)
- dense_padding_mask = dense_padding_mask.gather(2, index=indices).reshape(
- bsz, -1
- )
- return dense_x, dense_padding_mask
-
-
-class JoinSegmenter(Segmenter):
- def logit_segment(self, logits, padding_mask):
- preds = logits.argmax(dim=-1)
-
- if padding_mask.any():
- preds[padding_mask] = -1 # mark pad
- uniques = []
-
- bsz, tsz, csz = logits.shape
-
- for p in preds:
- uniques.append(
- p.cpu().unique_consecutive(return_inverse=True, return_counts=True)
- )
-
- new_tsz = max(u[0].numel() for u in uniques)
- new_logits = logits.new_zeros(bsz, new_tsz, csz)
- new_pad = padding_mask.new_zeros(bsz, new_tsz)
-
- for b in range(bsz):
- u, idx, c = uniques[b]
- keep = u != -1
-
- if self.cfg.remove_zeros:
- keep.logical_and_(u != 0)
-
- if self.training and not self.cfg.mean_pool_join:
- u[0] = 0
- u[1:] = c.cumsum(0)[:-1]
- m = c > 1
- r = torch.rand(m.sum())
- o = (c[m] * r).long()
- u[m] += o
- new_logits[b, : u.numel()] = logits[b, u]
- else:
- new_logits[b].index_add_(
- dim=0, index=idx.to(new_logits.device), source=logits[b]
- )
- new_logits[b, : c.numel()] /= c.unsqueeze(-1).to(new_logits.device)
-
- new_sz = keep.sum()
- if not keep.all():
- kept_logits = new_logits[b, : c.numel()][keep]
- new_logits[b, :new_sz] = kept_logits
-
- if new_sz < new_tsz:
- pad = new_tsz - new_sz
- new_logits[b, -pad:] = 0
- new_pad[b, -pad:] = True
-
- return new_logits, new_pad
-
-
-class UniformRandomJoinSegmenter(UniformRandomSegmenter, JoinSegmenter):
- pass
-
-
-SEGMENT_FACTORY = {
- SegmentationType.NONE: Segmenter,
- SegmentationType.RANDOM: RandomSegmenter,
- SegmentationType.UNIFORM_RANDOM: UniformRandomSegmenter,
- SegmentationType.UNIFORM_RANDOM_JOIN: UniformRandomJoinSegmenter,
- SegmentationType.JOIN: JoinSegmenter,
-}
-
-
-class Discriminator(nn.Module):
- def __init__(self, dim, cfg: Wav2vec_UConfig):
- super().__init__()
-
- inner_dim = cfg.discriminator_dim
- kernel = cfg.discriminator_kernel
- dilation = cfg.discriminator_dilation
- self.max_pool = cfg.discriminator_max_pool
-
- if cfg.discriminator_causal:
- padding = kernel - 1
- else:
- padding = kernel // 2
-
- def make_conv(in_d, out_d, k, p=0, has_dilation=True):
- conv = nn.Conv1d(
- in_d,
- out_d,
- kernel_size=k,
- padding=p,
- dilation=dilation if has_dilation else 1,
- )
- if cfg.discriminator_spectral_norm:
- conv = nn.utils.spectral_norm(conv)
- elif cfg.discriminator_weight_norm:
- conv = nn.utils.weight_norm(conv)
- return conv
-
- inner_net = [
- nn.Sequential(
- make_conv(inner_dim, inner_dim, kernel, padding),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- nn.Dropout(cfg.discriminator_dropout),
- nn.GELU(),
- )
- for _ in range(cfg.discriminator_depth - 1)
- ] + [
- make_conv(inner_dim, 1, kernel, padding, has_dilation=False),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- ]
-
- if cfg.discriminator_linear_emb:
- emb_net = [make_conv(dim, inner_dim, 1)]
- else:
- emb_net = [
- make_conv(dim, inner_dim, kernel, padding),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- ]
-
- if cfg.discriminator_act_after_linear:
- emb_net.append(nn.GELU())
-
- self.net = nn.Sequential(
- *emb_net,
- nn.Dropout(cfg.discriminator_dropout),
- *inner_net,
- )
-
- def forward(self, x, padding_mask):
- x = x.transpose(1, 2) # BTC -> BCT
- x = self.net(x)
- x = x.transpose(1, 2)
- x_sz = x.size(1)
- if padding_mask is not None and padding_mask.any() and padding_mask.dim() > 1:
- padding_mask = padding_mask[:, : x.size(1)]
- x[padding_mask] = float("-inf") if self.max_pool else 0
- x_sz = x_sz - padding_mask.sum(dim=-1)
- x = x.squeeze(-1)
- if self.max_pool:
- x, _ = x.max(dim=-1)
- else:
- x = x.sum(dim=-1)
- x = x / x_sz
- return x
-
-
-class Generator(nn.Module):
- def __init__(self, input_dim, output_dim, cfg: Wav2vec_UConfig):
- super().__init__()
-
- self.cfg = cfg
- self.output_dim = output_dim
- self.stride = cfg.generator_stride
- self.dropout = nn.Dropout(cfg.generator_dropout)
-
- padding = cfg.generator_kernel // 2
- self.proj = nn.Sequential(
- TransposeLast(),
- nn.Conv1d(
- input_dim,
- output_dim,
- kernel_size=cfg.generator_kernel,
- stride=cfg.generator_stride,
- dilation=cfg.generator_dilation,
- padding=padding,
- bias=cfg.generator_bias,
- ),
- TransposeLast(),
- )
-
- def forward(self, dense_x, tokens, dense_padding_mask):
- dense_x = self.dropout(dense_x)
-
- dense_x = self.proj(dense_x)
- if self.stride > 1:
- dense_padding_mask = dense_padding_mask[:, :: self.stride]
-
- if dense_padding_mask.size(1) != dense_x.size(1):
- new_padding = dense_padding_mask.new_zeros(dense_x.shape[:-1])
- diff = new_padding.size(1) - dense_padding_mask.size(1)
- assert (
- diff > 0
- ), f"{new_padding.shape}, {dense_padding_mask.shape}, {dense_x.shape}, {diff}"
- if diff > 0:
- new_padding[:, diff:] = dense_padding_mask
- else:
- assert diff < 0
- new_padding = dense_padding_mask[:, :diff]
-
- dense_padding_mask = new_padding
-
- result = {}
-
- token_x = None
- if tokens is not None:
- token_x = dense_x.new_zeros(tokens.numel(), self.output_dim)
- token_x.scatter_(1, tokens.view(-1, 1).long(), 1)
- token_x = token_x.view(tokens.shape + (self.output_dim,))
-
- result["dense_x"] = dense_x
- result["token_x"] = token_x
- result["dense_padding_mask"] = dense_padding_mask
-
- return result
-
-
-@register_model("wav2vec_u", dataclass=Wav2vec_UConfig)
-class Wav2vec_U(BaseFairseqModel):
- def calc_gradient_penalty(self, real_data, fake_data):
-
- b_size = min(real_data.size(0), fake_data.size(0))
- t_size = min(real_data.size(1), fake_data.size(1))
-
- if self.cfg.probabilistic_grad_penalty_slicing:
-
- def get_slice(data, dim, target_size):
-
- size = data.size(dim)
- diff = size - target_size
- if diff <= 0:
- return data
-
- start = np.random.randint(0, diff + 1)
- return data.narrow(dim=dim, start=start, length=target_size)
-
- real_data = get_slice(real_data, 0, b_size)
- real_data = get_slice(real_data, 1, t_size)
- fake_data = get_slice(fake_data, 0, b_size)
- fake_data = get_slice(fake_data, 1, t_size)
-
- else:
- real_data = real_data[:b_size, :t_size]
- fake_data = fake_data[:b_size, :t_size]
-
- alpha = torch.rand(real_data.size(0), 1, 1)
- alpha = alpha.expand(real_data.size())
- alpha = alpha.to(real_data.device)
-
- interpolates = alpha * real_data + ((1 - alpha) * fake_data)
-
- disc_interpolates = self.discriminator(interpolates, None)
-
- gradients = autograd.grad(
- outputs=disc_interpolates,
- inputs=interpolates,
- grad_outputs=torch.ones(disc_interpolates.size(), device=real_data.device),
- create_graph=True,
- retain_graph=True,
- only_inputs=True,
- )[0]
-
- gradient_penalty = (gradients.norm(2, dim=1) - 1) ** 2
- return gradient_penalty
-
- def set_num_updates(self, num_updates):
- super().set_num_updates(num_updates)
- self.update_num = num_updates
- self.curr_temp = max(
- self.max_temp * self.temp_decay ** num_updates, self.min_temp
- )
-
- def discrim_step(self, num_updates):
- return num_updates % 2 == 1
-
- def get_groups_for_update(self, num_updates):
- return "discriminator" if self.discrim_step(num_updates) else "generator"
-
- def __init__(self, cfg: Wav2vec_UConfig, target_dict):
- super().__init__()
-
- self.cfg = cfg
- self.zero_index = target_dict.index("") if "" in target_dict else 0
- self.smoothness_weight = cfg.smoothness_weight
-
- output_size = len(target_dict)
- self.pad = target_dict.pad()
- self.eos = target_dict.eos()
- self.smoothing = cfg.smoothing
- self.smoothing_one_sided = cfg.smoothing_one_sided
- self.no_softmax = cfg.no_softmax
- self.gumbel = cfg.gumbel
- self.hard_gumbel = cfg.hard_gumbel
- self.last_acc = None
-
- self.gradient_penalty = cfg.gradient_penalty
- self.code_penalty = cfg.code_penalty
- self.blank_weight = cfg.blank_weight
- self.blank_mode = cfg.blank_mode
- self.blank_index = target_dict.index("") if cfg.blank_is_sil else 0
- assert self.blank_index != target_dict.unk()
-
- self.discriminator = Discriminator(output_size, cfg)
- for p in self.discriminator.parameters():
- p.param_group = "discriminator"
-
- self.pca_A = self.pca_b = None
- d = cfg.input_dim
-
- self.segmenter = SEGMENT_FACTORY[cfg.segmentation.type](cfg.segmentation)
-
- self.generator = Generator(d, output_size, cfg)
-
- for p in self.generator.parameters():
- p.param_group = "generator"
-
- for p in self.segmenter.parameters():
- p.param_group = "generator"
-
- self.max_temp, self.min_temp, self.temp_decay = cfg.temp
- self.curr_temp = self.max_temp
- self.update_num = 0
-
- @classmethod
- def build_model(cls, cfg, task):
- return cls(cfg, task.target_dictionary)
-
- def get_logits(
- self,
- net_output: Optional[Dict[str, List[Optional[torch.Tensor]]]],
- normalize: bool = False,
- ):
- logits = net_output["logits"]
-
- if self.blank_weight != 0:
- if self.blank_mode == "add":
- logits[..., self.blank_index] += self.blank_weight
- elif self.blank_mode == "set":
- logits[..., self.blank_index] = self.blank_weight
- else:
- raise Exception(f"invalid blank mode {self.blank_mode}")
-
- padding = net_output["padding_mask"]
- if padding.any():
- logits[padding] = float("-inf")
- logits[padding, self.blank_index] = float("inf")  # index in place; chaining logits[padding][...] writes to a copy
-
- if normalize:
- logits = utils.log_softmax(logits.float(), dim=-1)
-
- return logits.transpose(0, 1)
-
- def get_normalized_probs(
- self,
- net_output: Tuple[
- torch.Tensor, Optional[Dict[str, List[Optional[torch.Tensor]]]]
- ],
- log_probs: bool,
- sample: Optional[Dict[str, torch.Tensor]] = None,
- ):
- logits = self.get_logits(net_output)
-
- probs = super().get_normalized_probs(logits, log_probs, sample)
- # BTC -> TBC for ctc
- probs = probs.transpose(0, 1)
- return probs
-
- def normalize(self, dense_x):
-
- bsz, tsz, csz = dense_x.shape
-
- if dense_x.numel() == 0:
- raise Exception(dense_x.shape)
- _, k = dense_x.max(-1)
- hard_x = (
- dense_x.new_zeros(bsz * tsz, csz)
- .scatter_(-1, k.view(-1, 1), 1.0)
- .view(-1, csz)
- )
- hard_probs = torch.mean(hard_x.float(), dim=0)
- code_perplexity = torch.exp(
- -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1)
- )
-
- avg_probs = torch.softmax(dense_x.reshape(-1, csz).float(), dim=-1).mean(dim=0)
- prob_perplexity = torch.exp(
- -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1)
- )
-
- if not self.no_softmax:
- if self.training and self.gumbel:
- dense_x = F.gumbel_softmax(
- dense_x.float(), tau=self.curr_temp, hard=self.hard_gumbel
- ).type_as(dense_x)
- else:
- dense_x = dense_x.softmax(-1)
-
- return dense_x, code_perplexity, prob_perplexity
-
- def forward(
- self,
- features,
- padding_mask,
- random_label=None,
- dense_x_only=False,
- segment=True,
- ):
- if segment:
- features, padding_mask = self.segmenter.pre_segment(features, padding_mask)
-
- orig_size = features.size(0) * features.size(1) - padding_mask.sum()
-
- gen_result = self.generator(features, random_label, padding_mask)
-
- orig_dense_x, token_x = gen_result["dense_x"], gen_result["token_x"]
- orig_dense_padding_mask = gen_result["dense_padding_mask"]
-
- if segment:
- dense_x, dense_padding_mask = self.segmenter.logit_segment(
- orig_dense_x, orig_dense_padding_mask
- )
- else:
- dense_x = orig_dense_x
- dense_padding_mask = orig_dense_padding_mask
-
- dense_logits = dense_x
- prob_perplexity = None
- code_perplexity = None
-
- if not (self.no_softmax and dense_x_only):
- dense_x, code_perplexity, prob_perplexity = self.normalize(dense_logits)
-
- if dense_x_only or self.discriminator is None:
- return {
- "logits": dense_x,
- "padding_mask": dense_padding_mask,
- }
-
- token_padding_mask = random_label == self.pad
-
- dense_y = self.discriminator(dense_x, dense_padding_mask)
- token_y = self.discriminator(token_x, token_padding_mask)
-
- sample_size = features.size(0)
-
- d_step = self.discrim_step(self.update_num)
-
- fake_smooth = self.smoothing
- real_smooth = self.smoothing
- if self.smoothing_one_sided:
- fake_smooth = 0
-
- zero_loss = None
- smoothness_loss = None
- code_pen = None
-
- if d_step:
- loss_dense = F.binary_cross_entropy_with_logits(
- dense_y,
- dense_y.new_ones(dense_y.shape) - fake_smooth,
- reduction="sum",
- )
- loss_token = F.binary_cross_entropy_with_logits(
- token_y,
- token_y.new_zeros(token_y.shape) + real_smooth,
- reduction="sum",
- )
- if self.training and self.gradient_penalty > 0:
- grad_pen = self.calc_gradient_penalty(token_x, dense_x)
- grad_pen = grad_pen.sum() * self.gradient_penalty
- else:
- grad_pen = None
- else:
- grad_pen = None
- loss_token = None
- loss_dense = F.binary_cross_entropy_with_logits(
- dense_y,
- dense_y.new_zeros(dense_y.shape) + fake_smooth,
- reduction="sum",
- )
- num_vars = dense_x.size(-1)
- if prob_perplexity is not None:
- code_pen = (num_vars - prob_perplexity) / num_vars
- code_pen = code_pen * sample_size * self.code_penalty
-
- if self.smoothness_weight > 0:
- smoothness_loss = F.mse_loss(
- dense_logits[:, :-1], dense_logits[:, 1:], reduction="none"
- )
- smoothness_loss[dense_padding_mask[:, 1:]] = 0
- smoothness_loss = (
- smoothness_loss.mean() * sample_size * self.smoothness_weight
- )
-
- result = {
- "losses": {
- "grad_pen": grad_pen,
- "code_pen": code_pen,
- "smoothness": smoothness_loss,
- },
- "temp": self.curr_temp,
- "code_ppl": code_perplexity,
- "prob_ppl": prob_perplexity,
- "d_steps": int(d_step),
- "sample_size": sample_size,
- }
-
- suff = "_d" if d_step else "_g"
- result["losses"]["dense" + suff] = loss_dense
- result["losses"]["token" + suff] = loss_token
-
- return result
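calc_gradient_penalty above is the standard WGAN-GP recipe: score random interpolates of real and fake inputs and penalise discriminator gradients whose norm drifts from 1. A stripped-down, self-contained sketch of the same computation; the linear layer is only a toy stand-in for the convolutional Discriminator, and all names are illustrative:

```python
import torch
import torch.nn as nn
from torch import autograd

torch.manual_seed(0)
disc = nn.Linear(16, 1)            # toy stand-in for the Discriminator
real = torch.randn(4, 16)          # e.g. one-hot phone sequences
fake = torch.randn(4, 16)          # generator outputs

alpha = torch.rand(real.size(0), 1)                               # per-sample mixing weight
interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)

scores = disc(interpolates)
gradients = autograd.grad(
    outputs=scores,
    inputs=interpolates,
    grad_outputs=torch.ones_like(scores),
    create_graph=True,             # keeps the penalty differentiable w.r.t. disc weights
    retain_graph=True,
    only_inputs=True,
)[0]

gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).sum()    # deviation from norm 1
gradient_penalty.backward()
```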
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quant_noise.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quant_noise.py
deleted file mode 100644
index d777dfbb6c1bf6a9b769dfdaec35d5ef084c8a8b..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quant_noise.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-
-
-def quant_noise(module, p, block_size):
- """
- Wraps modules and applies quantization noise to the weights for
- subsequent quantization with Iterative Product Quantization as
- described in "Training with Quantization Noise for Extreme Model Compression"
-
- Args:
- - module: nn.Module
- - p: amount of Quantization Noise
- - block_size: size of the blocks for subsequent quantization with iPQ
-
- Remarks:
- - Module weights must have the right sizes wrt the block size
- - Only Linear, Embedding and Conv2d modules are supported for the moment
- - For more detail on how to quantize by blocks with convolutional weights,
- see "And the Bit Goes Down: Revisiting the Quantization of Neural Networks"
- - We implement the simplest form of noise here as stated in the paper
- which consists in randomly dropping blocks
- """
-
- # if no quantization noise, don't register hook
- if p <= 0:
- return module
-
- # supported modules
- assert isinstance(module, (nn.Linear, nn.Embedding, nn.Conv2d))
-
- # test whether module.weight has the right sizes wrt block_size
- is_conv = module.weight.ndim == 4
-
- # 2D matrix
- if not is_conv:
- assert (
- module.weight.size(1) % block_size == 0
- ), "Input features must be a multiple of block sizes"
-
- # 4D matrix
- else:
- # 1x1 convolutions
- if module.kernel_size == (1, 1):
- assert (
- module.in_channels % block_size == 0
- ), "Input channels must be a multiple of block sizes"
- # regular convolutions
- else:
- k = module.kernel_size[0] * module.kernel_size[1]
- assert k % block_size == 0, "Kernel size must be a multiple of block size"
-
- def _forward_pre_hook(mod, input):
- # no noise for evaluation
- if mod.training:
- if not is_conv:
- # gather weight and sizes
- weight = mod.weight
- in_features = weight.size(1)
- out_features = weight.size(0)
-
- # split weight matrix into blocks and randomly drop selected blocks
- mask = torch.zeros(
- in_features // block_size * out_features, device=weight.device
- )
- mask.bernoulli_(p)
- mask = mask.repeat_interleave(block_size, -1).view(-1, in_features)
-
- else:
- # gather weight and sizes
- weight = mod.weight
- in_channels = mod.in_channels
- out_channels = mod.out_channels
-
- # split weight matrix into blocks and randomly drop selected blocks
- if mod.kernel_size == (1, 1):
- mask = torch.zeros(
- int(in_channels // block_size * out_channels),
- device=weight.device,
- )
- mask.bernoulli_(p)
- mask = mask.repeat_interleave(block_size, -1).view(-1, in_channels)
- else:
- mask = torch.zeros(
- weight.size(0), weight.size(1), device=weight.device
- )
- mask.bernoulli_(p)
- mask = (
- mask.unsqueeze(2)
- .unsqueeze(3)
- .repeat(1, 1, mod.kernel_size[0], mod.kernel_size[1])
- )
-
- # scale weights and apply mask
- mask = mask.to(
- torch.bool
- ) # x.bool() is not currently supported in TorchScript
- s = 1 / (1 - p)
- mod.weight.data = s * weight.masked_fill(mask, 0)
-
- module.register_forward_pre_hook(_forward_pre_hook)
- return module
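A short usage sketch for the hook above: during training it zeroes whole weight blocks with probability p and rescales the survivors by 1/(1-p); in eval mode the pre-hook is a no-op. The import path matches this file's location in the tree; the layer sizes are illustrative and chosen so in_features is a multiple of block_size:

```python
import torch
import torch.nn as nn
from fairseq.modules.quant_noise import quant_noise  # the function defined above

layer = quant_noise(nn.Linear(16, 32), p=0.1, block_size=8)  # 16 % 8 == 0

layer.train()
y_train = layer(torch.randn(4, 16))  # pre-hook drops ~10% of weight blocks, rescales the rest

layer.eval()
y_eval = layer(torch.randn(4, 16))   # no noise applied at evaluation time
```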
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_plasma_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_plasma_utils.py
deleted file mode 100644
index e6344c2a5a73fcb2fb81376e7bd43470963b3674..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_plasma_utils.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import contextlib
-import unittest
-import tempfile
-from io import StringIO
-
-import numpy as np
-
-from tests.utils import create_dummy_data, preprocess_lm_data, train_language_model
-
-try:
- from pyarrow import plasma
- from fairseq.data.plasma_utils import PlasmaView, PlasmaStore
-
- PYARROW_AVAILABLE = True
-except ImportError:
- PYARROW_AVAILABLE = False
-
-dummy_path = "dummy"
-
-
-@unittest.skipUnless(PYARROW_AVAILABLE, "")
-class TestPlasmaView(unittest.TestCase):
- def setUp(self) -> None:
- self.tmp_file = tempfile.NamedTemporaryFile() # noqa: P201
- self.path = self.tmp_file.name
- self.server = PlasmaStore.start(path=self.path, nbytes=10000)
- self.client = plasma.connect(self.path, num_retries=10)
-
- def tearDown(self) -> None:
- self.client.disconnect()
- self.tmp_file.close()
- self.server.kill()
-
- def test_two_servers_do_not_share_object_id_space(self):
- data_server_1 = np.array([0, 1])
- data_server_2 = np.array([2, 3])
- server_2_path = self.path
- with tempfile.NamedTemporaryFile() as server_1_path:
- server = PlasmaStore.start(path=server_1_path.name, nbytes=10000)
- arr1 = PlasmaView(
- data_server_1, dummy_path, 1, plasma_path=server_1_path.name
- )
- assert len(arr1.client.list()) == 1
- assert (arr1.array == data_server_1).all()
- arr2 = PlasmaView(data_server_2, dummy_path, 1, plasma_path=server_2_path)
- assert (arr2.array == data_server_2).all()
- assert (arr1.array == data_server_1).all()
- server.kill()
-
- def test_hash_collision(self):
- data_server_1 = np.array([0, 1])
- data_server_2 = np.array([2, 3])
- arr1 = PlasmaView(data_server_1, dummy_path, 1, plasma_path=self.path)
- assert len(arr1.client.list()) == 1
- arr2 = PlasmaView(data_server_2, dummy_path, 1, plasma_path=self.path)
- assert len(arr1.client.list()) == 1
- assert len(arr2.client.list()) == 1
- assert (arr2.array == data_server_1).all()
- # New hash key based on tuples
- arr3 = PlasmaView(
- data_server_2, dummy_path, (1, 12312312312, None), plasma_path=self.path
- )
- assert (
- len(arr2.client.list()) == 2
- ), "No new object was created by using a novel hash key"
- assert (
- arr3.object_id in arr2.client.list()
- ), "No new object was created by using a novel hash key"
- assert (
- arr3.object_id in arr3.client.list()
- ), "No new object was created by using a novel hash key"
- del arr3, arr2, arr1
-
- @staticmethod
- def _assert_view_equal(pv1, pv2):
- np.testing.assert_array_equal(pv1.array, pv2.array)
-
- def test_putting_same_array_twice(self):
- data = np.array([4, 4, 4])
- arr1 = PlasmaView(data, dummy_path, 1, plasma_path=self.path)
- assert len(self.client.list()) == 1
- arr1b = PlasmaView(
- data, dummy_path, 1, plasma_path=self.path
- ) # should not change contents of store
- arr1c = PlasmaView(
- None, dummy_path, 1, plasma_path=self.path
- ) # should not change contents of store
-
- assert len(self.client.list()) == 1
- self._assert_view_equal(arr1, arr1b)
- self._assert_view_equal(arr1, arr1c)
- PlasmaView(
- data, dummy_path, 2, plasma_path=self.path
- ) # new object id, adds new entry
- assert len(self.client.list()) == 2
-
- new_client = plasma.connect(self.path)
- assert len(new_client.list()) == 2 # new client can access same objects
- assert isinstance(arr1.object_id, plasma.ObjectID)
- del arr1b
- del arr1c
-
- def test_plasma_store_full_raises(self):
- with tempfile.NamedTemporaryFile() as new_path:
- server = PlasmaStore.start(path=new_path.name, nbytes=10000)
- with self.assertRaises(plasma.PlasmaStoreFull):
- # 10000 float64 values (~80 kB) are more than the 10000-byte store can hold
- PlasmaView(
- np.random.rand(10000, 1), dummy_path, 1, plasma_path=new_path.name
- )
- server.kill()
-
- def test_object_id_overflow(self):
- PlasmaView.get_object_id("", 2 ** 21)
-
- def test_training_lm_plasma(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer_lm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "transformer_lm",
- ["--use-plasma-view", "--plasma-path", self.path],
- run_validation=True,
- )
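For reference, the object under test works like this: a PlasmaStore server process owns a shared-memory segment, and PlasmaView deduplicates arrays by a caller-supplied hash key so several dataloader workers can map the same array without copies. A minimal sketch, assuming pyarrow's plasma backend and the fairseq helpers exercised above are installed:

```python
import tempfile
import numpy as np
from fairseq.data.plasma_utils import PlasmaStore, PlasmaView

data = np.arange(8)
with tempfile.NamedTemporaryFile() as sock:
    server = PlasmaStore.start(path=sock.name, nbytes=10_000)  # shared-memory segment
    try:
        # Two views with the same (path, hash) key map to a single stored object
        view_a = PlasmaView(data, "dummy", 1, plasma_path=sock.name)
        view_b = PlasmaView(None, "dummy", 1, plasma_path=sock.name)  # attach to existing
        assert (view_a.array == data).all()
        assert (view_b.array == data).all()
        assert len(view_a.client.list()) == 1
    finally:
        server.kill()
```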
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/train.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/train.py
deleted file mode 100644
index 79bf515a707b309e82e9686c140658f23acf1b91..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/train.py
+++ /dev/null
@@ -1,286 +0,0 @@
-import os
-import json
-import argparse
-import math
-import torch
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from apex.parallel import DistributedDataParallel as DDP
-from apex import amp
-
-from data_utils import TextMelLoader, TextMelCollate
-import models
-import commons
-import utils
-
-
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- """Assumes single-node, multi-GPU training only."""
-
- n_gpus = torch.cuda.device_count()
- os.environ["MASTER_ADDR"] = "localhost"
- os.environ["MASTER_PORT"] = "8000"  # must be a valid TCP port (<= 65535)
-
- hps = utils.get_hparams()
- mp.spawn(
- train_and_eval,
- nprocs=n_gpus,
- args=(
- n_gpus,
- hps,
- ),
- )
-
-
-def train_and_eval(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.log_dir)
- logger.info(hps)
- utils.check_git_hash(hps.log_dir)
- writer = SummaryWriter(log_dir=hps.log_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.log_dir, "eval"))
-
- dist.init_process_group(
- backend="nccl", init_method="env://", world_size=n_gpus, rank=rank
- )
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextMelLoader(hps.data.training_files, hps.data)
- train_sampler = torch.utils.data.distributed.DistributedSampler(
- train_dataset, num_replicas=n_gpus, rank=rank, shuffle=True
- )
- collate_fn = TextMelCollate(1)
- train_loader = DataLoader(
- train_dataset,
- num_workers=8,
- shuffle=False,
- batch_size=hps.train.batch_size,
- pin_memory=True,
- drop_last=True,
- collate_fn=collate_fn,
- sampler=train_sampler,
- )
- if rank == 0:
- val_dataset = TextMelLoader(hps.data.validation_files, hps.data)
- val_loader = DataLoader(
- val_dataset,
- num_workers=8,
- shuffle=False,
- batch_size=hps.train.batch_size,
- pin_memory=True,
- drop_last=True,
- collate_fn=collate_fn,
- )
- symbols = hps.data.punc + hps.data.chars
- generator = models.FlowGenerator(
- n_vocab=len(symbols) + getattr(hps.data, "add_blank", False),
- out_channels=hps.data.n_mel_channels,
- **hps.model
- ).cuda(rank)
- optimizer_g = commons.Adam(
- generator.parameters(),
- scheduler=hps.train.scheduler,
- dim_model=hps.model.hidden_channels,
- warmup_steps=hps.train.warmup_steps,
- lr=hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- if hps.train.fp16_run:
- generator, optimizer_g._optim = amp.initialize(
- generator, optimizer_g._optim, opt_level="O1"
- )
- generator = DDP(generator)
- epoch_str = 1
- global_step = 0
- try:
- _, _, _, epoch_str = utils.load_checkpoint(
- utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"),
- generator,
- optimizer_g,
- )
- epoch_str += 1
- optimizer_g.step_num = (epoch_str - 1) * len(train_loader)
- optimizer_g._update_learning_rate()
- global_step = (epoch_str - 1) * len(train_loader)
- except:
- if hps.train.ddi and os.path.isfile(os.path.join(hps.model_dir, "ddi_G.pth")):
- _ = utils.load_checkpoint(
- os.path.join(hps.model_dir, "ddi_G.pth"), generator, optimizer_g
- )
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train(
- rank, epoch, hps, generator, optimizer_g, train_loader, logger, writer
- )
- evaluate(
- rank,
- epoch,
- hps,
- generator,
- optimizer_g,
- val_loader,
- logger,
- writer_eval,
- )
- if epoch % hps.train.save_epoch == 0:
- utils.save_checkpoint(
- generator,
- optimizer_g,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(epoch)),
- )
- else:
- train(rank, epoch, hps, generator, optimizer_g, train_loader, None, None)
-
-
-def train(rank, epoch, hps, generator, optimizer_g, train_loader, logger, writer):
- train_loader.sampler.set_epoch(epoch)
- global global_step
-
- generator.train()
- for batch_idx, (x, x_lengths, y, y_lengths) in enumerate(train_loader):
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(
- rank, non_blocking=True
- )
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(
- rank, non_blocking=True
- )
-
- # Train Generator
- optimizer_g.zero_grad()
-
- (
- (z, z_m, z_logs, logdet, z_mask),
- (x_m, x_logs, x_mask),
- (attn, logw, logw_),
- ) = generator(x, x_lengths, y, y_lengths, gen=False)
- l_mle = commons.mle_loss(z, z_m, z_logs, logdet, z_mask)
- l_length = commons.duration_loss(logw, logw_, x_lengths)
-
- loss_gs = [l_mle, l_length]
- loss_g = sum(loss_gs)
-
- if hps.train.fp16_run:
- with amp.scale_loss(loss_g, optimizer_g._optim) as scaled_loss:
- scaled_loss.backward()
- grad_norm = commons.clip_grad_value_(
- amp.master_params(optimizer_g._optim), 5
- )
- else:
- loss_g.backward()
- grad_norm = commons.clip_grad_value_(generator.parameters(), 5)
- optimizer_g.step()
-
- if rank == 0:
- if batch_idx % hps.train.log_interval == 0:
- (y_gen, *_), *_ = generator.module(x[:1], x_lengths[:1], gen=True)
- logger.info(
- "Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format(
- epoch,
- batch_idx * len(x),
- len(train_loader.dataset),
- 100.0 * batch_idx / len(train_loader),
- loss_g.item(),
- )
- )
- logger.info(
- [x.item() for x in loss_gs] + [global_step, optimizer_g.get_lr()]
- )
-
- scalar_dict = {
- "loss/g/total": loss_g,
- "learning_rate": optimizer_g.get_lr(),
- "grad_norm": grad_norm,
- }
- scalar_dict.update(
- {"loss/g/{}".format(i): v for i, v in enumerate(loss_gs)}
- )
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images={
- "y_org": utils.plot_spectrogram_to_numpy(
- y[0].data.cpu().numpy()
- ),
- "y_gen": utils.plot_spectrogram_to_numpy(
- y_gen[0].data.cpu().numpy()
- ),
- "attn": utils.plot_alignment_to_numpy(
- attn[0, 0].data.cpu().numpy()
- ),
- },
- scalars=scalar_dict,
- )
- global_step += 1
-
- if rank == 0:
- logger.info("====> Epoch: {}".format(epoch))
-
-
-def evaluate(rank, epoch, hps, generator, optimizer_g, val_loader, logger, writer_eval):
- if rank == 0:
- global global_step
- generator.eval()
- losses_tot = []
- with torch.no_grad():
- for batch_idx, (x, x_lengths, y, y_lengths) in enumerate(val_loader):
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(
- rank, non_blocking=True
- )
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(
- rank, non_blocking=True
- )
-
- (
- (z, z_m, z_logs, logdet, z_mask),
- (x_m, x_logs, x_mask),
- (attn, logw, logw_),
- ) = generator(x, x_lengths, y, y_lengths, gen=False)
- l_mle = commons.mle_loss(z, z_m, z_logs, logdet, z_mask)
- l_length = commons.duration_loss(logw, logw_, x_lengths)
-
- loss_gs = [l_mle, l_length]
- loss_g = sum(loss_gs)
-
- if batch_idx == 0:
- losses_tot = loss_gs
- else:
- losses_tot = [x + y for (x, y) in zip(losses_tot, loss_gs)]
-
- if batch_idx % hps.train.log_interval == 0:
- logger.info(
- "Eval Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format(
- epoch,
- batch_idx * len(x),
- len(val_loader.dataset),
- 100.0 * batch_idx / len(val_loader),
- loss_g.item(),
- )
- )
- logger.info([x.item() for x in loss_gs])
-
- losses_tot = [x / len(val_loader) for x in losses_tot]
- loss_tot = sum(losses_tot)
- scalar_dict = {"loss/g/total": loss_tot}
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_tot)})
- utils.summarize(
- writer=writer_eval, global_step=global_step, scalars=scalar_dict
- )
- logger.info("====> Epoch: {}".format(epoch))
-
-
-if __name__ == "__main__":
- main()
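The entry point above follows the usual single-node multi-process pattern: the parent sets MASTER_ADDR/MASTER_PORT, mp.spawn forks one worker per GPU, and each worker calls dist.init_process_group with its rank before pinning a device and wrapping its sampler. A bare-bones sketch of that skeleton (requires a CUDA machine with NCCL; model, loss and data are placeholders):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank, world_size):
    # mp.spawn injects rank as the first argument, one process per GPU
    dist.init_process_group(
        backend="nccl", init_method="env://", world_size=world_size, rank=rank
    )
    torch.cuda.set_device(rank)

    dataset = torch.utils.data.TensorDataset(torch.randn(64, 8))
    sampler = torch.utils.data.distributed.DistributedSampler(
        dataset, num_replicas=world_size, rank=rank, shuffle=True
    )
    loader = torch.utils.data.DataLoader(dataset, batch_size=4, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)          # reshuffle differently each epoch
        for (x,) in loader:
            x = x.cuda(rank, non_blocking=True)
            # ... forward / backward / optimizer step ...

    dist.destroy_process_group()


if __name__ == "__main__":
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"   # any free port below 65536
    n_gpus = torch.cuda.device_count()
    mp.spawn(worker, nprocs=n_gpus, args=(n_gpus,))
```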
diff --git a/spaces/Hobe/bing/README.md b/spaces/Hobe/bing/README.md
deleted file mode 100644
index 81f9fb598a7f7472d93664f390425dfb57e618b1..0000000000000000000000000000000000000000
--- a/spaces/Hobe/bing/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Go Proxy Bingai
-emoji: 📉
-colorFrom: gray
-colorTo: red
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
-duplicated_from: laogou717/bing
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/models/common.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/models/common.py
deleted file mode 100644
index 8b5ec1c786d8efbfdffa268a4d13b02a47338f8c..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/models/common.py
+++ /dev/null
@@ -1,860 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Common modules
-"""
-
-import ast
-import contextlib
-import json
-import math
-import platform
-import warnings
-import zipfile
-from collections import OrderedDict, namedtuple
-from copy import copy
-from pathlib import Path
-from urllib.parse import urlparse
-
-import cv2
-import numpy as np
-import pandas as pd
-import requests
-import torch
-import torch.nn as nn
-from IPython.display import display
-from PIL import Image
-from torch.cuda import amp
-
-from utils import TryExcept
-from utils.dataloaders import exif_transpose, letterbox
-from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr,
- increment_path, is_notebook, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy,
- xyxy2xywh, yaml_load)
-from utils.plots import Annotator, colors, save_one_box
-from utils.torch_utils import copy_attr, smart_inference_mode
-
-
-def autopad(k, p=None, d=1): # kernel, padding, dilation
- # Pad to 'same' shape outputs
- if d > 1:
- k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k] # actual kernel-size
- if p is None:
- p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
- return p
-
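autopad just picks the padding that preserves spatial size at stride 1: p = k // 2, with the kernel first inflated to its effective size when dilation d > 1. A couple of worked values, assuming autopad from this file is in scope:

```python
import torch
import torch.nn as nn

# k=3 -> 1, k=5 -> 2, k=3 with d=2 -> 2 (effective kernel 5)
assert autopad(3) == 1 and autopad(5) == 2 and autopad(3, d=2) == 2

x = torch.randn(1, 8, 32, 32)
conv = nn.Conv2d(8, 8, kernel_size=3, stride=1, padding=autopad(3))
assert conv(x).shape == x.shape  # 'same' spatial size at stride 1
```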
-
-class Conv(nn.Module):
- # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
- default_act = nn.SiLU() # default activation
-
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
- super().__init__()
- self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
- self.bn = nn.BatchNorm2d(c2)
- self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
-
- def forward(self, x):
- return self.act(self.bn(self.conv(x)))
-
- def forward_fuse(self, x):
- return self.act(self.conv(x))
-
-
-class DWConv(Conv):
- # Depth-wise convolution
- def __init__(self, c1, c2, k=1, s=1, d=1, act=True): # ch_in, ch_out, kernel, stride, dilation, activation
- super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act)
-
-
-class DWConvTranspose2d(nn.ConvTranspose2d):
- # Depth-wise transpose convolution
- def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out
- super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2))
-
-
-class TransformerLayer(nn.Module):
- # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
- def __init__(self, c, num_heads):
- super().__init__()
- self.q = nn.Linear(c, c, bias=False)
- self.k = nn.Linear(c, c, bias=False)
- self.v = nn.Linear(c, c, bias=False)
- self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
- self.fc1 = nn.Linear(c, c, bias=False)
- self.fc2 = nn.Linear(c, c, bias=False)
-
- def forward(self, x):
- x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
- x = self.fc2(self.fc1(x)) + x
- return x
-
-
-class TransformerBlock(nn.Module):
- # Vision Transformer https://arxiv.org/abs/2010.11929
- def __init__(self, c1, c2, num_heads, num_layers):
- super().__init__()
- self.conv = None
- if c1 != c2:
- self.conv = Conv(c1, c2)
- self.linear = nn.Linear(c2, c2) # learnable position embedding
- self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))
- self.c2 = c2
-
- def forward(self, x):
- if self.conv is not None:
- x = self.conv(x)
- b, _, w, h = x.shape
- p = x.flatten(2).permute(2, 0, 1)
- return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)
-
-
-class Bottleneck(nn.Module):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c2, 3, 1, g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class BottleneckCSP(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
- self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
- self.act = nn.SiLU()
- self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1))))
-
-
-class CrossConv(nn.Module):
- # Cross Convolution Downsample
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
- # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, (1, k), (1, s))
- self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class C3(nn.Module):
- # CSP Bottleneck with 3 convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2)
- self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
-
- def forward(self, x):
- return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
-
-
-class C3x(C3):
- # C3 module with cross-convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)))
-
-
-class C3TR(C3):
- # C3 module with TransformerBlock()
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = TransformerBlock(c_, c_, 4, n)
-
-
-class C3SPP(C3):
- # C3 module with SPP()
- def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = SPP(c_, c_, k)
-
-
-class C3Ghost(C3):
- # C3 module with GhostBottleneck()
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))
-
-
-class SPP(nn.Module):
- # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
- def __init__(self, c1, c2, k=(5, 9, 13)):
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
- self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
-
- def forward(self, x):
- x = self.cv1(x)
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
- return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
-
-
-class SPPF(nn.Module):
- # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
- def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * 4, c2, 1, 1)
- self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
-
- def forward(self, x):
- x = self.cv1(x)
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
- y1 = self.m(x)
- y2 = self.m(y1)
- return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
-
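The 'Fast' in SPPF comes from replacing SPP's three parallel max-pools (5, 9, 13) with a single k=5 pool applied three times in series: with stride 1 and 'same' padding, two stacked 5x5 max-pools equal one 9x9 and three equal one 13x13. A quick check of that identity, as a sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 16, 16)
m5 = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)
m9 = nn.MaxPool2d(kernel_size=9, stride=1, padding=4)
m13 = nn.MaxPool2d(kernel_size=13, stride=1, padding=6)

y1 = m5(x)
y2 = m5(y1)
y3 = m5(y2)

assert torch.equal(y2, m9(x))   # 5x5 twice == 9x9
assert torch.equal(y3, m13(x))  # 5x5 three times == 13x13
```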
-
-class Focus(nn.Module):
- # Focus wh information into c-space
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super().__init__()
- self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act)
- # self.contract = Contract(gain=2)
-
- def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
- return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1))
- # return self.conv(self.contract(x))
-
-
-class GhostConv(nn.Module):
- # Ghost Convolution https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
- super().__init__()
- c_ = c2 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, k, s, None, g, act=act)
- self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act)
-
- def forward(self, x):
- y = self.cv1(x)
- return torch.cat((y, self.cv2(y)), 1)
-
-
-class GhostBottleneck(nn.Module):
- # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
- super().__init__()
- c_ = c2 // 2
- self.conv = nn.Sequential(
- GhostConv(c1, c_, 1, 1), # pw
- DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
- GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
- self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1,
- act=False)) if s == 2 else nn.Identity()
-
- def forward(self, x):
- return self.conv(x) + self.shortcut(x)
-
-
-class Contract(nn.Module):
- # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
- b, c, h, w = x.size() # assert h % s == 0 and w % s == 0, 'Indivisible gain'
- s = self.gain
- x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2)
- x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
- return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40)
-
-
-class Expand(nn.Module):
- # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
- b, c, h, w = x.size() # assert c % s ** 2 == 0, 'Indivisible gain'
- s = self.gain
- x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80)
- x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
- return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160)
-
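Contract and Expand above are exact inverses: Contract folds each 2x2 spatial block into channels (space-to-depth) and Expand unfolds it back with the matching channel order. A round-trip check, assuming the two classes from this file are in scope:

```python
import torch

x = torch.randn(1, 64, 80, 80)
contracted = Contract(gain=2)(x)       # (1, 256, 40, 40): 2x2 blocks folded into channels
restored = Expand(gain=2)(contracted)  # (1, 64, 80, 80)

assert contracted.shape == (1, 256, 40, 40)
assert torch.equal(restored, x)        # the two permutations are exact inverses
```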
-
-class Concat(nn.Module):
- # Concatenate a list of tensors along dimension
- def __init__(self, dimension=1):
- super().__init__()
- self.d = dimension
-
- def forward(self, x):
- return torch.cat(x, self.d)
-
-
-class DetectMultiBackend(nn.Module):
- # YOLOv5 MultiBackend class for python inference on various backends
- def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True):
- # Usage:
- # PyTorch: weights = *.pt
- # TorchScript: *.torchscript
- # ONNX Runtime: *.onnx
- # ONNX OpenCV DNN: *.onnx --dnn
- # OpenVINO: *_openvino_model
- # CoreML: *.mlmodel
- # TensorRT: *.engine
- # TensorFlow SavedModel: *_saved_model
- # TensorFlow GraphDef: *.pb
- # TensorFlow Lite: *.tflite
- # TensorFlow Edge TPU: *_edgetpu.tflite
- # PaddlePaddle: *_paddle_model
- from models.experimental import attempt_download, attempt_load # scoped to avoid circular import
-
- super().__init__()
- w = str(weights[0] if isinstance(weights, list) else weights)
- pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w)
- fp16 &= pt or jit or onnx or engine # FP16
- nhwc = coreml or saved_model or pb or tflite or edgetpu # BHWC formats (vs torch BCHW)
- stride = 32 # default stride
- cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA
- if not (pt or triton):
- w = attempt_download(w) # download if not local
-
- if pt: # PyTorch
- model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
- stride = max(int(model.stride.max()), 32) # model stride
- names = model.module.names if hasattr(model, 'module') else model.names # get class names
- model.half() if fp16 else model.float()
- self.model = model # explicitly assign for to(), cpu(), cuda(), half()
- elif jit: # TorchScript
- LOGGER.info(f'Loading {w} for TorchScript inference...')
- extra_files = {'config.txt': ''} # model metadata
- model = torch.jit.load(w, _extra_files=extra_files, map_location=device)
- model.half() if fp16 else model.float()
- if extra_files['config.txt']: # load metadata dict
- d = json.loads(extra_files['config.txt'],
- object_hook=lambda d: {int(k) if k.isdigit() else k: v
- for k, v in d.items()})
- stride, names = int(d['stride']), d['names']
- elif dnn: # ONNX OpenCV DNN
- LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
- check_requirements('opencv-python>=4.5.4')
- net = cv2.dnn.readNetFromONNX(w)
- elif onnx: # ONNX Runtime
- LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
- check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
- import onnxruntime
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
- session = onnxruntime.InferenceSession(w, providers=providers)
- output_names = [x.name for x in session.get_outputs()]
- meta = session.get_modelmeta().custom_metadata_map # metadata
- if 'stride' in meta:
- stride, names = int(meta['stride']), eval(meta['names'])
- elif xml: # OpenVINO
- LOGGER.info(f'Loading {w} for OpenVINO inference...')
- check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/
- from openvino.runtime import Core, Layout, get_batch
- ie = Core()
- if not Path(w).is_file(): # if not *.xml
- w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir
- network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
- if network.get_parameters()[0].get_layout().empty:
- network.get_parameters()[0].set_layout(Layout("NCHW"))
- batch_dim = get_batch(network)
- if batch_dim.is_static:
- batch_size = batch_dim.get_length()
- executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2
- stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata
- elif engine: # TensorRT
- LOGGER.info(f'Loading {w} for TensorRT inference...')
- import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download
- check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0
- if device.type == 'cpu':
- device = torch.device('cuda:0')
- Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
- logger = trt.Logger(trt.Logger.INFO)
- with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
- model = runtime.deserialize_cuda_engine(f.read())
- context = model.create_execution_context()
- bindings = OrderedDict()
- output_names = []
- fp16 = False # default updated below
- dynamic = False
- for i in range(model.num_bindings):
- name = model.get_binding_name(i)
- dtype = trt.nptype(model.get_binding_dtype(i))
- if model.binding_is_input(i):
- if -1 in tuple(model.get_binding_shape(i)): # dynamic
- dynamic = True
- context.set_binding_shape(i, tuple(model.get_profile_shape(0, i)[2]))
- if dtype == np.float16:
- fp16 = True
- else: # output
- output_names.append(name)
- shape = tuple(context.get_binding_shape(i))
- im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device)
- bindings[name] = Binding(name, dtype, shape, im, int(im.data_ptr()))
- binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
- batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size
- elif coreml: # CoreML
- LOGGER.info(f'Loading {w} for CoreML inference...')
- import coremltools as ct
- model = ct.models.MLModel(w)
- elif saved_model: # TF SavedModel
- LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...')
- import tensorflow as tf
- keras = False # assume TF1 saved_model
- model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w)
- elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
- LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...')
- import tensorflow as tf
-
- def wrap_frozen_graph(gd, inputs, outputs):
- x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped
- ge = x.graph.as_graph_element
- return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs))
-
- def gd_outputs(gd):
- name_list, input_list = [], []
- for node in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef
- name_list.append(node.name)
- input_list.extend(node.input)
- return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp'))
-
- gd = tf.Graph().as_graph_def() # TF GraphDef
- with open(w, 'rb') as f:
- gd.ParseFromString(f.read())
- frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs=gd_outputs(gd))
- elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
- try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu
- from tflite_runtime.interpreter import Interpreter, load_delegate
- except ImportError:
- import tensorflow as tf
- Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate,
- if edgetpu: # TF Edge TPU https://coral.ai/software/#edgetpu-runtime
- LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')
- delegate = {
- 'Linux': 'libedgetpu.so.1',
- 'Darwin': 'libedgetpu.1.dylib',
- 'Windows': 'edgetpu.dll'}[platform.system()]
- interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)])
- else: # TFLite
- LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
- interpreter = Interpreter(model_path=w) # load TFLite model
- interpreter.allocate_tensors() # allocate
- input_details = interpreter.get_input_details() # inputs
- output_details = interpreter.get_output_details() # outputs
- # load metadata
- with contextlib.suppress(zipfile.BadZipFile):
- with zipfile.ZipFile(w, "r") as model:
- meta_file = model.namelist()[0]
- meta = ast.literal_eval(model.read(meta_file).decode("utf-8"))
- stride, names = int(meta['stride']), meta['names']
- elif tfjs: # TF.js
- raise NotImplementedError('ERROR: YOLOv5 TF.js inference is not supported')
- elif paddle: # PaddlePaddle
- LOGGER.info(f'Loading {w} for PaddlePaddle inference...')
- check_requirements('paddlepaddle-gpu' if cuda else 'paddlepaddle')
- import paddle.inference as pdi
- if not Path(w).is_file(): # if not *.pdmodel
- w = next(Path(w).rglob('*.pdmodel')) # get *.pdmodel file from *_paddle_model dir
- weights = Path(w).with_suffix('.pdiparams')
- config = pdi.Config(str(w), str(weights))
- if cuda:
- config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0)
- predictor = pdi.create_predictor(config)
- input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
- output_names = predictor.get_output_names()
- elif triton: # NVIDIA Triton Inference Server
- LOGGER.info(f'Using {w} as Triton Inference Server...')
- check_requirements('tritonclient[all]')
- from utils.triton import TritonRemoteModel
- model = TritonRemoteModel(url=w)
- nhwc = model.runtime.startswith("tensorflow")
- else:
- raise NotImplementedError(f'ERROR: {w} is not a supported format')
-
- # class names
- if 'names' not in locals():
- names = yaml_load(data)['names'] if data else {i: f'class{i}' for i in range(999)}
- if names[0] == 'n01440764' and len(names) == 1000: # ImageNet
- names = yaml_load(ROOT / 'data/ImageNet.yaml')['names'] # human-readable names
-
- self.__dict__.update(locals()) # assign all variables to self
-
- def forward(self, im, augment=False, visualize=False):
- # YOLOv5 MultiBackend inference
- b, ch, h, w = im.shape # batch, channel, height, width
- if self.fp16 and im.dtype != torch.float16:
- im = im.half() # to FP16
- if self.nhwc:
- im = im.permute(0, 2, 3, 1) # torch BCHW to numpy BHWC shape(1,320,192,3)
-
- if self.pt: # PyTorch
- y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
- elif self.jit: # TorchScript
- y = self.model(im)
- elif self.dnn: # ONNX OpenCV DNN
- im = im.cpu().numpy() # torch to numpy
- self.net.setInput(im)
- y = self.net.forward()
- elif self.onnx: # ONNX Runtime
- im = im.cpu().numpy() # torch to numpy
- y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
- elif self.xml: # OpenVINO
- im = im.cpu().numpy() # FP32
- y = list(self.executable_network([im]).values())
- elif self.engine: # TensorRT
- if self.dynamic and im.shape != self.bindings['images'].shape:
- i = self.model.get_binding_index('images')
- self.context.set_binding_shape(i, im.shape) # reshape if dynamic
- self.bindings['images'] = self.bindings['images']._replace(shape=im.shape)
- for name in self.output_names:
- i = self.model.get_binding_index(name)
- self.bindings[name].data.resize_(tuple(self.context.get_binding_shape(i)))
- s = self.bindings['images'].shape
- assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}"
- self.binding_addrs['images'] = int(im.data_ptr())
- self.context.execute_v2(list(self.binding_addrs.values()))
- y = [self.bindings[x].data for x in sorted(self.output_names)]
- elif self.coreml: # CoreML
- im = im.cpu().numpy()
- im = Image.fromarray((im[0] * 255).astype('uint8'))
- # im = im.resize((192, 320), Image.ANTIALIAS)
- y = self.model.predict({'image': im}) # coordinates are xywh normalized
- if 'confidence' in y:
- box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels
- conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float)
- y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
- else:
- y = list(reversed(y.values())) # reversed for segmentation models (pred, proto)
- elif self.paddle: # PaddlePaddle
- im = im.cpu().numpy().astype(np.float32)
- self.input_handle.copy_from_cpu(im)
- self.predictor.run()
- y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names]
- elif self.triton: # NVIDIA Triton Inference Server
- y = self.model(im)
- else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
- im = im.cpu().numpy()
- if self.saved_model: # SavedModel
- y = self.model(im, training=False) if self.keras else self.model(im)
- elif self.pb: # GraphDef
- y = self.frozen_func(x=self.tf.constant(im))
- else: # Lite or Edge TPU
- input = self.input_details[0]
- int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model
- if int8:
- scale, zero_point = input['quantization']
- im = (im / scale + zero_point).astype(np.uint8) # de-scale
- self.interpreter.set_tensor(input['index'], im)
- self.interpreter.invoke()
- y = []
- for output in self.output_details:
- x = self.interpreter.get_tensor(output['index'])
- if int8:
- scale, zero_point = output['quantization']
- x = (x.astype(np.float32) - zero_point) * scale # re-scale
- y.append(x)
- y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y]
- y[0][..., :4] *= [w, h, w, h] # xywh normalized to pixels
-
- if isinstance(y, (list, tuple)):
- return self.from_numpy(y[0]) if len(y) == 1 else [self.from_numpy(x) for x in y]
- else:
- return self.from_numpy(y)
-
- def from_numpy(self, x):
- return torch.from_numpy(x).to(self.device) if isinstance(x, np.ndarray) else x
-
- def warmup(self, imgsz=(1, 3, 640, 640)):
- # Warmup model by running inference once
- warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb, self.triton
- if any(warmup_types) and (self.device.type != 'cpu' or self.triton):
- im = torch.empty(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input
- for _ in range(2 if self.jit else 1): #
- self.forward(im) # warmup
-
- @staticmethod
- def _model_type(p='path/to/model.pt'):
- # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx
- # types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle]
- from export import export_formats
- from utils.downloads import is_url
- sf = list(export_formats().Suffix) # export suffixes
- if not is_url(p, check=False):
- check_suffix(p, sf) # checks
- url = urlparse(p) # if url may be Triton inference server
- types = [s in Path(p).name for s in sf]
- types[8] &= not types[9] # tflite &= not edgetpu
- triton = not any(types) and all([any(s in url.scheme for s in ["http", "grpc"]), url.netloc])
- return types + [triton]
-
- @staticmethod
- def _load_metadata(f=Path('path/to/meta.yaml')):
- # Load metadata from meta.yaml if it exists
- if f.exists():
- d = yaml_load(f)
- return d['stride'], d['names'] # assign stride, names
- return None, None
-
-
-class AutoShape(nn.Module):
- # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
- conf = 0.25 # NMS confidence threshold
- iou = 0.45 # NMS IoU threshold
- agnostic = False # NMS class-agnostic
- multi_label = False # NMS multiple labels per box
- classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
- max_det = 1000 # maximum number of detections per image
- amp = False # Automatic Mixed Precision (AMP) inference
-
- def __init__(self, model, verbose=True):
- super().__init__()
- if verbose:
- LOGGER.info('Adding AutoShape... ')
- copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes
- self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance
- self.pt = not self.dmb or model.pt # PyTorch model
- self.model = model.eval()
- if self.pt:
- m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect()
- m.inplace = False # Detect.inplace=False for safe multithread inference
- m.export = True # do not output loss values
-
- def _apply(self, fn):
- # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
- self = super()._apply(fn)
- if self.pt:
- m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect()
- m.stride = fn(m.stride)
- m.grid = list(map(fn, m.grid))
- if isinstance(m.anchor_grid, list):
- m.anchor_grid = list(map(fn, m.anchor_grid))
- return self
-
- @smart_inference_mode()
- def forward(self, ims, size=640, augment=False, profile=False):
- # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are:
- # file: ims = 'data/images/zidane.jpg' # str or PosixPath
- # URI: = 'https://ultralytics.com/images/zidane.jpg'
- # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
- # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3)
- # numpy: = np.zeros((640,1280,3)) # HWC
- # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
- # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
-
- dt = (Profile(), Profile(), Profile())
- with dt[0]:
- if isinstance(size, int): # expand
- size = (size, size)
- p = next(self.model.parameters()) if self.pt else torch.empty(1, device=self.model.device) # param
- autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference
- if isinstance(ims, torch.Tensor): # torch
- with amp.autocast(autocast):
- return self.model(ims.to(p.device).type_as(p), augment=augment) # inference
-
- # Pre-process
- n, ims = (len(ims), list(ims)) if isinstance(ims, (list, tuple)) else (1, [ims]) # number, list of images
- shape0, shape1, files = [], [], [] # image and inference shapes, filenames
- for i, im in enumerate(ims):
- f = f'image{i}' # filename
- if isinstance(im, (str, Path)): # filename or uri
- im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
- im = np.asarray(exif_transpose(im))
- elif isinstance(im, Image.Image): # PIL Image
- im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
- files.append(Path(f).with_suffix('.jpg').name)
- if im.shape[0] < 5: # image in CHW
- im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
- im = im[..., :3] if im.ndim == 3 else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) # enforce 3ch input
- s = im.shape[:2] # HWC
- shape0.append(s) # image shape
- g = max(size) / max(s) # gain
- shape1.append([int(y * g) for y in s])
- ims[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update
- shape1 = [make_divisible(x, self.stride) for x in np.array(shape1).max(0)] # inf shape
- x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad
- x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW
- x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32
-
- with amp.autocast(autocast):
- # Inference
- with dt[1]:
- y = self.model(x, augment=augment) # forward
-
- # Post-process
- with dt[2]:
- y = non_max_suppression(y if self.dmb else y[0],
- self.conf,
- self.iou,
- self.classes,
- self.agnostic,
- self.multi_label,
- max_det=self.max_det) # NMS
- for i in range(n):
- scale_boxes(shape1, y[i][:, :4], shape0[i])
-
- return Detections(ims, y, files, dt, self.names, x.shape)
-
-
-class Detections:
- # YOLOv5 detections class for inference results
- def __init__(self, ims, pred, files, times=(0, 0, 0), names=None, shape=None):
- super().__init__()
- d = pred[0].device # device
- gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in ims] # normalizations
- self.ims = ims # list of images as numpy arrays
- self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
- self.names = names # class names
- self.files = files # image filenames
- self.times = times # profiling times
- self.xyxy = pred # xyxy pixels
- self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
- self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
- self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
- self.n = len(self.pred) # number of images (batch size)
- self.t = tuple(x.t / self.n * 1E3 for x in times) # timestamps (ms)
- self.s = tuple(shape) # inference BCHW shape
-
- def _run(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')):
- s, crops = '', []
- for i, (im, pred) in enumerate(zip(self.ims, self.pred)):
- s += f'\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string
- if pred.shape[0]:
- for c in pred[:, -1].unique():
- n = (pred[:, -1] == c).sum() # detections per class
- s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
- s = s.rstrip(', ')
- if show or save or render or crop:
- annotator = Annotator(im, example=str(self.names))
- for *box, conf, cls in reversed(pred): # xyxy, confidence, class
- label = f'{self.names[int(cls)]} {conf:.2f}'
- if crop:
- file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
- crops.append({
- 'box': box,
- 'conf': conf,
- 'cls': cls,
- 'label': label,
- 'im': save_one_box(box, im, file=file, save=save)})
- else: # all others
- annotator.box_label(box, label if labels else '', color=colors(cls))
- im = annotator.im
- else:
- s += '(no detections)'
-
- im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np
- if show:
- display(im) if is_notebook() else im.show(self.files[i])
- if save:
- f = self.files[i]
- im.save(save_dir / f) # save
- if i == self.n - 1:
- LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
- if render:
- self.ims[i] = np.asarray(im)
- if pprint:
- s = s.lstrip('\n')
- return f'{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}' % self.t
- if crop:
- if save:
- LOGGER.info(f'Saved results to {save_dir}\n')
- return crops
-
- @TryExcept('Showing images is not supported in this environment')
- def show(self, labels=True):
- self._run(show=True, labels=labels) # show results
-
- def save(self, labels=True, save_dir='runs/detect/exp', exist_ok=False):
- save_dir = increment_path(save_dir, exist_ok, mkdir=True) # increment save_dir
- self._run(save=True, labels=labels, save_dir=save_dir) # save results
-
- def crop(self, save=True, save_dir='runs/detect/exp', exist_ok=False):
- save_dir = increment_path(save_dir, exist_ok, mkdir=True) if save else None
- return self._run(crop=True, save=save, save_dir=save_dir) # crop results
-
- def render(self, labels=True):
- self._run(render=True, labels=labels) # render results
- return self.ims
-
- def pandas(self):
- # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
- new = copy(self) # return copy
- ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
- cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
- for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
- a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
- setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
- return new
-
- def tolist(self):
- # return a list of Detections objects, i.e. 'for result in results.tolist():'
- r = range(self.n) # iterable
- x = [Detections([self.ims[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]
- # for d in x:
- # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
- # setattr(d, k, getattr(d, k)[0]) # pop out of list
- return x
-
- def print(self):
- LOGGER.info(self.__str__())
-
- def __len__(self): # override len(results)
- return self.n
-
- def __str__(self): # override print(results)
- return self._run(pprint=True) # print results
-
- def __repr__(self):
- return f'YOLOv5 {self.__class__} instance\n' + self.__str__()
-
-
-class Proto(nn.Module):
- # YOLOv5 mask Proto module for segmentation models
- def __init__(self, c1, c_=256, c2=32): # ch_in, number of protos, number of masks
- super().__init__()
- self.cv1 = Conv(c1, c_, k=3)
- self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
- self.cv2 = Conv(c_, c_, k=3)
- self.cv3 = Conv(c_, c2)
-
- def forward(self, x):
- return self.cv3(self.cv2(self.upsample(self.cv1(x))))
-
-
-class Classify(nn.Module):
- # YOLOv5 classification head, i.e. x(b,c1,20,20) to x(b,c2)
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
- super().__init__()
- c_ = 1280 # efficientnet_b0 size
- self.conv = Conv(c1, c_, k, s, autopad(k, p), g)
- self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1)
- self.drop = nn.Dropout(p=0.0, inplace=True)
- self.linear = nn.Linear(c_, c2) # to x(b,c2)
-
- def forward(self, x):
- if isinstance(x, list):
- x = torch.cat(x, 1)
- return self.linear(self.drop(self.pool(self.conv(x)).flatten(1)))
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/metrics.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/metrics.py
deleted file mode 100644
index b09ce23fb9e398ab654fce676d23f74d81cc5c57..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/metrics.py
+++ /dev/null
@@ -1,210 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Model validation metrics
-"""
-
-import numpy as np
-
-from ..metrics import ap_per_class
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9, 0.0, 0.0, 0.1, 0.9]
- return (x[:, :8] * w).sum(1)
-
-
-def ap_per_class_box_and_mask(
- tp_m,
- tp_b,
- conf,
- pred_cls,
- target_cls,
- plot=False,
- save_dir=".",
- names=(),
-):
- """
- Args:
- tp_b: tp of boxes.
- tp_m: tp of masks.
- other arguments see `func: ap_per_class`.
- """
- results_boxes = ap_per_class(tp_b,
- conf,
- pred_cls,
- target_cls,
- plot=plot,
- save_dir=save_dir,
- names=names,
- prefix="Box")[2:]
- results_masks = ap_per_class(tp_m,
- conf,
- pred_cls,
- target_cls,
- plot=plot,
- save_dir=save_dir,
- names=names,
- prefix="Mask")[2:]
-
- results = {
- "boxes": {
- "p": results_boxes[0],
- "r": results_boxes[1],
- "ap": results_boxes[3],
- "f1": results_boxes[2],
- "ap_class": results_boxes[4]},
- "masks": {
- "p": results_masks[0],
- "r": results_masks[1],
- "ap": results_masks[3],
- "f1": results_masks[2],
- "ap_class": results_masks[4]}}
- return results
-
-
-class Metric:
-
- def __init__(self) -> None:
- self.p = [] # (nc, )
- self.r = [] # (nc, )
- self.f1 = [] # (nc, )
- self.all_ap = [] # (nc, 10)
- self.ap_class_index = [] # (nc, )
-
- @property
- def ap50(self):
- """AP@0.5 of all classes.
- Return:
- (nc, ) or [].
- """
- return self.all_ap[:, 0] if len(self.all_ap) else []
-
- @property
- def ap(self):
- """AP@0.5:0.95
- Return:
- (nc, ) or [].
- """
- return self.all_ap.mean(1) if len(self.all_ap) else []
-
- @property
- def mp(self):
- """mean precision of all classes.
- Return:
- float.
- """
- return self.p.mean() if len(self.p) else 0.0
-
- @property
- def mr(self):
- """mean recall of all classes.
- Return:
- float.
- """
- return self.r.mean() if len(self.r) else 0.0
-
- @property
- def map50(self):
- """Mean AP@0.5 of all classes.
- Return:
- float.
- """
- return self.all_ap[:, 0].mean() if len(self.all_ap) else 0.0
-
- @property
- def map(self):
- """Mean AP@0.5:0.95 of all classes.
- Return:
- float.
- """
- return self.all_ap.mean() if len(self.all_ap) else 0.0
-
- def mean_results(self):
- """Mean of results, return mp, mr, map50, map"""
- return (self.mp, self.mr, self.map50, self.map)
-
- def class_result(self, i):
- """class-aware result, return p[i], r[i], ap50[i], ap[i]"""
- return (self.p[i], self.r[i], self.ap50[i], self.ap[i])
-
- def get_maps(self, nc):
- maps = np.zeros(nc) + self.map
- for i, c in enumerate(self.ap_class_index):
- maps[c] = self.ap[i]
- return maps
-
- def update(self, results):
- """
- Args:
- results: tuple(p, r, ap, f1, ap_class)
- """
- p, r, all_ap, f1, ap_class_index = results
- self.p = p
- self.r = r
- self.all_ap = all_ap
- self.f1 = f1
- self.ap_class_index = ap_class_index
-
-
-class Metrics:
- """Metric for boxes and masks."""
-
- def __init__(self) -> None:
- self.metric_box = Metric()
- self.metric_mask = Metric()
-
- def update(self, results):
- """
- Args:
- results: Dict{'boxes': Dict{}, 'masks': Dict{}}
- """
- self.metric_box.update(list(results["boxes"].values()))
- self.metric_mask.update(list(results["masks"].values()))
-
- def mean_results(self):
- return self.metric_box.mean_results() + self.metric_mask.mean_results()
-
- def class_result(self, i):
- return self.metric_box.class_result(i) + self.metric_mask.class_result(i)
-
- def get_maps(self, nc):
- return self.metric_box.get_maps(nc) + self.metric_mask.get_maps(nc)
-
- @property
- def ap_class_index(self):
- # boxes and masks have the same ap_class_index
- return self.metric_box.ap_class_index
-
-
-KEYS = [
- "train/box_loss",
- "train/seg_loss", # train loss
- "train/obj_loss",
- "train/cls_loss",
- "metrics/precision(B)",
- "metrics/recall(B)",
- "metrics/mAP_0.5(B)",
- "metrics/mAP_0.5:0.95(B)", # metrics
- "metrics/precision(M)",
- "metrics/recall(M)",
- "metrics/mAP_0.5(M)",
- "metrics/mAP_0.5:0.95(M)", # metrics
- "val/box_loss",
- "val/seg_loss", # val loss
- "val/obj_loss",
- "val/cls_loss",
- "x/lr0",
- "x/lr1",
- "x/lr2",]
-
-BEST_KEYS = [
- "best/epoch",
- "best/precision(B)",
- "best/recall(B)",
- "best/mAP_0.5(B)",
- "best/mAP_0.5:0.95(B)",
- "best/precision(M)",
- "best/recall(M)",
- "best/mAP_0.5(M)",
- "best/mAP_0.5:0.95(M)",]
diff --git a/spaces/Illumotion/Koboldcpp/otherarch/utils.cpp b/spaces/Illumotion/Koboldcpp/otherarch/utils.cpp
deleted file mode 100644
index 16e015c841b35a9282c201d5ae686482de6d9cbd..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/otherarch/utils.cpp
+++ /dev/null
@@ -1,236 +0,0 @@
-#include "utils.h"
-
-#include <cmath>
-#include <cstring>
-#include <fstream>
-#include <regex>
-#include <locale>
-#include <codecvt>
-#include <sstream>
-
-
-
-void utreplace(std::string & str, const std::string & needle, const std::string & replacement) {
- size_t pos = 0;
- while ((pos = str.find(needle, pos)) != std::string::npos) {
- str.replace(pos, needle.length(), replacement);
- pos += replacement.length();
- }
-}
-
-std::map<std::string, int32_t> json_parse(const std::string & fname) {
- std::map<std::string, int32_t> result;
-
- // read file into string
- std::string json;
- {
- std::ifstream ifs(fname);
- if (!ifs) {
- fprintf(stderr, "Failed to open %s\n", fname.c_str());
- exit(1);
- }
-
- json = std::string((std::istreambuf_iterator<char>(ifs)),
- (std::istreambuf_iterator<char>()));
- }
-
- if (json[0] != '{') {
- return result;
- }
-
- // parse json
- {
- bool has_key = false;
- bool in_token = false;
-
- std::string str_key = "";
- std::string str_val = "";
-
- int n = json.size();
- for (int i = 1; i < n; ++i) {
- if (!in_token) {
- if (json[i] == ' ') continue;
- if (json[i] == '"') {
- in_token = true;
- continue;
- }
- } else {
- if (json[i] == '\\' && i+1 < n) {
- if (has_key == false) {
- str_key += json[i];
- } else {
- str_val += json[i];
- }
- ++i;
- } else if (json[i] == '"') {
- if (has_key == false) {
- has_key = true;
- ++i;
- while (json[i] == ' ') ++i;
- ++i; // :
- while (json[i] == ' ') ++i;
- if (json[i] != '\"') {
- while (json[i] != ',' && json[i] != '}') {
- str_val += json[i++];
- }
- has_key = false;
- } else {
- in_token = true;
- continue;
- }
- } else {
- has_key = false;
- }
-
- ::utreplace(str_key, "\\u0120", " " ); // \u0120 -> space
- ::utreplace(str_key, "\\u010a", "\n"); // \u010a -> new line
- ::utreplace(str_key, "\\\"", "\""); // \\\" -> "
-
- try {
- result[str_key] = std::stoi(str_val);
- } catch (...) {
- //fprintf(stderr, "%s: ignoring key '%s' with value '%s'\n", fname.c_str(), str_key.c_str(), str_val.c_str());
-
- }
- str_key = "";
- str_val = "";
- in_token = false;
- continue;
- }
- if (has_key == false) {
- str_key += json[i];
- } else {
- str_val += json[i];
- }
- }
- }
- }
-
- return result;
-}
-
-
-void gpt_vocab::add_special_token(const std::string & token) {
- special_tokens.push_back(token);
-}
-
-
-std::string convert_to_utf8(const std::wstring & input) {
- std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
- return converter.to_bytes(input);
-}
-
-
-std::wstring convert_to_wstring(const std::string & input) {
- try {
- std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
- return converter.from_bytes(input);
- } catch (const std::range_error& e) {
- return L"";
- } catch (...) {
- return L"";
- }
-}
-
-void gpt_split_words(std::string str, std::vector<std::string>& words) {
- const std::string pattern = R"('s|'t|'re|'ve|'m|'ll|'d| ?[[:alpha:]]+| ?[[:digit:]]+| ?[^\s[:alpha:][:digit:]]+|\s+(?!\S)|\s+)";
- const std::regex re(pattern);
- std::smatch m;
-
- while (std::regex_search(str, m, re)) {
- for (auto x : m) {
- words.push_back(x);
- }
- str = m.suffix();
- }
-}
-
-std::vector<gpt_vocab::id> gpt_tokenize(const gpt_vocab & vocab, const std::string & text) {
- std::vector<std::string> words;
-
- // first split the text into words
- {
- std::string str = text;
-
- // Generate the subpattern from the special_tokens vector if it's not empty
- if (!vocab.special_tokens.empty()) {
- const std::regex escape(R"([\[\\\^\$\.\|\?\*\+\(\)\{\}])");
- std::string special_tokens_subpattern;
- for (const auto & token : vocab.special_tokens) {
- if (!special_tokens_subpattern.empty()) {
- special_tokens_subpattern += "|";
- }
- special_tokens_subpattern += std::regex_replace(token, escape, R"(\$&)");
- }
-
- std::regex re(special_tokens_subpattern);
- std::smatch m;
- // Split the text by special tokens.
- while (std::regex_search(str, m, re)) {
- // Split the substrings in-between special tokens into words.
- gpt_split_words(m.prefix(), words);
- // Add matched special tokens as words.
- for (auto x : m) {
- words.push_back(x);
- }
- str = m.suffix();
- }
- // Remaining text without special tokens will be handled below.
- }
-
- gpt_split_words(str, words);
- }
-
- // find the longest token that forms each word in words:
- std::vector<gpt_vocab::id> tokens;
- for (const auto & word : words) {
- for (int i = 0; i < word.size(); ){
- for (int j = word.size() - 1; j >= i; j--){
- auto cand = word.substr(i, j-i+1);
- auto it = vocab.token_to_id.find(cand);
- if (it != vocab.token_to_id.end()){ // word.substr(i, j-i+1) in vocab
- tokens.push_back(it->second);
- i = j + 1;
- break;
- }
- else if (j == i){ // word.substr(i, 1) has no matching
- fprintf(stderr, "%s: unknown token '%s'\n", __func__, word.substr(i, 1).data());
- i++;
- }
- }
- }
- }
-
-
- return tokens;
-}
-
-bool should_transpose_layer(std::string name)
-{
-
- if(name.find(".mlp.fc_in.weight")!=std::string::npos ||
- name.find(".attn.out_proj.weight")!=std::string::npos ||
- name.find(".attn.q_proj.weight")!=std::string::npos ||
- name.find(".attn.k_proj.weight")!=std::string::npos ||
- name.find(".attn.v_proj.weight")!=std::string::npos ||
- name.find("/attn/c_attn/w")!=std::string::npos ||
- name.find("/attn/c_proj/w")!=std::string::npos ||
- name.find("/mlp/c_fc/w")!=std::string::npos ||
- name.find("/mlp/c_proj/w")!=std::string::npos)
- {
- return true;
- }
- return false;
-}
-
-static std::vector<uint8_t> kcpp_compute_buf;
-void kcpp_graph_compute_helper(ggml_cgraph *graph, int n_threads)
-{
- struct ggml_cplan plan = ggml_graph_plan(graph, n_threads);
- if (plan.work_size > 0)
- {
- kcpp_compute_buf.resize(plan.work_size);
- plan.work_data = kcpp_compute_buf.data();
- }
- ggml_graph_compute(graph, &plan);
-}
\ No newline at end of file
diff --git a/spaces/JCTN/stable-diffusion-webui-cpu/README.md b/spaces/JCTN/stable-diffusion-webui-cpu/README.md
deleted file mode 100644
index 137b882ac1d4e2af67d65b232ada4d224c94336b..0000000000000000000000000000000000000000
--- a/spaces/JCTN/stable-diffusion-webui-cpu/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stable Diffusion Webui on Cpu
-emoji: 🏃
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: false
-python_version : 3.10.6
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jacob209/AUTOMATIC-promptgen-lexart/README.md b/spaces/Jacob209/AUTOMATIC-promptgen-lexart/README.md
deleted file mode 100644
index b4ccac2cff23e6df812ffd7086246c9a2aa3ce47..0000000000000000000000000000000000000000
--- a/spaces/Jacob209/AUTOMATIC-promptgen-lexart/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AUTOMATIC Promptgen Lexart
-emoji: 📚
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JeffJing/ZookChatBot/tls_client/response.py b/spaces/JeffJing/ZookChatBot/tls_client/response.py
deleted file mode 100644
index d37a10dd24d4426451e7dee59a7340bde8cc2871..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/tls_client/response.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from .cookies import cookiejar_from_dict, RequestsCookieJar
-from .structures import CaseInsensitiveDict
-
-from http.cookiejar import CookieJar
-from typing import Union
-import json
-
-
-class Response:
- """object, which contains the response to an HTTP request."""
-
- def __init__(self):
-
- # Reference of URL the response is coming from (especially useful with redirects)
- self.url = None
-
- # Integer Code of responded HTTP Status, e.g. 404 or 200.
- self.status_code = None
-
- # String of responded HTTP Body.
- self.text = None
-
- # Case-insensitive Dictionary of Response Headers.
- self.headers = CaseInsensitiveDict()
-
- # A CookieJar of Cookies the server sent back.
- self.cookies = cookiejar_from_dict({})
-
- def __enter__(self):
- return self
-
- def __repr__(self):
- return f"<Response [{self.status_code}]>"
-
- def json(self, **kwargs):
- """parse response body to json (dict/list)"""
- return json.loads(self.text, **kwargs)
-
-
-def build_response(res: Union[dict, list], res_cookies: RequestsCookieJar) -> Response:
- """Builds a Response object """
- response = Response()
- # Add target / url
- response.url = res["target"]
- # Add status code
- response.status_code = res["status"]
- # Add headers
- response_headers = {}
- if res["headers"] is not None:
- for header_key, header_value in res["headers"].items():
- if len(header_value) == 1:
- response_headers[header_key] = header_value[0]
- else:
- response_headers[header_key] = header_value
- response.headers = response_headers
- # Add cookies
- response.cookies = res_cookies
- # Add response body
- response.text = res["body"]
- return response
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/webui.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/webui.py
deleted file mode 100644
index 61f863d7ca3b8975222b90d4f66a2c6cdc9d2e0d..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/webui.py
+++ /dev/null
@@ -1,70 +0,0 @@
-
-from collections import namedtuple
-import os
-import gradio as gr
-
-from . import shared
-
-# with open("./assets/ChuanhuChat.js", "r", encoding="utf-8") as f, \
-# open("./assets/external-scripts.js", "r", encoding="utf-8") as f1:
-# customJS = f.read()
-# externalScripts = f1.read()
-
-
-def get_html(filename):
- path = os.path.join(shared.chuanhu_path, "web_assets", "html", filename)
- if os.path.exists(path):
- with open(path, encoding="utf8") as file:
- return file.read()
- return ""
-
-def webpath(fn):
- if fn.startswith(shared.assets_path):
- web_path = os.path.relpath(fn, shared.chuanhu_path).replace('\\', '/')
- else:
- web_path = os.path.abspath(fn)
- return f'file={web_path}?{os.path.getmtime(fn)}'
-
-ScriptFile = namedtuple("ScriptFile", ["basedir", "filename", "path"])
-
-def javascript_html():
- head = ""
- for script in list_scripts("javascript", ".js"):
- head += f'<script type="text/javascript" src="{webpath(script.path)}"></script>\n'
- for script in list_scripts("javascript", ".mjs"):
- head += f'<script type="module" src="{webpath(script.path)}"></script>\n'
- return head
-
-def css_html():
- head = ""
- for cssfile in list_scripts("stylesheet", ".css"):
- head += f'<link rel="stylesheet" property="stylesheet" href="{webpath(cssfile.path)}">'
- return head
-
-def list_scripts(scriptdirname, extension):
- scripts_list = []
- scripts_dir = os.path.join(shared.chuanhu_path, "web_assets", scriptdirname)
- if os.path.exists(scripts_dir):
- for filename in sorted(os.listdir(scripts_dir)):
- scripts_list.append(ScriptFile(shared.assets_path, filename, os.path.join(scripts_dir, filename)))
- scripts_list = [x for x in scripts_list if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]
- return scripts_list
-
-
-def reload_javascript():
- js = javascript_html()
- js += ''
- js += ''
-
- css = css_html()
-
- def template_response(*args, **kwargs):
- res = GradioTemplateResponseOriginal(*args, **kwargs)
- res.body = res.body.replace(b'</head>', f'{js}</head>'.encode("utf8"))
- res.body = res.body.replace(b'