diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack For Kerio Winroute Firewall 670.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack For Kerio Winroute Firewall 670.md
deleted file mode 100644
index cd170f0145684199b40d90e6e5d4c4c07b97433d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack For Kerio Winroute Firewall 670.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
Kerio WinRoute Firewall 670: A Comprehensive Network Security Solution
-
Kerio WinRoute Firewall 670 is a software product that provides advanced protection for your network and data. It is an all-in-one Unified Threat Management (UTM) solution that includes a next-generation firewall and router, Intrusion Detection and Prevention (IPS), gateway anti-virus, VPN and web content and application filtering.
-
In this article, we will review some of the key features and benefits of Kerio WinRoute Firewall 670 and why you should consider it for your network security needs.
Kerio WinRoute Firewall 670 offers a variety of content security features such as MP3 music download blocking, filtering for potentially dangerous executable files or blocking of annoying pop-up windows. It also allows you to create custom rules and policies to control the traffic on your network based on user, group, time, protocol, port, source or destination IP address, URL or domain name.
-
Kerio WinRoute Firewall 670 also acts as a router that supports multiple Internet connections, load balancing, failover, bandwidth management and quality of service (QoS). You can optimize the performance and reliability of your network by distributing the traffic among different Internet links or prioritizing the traffic based on the type of application or service.
-
Intrusion Detection and Prevention (IPS)
-
Kerio WinRoute Firewall 670 monitors the network traffic for any signs of malicious activity or attacks. It uses a signature-based engine that can detect and block known threats such as worms, trojans, denial-of-service attacks, port scans, buffer overflows and more. It also uses a behavior-based engine that can identify and stop unknown or zero-day attacks based on their abnormal patterns or anomalies.
-
Kerio WinRoute Firewall 670 can also prevent intrusions by applying security patches to vulnerable applications or systems on your network. It can automatically update itself with the latest signatures and patches from Kerio Technologies or third-party vendors.
-
Gateway Anti-Virus
-
Kerio WinRoute Firewall 670 scans all incoming and outgoing traffic for viruses, spyware, malware and other malicious code. It uses a dual-engine approach that combines the power of BitDefender and Kerio antivirus engines to provide maximum detection and prevention. It can also scan email attachments, web downloads, FTP transfers and VPN tunnels.
-
Kerio WinRoute Firewall 670 can also quarantine or delete infected files, notify the administrator or user, or block the source of infection. It can also update itself with the latest virus definitions from Kerio Technologies or third-party vendors.
-
VPN
-
Kerio WinRoute Firewall 670 allows you to create secure connections between your network and remote sites or users. It supports various VPN protocols such as IPsec, L2TP, PPTP and SSL. It also supports Kerio VPN Client, a proprietary software that provides easy-to-use and secure VPN access for Windows, Mac OS X and Linux users.
-
Kerio WinRoute Firewall 670 can also integrate with Active Directory or LDAP servers to authenticate VPN users. It can also encrypt VPN traffic with strong algorithms such as AES or 3DES.
-
-
Web Content and Application Filtering
-
Kerio WinRoute Firewall 670 allows you to control what websites or applications your users can access on the Internet. It uses a category-based filtering system that can block or allow access to over 141 million websites in 74 categories such as adult content, gambling, social networking, gaming, shopping and more. You can also create custom categories or lists to suit your specific needs.
-
Kerio WinRoute Firewall 670 can also filter applications based on their type or protocol such as instant messaging, peer-to-peer file sharing, VoIP, streaming media and more. You can also limit the bandwidth usage or time spent on certain applications or websites.
-
Conclusion
-
Kerio WinRoute Firewall 670 is a comprehensive network security solution that provides advanced protection for your network and data. It is an all-in-one Unified Threat Management (UTM) solution that includes a next-generation firewall and router, Intrusion Detection and Prevention (IPS), gateway anti-virus, VPN and web content and application filtering.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson PX660 Adjustment Program A Utility Program for Printer Maintenance and Repair.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson PX660 Adjustment Program A Utility Program for Printer Maintenance and Repair.md
deleted file mode 100644
index 01255183f236d8618ea782510dbb54ab0ec0bd9b..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Epson PX660 Adjustment Program A Utility Program for Printer Maintenance and Repair.md
+++ /dev/null
@@ -1,157 +0,0 @@
-
-
-
-
-
-
-
-
Epson PX660 Adjustment Program: How to Reset Your Printer and Fix Common Problems
-
-
-
-
Introduction
-
-
-
Explain what is Epson PX660 Adjustment Program and what it can do
-
Mention some common problems that can be solved by using the program
-
Provide a brief overview of the article
-
-
-
-
-
What is Epson PX660 Adjustment Program?
-
-
-
Describe the program as a utility tool for Epson Stylus Photo PX660 printer model
-
Explain that the program can reset the waste ink pad counter, prescribe the print head ID, do printer initialization and other functions
-
Mention that the program is original and works only with USB on Windows
-
-
-
-
-
Why Do You Need Epson PX660 Adjustment Program?
-
-
-
Explain that the waste ink pad counter is a feature that prevents the printer from overflowing with ink
-
Mention that when the counter reaches a certain limit, the printer stops working and displays an error message
-
Explain that the program can reset the counter and allow the printer to work again
-
Mention some other benefits of using the program, such as improving print quality, saving ink and paper, etc.
-
-
-
-
-
How to Download and Install Epson PX660 Adjustment Program?
-
-
-
Provide a link to download the program from a reliable source (e.g. ORPYS)
-
Explain how to install the program on the computer
-
Mention some requirements and precautions, such as disabling antivirus, binding to one PC, etc.
-
-
-
-
-
How to Use Epson PX660 Adjustment Program?
-
-
-
Explain how to connect the printer to the computer via USB
-
Explain how to run the program and select the printer model
-
Explain how to access different functions and settings of the program
-
Provide some screenshots and examples of using the program
-
-
-
-
-
Conclusion
-
-
-
Summarize the main points of the article
-
Emphasize the benefits and advantages of using Epson PX660 Adjustment Program
-
Provide some tips and recommendations for maintaining the printer
-
Invite readers to share their feedback and questions
-
-
-
-
-
FAQs
-
-
-
Provide some frequently asked questions and answers about Epson PX660 Adjustment Program:
What are the advantages of using this program over other methods?
How often do I need to reset my waste ink pad counter?
How can I check if my print head ID is correct?
What should I do if I encounter any errors or problems while using this program?
Where can I find more information or support for this program?
-
-
-
-
Epson PX660 Adjustment Program: How to Reset Your Printer and Fix Common Problems
-
If you own an Epson Stylus Photo PX660 printer, you may have encountered some issues that prevent you from printing your photos or documents. For example, you may see an error message saying that your printer's ink pads are at the end of their service life, or that your print head needs alignment or cleaning. These problems can be frustrating and costly, but there is a simple solution: Epson PX660 Adjustment Program.
-
Epson PX660 Adjustment Program is a utility tool that allows you to reset your printer's waste ink pad counter, prescribe the print head ID, do printer initialization and other functions. It is an original program that works only with USB on Windows computers. By using this program, you can fix common problems with your printer and improve its performance and quality.
In this article, we will explain what Epson PX660 Adjustment Program is, why you need it, how to download and install it, and how to use it. We will also provide some frequently asked questions and answers about this program. Read on to learn more!
-
What is Epson PX660 Adjustment Program?
-
Epson PX660 Adjustment Program is a utility tool for Epson Stylus Photo PX660 printer model. It is a service adjustment program that allows you to perform various functions on your printer. Some of these functions are:
-
-
Resetting the waste ink pad counter: This feature prevents your printer from overflowing with ink by counting how much ink is used during printing and cleaning cycles. When the counter reaches a certain limit, your printer stops working and displays an error message. By using this program, you can reset the counter and continue printing.
-
Prescribing the print head ID: This feature allows you to assign a unique ID to your print head, which helps your printer recognize it and adjust its settings accordingly. This can improve your print quality and prevent errors such as missing colors or lines.
-
Doing printer initialization: This feature allows you to reset your printer's settings to their factory defaults. This can help you solve some problems that may occur due to incorrect or corrupted settings.
-
Other functions: The program also allows you to perform other functions such as checking nozzle patterns, cleaning print heads, reading or writing EEPROM data, etc.
-
-
Epson PX660 Adjustment Program is an original program that works only with USB on Windows computers. It is not compatible with other operating systems or connection methods. It is also attached to one PC only, which means you cannot use it on multiple computers. You need to purchase a license key for each PC you want to use it on.
-
-
Why Do You Need Epson PX660 Adjustment Program?
- The main reason you need Epson PX660 Adjustment Program is the waste ink pad counter. This feature prevents your printer from overflowing with ink by counting how much ink is used during printing and cleaning cycles. When the counter reaches a certain limit, the printer stops working and displays an error message saying that the ink pads are at the end of their service life. The program can reset the counter and allow the printer to work again.
The program can also do printer initialization, which resets your printer's settings to their factory defaults. This can help you solve some problems that may occur due to incorrect or corrupted settings, for example if your printer prints too dark or too light, or if it does not respond properly to commands.
Other benefits of using this program include:
Saving ink and paper by optimizing your print settings and cleaning cycles
Extending your printer's lifespan by maintaining its parts in good condition
Troubleshooting your printer's problems by accessing various diagnostic tools
Updating your printer's firmware by downloading new versions from online sources
How to Download and Install Epson PX660 Adjustment Program?
- To use Epson PX660 Adjustment Program, you need to download it from a reliable source and install it on your computer. Here are the steps:
1. Download Epson PX660 Adjustment Program from the ORPYS website (https://orpys.com/en/epson/164-px660-adjustment-program.html). This website offers original programs for various Epson printer models at affordable prices. You can also find more information about each program's features and requirements on this website.
2. After downloading the file (px660_adjustment_program.zip), extract it using a zip extractor software such as WinRAR or 7-Zip.
3. Open the extracted folder and run AdjProg.exe as administrator.
4. Follow the instructions on the screen to install the program on your computer.
5. After installation, you will receive a license key via email. You need this key to activate the program and access its functions.
6. To activate the program, run AdjProg.exe again as administrator and enter your license key when prompted.
7. After activation, you can start using Epson PX660 Adjustment Program.
Some requirements and precautions for downloading and installing this program are:
You need a Windows computer with a USB port to use this program.
You need to disable your antivirus software before installing this program, as some antivirus programs may block or delete it.
You need to bind this program to one PC only, as it will not work on multiple computers.
You need to purchase a new license key for each PC you want to use this program on.
How to Use Epson PX660 Adjustment Program?
- To use Epson PX660 Adjustment Program, you need to connect your printer to your computer via USB cable and run AdjProg.exe as administrator. Then you can access the different functions and settings of the program by following these steps.
To reset the waste ink pad counter:
1. Select Particular adjustment mode from the main menu.
2. Select Waste ink pad counter from the Maintenance menu.
3. Click the Check button to see how much ink has been used by your printer.
4. Click the Initialization button to reset the waste ink pad counter.
5. Turn off your printer when prompted and turn it back on after 10 seconds.
6. Click the Finish button when done.
To prescribe your print head ID, follow these steps:
1. Select Particular adjustment mode from the main menu.
2. Select Head ID input from the Head maintenance menu.
3. Enter your print head ID in hexadecimal format (e.g., 0A0B0C0D) or click the Get button to read it from EEPROM data.
4. Click the Set button to write it into EEPROM data.
5. Turn off your printer when prompted and turn it back on after 10 seconds.
6. Click the Finish button when done.
To do printer initialization, follow these steps:
1. Select Particular adjustment mode from the main menu.
2. Select Initialize (PF deterioration offset) from the Initial setting menu.
3. Click the OK button to confirm the initialization process.
4. Turn off your printer when prompted and turn it back on after 10 seconds.
5. Click the Finish button when done.
You can also use other functions and settings of this program by exploring the different menus and options. For example, you can check nozzle patterns, clean print heads, read or write EEPROM data, etc. Here are some screenshots and examples of using this program:
-
-
-
-
Conclusion
- In conclusion, Epson PX660 Adjustment Program is a useful tool that can help you reset your printer and fix common problems. By using this program, you can reset your waste ink pad counter, prescribe your print head ID, do printer initialization and other functions. This can improve your print quality, save ink and paper, extend your printer's lifespan, troubleshoot your printer's problems, and update your printer's firmware.
If you own an Epson Stylus Photo PX660 printer, we recommend you download and install this program from the ORPYS website (https://orpys.com/en/epson/164-px660-adjustment-program.html). This website offers original programs for various Epson printer models at affordable prices. You can also find more information about each program's features and requirements on this website.
We hope this article has been helpful and informative for you. If you have any feedback or questions about Epson PX660 Adjustment Program, please feel free to share them with us in the comments section below. We would love to hear from you!
FAQs
- Here are some frequently asked questions and answers about Epson PX660 Adjustment Program:
Q: What are the advantages of using this program over other methods?
A: This program is an original program that works only over USB on Windows computers, and it allows you to reset the waste ink pad counter, prescribe the print head ID, do printer initialization and perform other functions that may not be available or easy to perform with other methods. It is also easy to use and has a user-friendly interface that guides you through each step of the process. Note that it is attached to one PC only, so you need to purchase a license key for each PC you want to use it on.
Q: How often do I need to reset my waste ink pad counter?
A: The waste ink pad counter is a feature that prevents your printer from overflowing with ink by counting how much ink is used during printing and cleaning cycles. When the counter reaches a certain limit (usually around 6700 pages), your printer stops working and displays an error message saying that your ink pads are at the end of their service life. How often you need to reset the counter depends on how much you use your printer and how much ink is used during each cycle. Generally speaking, you may need to reset it once every few months or years depending on your usage patterns.
Q: How can I check if my print head ID is correct?
A: The print head ID is a unique ID that helps your printer recognize your print head and adjust its settings accordingly. This can improve your print quality and prevent errors such as missing colors or lines. To check your print head ID with Epson PX660 Adjustment Program, select Particular adjustment mode from the main menu, select Head ID input from the Head maintenance menu, enter your print head ID in hexadecimal format (e.g., 0A0B0C0D) or click the Get button to read it from EEPROM data, and compare the entered or read ID with the one printed on your print head label (usually located under the cartridge cover). If they match, your print head ID is correct. If they don't match, write the correct ID into EEPROM data by clicking the Set button.
Q: What should I do if I encounter any errors or problems while using this program?
A: Try the following: make sure you have downloaded and installed the correct version of this program for your printer model from the ORPYS website (https://orpys.com/en/epson/164-px660-adjustment-program.html); make sure you have disabled your antivirus software before installing the program, as some antivirus programs may block or delete it; make sure your printer is connected to your computer via USB cable and turned on before running the program; make sure you have entered the correct license key when activating the program; and make sure you have followed the on-screen instructions carefully.
Q: Where can I find more information or support for this program?
A: You can find more information or support by visiting the ORPYS website (https://orpys.com/en/epson/164-px660-adjustment-program.html). This website offers original programs for various Epson printer models at affordable prices, and you can find details about each program's features and requirements there. You can also contact ORPYS customer service by email (info@orpys.com) or phone (+380636000000) if you have any questions or issues regarding this program.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Tally ERP 9 with Crack Pros and Cons.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Tally ERP 9 with Crack Pros and Cons.md
deleted file mode 100644
index dc7a141e6eb4dfa4039c41c0150c74b0e9b3cf09..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Tally ERP 9 with Crack Pros and Cons.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
How to Free Download Tally ERP 9 with Crack Full Version
-
Tally ERP 9 is a popular business software for accounting, inventory, and payroll. It is used by millions of small and medium enterprises in India and abroad. Tally ERP 9 helps you manage your finances and operations efficiently and easily. It also offers features like statutory compliance, GST filing, remote access, and customization.
However, Tally ERP 9 is not a free software. You need to purchase a license to use it legally and get updates and support. The license fee depends on the edition and the number of users you need. For example, the Silver edition for a single user costs Rs. 18,000 plus GST, while the Gold edition for unlimited users costs Rs. 54,000 plus GST.
-
But what if you want to use Tally ERP 9 without paying anything? Is there a way to get it for free? The answer is yes, but it comes with a lot of risks and drawbacks. In this article, we will explain how to free download Tally ERP 9 with crack full version, what are the dangers of using a cracked software, and what are the alternatives to Tally ERP 9.
-
How to Free Download Tally ERP 9 with Crack Full Version
-
A crack is a modified version of a software that bypasses its security features and allows you to use it without a license. There are many websites that claim to offer Tally ERP 9 with crack full version for free download. Some of them are:
To download Tally ERP 9 with crack from these websites, you need to follow these steps:
-
-
-
Click on the download link or button on the website.
-
Wait for the file to be downloaded on your computer.
-
Extract the zip file using WinRAR or any other software.
-
Run the setup file and follow the instructions.
-
Replace the original tally.exe file with the cracked one in the installation folder.
-
Run Tally ERP 9 as administrator and activate it using any fake serial number and email address.
-
-
Congratulations! You have successfully installed Tally ERP 9 with crack full version on your computer. You can now use it without any restrictions.
-
What are the Dangers of Using Tally ERP 9 with Crack Full Version
-
While using Tally ERP 9 with crack may seem tempting, it is not a wise decision. There are many risks and disadvantages of using a cracked software that can harm your business and your computer. Some of them are:
-
-
You are violating the intellectual property rights of Tally Solutions Pvt Ltd, the developer of Tally ERP 9. This is illegal and unethical, and can result in legal action against you.
-
You are exposing your computer to viruses, malware, spyware, ransomware, and other malicious programs that can damage your data and system. Cracked software often contains hidden codes that can infect your computer and compromise your security.
-
You are missing out on important updates and patches that fix bugs, improve performance, and add new features to Tally ERP 9. Cracked software cannot be updated online, and you have to rely on outdated versions that may not work properly or support new requirements.
-
You are losing access to technical support and customer service from Tally Solutions Pvt Ltd. If you face any problem or issue with Tally ERP 9, you cannot contact the official support team or get any assistance from them.
-
You are risking your business reputation and credibility by using pirated software. Your customers, suppliers,
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Activation Key PhotoStage Slideshow Producer Keygen.epubhttps Scoutmails.com Index301.php K Activa.md b/spaces/1gistliPinn/ChatGPT4/Examples/Activation Key PhotoStage Slideshow Producer Keygen.epubhttps Scoutmails.com Index301.php K Activa.md
deleted file mode 100644
index d4f671629efa6147350e2f3109eaa91110ed98af..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Activation Key PhotoStage Slideshow Producer Keygen.epubhttps Scoutmails.com Index301.php K Activa.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Activation Key PhotoStage Slideshow Producer Keygen.epubhttps: scoutmails.com index301.php k Activa
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/1v1 LOL Mod Menu APK How to Get God Mode Unlimited Ammo and Fly Mode on Your Device.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/1v1 LOL Mod Menu APK How to Get God Mode Unlimited Ammo and Fly Mode on Your Device.md
deleted file mode 100644
index f34e7a66a0977ee18df7d733497f5724835e7737..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/1v1 LOL Mod Menu APK How to Get God Mode Unlimited Ammo and Fly Mode on Your Device.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
How to Download 1v1 Lol Mod Menu and Enjoy Unlimited Features
-
If you are a fan of online shooter games, you might have heard of 1v1 Lol, a popular browser-based game that lets you compete with other players in various modes and maps. But did you know that you can also download 1v1 Lol mod menu, a hack script that gives you access to cheats and mods that can make your gameplay more fun and exciting? In this article, we will tell you what 1v1 Lol is, what 1v1 Lol mod menu offers, and how to download and install it on your device.
-
What is 1v1 Lol and Why You Should Play It
-
A Fun and Fast-Paced Online Shooter Game
-
1v1 Lol is a free online shooter game that combines elements of Fortnite and PUBG. You can play as a solo player or team up with your friends in different game modes such as 1v1, 2v2, 4v4, Battle Royale, Box Fight, Party, and more. You can also customize your character's appearance, weapons, skins, and emotes. The game is easy to play but hard to master, as you need to have good aim, reflexes, and building skills to survive.
One of the best things about 1v1 Lol is that it offers a variety of game modes and maps to suit your preferences and mood. You can choose from different modes such as:
-
-
1v1: A classic mode where you face off against another player in a small map. The first one to eliminate the other wins.
-
2v2: A team mode where you pair up with another player and fight against another duo in a medium-sized map. The team with the most kills wins.
-
4v4: A squad mode where you join a team of four players and battle against another team of four in a large map. The team with the most kills wins.
-
Battle Royale: A survival mode where you join a lobby of up to 10 players and parachute into a random map. You have to loot weapons, ammo, health kits, shields, and materials while avoiding the storm that shrinks the map. The last one standing wins.
-
Box Fight: A creative mode where you spawn in a box-shaped map with walls that you can edit. You have to use your building skills and weapons to eliminate your opponents.
-
Party: A casual mode where you can join or create a room with up to 10 players and play any mode or map you want. You can also chat with your friends and invite them to your room.
-
-
What is 1v1 Lol Mod Menu and What Does It Offer
-
A Hack Script That Gives You Access to Cheats and Mods
-
If you want to spice up your gameplay and have an edge over your opponents, you might want to try 1v1 Lol mod menu, a hack script that gives you access to cheats and mods that can enhance your gaming experience. 1v1 Lol mod menu is a cheat engine script that you can download and run on your device. It allows you to enable or disable various cheats and mods that can affect your gameplay, such as god mode, infinite ammo, rapid fire, free fly, no clip, no recoil, and more. With 1v1 Lol mod menu, you can have more fun and dominate your opponents in any mode or map.
-
-
Some of the Features of 1v1 Lol Mod Menu
-
God Mode, Infinite Ammo, Rapid Fire, and More
-
One of the features of 1v1 Lol mod menu is that it gives you god mode, which means you are invincible and cannot be killed by any weapon or damage. You can also have infinite ammo, which means you never run out of bullets or rockets. You can also have rapid fire, which means you can shoot faster and more frequently. These cheats can make you unstoppable and give you an unfair advantage over your enemies.
-
Free Fly, No Clip, No Recoil, and More
-
Another feature of 1v1 Lol mod menu is that it gives you free fly, which means you can fly around the map without any limitations. You can also have no clip, which means you can pass through walls and objects. You can also have no recoil, which means your aim is steady and accurate. These mods can make you more agile and flexible and give you more freedom and control over your movements.
-
How to Download and Install 1v1 Lol Mod Menu on Your Device
-
The Requirements and Precautions for Using 1v1 Lol Mod Menu
-
Before you download and install 1v1 Lol mod menu on your device, there are some requirements and precautions that you need to know. First of all, you need to have a device that can run 1v1 Lol, such as a PC, a laptop, a tablet, or a smartphone. You also need to have a stable internet connection and a web browser that supports the game. Secondly, you need to have Cheat Engine installed on your device. Cheat Engine is a software that allows you to modify games and applications by changing their code. You can download Cheat Engine from its official website. Thirdly, you need to be aware of the risks and consequences of using 1v1 Lol mod menu. Using cheats and mods can be considered as cheating and hacking by the game developers and other players. You might get banned from the game or reported by other players if they notice your abnormal behavior. You might also ruin the fun and challenge of the game for yourself and others. Therefore, use 1v1 Lol mod menu at your own discretion and responsibility.
-
The Steps to Download and Install 1v1 Lol Mod Menu
-
Download Cheat Engine and the Mod Menu Script
-
The first step to download and install 1v1 Lol mod menu is to download Cheat Engine and the mod menu script from their respective sources. You can download Cheat Engine from its official website or from any other trusted source. You can download the mod menu script from this link or from any other reliable source. Make sure to save them in a folder that you can easily access.
-
Run Cheat Engine and Select 1v1 Lol as the Process
-
The second step to download and install 1v1 Lol mod menu is to run Cheat Engine and select 1v1 Lol as the process that you want to modify. To do this, open Cheat Engine on your device and click on the computer icon on the top left corner. A window will pop up with a list of processes that are running on your device. Find and select the process that corresponds to 1v1 Lol (it might be named as "chrome.exe" or "firefox.exe" depending on your browser) and click on "Open".
-
Load the Mod Menu Script and Enable the Cheats You Want
-
The third step to download and install 1v1 Lol mod menu is to load the mod menu script and enable the cheats you want. To do this, click on the folder icon on the top left corner of Cheat Engine and browse to the folder where you saved the mod menu script. Select the script and click on "Open". A window will pop up with a list of cheats and mods that are available in the script. You can check or uncheck the boxes next to the cheats and mods that you want to enable or disable. You can also change the values of some of the cheats and mods by double-clicking on them and typing in the desired value. For example, you can change the speed of your character, the gravity of the map, or the size of your weapon. Once you are done, click on "OK".
-
Conclusion
-
1v1 Lol is a fun and fast-paced online shooter game that you can play with your friends or strangers in various modes and maps. However, if you want to have more fun and excitement, you can also download 1v1 Lol mod menu, a hack script that gives you access to cheats and mods that can enhance your gameplay. 1v1 Lol mod menu offers features such as god mode, infinite ammo, rapid fire, free fly, no clip, no recoil, and more. To download and install 1v1 Lol mod menu on your device, you need to have Cheat Engine and the mod menu script. You also need to run Cheat Engine and select 1v1 Lol as the process. Then, you need to load the mod menu script and enable the cheats you want. However, you also need to be aware of the risks and consequences of using 1v1 Lol mod menu. You might get banned from the game or reported by other players if they notice your abnormal behavior. You might also ruin the fun and challenge of the game for yourself and others. Therefore, use 1v1 Lol mod menu at your own discretion and responsibility.
-
FAQs
-
Here are some of the frequently asked questions about 1v1 Lol mod menu:
-
-
Q: Is 1v1 Lol mod menu safe to use?
-
A: 1v1 Lol mod menu is not an official or authorized product of 1v1 Lol or its developers. It is a hack script that modifies the game code and can be detected by anti-cheat systems or other players. Therefore, using 1v1 Lol mod menu is not safe and can result in getting banned from the game or reported by other players.
-
Q: How can I update 1v1 Lol mod menu?
-
A: 1v1 Lol mod menu is not a static product that works forever. It is a dynamic script that needs to be updated regularly to match the updates of 1v1 Lol and its anti-cheat systems. Therefore, you need to check for updates of 1v1 Lol mod menu from its source or from other reliable sources. You also need to download and install the latest version of Cheat Engine and the mod menu script.
-
Q: Can I use 1v1 Lol mod menu on any device?
-
A: 1v1 Lol mod menu is compatible with any device that can run 1v1 Lol, such as a PC, a laptop, a tablet, or a smartphone. However, you also need to have Cheat Engine installed on your device, which might not be available for some devices or operating systems. Therefore, you need to check if your device can support Cheat Engine before using 1v1 Lol mod menu.
-
Q: Can I use 1v1 Lol mod menu on any mode or map?
-
A: 1v1 Lol mod menu works on any mode or map that is available in 1v1 Lol, such as 1v1, 2v2, 4v4, Battle Royale, Box Fight, Party, and more. However, some of the cheats and mods might not work properly or have different effects depending on the mode or map. Therefore, you need to test and adjust the cheats and mods according to the mode or map that you are playing.
-
Q: Can I use 1v1 Lol mod menu with my friends?
-
A: 1v1 Lol mod menu can be used with your friends if they also have Cheat Engine and the mod menu script installed on their devices. You can join or create a room with your friends and play any mode or map you want with them. However, you also need to be careful not to get caught by other players or by the game developers if you use 1v1 Lol mod menu with your friends.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 Everything You Need to Know to Download and Play PS2 Games on Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 Everything You Need to Know to Download and Play PS2 Games on Android.md
deleted file mode 100644
index e984edb0eae84663a777944d90e48c7d97a2cae6..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 Everything You Need to Know to Download and Play PS2 Games on Android.md
+++ /dev/null
@@ -1,136 +0,0 @@
-
-
Where to Download AetherSX2 Games
-
The Sony PlayStation 2 (PS2) is one of the most popular and nostalgic gaming consoles of all time. It has a huge library of games that span across various genres, from action and adventure to sports and racing. If you want to relive your childhood memories or discover new games that you missed out on, you can now play PS2 games on your Android device using the AetherSX2 emulator.
-
AetherSX2 is the best PS2 emulator for Android by a country mile. It is based on the PCSX2 emulator for PC, which is a long-running, well-established program that can run most PS2 games smoothly. AetherSX2 has received the approval and support from the PCSX2 developers, unlike some other shady emulators that stole the code and violated the license agreement.
In this article, we will show you how to download AetherSX2 emulator, how to download games for AetherSX2, and how to install and play them on your Android device. We will also provide some tips and tricks to optimize your gaming experience and recommend some of the best games to play on AetherSX2.
-
How to download AetherSX2 emulator
-
The first step to play PS2 games on Android is to download the AetherSX2 emulator. There are two ways to get it: from the Google Play Store or from the official website.
-
How to get it from the Google Play Store
-
The easiest way to download AetherSX2 emulator is to get it from the Google Play Store. You can simply search for "AetherSX2" on the store or use this link: (https://play.google.com/store/apps/details?id=xyz.aethersx2.android). The app is free to download and use, so don't fall for any scams that ask you to pay for it.
-
Once you have downloaded the app, you can open it and grant it the necessary permissions. You will also need to download a PS2 BIOS file, which is required for the emulator to run. You can get it from your own PS2 console or from other sources online. Just make sure you don't download any malware or viruses along with it.
-
How to get it from the official website
-
If you prefer to sideload the APK file of the emulator, you can also get it from the official website: (https://www.aethersx2.com/). You can find different versions of the app there, including stable releases and alpha builds. You can choose whichever one suits your needs, but keep in mind that alpha builds may have more bugs and issues than stable releases.
-
To install the APK file, you will need to enable unknown sources on your device settings. You can do this by going to Settings > Security > Unknown sources and toggling it on. Then, you can tap on the downloaded APK file and follow the instructions to install it. You will also need a PS2 BIOS file, as mentioned above.
-
How to play PS2 games on Android with AetherSX2
-AetherSX2 emulator settings and performance tips
-Best PS2 games to download for AetherSX2
-AetherSX2 vs other PS2 emulators for Android
-AetherSX2 compatibility list and game reviews
-Where to find PS2 BIOS and ISO files for AetherSX2
-How to use save states and cheats in AetherSX2
-AetherSX2 controller support and configuration guide
-How to install and update AetherSX2 on Android
-AetherSX2 Vulkan vs OpenGL graphics comparison
-AetherSX2 minimum requirements and recommended devices
-How to fix common issues and errors in AetherSX2
-AetherSX2 multiplayer and online features overview
-How to stream PS2 games from AetherSX2 to PC or TV
-AetherSX2 internal resolution scaling and quality settings
-How to backup and restore your AetherSX2 data and games
-How to mod PS2 games and use custom skins in AetherSX2
-How to record and share your gameplay videos from AetherSX2
-How to optimize your battery life and performance while using AetherSX2
-How to get free PS2 games legally for AetherSX2
-How to run PS1 and PSP games on AetherSX2
-How to connect a PS4 or Xbox controller to AetherSX2
-How to play PS2 games on Chromebook with AetherSX2
-How to download and play GTA San Andreas on AetherSX2
-How to download and play God of War on AetherSX2
-How to download and play Kingdom Hearts on AetherSX2
-How to download and play Final Fantasy X on AetherSX2
-How to download and play Metal Gear Solid 3 on AetherSX2
-How to download and play Shadow of the Colossus on AetherSX2
-How to download and play Resident Evil 4 on AetherSX2
-How to download and play Silent Hill 2 on AetherSX2
-How to download and play Persona 4 on AetherSX2
-How to download and play Dragon Ball Z Budokai Tenkaichi 3 on AetherSX2
-How to download and play Naruto Shippuden Ultimate Ninja 5 on AetherSX2
-How to download and play Tekken 5 on AetherSX2
-How to download and play Mortal Kombat Deception on AetherSX2
-How to download and play Need for Speed Most Wanted on AetherSX2
-How to download and play Burnout 3 Takedown on AetherSX2
-How to download and play Gran Turismo 4 on AetherSX2
-How to download and play Ratchet & Clank on AetherSX2
-How to download and play Jak & Daxter on AetherSX2
-How to download and play Sly Cooper on AetherSX2
-How to download and play Crash Bandicoot The Wrath of Cortex on AetherSX2
-How to download and play Spyro Enter the Dragonfly on AetherSX2
-How to download and play Lego Star Wars II The Original Trilogy on AetherSX2
-
How to download games for AetherSX2
-
Now that you have downloaded the AetherSX2 emulator, you will need some games to play on it. The emulator supports PS2 game ISOs or ROMs, which are digital copies of game discs. You can either rip them from your own PS2 discs using a PC or a modded console, or you can download them from legal sources online. However, we do not condone piracy or illegal downloading of games, so you should only download games that you own or have the right to use. You can find some legal sources of PS2 game ISOs or ROMs here: (https://www.emuparadise.me/Sony_Playstation_2_ISOs/41) (https://www.freeroms.com/ps2_roms.htm) (https://romsmania.cc/roms/playstation-2). Be careful of any fake or malicious links that may harm your device or compromise your privacy.
-
How to extract and transfer them to your device
-
Once you have downloaded the PS2 game ISOs or ROMs, you will need to extract them from their compressed formats, such as ZIP or RAR. You can use any file manager app that supports extraction, such as (https://play.google.com/store/apps/details?id=com.rarlab.rar) or (https://play.google.com/store/apps/details?id=com.estrongs.android.pop). You can also use a PC to extract them and then transfer them to your device via USB cable or cloud storage.
-
After extracting the PS2 game ISOs or ROMs, you will need to transfer them to a folder on your device where the AetherSX2 emulator can access them. You can create a folder named "AetherSX2" on your internal storage or SD card and copy the game files there. Alternatively, you can use the default folder that the emulator creates when you first run it, which is "/storage/emulated/0/AetherSX2".
-
How to install and play games on AetherSX2
-
Now that you have the AetherSX2 emulator and the PS2 game ISOs or ROMs on your device, you are ready to install and play them. Here are the steps to follow:
-
How to load and run games on the emulator
-
To load and run games on the AetherSX2 emulator, you need to open the app and tap on the "Games" tab. You will see a list of games that are available in your device's storage. You can also browse for games by tapping on the "Browse" button and navigating to the folder where you stored them.
-
Once you find the game that you want to play, tap on it and wait for it to load. The emulator will automatically detect the game's region and language settings and apply them accordingly. You will see a loading screen with some information about the game, such as its title, cover art, developer, publisher, genre, and release date.
-
After the loading screen, the game will start running on the emulator. You will see a virtual controller overlay on the screen, which mimics the PS2 controller layout. You can use it to control the game as you would on a real PS2 console. You can also hide or show the controller overlay by tapping on the "Menu" button and selecting "Toggle On-screen Controls".
-
How to adjust settings and controls for optimal performance and experience
-
To adjust settings and controls for optimal performance and experience, you need to tap on the "Menu" button and select "Settings". You will see various options that you can tweak according to your preferences and device's capabilities. Here are some of the most important settings that you should pay attention to:
-
-
Graphics: Here you can change the resolution, aspect ratio, frame rate, anti-aliasing, texture filtering, shaders, and other graphical enhancements of the emulator. You can also enable or disable cheats, speed hacks, skip frames, widescreen patches, and other features that may improve or worsen the game's appearance and performance.
-
Audio: Here you can change the volume, latency, interpolation, reverb, sync mode, and other audio settings of the emulator. You can also enable or disable sound effects, music, voiceovers, and other sound components of the game.
-
Input: Here you can change the layout, size, opacity, vibration, sensitivity, mapping, and other input settings of the emulator. You can also enable or disable touch input, motion sensor input, external controller input (such as Bluetooth or USB), and other input methods for the game.
-
System: Here you can change the BIOS file, language, region, clock speed, memory card size, save state slot number, auto-save frequency, and other system settings of the emulator. You can also enable or disable fast boot, full boot, debug mode, logging, and other system features for the game.
-
-
You can experiment with different settings and controls to find the best combination for your device and game. You can also save and load different profiles for different games, so you don't have to change the settings every time you switch games.
-
Conclusion
-
AetherSX2 is the best PS2 emulator for Android that lets you play PS2 games on your device with ease and convenience. You can download the emulator from the Google Play Store or the official website, and download games from legal sources online. You can also adjust settings and controls to optimize your gaming experience and enjoy the PS2 classics on your Android device.
-
Some of the best games to play on AetherSX2 are:
-
-
-
| Game | Genre | Description |
| --- | --- | --- |
| God of War | Action-adventure | A hack-and-slash game that follows the story of Kratos, a Spartan warrior who seeks revenge against the gods of Olympus. |
| Shadow of the Colossus | Action-adventure | A unique game that involves exploring a vast land and defeating giant creatures called colossi to revive a girl named Mono. |
| Grand Theft Auto: San Andreas | Action-adventure | A sandbox game that allows you to roam freely in a fictional state of San Andreas, where you can engage in various missions, activities, and crimes. |
| Final Fantasy X | Role-playing | A turn-based game that follows the journey of Tidus, a young athlete who is transported to a world called Spira, where he joins a summoner named Yuna on her quest to defeat a monster called Sin. |
| Metal Gear Solid 3: Snake Eater | Stealth-action | A stealth game that takes place in 1964, where you play as Naked Snake, a special agent who is sent to infiltrate a Soviet base and rescue a scientist. |
-
-
-
FAQs
-
Q: Is AetherSX2 emulator legal?
-
A: Yes, AetherSX2 emulator is legal as long as you use it with games that you own or have the right to use. The emulator itself does not contain any copyrighted material or code from Sony or other parties.
-
Q: How much storage space do I need for AetherSX2 emulator and games?
-
A: The AetherSX2 emulator app itself is about 30 MB in size, but you will also need some additional space for the PS2 BIOS file and the game files. The PS2 BIOS file is about 4 MB in size, while the game files vary depending on the game. Some games are less than 1 GB in size, while others are more than 4 GB in size. You can check the file size of each game before downloading it.
-
Q: How powerful does my device need to be to run AetherSX2 emulator and games?
-
A: The AetherSX2 emulator and games require a decent amount of processing power and memory to run smoothly. The minimum requirements are:
-
-
CPU: Quad-core 1.5 GHz or higher
-
RAM: 2 GB or higher
-
GPU: Adreno 320 or higher, Mali-400MP4 or higher, PowerVR SGX544MP or higher
-
OS: Android 5.0 Lollipop or higher
-
OpenGL ES: 3.0 or higher
-
Vulkan: Supported (optional)
-
-
Q: Can I use cheats or mods with AetherSX2 emulator and games?
-
A: Yes, you can use cheats or mods with AetherSX2 emulator and games. The emulator supports cheat codes in various formats, such as RAW, PNACH, CB, ARMAX, etc. You can enter them manually or import them from files. You can also use mods that alter the game's graphics, sound, gameplay, etc. You can find them online or create them yourself.
-
Q: Can I play multiplayer games with AetherSX2 emulator?
-
A: Yes, you can play multiplayer games with AetherSX2 emulator. The emulator supports local multiplayer via split-screen mode or external controllers. You can also play online multiplayer via a network adapter or a VPN service. However, the online multiplayer feature is still experimental and may not work for all games or devices.
-
I hope this article has helped you learn how to download AetherSX2 games and play them on your Android device. If you have any questions or feedback, feel free to leave a comment below. Happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Play Real Drag Bike Racing Mod APK for Free.md b/spaces/1phancelerku/anime-remove-background/Download and Play Real Drag Bike Racing Mod APK for Free.md
deleted file mode 100644
index 95ef85648b95d5b6d50bf13899bb7075c56313f2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Play Real Drag Bike Racing Mod APK for Free.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Link Real Drag Bike Racing Mod APK: A Guide for Thrill Seekers
-
If you are a fan of motorcycle racing, you might want to check out Link Real Drag Bike Racing Mod APK, a game that lets you experience the thrill of drag racing on your Android device. In this game, you can choose from a variety of drag bikes, customize them to your liking, and compete on different tracks and modes. You can also enjoy unlimited money and unlocked features with the modded version of the game. In this article, we will tell you more about Link Real Drag Bike Racing Mod APK, how to download and install it, how to play it, and some tips and tricks to help you win.
Link Real Drag Bike Racing Mod APK is a modified version of Link Real Drag Bike Racing, a game developed by Get Mods Apk. The game is inspired by the real-life sport of drag racing, where two motorcycles race on a straight track for a short distance. The game features realistic graphics, sound effects, and physics, as well as a diverse selection of drag bikes for you to choose from. You can also customize your bike with different parts, colors, stickers, and accessories. The game offers various tracks and modes for you to challenge yourself, such as street racing, tournament racing, time trial racing, and online racing. You can also upgrade your skills and equipment as you progress in the game.
-
The modded version of the game gives you some advantages over the original version, such as unlimited money and unlocked features. With unlimited money, you can buy any bike or part you want without worrying about the cost. You can also access all the tracks and modes without having to complete certain levels or tasks. The modded version also removes ads and other annoying features that might interrupt your gameplay.
-
Features and benefits of Link Real Drag Bike Racing Mod APK
-
Some of the features and benefits of Link Real Drag Bike Racing Mod APK are:
-
-
Realistic graphics, sound effects, and physics that make you feel like you are on a real drag bike.
-
A diverse selection of drag bikes for you to choose from, each with its own characteristics and performance.
-
A variety of customization options for your bike, such as parts, colors, stickers, and accessories.
-
Different tracks and modes for you to compete on, such as street racing, tournament racing, time trial racing, and online racing.
-
Unlimited money and unlocked features that let you enjoy the game without any limitations or restrictions.
-
No ads or other annoying features that might disrupt your gameplay.
-
-
How to download and install Link Real Drag Bike Racing Mod APK
-
To download and install Link Real Drag Bike Racing Mod APK on your Android device, follow these steps:
-
-
Go to [this link] to download the modded version of the game.
-
After downloading the file, go to your device settings and enable unknown sources. This will allow you to install apps from sources other than the Google Play Store.
-
Locate the downloaded file in your device storage and tap on it to start the installation process.
-
Follow the instructions on the screen to complete the installation.
-
Once the installation is done, launch the game and enjoy the thrill of drag racing.
-
-
How to play Link Real Drag Bike Racing Mod APK
-
Link Real Drag Bike Racing Mod APK is easy to play, but hard to master. Here are some basic steps to help you get started:
-
Choose your drag bike and customize it
-
The first thing you need to do is to choose your drag bike from the garage. You can select from different categories, such as street bikes, sport bikes, or super bikes. Each bike has its own stats, such as power, torque, weight, and speed. You can also customize your bike with different parts, colors, stickers, and accessories. You can use the unlimited money from the modded version to buy any bike or part you want.
-
Compete on various tracks and modes
-
After choosing and customizing your bike, you can compete on various tracks and modes. You can choose from different locations, such as city streets, highways, deserts, or mountains. You can also choose from different modes, such as street racing, tournament racing, time trial racing, or online racing. In street racing, you can race against random opponents on different tracks. In tournament racing, you can participate in a series of races and earn trophies and rewards. In time trial racing, you can race against the clock and beat your own records. In online racing, you can race against other players from around the world and show off your skills.
-
Upgrade your skills and equipment
-
As you progress in the game, you can upgrade your skills and equipment to improve your performance. You can upgrade your skills, such as launch control, shifting control, nitrous control, and tuning control. You can also upgrade your equipment, such as engine, turbo, exhaust, transmission, tires, and brakes. You can use the unlimited money from the modded version to upgrade anything you want.
-
Tips and tricks for Link Real Drag Bike Racing Mod APK
-
Link Real Drag Bike Racing Mod APK is a game that requires skill and strategy to win. Here are some tips and tricks to help you become a better drag racer:
-
Master the launch and shifting
-
The launch and shifting are two of the most important aspects of drag racing. You need to launch your bike at the right time and shift gears at the right time to optimize your acceleration and speed. To launch your bike, you need to press the clutch button when the countdown starts and release it when the green light appears. To shift gears, you need to press the shift button when the needle reaches the green zone on the tachometer. If you launch or shift too early or too late, you will lose speed and time.
-
Use nitrous wisely
-
Nitrous is a powerful boost that can give you an edge over your opponents. However, you need to use it wisely, as it is limited and can run out quickly. You can activate nitrous by pressing the nitrous button on the screen. You should use nitrous when you need an extra burst of speed or when you are behind your opponent. You should avoid using nitrous when you are already at top speed or when you are about to shift gears.
-
Tune your bike according to the track conditions
-
Tuning adjusts your bike's performance to the track conditions. From the tuning menu (tap the tuning button on the screen) you can change settings such as gear ratio, tire pressure, suspension stiffness, and wheel alignment. Tune for the track's length, surface, weather, and elevation: on a long, straight track, raise the gear ratio for a higher top speed; on a short, twisty track, lower it for quicker acceleration; and adjust tire pressure, suspension, and alignment to suit the surface and weather as well.
-
Conclusion
-
Link Real Drag Bike Racing Mod APK is a game that lets you experience the thrill of drag racing on your Android device. You can choose from a variety of drag bikes, customize them to your liking, and compete on different tracks and modes. You can also enjoy unlimited money and unlocked features with the modded version of the game. The game is easy to play, but hard to master. You need to master the launch and shifting, use nitrous wisely, and tune your bike according to the track conditions. If you follow these tips and tricks, you will become a better drag racer and have more fun with the game.
-
FAQs
-
Here are some frequently asked questions about Link Real Drag Bike Racing Mod APK:
-
-
Is Link Real Drag Bike Racing Mod APK safe to download and install?
-
Yes, Link Real Drag Bike Racing Mod APK is safe to download and install, as long as you download it from a trusted source, such as [this link]. The modded version of the game does not contain any viruses or malware that might harm your device or data.
-
Do I need to root my device to use Link Real Drag Bike Racing Mod APK?
-
No, you do not need to root your device to use Link Real Drag Bike Racing Mod APK. The modded version of the game works on both rooted and non-rooted devices. However, you need to enable unknown sources in your device settings to install apps from sources other than the Google Play Store.
-
Can I play Link Real Drag Bike Racing Mod APK offline?
-
Yes, you can play Link Real Drag Bike Racing Mod APK offline, except for the online racing mode. The online racing mode requires an internet connection to connect with other players from around the world. The other modes, such as street racing, tournament racing, and time trial racing, can be played offline without any problem.
-
Can I update Link Real Drag Bike Racing Mod APK?
-
No, you cannot update Link Real Drag Bike Racing Mod APK from the Google Play Store or any other source. The modded version of the game is not compatible with the original version of the game. If you try to update it, you might lose your progress and data. You should only download and install the latest version of Link Real Drag Bike Racing Mod APK from [this link].
-
How can I contact the developer of Link Real Drag Bike Racing Mod APK?
-
You can contact the developer of Link Real Drag Bike Racing Mod APK by visiting their website at [this link]. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, and YouTube. You can send them your feedback, suggestions, questions, or complaints about the game.
-
-
\ No newline at end of file
diff --git a/spaces/AB-TW/team-ai/agents/tools/smart_domain/domain_layer_code_tool.py b/spaces/AB-TW/team-ai/agents/tools/smart_domain/domain_layer_code_tool.py
deleted file mode 100644
index 556c3afb357d100a73082c41f58c1bc78d3dfd3d..0000000000000000000000000000000000000000
--- a/spaces/AB-TW/team-ai/agents/tools/smart_domain/domain_layer_code_tool.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from langchain import LLMChain, PromptTemplate
-from langchain.agents import tool
-from agents.tools.smart_domain.common import getPrefix
-
-from models import llm
-from agents.tools.smart_domain.entity import entity_architecture, entity_test_strategy, entity_tech_stack
-from agents.tools.smart_domain.association import association_architecture, association_test_strategy, association_teck_stack
-
-
-
-
-domain_task = """Your task is to generate the domain layer tests and product code."""
-domain_teck_stack = """Java17、reactor、lombok、Junit5、reactor test、Mockito"""
-domain_architecture = f"""the domain layer includes 2 components:
-* {entity_architecture}
-* {association_architecture}"""
-
-domain_test_strategy = f"""{entity_test_strategy}
-{association_test_strategy}"""
-
-
-
-DOMAIN_LAYER = getPrefix(domain_task, domain_teck_stack, domain_architecture, domain_test_strategy) + """
-
-Use the following format:
-request: the request that you need to fulfill
-
-Entity:
-```
-the Entity code that you write to fulfill the request, follow TechStack and Architecture
-```
-
-Association:
-```
-the Association code that you write to fulfill the request, follow TechStack and Architecture
-```
-
-Test:
-```
-the test code that you write to fulfill the request, follow TechStack Architecture and TestStrategy
-```
-
-request: {input}"""
-
-
-
-DOMAIN_LAYER_PROMPT = PromptTemplate(input_variables=["input"], template=DOMAIN_LAYER,)
-
-domainLayerChain = LLMChain(llm = llm(temperature=0.1), prompt=DOMAIN_LAYER_PROMPT)
-
-
-@tool("Generate Domain Layer Code", return_direct=True)
-def domainLayerCodeGenerator(input: str) -> str:
- '''useful for when you need to generate domain layer code'''
- response = domainLayerChain.run(input)
- return response
\ No newline at end of file
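
For orientation, here is a minimal, hypothetical usage sketch of the tool defined in the file above. It assumes the surrounding package (`models.llm` and the `agents.tools.smart_domain.*` modules) is importable and that an LLM API key is configured; the request string is invented for illustration.

```python
# Hypothetical invocation of the LangChain tool defined above (illustrative request text).
from agents.tools.smart_domain.domain_layer_code_tool import domainLayerCodeGenerator

# The @tool decorator wraps the function as a LangChain Tool, so .run() forwards the
# request string to domainLayerChain and returns the generated domain-layer code.
request = "Create a Feature entity (id, title, description) and an association to look it up by id"
print(domainLayerCodeGenerator.run(request))
```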
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/data/audio.py b/spaces/AIConsultant/MusicGen/audiocraft/data/audio.py
deleted file mode 100644
index 39c87047f5033d0016200df77004a9536e06e81a..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/data/audio.py
+++ /dev/null
@@ -1,216 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write).
-We rely on the av library for faster reads when possible, otherwise on torchaudio.
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
-    # torchaudio no longer returns useful duration information for some formats like mp3s.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- tuple of torch.Tensor, int: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- tuple of torch.Tensor, int: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
-        loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'.
-        log_clipping (bool): If True, basic logging on stderr when clipping still
-            occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, loudness_compressor,
- log_clipping=log_clipping, sample_rate=sample_rate,
- stem_name=str(stem_name))
- kwargs: dict = {}
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
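
For readers skimming the helpers above, a small usage sketch follows. The file names, offsets, and the import path (taken from this file's location in the repo) are assumptions for illustration.

```python
# Illustrative only: probe a file, read a 4-second excerpt starting at 1.0 s, and write it back out.
from audiocraft.data.audio import audio_info, audio_read, audio_write

info = audio_info("song.mp3")                      # AudioFileInfo(sample_rate, duration, channels)
wav, sr = audio_read("song.mp3", seek_time=1.0, duration=4.0, pad=True)
out_path = audio_write("excerpt", wav, sr, format="wav", strategy="peak")  # writes excerpt.wav
print(info, tuple(wav.shape), out_path)
```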
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/util.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/util.py
deleted file mode 100644
index a952e6c40308c33edd422da0ce6a60f47e73661b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/util.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# adapted from
-# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
-# and
-# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-# and
-# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
-#
-# thanks!
-
-
-import os
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import repeat
-
-from ldm.util import instantiate_from_config
-
-
-def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if schedule == "linear":
- betas = (
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
- )
-
- elif schedule == "cosine":
- timesteps = (
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
- )
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
- alphas = torch.cos(alphas).pow(2)
- alphas = alphas / alphas[0]
- betas = 1 - alphas[1:] / alphas[:-1]
- betas = np.clip(betas, a_min=0, a_max=0.999)
-
- elif schedule == "sqrt_linear":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
- elif schedule == "sqrt":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
- else:
- raise ValueError(f"schedule '{schedule}' unknown.")
- return betas.numpy()
-
-
-def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
- if ddim_discr_method == 'uniform':
- c = num_ddpm_timesteps // num_ddim_timesteps
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
- elif ddim_discr_method == 'quad':
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
- else:
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
-
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
- steps_out = ddim_timesteps + 1
- if verbose:
- print(f'Selected timesteps for ddim sampler: {steps_out}')
- return steps_out
-
-
-def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
- # select alphas for computing the variance schedule
- alphas = alphacums[ddim_timesteps]
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
-
-    # according to the formula provided in https://arxiv.org/abs/2010.02502
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
- if verbose:
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
- print(f'For the chosen value of eta, which is {eta}, '
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
- return sigmas, alphas, alphas_prev
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function,
- which defines the cumulative product of (1-beta) over time from t = [0,1].
- :param num_diffusion_timesteps: the number of betas to produce.
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
- produces the cumulative product of (1-beta) up to that
- part of the diffusion process.
- :param max_beta: the maximum beta to use; use values lower than 1 to
- prevent singularities.
- """
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return np.array(betas)
-
-
-def extract_into_tensor(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-
-class CheckpointFunction(torch.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
-
- with torch.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with torch.enable_grad():
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = torch.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
-
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- if not repeat_only:
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- else:
- embedding = repeat(timesteps, 'b -> b d', d=dim)
- return embedding
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def normalization(channels):
- """
- Make a standard normalization layer.
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(x)
-
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- return super().forward(x.float()).type(x.dtype)
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-class HybridConditioner(nn.Module):
-
- def __init__(self, c_concat_config, c_crossattn_config):
- super().__init__()
- self.concat_conditioner = instantiate_from_config(c_concat_config)
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
-
- def forward(self, c_concat, c_crossattn):
- c_concat = self.concat_conditioner(c_concat)
- c_crossattn = self.crossattn_conditioner(c_crossattn)
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
\ No newline at end of file
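
As a quick orientation for the helpers above, a small smoke test follows; the import path mirrors this file's location in the repo and the shapes are examples, not requirements.

```python
# Illustrative only: build a beta schedule and sinusoidal timestep embeddings.
import torch
from ldm.modules.diffusionmodules.util import make_beta_schedule, timestep_embedding

betas = make_beta_schedule("linear", n_timestep=1000)   # numpy array of 1000 betas
t = torch.randint(0, 1000, (8,))                        # one timestep index per batch element
emb = timestep_embedding(t, dim=128)                    # shape (8, 128)
print(betas.shape, emb.shape)
```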
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/aws/mime.sh b/spaces/Abhilashvj/planogram-compliance/utils/aws/mime.sh
deleted file mode 100644
index c319a83cfbdf09bea634c3bd9fca737c0b1dd505..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/aws/mime.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
-# This script will run on every instance restart, not only on first start
-# --- DO NOT COPY ABOVE COMMENTS WHEN PASTING INTO USERDATA ---
-
-Content-Type: multipart/mixed; boundary="//"
-MIME-Version: 1.0
-
---//
-Content-Type: text/cloud-config; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="cloud-config.txt"
-
-#cloud-config
-cloud_final_modules:
-- [scripts-user, always]
-
---//
-Content-Type: text/x-shellscript; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="userdata.txt"
-
-#!/bin/bash
-# --- paste contents of userdata.sh here ---
---//
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/model_edge.py b/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/model_edge.py
deleted file mode 100644
index 5511f1d89e30160477f37792ecc345901fe893a9..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/model_edge.py
+++ /dev/null
@@ -1,653 +0,0 @@
-"""
-Author: Zhuo Su, Wenzhe Liu
-Date: Feb 18, 2021
-"""
-
-import math
-
-import cv2
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from basicsr.utils import img2tensor
-
-nets = {
- 'baseline': {
- 'layer0': 'cv',
- 'layer1': 'cv',
- 'layer2': 'cv',
- 'layer3': 'cv',
- 'layer4': 'cv',
- 'layer5': 'cv',
- 'layer6': 'cv',
- 'layer7': 'cv',
- 'layer8': 'cv',
- 'layer9': 'cv',
- 'layer10': 'cv',
- 'layer11': 'cv',
- 'layer12': 'cv',
- 'layer13': 'cv',
- 'layer14': 'cv',
- 'layer15': 'cv',
- },
- 'c-v15': {
- 'layer0': 'cd',
- 'layer1': 'cv',
- 'layer2': 'cv',
- 'layer3': 'cv',
- 'layer4': 'cv',
- 'layer5': 'cv',
- 'layer6': 'cv',
- 'layer7': 'cv',
- 'layer8': 'cv',
- 'layer9': 'cv',
- 'layer10': 'cv',
- 'layer11': 'cv',
- 'layer12': 'cv',
- 'layer13': 'cv',
- 'layer14': 'cv',
- 'layer15': 'cv',
- },
- 'a-v15': {
- 'layer0': 'ad',
- 'layer1': 'cv',
- 'layer2': 'cv',
- 'layer3': 'cv',
- 'layer4': 'cv',
- 'layer5': 'cv',
- 'layer6': 'cv',
- 'layer7': 'cv',
- 'layer8': 'cv',
- 'layer9': 'cv',
- 'layer10': 'cv',
- 'layer11': 'cv',
- 'layer12': 'cv',
- 'layer13': 'cv',
- 'layer14': 'cv',
- 'layer15': 'cv',
- },
- 'r-v15': {
- 'layer0': 'rd',
- 'layer1': 'cv',
- 'layer2': 'cv',
- 'layer3': 'cv',
- 'layer4': 'cv',
- 'layer5': 'cv',
- 'layer6': 'cv',
- 'layer7': 'cv',
- 'layer8': 'cv',
- 'layer9': 'cv',
- 'layer10': 'cv',
- 'layer11': 'cv',
- 'layer12': 'cv',
- 'layer13': 'cv',
- 'layer14': 'cv',
- 'layer15': 'cv',
- },
- 'cvvv4': {
- 'layer0': 'cd',
- 'layer1': 'cv',
- 'layer2': 'cv',
- 'layer3': 'cv',
- 'layer4': 'cd',
- 'layer5': 'cv',
- 'layer6': 'cv',
- 'layer7': 'cv',
- 'layer8': 'cd',
- 'layer9': 'cv',
- 'layer10': 'cv',
- 'layer11': 'cv',
- 'layer12': 'cd',
- 'layer13': 'cv',
- 'layer14': 'cv',
- 'layer15': 'cv',
- },
- 'avvv4': {
- 'layer0': 'ad',
- 'layer1': 'cv',
- 'layer2': 'cv',
- 'layer3': 'cv',
- 'layer4': 'ad',
- 'layer5': 'cv',
- 'layer6': 'cv',
- 'layer7': 'cv',
- 'layer8': 'ad',
- 'layer9': 'cv',
- 'layer10': 'cv',
- 'layer11': 'cv',
- 'layer12': 'ad',
- 'layer13': 'cv',
- 'layer14': 'cv',
- 'layer15': 'cv',
- },
- 'rvvv4': {
- 'layer0': 'rd',
- 'layer1': 'cv',
- 'layer2': 'cv',
- 'layer3': 'cv',
- 'layer4': 'rd',
- 'layer5': 'cv',
- 'layer6': 'cv',
- 'layer7': 'cv',
- 'layer8': 'rd',
- 'layer9': 'cv',
- 'layer10': 'cv',
- 'layer11': 'cv',
- 'layer12': 'rd',
- 'layer13': 'cv',
- 'layer14': 'cv',
- 'layer15': 'cv',
- },
- 'cccv4': {
- 'layer0': 'cd',
- 'layer1': 'cd',
- 'layer2': 'cd',
- 'layer3': 'cv',
- 'layer4': 'cd',
- 'layer5': 'cd',
- 'layer6': 'cd',
- 'layer7': 'cv',
- 'layer8': 'cd',
- 'layer9': 'cd',
- 'layer10': 'cd',
- 'layer11': 'cv',
- 'layer12': 'cd',
- 'layer13': 'cd',
- 'layer14': 'cd',
- 'layer15': 'cv',
- },
- 'aaav4': {
- 'layer0': 'ad',
- 'layer1': 'ad',
- 'layer2': 'ad',
- 'layer3': 'cv',
- 'layer4': 'ad',
- 'layer5': 'ad',
- 'layer6': 'ad',
- 'layer7': 'cv',
- 'layer8': 'ad',
- 'layer9': 'ad',
- 'layer10': 'ad',
- 'layer11': 'cv',
- 'layer12': 'ad',
- 'layer13': 'ad',
- 'layer14': 'ad',
- 'layer15': 'cv',
- },
- 'rrrv4': {
- 'layer0': 'rd',
- 'layer1': 'rd',
- 'layer2': 'rd',
- 'layer3': 'cv',
- 'layer4': 'rd',
- 'layer5': 'rd',
- 'layer6': 'rd',
- 'layer7': 'cv',
- 'layer8': 'rd',
- 'layer9': 'rd',
- 'layer10': 'rd',
- 'layer11': 'cv',
- 'layer12': 'rd',
- 'layer13': 'rd',
- 'layer14': 'rd',
- 'layer15': 'cv',
- },
- 'c16': {
- 'layer0': 'cd',
- 'layer1': 'cd',
- 'layer2': 'cd',
- 'layer3': 'cd',
- 'layer4': 'cd',
- 'layer5': 'cd',
- 'layer6': 'cd',
- 'layer7': 'cd',
- 'layer8': 'cd',
- 'layer9': 'cd',
- 'layer10': 'cd',
- 'layer11': 'cd',
- 'layer12': 'cd',
- 'layer13': 'cd',
- 'layer14': 'cd',
- 'layer15': 'cd',
- },
- 'a16': {
- 'layer0': 'ad',
- 'layer1': 'ad',
- 'layer2': 'ad',
- 'layer3': 'ad',
- 'layer4': 'ad',
- 'layer5': 'ad',
- 'layer6': 'ad',
- 'layer7': 'ad',
- 'layer8': 'ad',
- 'layer9': 'ad',
- 'layer10': 'ad',
- 'layer11': 'ad',
- 'layer12': 'ad',
- 'layer13': 'ad',
- 'layer14': 'ad',
- 'layer15': 'ad',
- },
- 'r16': {
- 'layer0': 'rd',
- 'layer1': 'rd',
- 'layer2': 'rd',
- 'layer3': 'rd',
- 'layer4': 'rd',
- 'layer5': 'rd',
- 'layer6': 'rd',
- 'layer7': 'rd',
- 'layer8': 'rd',
- 'layer9': 'rd',
- 'layer10': 'rd',
- 'layer11': 'rd',
- 'layer12': 'rd',
- 'layer13': 'rd',
- 'layer14': 'rd',
- 'layer15': 'rd',
- },
- 'carv4': {
- 'layer0': 'cd',
- 'layer1': 'ad',
- 'layer2': 'rd',
- 'layer3': 'cv',
- 'layer4': 'cd',
- 'layer5': 'ad',
- 'layer6': 'rd',
- 'layer7': 'cv',
- 'layer8': 'cd',
- 'layer9': 'ad',
- 'layer10': 'rd',
- 'layer11': 'cv',
- 'layer12': 'cd',
- 'layer13': 'ad',
- 'layer14': 'rd',
- 'layer15': 'cv',
- },
- }
-
-def createConvFunc(op_type):
- assert op_type in ['cv', 'cd', 'ad', 'rd'], 'unknown op type: %s' % str(op_type)
- if op_type == 'cv':
- return F.conv2d
-
- if op_type == 'cd':
- def func(x, weights, bias=None, stride=1, padding=0, dilation=1, groups=1):
- assert dilation in [1, 2], 'dilation for cd_conv should be in 1 or 2'
- assert weights.size(2) == 3 and weights.size(3) == 3, 'kernel size for cd_conv should be 3x3'
- assert padding == dilation, 'padding for cd_conv set wrong'
-
- weights_c = weights.sum(dim=[2, 3], keepdim=True)
- yc = F.conv2d(x, weights_c, stride=stride, padding=0, groups=groups)
- y = F.conv2d(x, weights, bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
- return y - yc
- return func
- elif op_type == 'ad':
- def func(x, weights, bias=None, stride=1, padding=0, dilation=1, groups=1):
- assert dilation in [1, 2], 'dilation for ad_conv should be in 1 or 2'
- assert weights.size(2) == 3 and weights.size(3) == 3, 'kernel size for ad_conv should be 3x3'
- assert padding == dilation, 'padding for ad_conv set wrong'
-
- shape = weights.shape
- weights = weights.view(shape[0], shape[1], -1)
- weights_conv = (weights - weights[:, :, [3, 0, 1, 6, 4, 2, 7, 8, 5]]).view(shape) # clock-wise
- y = F.conv2d(x, weights_conv, bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
- return y
- return func
- elif op_type == 'rd':
- def func(x, weights, bias=None, stride=1, padding=0, dilation=1, groups=1):
- assert dilation in [1, 2], 'dilation for rd_conv should be in 1 or 2'
- assert weights.size(2) == 3 and weights.size(3) == 3, 'kernel size for rd_conv should be 3x3'
- padding = 2 * dilation
-
- shape = weights.shape
- if weights.is_cuda:
- buffer = torch.cuda.FloatTensor(shape[0], shape[1], 5 * 5).fill_(0)
- else:
- buffer = torch.zeros(shape[0], shape[1], 5 * 5)
- weights = weights.view(shape[0], shape[1], -1)
- buffer[:, :, [0, 2, 4, 10, 14, 20, 22, 24]] = weights[:, :, 1:]
- buffer[:, :, [6, 7, 8, 11, 13, 16, 17, 18]] = -weights[:, :, 1:]
- buffer[:, :, 12] = 0
- buffer = buffer.view(shape[0], shape[1], 5, 5)
- y = F.conv2d(x, buffer, bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
- return y
- return func
- else:
- print('impossible to be here unless you force that')
- return None
-
-class Conv2d(nn.Module):
- def __init__(self, pdc, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=False):
- super(Conv2d, self).__init__()
- if in_channels % groups != 0:
- raise ValueError('in_channels must be divisible by groups')
- if out_channels % groups != 0:
- raise ValueError('out_channels must be divisible by groups')
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.stride = stride
- self.padding = padding
- self.dilation = dilation
- self.groups = groups
- self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, kernel_size, kernel_size))
- if bias:
- self.bias = nn.Parameter(torch.Tensor(out_channels))
- else:
- self.register_parameter('bias', None)
- self.reset_parameters()
- self.pdc = pdc
-
- def reset_parameters(self):
- nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
- if self.bias is not None:
- fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.weight)
- bound = 1 / math.sqrt(fan_in)
- nn.init.uniform_(self.bias, -bound, bound)
-
- def forward(self, input):
-
- return self.pdc(input, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
-
-class CSAM(nn.Module):
- """
- Compact Spatial Attention Module
- """
- def __init__(self, channels):
- super(CSAM, self).__init__()
-
- mid_channels = 4
- self.relu1 = nn.ReLU()
- self.conv1 = nn.Conv2d(channels, mid_channels, kernel_size=1, padding=0)
- self.conv2 = nn.Conv2d(mid_channels, 1, kernel_size=3, padding=1, bias=False)
- self.sigmoid = nn.Sigmoid()
- nn.init.constant_(self.conv1.bias, 0)
-
- def forward(self, x):
- y = self.relu1(x)
- y = self.conv1(y)
- y = self.conv2(y)
- y = self.sigmoid(y)
-
- return x * y
-
-class CDCM(nn.Module):
- """
- Compact Dilation Convolution based Module
- """
- def __init__(self, in_channels, out_channels):
- super(CDCM, self).__init__()
-
- self.relu1 = nn.ReLU()
- self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, padding=0)
- self.conv2_1 = nn.Conv2d(out_channels, out_channels, kernel_size=3, dilation=5, padding=5, bias=False)
- self.conv2_2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, dilation=7, padding=7, bias=False)
- self.conv2_3 = nn.Conv2d(out_channels, out_channels, kernel_size=3, dilation=9, padding=9, bias=False)
- self.conv2_4 = nn.Conv2d(out_channels, out_channels, kernel_size=3, dilation=11, padding=11, bias=False)
- nn.init.constant_(self.conv1.bias, 0)
-
- def forward(self, x):
- x = self.relu1(x)
- x = self.conv1(x)
- x1 = self.conv2_1(x)
- x2 = self.conv2_2(x)
- x3 = self.conv2_3(x)
- x4 = self.conv2_4(x)
- return x1 + x2 + x3 + x4
-
-
-class MapReduce(nn.Module):
- """
- Reduce feature maps into a single edge map
- """
- def __init__(self, channels):
- super(MapReduce, self).__init__()
- self.conv = nn.Conv2d(channels, 1, kernel_size=1, padding=0)
- nn.init.constant_(self.conv.bias, 0)
-
- def forward(self, x):
- return self.conv(x)
-
-
-class PDCBlock(nn.Module):
- def __init__(self, pdc, inplane, ouplane, stride=1):
- super(PDCBlock, self).__init__()
-        self.stride=stride
-
- if self.stride > 1:
- self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
- self.shortcut = nn.Conv2d(inplane, ouplane, kernel_size=1, padding=0)
- self.conv1 = Conv2d(pdc, inplane, inplane, kernel_size=3, padding=1, groups=inplane, bias=False)
- self.relu2 = nn.ReLU()
- self.conv2 = nn.Conv2d(inplane, ouplane, kernel_size=1, padding=0, bias=False)
-
- def forward(self, x):
- if self.stride > 1:
- x = self.pool(x)
- y = self.conv1(x)
- y = self.relu2(y)
- y = self.conv2(y)
- if self.stride > 1:
- x = self.shortcut(x)
- y = y + x
- return y
-
-class PDCBlock_converted(nn.Module):
- """
- CPDC, APDC can be converted to vanilla 3x3 convolution
- RPDC can be converted to vanilla 5x5 convolution
- """
- def __init__(self, pdc, inplane, ouplane, stride=1):
- super(PDCBlock_converted, self).__init__()
- self.stride=stride
-
- if self.stride > 1:
- self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
- self.shortcut = nn.Conv2d(inplane, ouplane, kernel_size=1, padding=0)
- if pdc == 'rd':
- self.conv1 = nn.Conv2d(inplane, inplane, kernel_size=5, padding=2, groups=inplane, bias=False)
- else:
- self.conv1 = nn.Conv2d(inplane, inplane, kernel_size=3, padding=1, groups=inplane, bias=False)
- self.relu2 = nn.ReLU()
- self.conv2 = nn.Conv2d(inplane, ouplane, kernel_size=1, padding=0, bias=False)
-
- def forward(self, x):
- if self.stride > 1:
- x = self.pool(x)
- y = self.conv1(x)
- y = self.relu2(y)
- y = self.conv2(y)
- if self.stride > 1:
- x = self.shortcut(x)
- y = y + x
- return y
-
-class PiDiNet(nn.Module):
- def __init__(self, inplane, pdcs, dil=None, sa=False, convert=False):
- super(PiDiNet, self).__init__()
- self.sa = sa
- if dil is not None:
- assert isinstance(dil, int), 'dil should be an int'
- self.dil = dil
-
- self.fuseplanes = []
-
- self.inplane = inplane
- if convert:
- if pdcs[0] == 'rd':
- init_kernel_size = 5
- init_padding = 2
- else:
- init_kernel_size = 3
- init_padding = 1
- self.init_block = nn.Conv2d(3, self.inplane,
- kernel_size=init_kernel_size, padding=init_padding, bias=False)
- block_class = PDCBlock_converted
- else:
- self.init_block = Conv2d(pdcs[0], 3, self.inplane, kernel_size=3, padding=1)
- block_class = PDCBlock
-
- self.block1_1 = block_class(pdcs[1], self.inplane, self.inplane)
- self.block1_2 = block_class(pdcs[2], self.inplane, self.inplane)
- self.block1_3 = block_class(pdcs[3], self.inplane, self.inplane)
- self.fuseplanes.append(self.inplane) # C
-
- inplane = self.inplane
- self.inplane = self.inplane * 2
- self.block2_1 = block_class(pdcs[4], inplane, self.inplane, stride=2)
- self.block2_2 = block_class(pdcs[5], self.inplane, self.inplane)
- self.block2_3 = block_class(pdcs[6], self.inplane, self.inplane)
- self.block2_4 = block_class(pdcs[7], self.inplane, self.inplane)
- self.fuseplanes.append(self.inplane) # 2C
-
- inplane = self.inplane
- self.inplane = self.inplane * 2
- self.block3_1 = block_class(pdcs[8], inplane, self.inplane, stride=2)
- self.block3_2 = block_class(pdcs[9], self.inplane, self.inplane)
- self.block3_3 = block_class(pdcs[10], self.inplane, self.inplane)
- self.block3_4 = block_class(pdcs[11], self.inplane, self.inplane)
- self.fuseplanes.append(self.inplane) # 4C
-
- self.block4_1 = block_class(pdcs[12], self.inplane, self.inplane, stride=2)
- self.block4_2 = block_class(pdcs[13], self.inplane, self.inplane)
- self.block4_3 = block_class(pdcs[14], self.inplane, self.inplane)
- self.block4_4 = block_class(pdcs[15], self.inplane, self.inplane)
- self.fuseplanes.append(self.inplane) # 4C
-
- self.conv_reduces = nn.ModuleList()
- if self.sa and self.dil is not None:
- self.attentions = nn.ModuleList()
- self.dilations = nn.ModuleList()
- for i in range(4):
- self.dilations.append(CDCM(self.fuseplanes[i], self.dil))
- self.attentions.append(CSAM(self.dil))
- self.conv_reduces.append(MapReduce(self.dil))
- elif self.sa:
- self.attentions = nn.ModuleList()
- for i in range(4):
- self.attentions.append(CSAM(self.fuseplanes[i]))
- self.conv_reduces.append(MapReduce(self.fuseplanes[i]))
- elif self.dil is not None:
- self.dilations = nn.ModuleList()
- for i in range(4):
- self.dilations.append(CDCM(self.fuseplanes[i], self.dil))
- self.conv_reduces.append(MapReduce(self.dil))
- else:
- for i in range(4):
- self.conv_reduces.append(MapReduce(self.fuseplanes[i]))
-
- self.classifier = nn.Conv2d(4, 1, kernel_size=1) # has bias
- nn.init.constant_(self.classifier.weight, 0.25)
- nn.init.constant_(self.classifier.bias, 0)
-
- # print('initialization done')
-
- def get_weights(self):
- conv_weights = []
- bn_weights = []
- relu_weights = []
- for pname, p in self.named_parameters():
- if 'bn' in pname:
- bn_weights.append(p)
- elif 'relu' in pname:
- relu_weights.append(p)
- else:
- conv_weights.append(p)
-
- return conv_weights, bn_weights, relu_weights
-
- def forward(self, x):
- H, W = x.size()[2:]
-
- x = self.init_block(x)
-
- x1 = self.block1_1(x)
- x1 = self.block1_2(x1)
- x1 = self.block1_3(x1)
-
- x2 = self.block2_1(x1)
- x2 = self.block2_2(x2)
- x2 = self.block2_3(x2)
- x2 = self.block2_4(x2)
-
- x3 = self.block3_1(x2)
- x3 = self.block3_2(x3)
- x3 = self.block3_3(x3)
- x3 = self.block3_4(x3)
-
- x4 = self.block4_1(x3)
- x4 = self.block4_2(x4)
- x4 = self.block4_3(x4)
- x4 = self.block4_4(x4)
-
- x_fuses = []
- if self.sa and self.dil is not None:
- for i, xi in enumerate([x1, x2, x3, x4]):
- x_fuses.append(self.attentions[i](self.dilations[i](xi)))
- elif self.sa:
- for i, xi in enumerate([x1, x2, x3, x4]):
- x_fuses.append(self.attentions[i](xi))
- elif self.dil is not None:
- for i, xi in enumerate([x1, x2, x3, x4]):
- x_fuses.append(self.dilations[i](xi))
- else:
- x_fuses = [x1, x2, x3, x4]
-
- e1 = self.conv_reduces[0](x_fuses[0])
- e1 = F.interpolate(e1, (H, W), mode="bilinear", align_corners=False)
-
- e2 = self.conv_reduces[1](x_fuses[1])
- e2 = F.interpolate(e2, (H, W), mode="bilinear", align_corners=False)
-
- e3 = self.conv_reduces[2](x_fuses[2])
- e3 = F.interpolate(e3, (H, W), mode="bilinear", align_corners=False)
-
- e4 = self.conv_reduces[3](x_fuses[3])
- e4 = F.interpolate(e4, (H, W), mode="bilinear", align_corners=False)
-
- outputs = [e1, e2, e3, e4]
-
- output = self.classifier(torch.cat(outputs, dim=1))
- #if not self.training:
- # return torch.sigmoid(output)
-
- outputs.append(output)
- outputs = [torch.sigmoid(r) for r in outputs]
- return outputs
-
-def config_model(model):
- model_options = list(nets.keys())
- assert model in model_options, \
- 'unrecognized model, please choose from %s' % str(model_options)
-
- # print(str(nets[model]))
-
- pdcs = []
- for i in range(16):
- layer_name = 'layer%d' % i
- op = nets[model][layer_name]
- pdcs.append(createConvFunc(op))
-
- return pdcs
-
-def pidinet():
- pdcs = config_model('carv4')
- dil = 24 #if args.dil else None
- return PiDiNet(60, pdcs, dil=dil, sa=True)
-
-
-if __name__ == '__main__':
- model = pidinet()
- ckp = torch.load('table5_pidinet.pth')['state_dict']
- model.load_state_dict({k.replace('module.',''):v for k, v in ckp.items()})
- im = cv2.imread('examples/test_my/cat_v4.png')
- im = img2tensor(im).unsqueeze(0)/255.
- res = model(im)[-1]
- res = res>0.5
- res = res.float()
- res = (res[0,0].cpu().data.numpy()*255.).astype(np.uint8)
- print(res.shape)
- cv2.imwrite('edge.png', res)
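
To make the pixel-difference convolution variants above more concrete, here is a hypothetical sketch using the central-difference ('cd') operator; the import path mirrors this file's location and the tensor shapes are arbitrary.

```python
# Illustrative only: the 'cd' variant behaves like F.conv2d but subtracts the kernel-sum (centre) response.
import torch
from ldm.modules.extra_condition.model_edge import createConvFunc

cd_conv = createConvFunc('cd')
x = torch.randn(1, 8, 32, 32)                  # NCHW input
w = torch.randn(16, 8, 3, 3)                   # the 'cd'/'ad' variants require 3x3 kernels
y = cd_conv(x, w, bias=None, stride=1, padding=1, dilation=1)   # padding must equal dilation here
print(y.shape)                                 # torch.Size([1, 16, 32, 32])
```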
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/grid/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/grid/Factory.js
deleted file mode 100644
index 37c342597af1c74a196ab07376073ddc3a7dd16c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/grid/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import Grid from './Grid.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('grid', function (config) {
- var gameObject = new Grid(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.Spinner.Grid', Grid);
-
-export default Grid;
\ No newline at end of file
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/cmd_inference.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/cmd_inference.py
deleted file mode 100644
index cfaee189e3905d5e6f0fc6c85f36fbc978cb1508..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/cmd_inference.py
+++ /dev/null
@@ -1,106 +0,0 @@
-"""该模块用于生成VITS文件
-使用方法
-
-python cmd_inference.py -m 模型路径 -c 配置文件路径 -o 输出文件路径 -l 输入的语言 -t 输入文本 -s 合成目标说话人名称
-
-可选参数
--ns 感情变化程度
--nsw 音素发音长度
--ls 整体语速
--on 输出文件的名称
-
-"""
-
-from pathlib import Path
-import utils
-from models import SynthesizerTrn
-import torch
-from torch import no_grad, LongTensor
-import librosa
-from text import text_to_sequence, _clean_text
-import commons
-import scipy.io.wavfile as wavf
-import os
-
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
-language_marks = {
- "Japanese": "",
- "日本語": "[JA]",
- "简体中文": "[ZH]",
- "English": "[EN]",
- "Mix": "",
-}
-
-
-def get_text(text, hps, is_symbol):
- text_norm = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm
-
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser(description='vits inference')
-    # required arguments
-    parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_0.pth", help='model path')
-    parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='config file path')
-    parser.add_argument('-o', '--output_path', type=str, default="output/vits", help='output path')
-    parser.add_argument('-l', '--language', type=str, default="日本語", help='input language')
-    parser.add_argument('-t', '--text', type=str, help='input text')
-    parser.add_argument('-s', '--spk', type=str, help='target speaker name')
-    # optional arguments
-    parser.add_argument('-on', '--output_name', type=str, default="output", help='output file name')
-    parser.add_argument('-ns', '--noise_scale', type=float, default=.667, help='degree of emotional variation (noise scale)')
-    parser.add_argument('-nsw', '--noise_scale_w', type=float, default=0.6, help='phoneme duration variation (noise scale w)')
-    parser.add_argument('-ls', '--length_scale', type=float, default=1, help='overall speaking speed (length scale)')
-
- args = parser.parse_args()
-
- model_path = args.model_path
- config_path = args.config_path
- output_dir = Path(args.output_path)
- output_dir.mkdir(parents=True, exist_ok=True)
-
- language = args.language
- text = args.text
- spk = args.spk
- noise_scale = args.noise_scale
- noise_scale_w = args.noise_scale_w
- length = args.length_scale
- output_name = args.output_name
-
- hps = utils.get_hparams_from_file(config_path)
- net_g = SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).to(device)
- _ = net_g.eval()
- _ = utils.load_checkpoint(model_path, net_g, None)
-
- speaker_ids = hps.speakers
-
-
- if language is not None:
- text = language_marks[language] + text + language_marks[language]
- speaker_id = speaker_ids[spk]
- stn_tst = get_text(text, hps, False)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0).to(device)
- x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device)
- sid = LongTensor([speaker_id]).to(device)
- audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w,
- length_scale=1.0 / length)[0][0, 0].data.cpu().float().numpy()
- del stn_tst, x_tst, x_tst_lengths, sid
-
- wavf.write(str(output_dir)+"/"+output_name+".wav",hps.data.sampling_rate,audio)
-
-
-
-
\ No newline at end of file
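
For completeness, a hypothetical way to drive the script above programmatically; the input text and speaker name are placeholders, and valid speakers must come from the model's `hps.speakers` mapping.

```python
# Illustrative only: wrap the CLI above with subprocess; replace the placeholders with real values.
import subprocess

subprocess.run([
    "python", "cmd_inference.py",
    "-m", "logs/44k/G_0.pth",
    "-c", "configs/config.json",
    "-o", "output/vits",
    "-l", "日本語",
    "-t", "こんにちは",          # placeholder input text
    "-s", "Yuumi",               # hypothetical speaker name; must exist in hps.speakers
], check=True)
```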
diff --git a/spaces/AlhitawiMohammed22/HTD_HTR/README.md b/spaces/AlhitawiMohammed22/HTD_HTR/README.md
deleted file mode 100644
index bdf0e07a0524bb3b47acce26e1fb0bac808a6311..0000000000000000000000000000000000000000
--- a/spaces/AlhitawiMohammed22/HTD_HTR/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: HTD HTR
-emoji: 📉
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.43.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-reference:
-
-https://github.com/kforcodeai/doctr-trocr
-
-https://github.com/mindee/doctr/issues/1307
-
-https://github.com/mindee/doctr/discussions/606
-
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_r50_fpn_1x_coco.py
deleted file mode 100644
index 29fb077369977688174a4c5e2a0cda548e8e3931..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,57 +0,0 @@
-_base_ = [
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-model = dict(
- type='GFL',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs='on_output',
- num_outs=5),
- bbox_head=dict(
- type='GFLHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=4,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- octave_base_scale=8,
- scales_per_octave=1,
- strides=[8, 16, 32, 64, 128]),
- loss_cls=dict(
- type='QualityFocalLoss',
- use_sigmoid=True,
- beta=2.0,
- loss_weight=1.0),
- loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
- reg_max=16,
- loss_bbox=dict(type='GIoULoss', loss_weight=2.0)),
- # training and testing settings
- train_cfg=dict(
- assigner=dict(type='ATSSAssigner', topk=9),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- test_cfg=dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.6),
- max_per_img=100))
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
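
Configs like the one above are not executed directly; they are consumed by MMDetection's builder and inference APIs. A minimal sketch follows, assuming mmdet and mmcv are installed and using a hypothetical checkpoint path.

```python
# Illustrative only: build the GFL detector from this config and run single-image inference.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/gfl/gfl_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/gfl_r50_fpn_1x_coco.pth'   # hypothetical path to trained weights
model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo.jpg')            # per-class arrays of [x1, y1, x2, y2, score]
```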
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco.py
deleted file mode 100644
index 4f2beb850ded95402d6b44c80553f224e15fb557..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco.py
+++ /dev/null
@@ -1,16 +0,0 @@
-_base_ = './retinanet_regnetx-3.2GF_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://regnetx_1.6gf',
- backbone=dict(
- type='RegNet',
- arch='regnetx_1.6gf',
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[72, 168, 408, 912],
- out_channels=256,
- num_outs=5))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_faster_rcnn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_faster_rcnn_r50_fpn_1x_coco.py
deleted file mode 100644
index 732c7ba3f607e2ac68f16acceddd16b1269aa2cf..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_faster_rcnn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,34 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-model = dict(
- roi_head=dict(
- bbox_head=dict(
- _delete_=True,
- type='SABLHead',
- num_classes=80,
- cls_in_channels=256,
- reg_in_channels=256,
- roi_feat_size=7,
- reg_feat_up_ratio=2,
- reg_pre_kernel=3,
- reg_post_kernel=3,
- reg_pre_num=2,
- reg_post_num=1,
- cls_out_channels=1024,
- reg_offset_out_channels=256,
- reg_cls_out_channels=256,
- num_cls_fcs=1,
- num_reg_fcs=0,
- reg_class_agnostic=True,
- norm_cfg=None,
- bbox_coder=dict(
- type='BucketingBBoxCoder', num_buckets=14, scale_factor=1.7),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox_reg=dict(type='SmoothL1Loss', beta=0.1,
- loss_weight=1.0))))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 5deb5872b00a30d5c18a980c4d6c1b0d915908b9..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x512_160k_ade20k.py
deleted file mode 100644
index d51bccb965dafc40d7859219d132dc9467740a1b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k.py'
-model = dict(
- pretrained='open-mmlab://resnest101',
- backbone=dict(
- type='ResNeSt',
- stem_channels=128,
- radix=2,
- reduction_factor=4,
- avg_down_stride=True))
diff --git a/spaces/AnishKumbhar/ChatBot/app.py b/spaces/AnishKumbhar/ChatBot/app.py
deleted file mode 100644
index b4d9b1e1220b5cc04f65b66309f7d327127c98e8..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/app.py
+++ /dev/null
@@ -1 +0,0 @@
-import os; os.system("python /text-generation-webui/server.py --share --chat --wbits 4 --groupsize 128 --model_type llama")
\ No newline at end of file
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/One-Click-Installers.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/One-Click-Installers.md
deleted file mode 100644
index 1597f484ef8a15e237259b8c8a92c20c86abcfa0..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/One-Click-Installers.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Additional one-click installers info
-
-## Installing nvcc
-
-If you have an NVIDIA GPU and ever need to compile something, like ExLlamav2 (that currently doesn't have pre-built wheels), you can install `nvcc` by running the `cmd_` script for your OS and entering this command:
-
-```
-conda install cuda -c nvidia/label/cuda-11.7.1
-```
-
-## Using an AMD GPU in Linux
-
-Requires ROCm SDK 5.4.2 or 5.4.3 to be installed. Some systems may also need: `sudo apt-get install libstdc++-12-dev`.
-
-Edit the "one_click.py" script using a text editor and un-comment and
-modify the lines near the top of the script according to your setup. In
-particular, modify the os.environ["ROCM_PATH"] = '/opt/rocm' line to
-point to your ROCm installation.
-
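-As a rough sketch, the edited section near the top of `one_click.py` might then look like this. Only the `ROCM_PATH` line is named by this guide; treat any other overrides in your copy of the script as setup-specific:
-
-```
-# one_click.py already imports os; un-comment this line and point it at your
-# ROCm installation (the path below is only the common default):
-os.environ["ROCM_PATH"] = '/opt/rocm'
-```
-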
-## WSL instructions
-
-If you do not have WSL installed, see here:
-https://learn.microsoft.com/en-us/windows/wsl/install
-
-If you want to install Linux to a drive other than C:, open PowerShell and
-enter these commands:
-
-```
-cd D:\Path\To\Linux
-$ProgressPreference = 'SilentlyContinue'
-Invoke-WebRequest -Uri <DistroURL> -OutFile Linux.appx -UseBasicParsing
-mv Linux.appx Linux.zip
-```
-
-Then open Linux.zip and you should see several .appx files inside.
-The one ending in _x64.appx contains the installer that you need.
-Extract the contents of that _x64.appx file and run the .exe inside to install.
-
-Linux Distro URLs:
-https://learn.microsoft.com/en-us/windows/wsl/install-manual#downloading-distributions
-
-**Ensure that the WSL Linux distro that you wish to use is set as the default!**
-
-Do this by using these commands:
-
-```
-wsl -l
-wsl -s <DistroName>
-```
-
-### Web UI Installation
-
-Run the "start" script. By default it will install the web UI in WSL:
-/home/{username}/text-gen-install
-
-To launch the web UI in the future after it is already installed, run
-the same "start" script. Ensure that one_click.py and wsl.sh are next to it!
-
-### Updating the web UI
-
-As an alternative to running the "update" script, you can also run "wsl.sh update" in WSL.
-
-### Running an interactive shell
-
-As an alternative to running the "cmd" script, you can also run "wsl.sh cmd" in WSL.
-
-### Changing the default install location
-
-To change this, you will need to edit the scripts as follows:
-
-wsl.sh: line ~22: `INSTALL_DIR="/path/to/install/dir"`
-
-Keep in mind that there is a long-standing bug in WSL that significantly
-slows drive read/write speeds when using a physical drive as opposed to
-the virtual one that Linux is installed in.
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/util.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/util.py
deleted file mode 100644
index e9fccb09ab022f04fd8c6905be99c8d4ce6b40fb..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/util.py
+++ /dev/null
@@ -1,76 +0,0 @@
-"""This module contains simple helper functions """
-from __future__ import print_function
-import torch
-import numpy as np
-import os
-import imageio
-
-
-def tensor2im(input_image, imtype=np.uint8):
- """"Converts a Tensor array into a numpy image array.
-
- Parameters:
- input_image (tensor) -- the input image tensor array
- imtype (type) -- the desired type of the converted numpy array
- """
- if not isinstance(input_image, np.ndarray):
- if isinstance(input_image, torch.Tensor): # get the data from a variable
- image_tensor = input_image.data
- else:
- return input_image
- image_numpy = image_tensor[0].cpu().float().numpy() # convert it into a numpy array
- if image_numpy.shape[0] == 1: # grayscale to RGB
- image_numpy = np.tile(image_numpy, (3, 1, 1))
- image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 # post-processing: transpose and scaling
- else: # if it is a numpy array, do nothing
- image_numpy = input_image
- return image_numpy.astype(imtype)
-
-
-def tensor2array(value_tensor):
- """Converts a Tensor array into a numpy
- :param value_tensor:
- :return:
- """
- if value_tensor.dim() == 3:
- numpy = value_tensor.view(-1).cpu().float().numpy()
- else:
- numpy = value_tensor[0].view(-1).cpu().float().numpy()
- return numpy
-
-
-def save_image(image_numpy, image_path):
- """Save a numpy image to the disk
-
- Parameters:
- image_numpy (numpy array) -- input numpy array
- image_path (str) -- the path of the image
- """
-
- if image_numpy.shape[2] == 1:
- image_numpy = image_numpy.reshape(image_numpy.shape[0], image_numpy.shape[1])
-
- imageio.imwrite(image_path, image_numpy)
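-
-
-# Minimal usage sketch for the helpers above (illustrative only; `fake_image`
-# and the output path are hypothetical, not part of this module):
-#
-#   image_numpy = tensor2im(fake_image)   # (1, C, H, W) tensor with values in [-1, 1]
-#   mkdir('./results')
-#   save_image(image_numpy, './results/fake.png')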
-
-
-def mkdirs(paths):
- """create empty directories if they don't exist
-
- Parameters:
- paths (str list) -- a list of directory paths
- """
- if isinstance(paths, list) and not isinstance(paths, str):
- for path in paths:
- mkdir(path)
- else:
- mkdir(paths)
-
-
-def mkdir(path):
- """create a single empty directory if it didn't exist
-
- Parameters:
- path (str) -- a single directory path
- """
- if not os.path.exists(path):
- os.makedirs(path)
\ No newline at end of file
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/__init__.py
deleted file mode 100644
index a263e31c1e3977712827ca229bbc04910b4e928e..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .flops_counter import get_model_complexity_info
-from .fuse_conv_bn import fuse_conv_bn
-from .sync_bn import revert_sync_batchnorm
-from .weight_init import (INITIALIZERS, Caffe2XavierInit, ConstantInit,
- KaimingInit, NormalInit, PretrainedInit,
- TruncNormalInit, UniformInit, XavierInit,
- bias_init_with_prob, caffe2_xavier_init,
- constant_init, initialize, kaiming_init, normal_init,
- trunc_normal_init, uniform_init, xavier_init)
-
-__all__ = [
- 'get_model_complexity_info', 'bias_init_with_prob', 'caffe2_xavier_init',
- 'constant_init', 'kaiming_init', 'normal_init', 'trunc_normal_init',
- 'uniform_init', 'xavier_init', 'fuse_conv_bn', 'initialize',
- 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit',
- 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit',
- 'Caffe2XavierInit', 'revert_sync_batchnorm'
-]
diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/utils/export.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/utils/export.py
deleted file mode 100644
index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/audiocraft/utils/export.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility to export a training checkpoint to a lightweight release checkpoint.
-"""
-
-from pathlib import Path
-import typing as tp
-
-from omegaconf import OmegaConf, DictConfig
-import torch
-
-
-def _clean_lm_cfg(cfg: DictConfig):
- OmegaConf.set_struct(cfg, False)
- # This used to be set automatically in the LM solver, need a more robust solution
- # for the future.
- cfg['transformer_lm']['card'] = 2048
- cfg['transformer_lm']['n_q'] = 4
- # Experimental params no longer supported.
- bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters',
- 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop']
- for name in bad_params:
- del cfg['transformer_lm'][name]
- OmegaConf.set_struct(cfg, True)
- return cfg
-
-
-def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['ema']['state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']),
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
-
-
-def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['fsdp_best_state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg']))
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
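-
-
-# Minimal usage sketch (illustrative only; the checkpoint and output paths are
-# hypothetical; note the parent directory of the checkpoint must be an
-# 8-character Dora signature, as asserted above):
-#
-#   export_encodec('/checkpoints/a1b2c3d4/checkpoint.th', '/release/compression/')
-#   export_lm('/checkpoints/e5f6a7b8/checkpoint.th', '/release/lm/')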
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/console.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/console.py
deleted file mode 100644
index 7c363dfdc5e8aa344c26f285cb2000c632bcce49..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/console.py
+++ /dev/null
@@ -1,2633 +0,0 @@
-import inspect
-import os
-import platform
-import sys
-import threading
-import zlib
-from abc import ABC, abstractmethod
-from dataclasses import dataclass, field
-from datetime import datetime
-from functools import wraps
-from getpass import getpass
-from html import escape
-from inspect import isclass
-from itertools import islice
-from math import ceil
-from time import monotonic
-from types import FrameType, ModuleType, TracebackType
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- Callable,
- Dict,
- Iterable,
- List,
- Mapping,
- NamedTuple,
- Optional,
- TextIO,
- Tuple,
- Type,
- Union,
- cast,
-)
-
-from pip._vendor.rich._null_file import NULL_FILE
-
-if sys.version_info >= (3, 8):
- from typing import Literal, Protocol, runtime_checkable
-else:
- from pip._vendor.typing_extensions import (
- Literal,
- Protocol,
- runtime_checkable,
- ) # pragma: no cover
-
-from . import errors, themes
-from ._emoji_replace import _emoji_replace
-from ._export_format import CONSOLE_HTML_FORMAT, CONSOLE_SVG_FORMAT
-from ._fileno import get_fileno
-from ._log_render import FormatTimeCallable, LogRender
-from .align import Align, AlignMethod
-from .color import ColorSystem, blend_rgb
-from .control import Control
-from .emoji import EmojiVariant
-from .highlighter import NullHighlighter, ReprHighlighter
-from .markup import render as render_markup
-from .measure import Measurement, measure_renderables
-from .pager import Pager, SystemPager
-from .pretty import Pretty, is_expandable
-from .protocol import rich_cast
-from .region import Region
-from .scope import render_scope
-from .screen import Screen
-from .segment import Segment
-from .style import Style, StyleType
-from .styled import Styled
-from .terminal_theme import DEFAULT_TERMINAL_THEME, SVG_EXPORT_THEME, TerminalTheme
-from .text import Text, TextType
-from .theme import Theme, ThemeStack
-
-if TYPE_CHECKING:
- from ._windows import WindowsConsoleFeatures
- from .live import Live
- from .status import Status
-
-JUPYTER_DEFAULT_COLUMNS = 115
-JUPYTER_DEFAULT_LINES = 100
-WINDOWS = platform.system() == "Windows"
-
-HighlighterType = Callable[[Union[str, "Text"]], "Text"]
-JustifyMethod = Literal["default", "left", "center", "right", "full"]
-OverflowMethod = Literal["fold", "crop", "ellipsis", "ignore"]
-
-
-class NoChange:
- pass
-
-
-NO_CHANGE = NoChange()
-
-try:
- _STDIN_FILENO = sys.__stdin__.fileno()
-except Exception:
- _STDIN_FILENO = 0
-try:
- _STDOUT_FILENO = sys.__stdout__.fileno()
-except Exception:
- _STDOUT_FILENO = 1
-try:
- _STDERR_FILENO = sys.__stderr__.fileno()
-except Exception:
- _STDERR_FILENO = 2
-
-_STD_STREAMS = (_STDIN_FILENO, _STDOUT_FILENO, _STDERR_FILENO)
-_STD_STREAMS_OUTPUT = (_STDOUT_FILENO, _STDERR_FILENO)
-
-
-_TERM_COLORS = {
- "kitty": ColorSystem.EIGHT_BIT,
- "256color": ColorSystem.EIGHT_BIT,
- "16color": ColorSystem.STANDARD,
-}
-
-
-class ConsoleDimensions(NamedTuple):
- """Size of the terminal."""
-
- width: int
- """The width of the console in 'cells'."""
- height: int
- """The height of the console in lines."""
-
-
-@dataclass
-class ConsoleOptions:
- """Options for __rich_console__ method."""
-
- size: ConsoleDimensions
- """Size of console."""
- legacy_windows: bool
- """legacy_windows: flag for legacy windows."""
- min_width: int
- """Minimum width of renderable."""
- max_width: int
- """Maximum width of renderable."""
- is_terminal: bool
- """True if the target is a terminal, otherwise False."""
- encoding: str
- """Encoding of terminal."""
- max_height: int
- """Height of container (starts as terminal)"""
- justify: Optional[JustifyMethod] = None
- """Justify value override for renderable."""
- overflow: Optional[OverflowMethod] = None
- """Overflow value override for renderable."""
- no_wrap: Optional[bool] = False
- """Disable wrapping for text."""
- highlight: Optional[bool] = None
- """Highlight override for render_str."""
- markup: Optional[bool] = None
- """Enable markup when rendering strings."""
- height: Optional[int] = None
-
- @property
- def ascii_only(self) -> bool:
- """Check if renderables should use ascii only."""
- return not self.encoding.startswith("utf")
-
- def copy(self) -> "ConsoleOptions":
- """Return a copy of the options.
-
- Returns:
- ConsoleOptions: a copy of self.
- """
- options: ConsoleOptions = ConsoleOptions.__new__(ConsoleOptions)
- options.__dict__ = self.__dict__.copy()
- return options
-
- def update(
- self,
- *,
- width: Union[int, NoChange] = NO_CHANGE,
- min_width: Union[int, NoChange] = NO_CHANGE,
- max_width: Union[int, NoChange] = NO_CHANGE,
- justify: Union[Optional[JustifyMethod], NoChange] = NO_CHANGE,
- overflow: Union[Optional[OverflowMethod], NoChange] = NO_CHANGE,
- no_wrap: Union[Optional[bool], NoChange] = NO_CHANGE,
- highlight: Union[Optional[bool], NoChange] = NO_CHANGE,
- markup: Union[Optional[bool], NoChange] = NO_CHANGE,
- height: Union[Optional[int], NoChange] = NO_CHANGE,
- ) -> "ConsoleOptions":
- """Update values, return a copy."""
- options = self.copy()
- if not isinstance(width, NoChange):
- options.min_width = options.max_width = max(0, width)
- if not isinstance(min_width, NoChange):
- options.min_width = min_width
- if not isinstance(max_width, NoChange):
- options.max_width = max_width
- if not isinstance(justify, NoChange):
- options.justify = justify
- if not isinstance(overflow, NoChange):
- options.overflow = overflow
- if not isinstance(no_wrap, NoChange):
- options.no_wrap = no_wrap
- if not isinstance(highlight, NoChange):
- options.highlight = highlight
- if not isinstance(markup, NoChange):
- options.markup = markup
- if not isinstance(height, NoChange):
- if height is not None:
- options.max_height = height
- options.height = None if height is None else max(0, height)
- return options
-
- def update_width(self, width: int) -> "ConsoleOptions":
- """Update just the width, return a copy.
-
- Args:
- width (int): New width (sets both min_width and max_width)
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.min_width = options.max_width = max(0, width)
- return options
-
- def update_height(self, height: int) -> "ConsoleOptions":
- """Update the height, and return a copy.
-
- Args:
- height (int): New height
-
- Returns:
- ~ConsoleOptions: New Console options instance.
- """
- options = self.copy()
- options.max_height = options.height = height
- return options
-
- def reset_height(self) -> "ConsoleOptions":
- """Return a copy of the options with height set to ``None``.
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.height = None
- return options
-
- def update_dimensions(self, width: int, height: int) -> "ConsoleOptions":
- """Update the width and height, and return a copy.
-
- Args:
- width (int): New width (sets both min_width and max_width).
- height (int): New height.
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.min_width = options.max_width = max(0, width)
- options.height = options.max_height = height
- return options
-
-
-@runtime_checkable
-class RichCast(Protocol):
- """An object that may be 'cast' to a console renderable."""
-
- def __rich__(
- self,
- ) -> Union["ConsoleRenderable", "RichCast", str]: # pragma: no cover
- ...
-
-
-@runtime_checkable
-class ConsoleRenderable(Protocol):
- """An object that supports the console protocol."""
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult": # pragma: no cover
- ...
-
-
-# A type that may be rendered by Console.
-RenderableType = Union[ConsoleRenderable, RichCast, str]
-
-# The result of calling a __rich_console__ method.
-RenderResult = Iterable[Union[RenderableType, Segment]]
-
-_null_highlighter = NullHighlighter()
-
-
-class CaptureError(Exception):
- """An error in the Capture context manager."""
-
-
-class NewLine:
- """A renderable to generate new line(s)"""
-
- def __init__(self, count: int = 1) -> None:
- self.count = count
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> Iterable[Segment]:
- yield Segment("\n" * self.count)
-
-
-class ScreenUpdate:
- """Render a list of lines at a given offset."""
-
- def __init__(self, lines: List[List[Segment]], x: int, y: int) -> None:
- self._lines = lines
- self.x = x
- self.y = y
-
- def __rich_console__(
- self, console: "Console", options: ConsoleOptions
- ) -> RenderResult:
- x = self.x
- move_to = Control.move_to
- for offset, line in enumerate(self._lines, self.y):
- yield move_to(x, offset)
- yield from line
-
-
-class Capture:
- """Context manager to capture the result of printing to the console.
- See :meth:`~rich.console.Console.capture` for how to use.
-
- Args:
- console (Console): A console instance to capture output.
- """
-
- def __init__(self, console: "Console") -> None:
- self._console = console
- self._result: Optional[str] = None
-
- def __enter__(self) -> "Capture":
- self._console.begin_capture()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self._result = self._console.end_capture()
-
- def get(self) -> str:
- """Get the result of the capture."""
- if self._result is None:
- raise CaptureError(
- "Capture result is not available until context manager exits."
- )
- return self._result
-
-
-class ThemeContext:
- """A context manager to use a temporary theme. See :meth:`~rich.console.Console.use_theme` for usage."""
-
- def __init__(self, console: "Console", theme: Theme, inherit: bool = True) -> None:
- self.console = console
- self.theme = theme
- self.inherit = inherit
-
- def __enter__(self) -> "ThemeContext":
- self.console.push_theme(self.theme)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self.console.pop_theme()
-
-
-class PagerContext:
- """A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage."""
-
- def __init__(
- self,
- console: "Console",
- pager: Optional[Pager] = None,
- styles: bool = False,
- links: bool = False,
- ) -> None:
- self._console = console
- self.pager = SystemPager() if pager is None else pager
- self.styles = styles
- self.links = links
-
- def __enter__(self) -> "PagerContext":
- self._console._enter_buffer()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- if exc_type is None:
- with self._console._lock:
- buffer: List[Segment] = self._console._buffer[:]
- del self._console._buffer[:]
- segments: Iterable[Segment] = buffer
- if not self.styles:
- segments = Segment.strip_styles(segments)
- elif not self.links:
- segments = Segment.strip_links(segments)
- content = self._console._render_buffer(segments)
- self.pager.show(content)
- self._console._exit_buffer()
-
-
-class ScreenContext:
- """A context manager that enables an alternative screen. See :meth:`~rich.console.Console.screen` for usage."""
-
- def __init__(
- self, console: "Console", hide_cursor: bool, style: StyleType = ""
- ) -> None:
- self.console = console
- self.hide_cursor = hide_cursor
- self.screen = Screen(style=style)
- self._changed = False
-
- def update(
- self, *renderables: RenderableType, style: Optional[StyleType] = None
- ) -> None:
- """Update the screen.
-
- Args:
- renderable (RenderableType, optional): Optional renderable to replace current renderable,
- or None for no change. Defaults to None.
- style (Style, optional): Replacement style, or None for no change. Defaults to None.
- """
- if renderables:
- self.screen.renderable = (
- Group(*renderables) if len(renderables) > 1 else renderables[0]
- )
- if style is not None:
- self.screen.style = style
- self.console.print(self.screen, end="")
-
- def __enter__(self) -> "ScreenContext":
- self._changed = self.console.set_alt_screen(True)
- if self._changed and self.hide_cursor:
- self.console.show_cursor(False)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- if self._changed:
- self.console.set_alt_screen(False)
- if self.hide_cursor:
- self.console.show_cursor(True)
-
-
-class Group:
- """Takes a group of renderables and returns a renderable object that renders the group.
-
- Args:
- renderables (Iterable[RenderableType]): An iterable of renderable objects.
- fit (bool, optional): Fit dimension of group to contents, or fill available space. Defaults to True.
- """
-
- def __init__(self, *renderables: "RenderableType", fit: bool = True) -> None:
- self._renderables = renderables
- self.fit = fit
- self._render: Optional[List[RenderableType]] = None
-
- @property
- def renderables(self) -> List["RenderableType"]:
- if self._render is None:
- self._render = list(self._renderables)
- return self._render
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- if self.fit:
- return measure_renderables(console, options, self.renderables)
- else:
- return Measurement(options.max_width, options.max_width)
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> RenderResult:
- yield from self.renderables
-
-
-def group(fit: bool = True) -> Callable[..., Callable[..., Group]]:
- """A decorator that turns an iterable of renderables in to a group.
-
- Args:
- fit (bool, optional): Fit dimension of group to contents, or fill available space. Defaults to True.
- """
-
- def decorator(
- method: Callable[..., Iterable[RenderableType]]
- ) -> Callable[..., Group]:
- """Convert a method that returns an iterable of renderables in to a Group."""
-
- @wraps(method)
- def _replace(*args: Any, **kwargs: Any) -> Group:
- renderables = method(*args, **kwargs)
- return Group(*renderables, fit=fit)
-
- return _replace
-
- return decorator
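-
-# Illustrative usage sketch for the decorator above (not part of the original
-# module; `get_lines` and the strings it yields are made up for the example):
-#
-#   @group()
-#   def get_lines():
-#       yield "Hello"
-#       yield "World"
-#
-#   Console().print(get_lines())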
-
-
-def _is_jupyter() -> bool: # pragma: no cover
- """Check if we're running in a Jupyter notebook."""
- try:
- get_ipython # type: ignore[name-defined]
- except NameError:
- return False
- ipython = get_ipython() # type: ignore[name-defined]
- shell = ipython.__class__.__name__
- if (
- "google.colab" in str(ipython.__class__)
- or os.getenv("DATABRICKS_RUNTIME_VERSION")
- or shell == "ZMQInteractiveShell"
- ):
- return True # Jupyter notebook or qtconsole
- elif shell == "TerminalInteractiveShell":
- return False # Terminal running IPython
- else:
- return False # Other type (?)
-
-
-COLOR_SYSTEMS = {
- "standard": ColorSystem.STANDARD,
- "256": ColorSystem.EIGHT_BIT,
- "truecolor": ColorSystem.TRUECOLOR,
- "windows": ColorSystem.WINDOWS,
-}
-
-_COLOR_SYSTEMS_NAMES = {system: name for name, system in COLOR_SYSTEMS.items()}
-
-
-@dataclass
-class ConsoleThreadLocals(threading.local):
- """Thread local values for Console context."""
-
- theme_stack: ThemeStack
- buffer: List[Segment] = field(default_factory=list)
- buffer_index: int = 0
-
-
-class RenderHook(ABC):
- """Provides hooks in to the render process."""
-
- @abstractmethod
- def process_renderables(
- self, renderables: List[ConsoleRenderable]
- ) -> List[ConsoleRenderable]:
- """Called with a list of objects to render.
-
- This method can return a new list of renderables, or modify and return the same list.
-
- Args:
- renderables (List[ConsoleRenderable]): A number of renderable objects.
-
- Returns:
- List[ConsoleRenderable]: A replacement list of renderables.
- """
-
-
-_windows_console_features: Optional["WindowsConsoleFeatures"] = None
-
-
-def get_windows_console_features() -> "WindowsConsoleFeatures": # pragma: no cover
- global _windows_console_features
- if _windows_console_features is not None:
- return _windows_console_features
- from ._windows import get_windows_console_features
-
- _windows_console_features = get_windows_console_features()
- return _windows_console_features
-
-
-def detect_legacy_windows() -> bool:
- """Detect legacy Windows."""
- return WINDOWS and not get_windows_console_features().vt
-
-
-class Console:
- """A high level console interface.
-
- Args:
- color_system (str, optional): The color system supported by your terminal,
- either ``"standard"``, ``"256"`` or ``"truecolor"``. Leave as ``"auto"`` to autodetect.
- force_terminal (Optional[bool], optional): Enable/disable terminal control codes, or None to auto-detect terminal. Defaults to None.
- force_jupyter (Optional[bool], optional): Enable/disable Jupyter rendering, or None to auto-detect Jupyter. Defaults to None.
- force_interactive (Optional[bool], optional): Enable/disable interactive mode, or None to auto detect. Defaults to None.
- soft_wrap (Optional[bool], optional): Set soft wrap default on print method. Defaults to False.
- theme (Theme, optional): An optional style theme object, or ``None`` for default theme.
- stderr (bool, optional): Use stderr rather than stdout if ``file`` is not specified. Defaults to False.
- file (IO, optional): A file object where the console should write to. Defaults to stdout.
- quiet (bool, Optional): Boolean to suppress all output. Defaults to False.
- width (int, optional): The width of the terminal. Leave as default to auto-detect width.
- height (int, optional): The height of the terminal. Leave as default to auto-detect height.
- style (StyleType, optional): Style to apply to all output, or None for no style. Defaults to None.
- no_color (Optional[bool], optional): Enabled no color mode, or None to auto detect. Defaults to None.
- tab_size (int, optional): Number of spaces used to replace a tab character. Defaults to 8.
- record (bool, optional): Boolean to enable recording of terminal output,
- required to call :meth:`export_html`, :meth:`export_svg`, and :meth:`export_text`. Defaults to False.
- markup (bool, optional): Boolean to enable :ref:`console_markup`. Defaults to True.
- emoji (bool, optional): Enable emoji code. Defaults to True.
- emoji_variant (str, optional): Optional emoji variant, either "text" or "emoji". Defaults to None.
- highlight (bool, optional): Enable automatic highlighting. Defaults to True.
- log_time (bool, optional): Boolean to enable logging of time by :meth:`log` methods. Defaults to True.
- log_path (bool, optional): Boolean to enable the logging of the caller by :meth:`log`. Defaults to True.
- log_time_format (Union[str, TimeFormatterCallable], optional): If ``log_time`` is enabled, either string for strftime or callable that formats the time. Defaults to "[%X] ".
- highlighter (HighlighterType, optional): Default highlighter.
- legacy_windows (bool, optional): Enable legacy Windows mode, or ``None`` to auto detect. Defaults to ``None``.
- safe_box (bool, optional): Restrict box options that don't render on legacy Windows.
- get_datetime (Callable[[], datetime], optional): Callable that gets the current time as a datetime.datetime object (used by Console.log),
- or None for datetime.now.
- get_time (Callable[[], time], optional): Callable that gets the current time in seconds, default uses time.monotonic.
- """
-
- _environ: Mapping[str, str] = os.environ
-
- def __init__(
- self,
- *,
- color_system: Optional[
- Literal["auto", "standard", "256", "truecolor", "windows"]
- ] = "auto",
- force_terminal: Optional[bool] = None,
- force_jupyter: Optional[bool] = None,
- force_interactive: Optional[bool] = None,
- soft_wrap: bool = False,
- theme: Optional[Theme] = None,
- stderr: bool = False,
- file: Optional[IO[str]] = None,
- quiet: bool = False,
- width: Optional[int] = None,
- height: Optional[int] = None,
- style: Optional[StyleType] = None,
- no_color: Optional[bool] = None,
- tab_size: int = 8,
- record: bool = False,
- markup: bool = True,
- emoji: bool = True,
- emoji_variant: Optional[EmojiVariant] = None,
- highlight: bool = True,
- log_time: bool = True,
- log_path: bool = True,
- log_time_format: Union[str, FormatTimeCallable] = "[%X]",
- highlighter: Optional["HighlighterType"] = ReprHighlighter(),
- legacy_windows: Optional[bool] = None,
- safe_box: bool = True,
- get_datetime: Optional[Callable[[], datetime]] = None,
- get_time: Optional[Callable[[], float]] = None,
- _environ: Optional[Mapping[str, str]] = None,
- ):
- # Copy of os.environ allows us to replace it for testing
- if _environ is not None:
- self._environ = _environ
-
- self.is_jupyter = _is_jupyter() if force_jupyter is None else force_jupyter
- if self.is_jupyter:
- if width is None:
- jupyter_columns = self._environ.get("JUPYTER_COLUMNS")
- if jupyter_columns is not None and jupyter_columns.isdigit():
- width = int(jupyter_columns)
- else:
- width = JUPYTER_DEFAULT_COLUMNS
- if height is None:
- jupyter_lines = self._environ.get("JUPYTER_LINES")
- if jupyter_lines is not None and jupyter_lines.isdigit():
- height = int(jupyter_lines)
- else:
- height = JUPYTER_DEFAULT_LINES
-
- self.tab_size = tab_size
- self.record = record
- self._markup = markup
- self._emoji = emoji
- self._emoji_variant: Optional[EmojiVariant] = emoji_variant
- self._highlight = highlight
- self.legacy_windows: bool = (
- (detect_legacy_windows() and not self.is_jupyter)
- if legacy_windows is None
- else legacy_windows
- )
-
- if width is None:
- columns = self._environ.get("COLUMNS")
- if columns is not None and columns.isdigit():
- width = int(columns) - self.legacy_windows
- if height is None:
- lines = self._environ.get("LINES")
- if lines is not None and lines.isdigit():
- height = int(lines)
-
- self.soft_wrap = soft_wrap
- self._width = width
- self._height = height
-
- self._color_system: Optional[ColorSystem]
-
- self._force_terminal = None
- if force_terminal is not None:
- self._force_terminal = force_terminal
-
- self._file = file
- self.quiet = quiet
- self.stderr = stderr
-
- if color_system is None:
- self._color_system = None
- elif color_system == "auto":
- self._color_system = self._detect_color_system()
- else:
- self._color_system = COLOR_SYSTEMS[color_system]
-
- self._lock = threading.RLock()
- self._log_render = LogRender(
- show_time=log_time,
- show_path=log_path,
- time_format=log_time_format,
- )
- self.highlighter: HighlighterType = highlighter or _null_highlighter
- self.safe_box = safe_box
- self.get_datetime = get_datetime or datetime.now
- self.get_time = get_time or monotonic
- self.style = style
- self.no_color = (
- no_color if no_color is not None else "NO_COLOR" in self._environ
- )
- self.is_interactive = (
- (self.is_terminal and not self.is_dumb_terminal)
- if force_interactive is None
- else force_interactive
- )
-
- self._record_buffer_lock = threading.RLock()
- self._thread_locals = ConsoleThreadLocals(
- theme_stack=ThemeStack(themes.DEFAULT if theme is None else theme)
- )
- self._record_buffer: List[Segment] = []
- self._render_hooks: List[RenderHook] = []
- self._live: Optional["Live"] = None
- self._is_alt_screen = False
-
- def __repr__(self) -> str:
- return f""
-
- @property
- def file(self) -> IO[str]:
- """Get the file object to write to."""
- file = self._file or (sys.stderr if self.stderr else sys.stdout)
- file = getattr(file, "rich_proxied_file", file)
- if file is None:
- file = NULL_FILE
- return file
-
- @file.setter
- def file(self, new_file: IO[str]) -> None:
- """Set a new file object."""
- self._file = new_file
-
- @property
- def _buffer(self) -> List[Segment]:
- """Get a thread local buffer."""
- return self._thread_locals.buffer
-
- @property
- def _buffer_index(self) -> int:
- """Get a thread local buffer."""
- return self._thread_locals.buffer_index
-
- @_buffer_index.setter
- def _buffer_index(self, value: int) -> None:
- self._thread_locals.buffer_index = value
-
- @property
- def _theme_stack(self) -> ThemeStack:
- """Get the thread local theme stack."""
- return self._thread_locals.theme_stack
-
- def _detect_color_system(self) -> Optional[ColorSystem]:
- """Detect color system from env vars."""
- if self.is_jupyter:
- return ColorSystem.TRUECOLOR
- if not self.is_terminal or self.is_dumb_terminal:
- return None
- if WINDOWS: # pragma: no cover
- if self.legacy_windows: # pragma: no cover
- return ColorSystem.WINDOWS
- windows_console_features = get_windows_console_features()
- return (
- ColorSystem.TRUECOLOR
- if windows_console_features.truecolor
- else ColorSystem.EIGHT_BIT
- )
- else:
- color_term = self._environ.get("COLORTERM", "").strip().lower()
- if color_term in ("truecolor", "24bit"):
- return ColorSystem.TRUECOLOR
- term = self._environ.get("TERM", "").strip().lower()
- _term_name, _hyphen, colors = term.rpartition("-")
- color_system = _TERM_COLORS.get(colors, ColorSystem.STANDARD)
- return color_system
-
- def _enter_buffer(self) -> None:
- """Enter in to a buffer context, and buffer all output."""
- self._buffer_index += 1
-
- def _exit_buffer(self) -> None:
- """Leave buffer context, and render content if required."""
- self._buffer_index -= 1
- self._check_buffer()
-
- def set_live(self, live: "Live") -> None:
- """Set Live instance. Used by Live context manager.
-
- Args:
- live (Live): Live instance using this Console.
-
- Raises:
- errors.LiveError: If this Console has a Live context currently active.
- """
- with self._lock:
- if self._live is not None:
- raise errors.LiveError("Only one live display may be active at once")
- self._live = live
-
- def clear_live(self) -> None:
- """Clear the Live instance."""
- with self._lock:
- self._live = None
-
- def push_render_hook(self, hook: RenderHook) -> None:
- """Add a new render hook to the stack.
-
- Args:
- hook (RenderHook): Render hook instance.
- """
- with self._lock:
- self._render_hooks.append(hook)
-
- def pop_render_hook(self) -> None:
- """Pop the last renderhook from the stack."""
- with self._lock:
- self._render_hooks.pop()
-
- def __enter__(self) -> "Console":
- """Own context manager to enter buffer context."""
- self._enter_buffer()
- return self
-
- def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
- """Exit buffer context."""
- self._exit_buffer()
-
- def begin_capture(self) -> None:
- """Begin capturing console output. Call :meth:`end_capture` to exit capture mode and return output."""
- self._enter_buffer()
-
- def end_capture(self) -> str:
- """End capture mode and return captured string.
-
- Returns:
- str: Console output.
- """
- render_result = self._render_buffer(self._buffer)
- del self._buffer[:]
- self._exit_buffer()
- return render_result
-
- def push_theme(self, theme: Theme, *, inherit: bool = True) -> None:
- """Push a new theme on to the top of the stack, replacing the styles from the previous theme.
- Generally speaking, you should call :meth:`~rich.console.Console.use_theme` to get a context manager, rather
- than calling this method directly.
-
- Args:
- theme (Theme): A theme instance.
- inherit (bool, optional): Inherit existing styles. Defaults to True.
- """
- self._theme_stack.push_theme(theme, inherit=inherit)
-
- def pop_theme(self) -> None:
- """Remove theme from top of stack, restoring previous theme."""
- self._theme_stack.pop_theme()
-
- def use_theme(self, theme: Theme, *, inherit: bool = True) -> ThemeContext:
- """Use a different theme for the duration of the context manager.
-
- Args:
- theme (Theme): Theme instance to use.
- inherit (bool, optional): Inherit existing console styles. Defaults to True.
-
- Returns:
- ThemeContext: A context manager that applies the given theme.
- """
- return ThemeContext(self, theme, inherit)
-
- @property
- def color_system(self) -> Optional[str]:
- """Get color system string.
-
- Returns:
- Optional[str]: "standard", "256" or "truecolor".
- """
-
- if self._color_system is not None:
- return _COLOR_SYSTEMS_NAMES[self._color_system]
- else:
- return None
-
- @property
- def encoding(self) -> str:
- """Get the encoding of the console file, e.g. ``"utf-8"``.
-
- Returns:
- str: A standard encoding string.
- """
- return (getattr(self.file, "encoding", "utf-8") or "utf-8").lower()
-
- @property
- def is_terminal(self) -> bool:
- """Check if the console is writing to a terminal.
-
- Returns:
- bool: True if the console writing to a device capable of
- understanding terminal codes, otherwise False.
- """
- if self._force_terminal is not None:
- return self._force_terminal
-
- if hasattr(sys.stdin, "__module__") and sys.stdin.__module__.startswith(
- "idlelib"
- ):
- # Return False for Idle which claims to be a tty but can't handle ansi codes
- return False
-
- if self.is_jupyter:
- # return False for Jupyter, which may have FORCE_COLOR set
- return False
-
- # If FORCE_COLOR env var has any value at all, we assume a terminal.
- force_color = self._environ.get("FORCE_COLOR")
- if force_color is not None:
- self._force_terminal = True
-
- isatty: Optional[Callable[[], bool]] = getattr(self.file, "isatty", None)
- try:
- return False if isatty is None else isatty()
- except ValueError:
- # in some situation (at the end of a pytest run for example) isatty() can raise
- # ValueError: I/O operation on closed file
- # return False because we aren't in a terminal anymore
- return False
-
- @property
- def is_dumb_terminal(self) -> bool:
- """Detect dumb terminal.
-
- Returns:
- bool: True if writing to a dumb terminal, otherwise False.
-
- """
- _term = self._environ.get("TERM", "")
- is_dumb = _term.lower() in ("dumb", "unknown")
- return self.is_terminal and is_dumb
-
- @property
- def options(self) -> ConsoleOptions:
- """Get default console options."""
- return ConsoleOptions(
- max_height=self.size.height,
- size=self.size,
- legacy_windows=self.legacy_windows,
- min_width=1,
- max_width=self.width,
- encoding=self.encoding,
- is_terminal=self.is_terminal,
- )
-
- @property
- def size(self) -> ConsoleDimensions:
- """Get the size of the console.
-
- Returns:
- ConsoleDimensions: A named tuple containing the dimensions.
- """
-
- if self._width is not None and self._height is not None:
- return ConsoleDimensions(self._width - self.legacy_windows, self._height)
-
- if self.is_dumb_terminal:
- return ConsoleDimensions(80, 25)
-
- width: Optional[int] = None
- height: Optional[int] = None
-
- if WINDOWS: # pragma: no cover
- try:
- width, height = os.get_terminal_size()
- except (AttributeError, ValueError, OSError): # Probably not a terminal
- pass
- else:
- for file_descriptor in _STD_STREAMS:
- try:
- width, height = os.get_terminal_size(file_descriptor)
- except (AttributeError, ValueError, OSError):
- pass
- else:
- break
-
- columns = self._environ.get("COLUMNS")
- if columns is not None and columns.isdigit():
- width = int(columns)
- lines = self._environ.get("LINES")
- if lines is not None and lines.isdigit():
- height = int(lines)
-
- # get_terminal_size can report 0, 0 if run from pseudo-terminal
- width = width or 80
- height = height or 25
- return ConsoleDimensions(
- width - self.legacy_windows if self._width is None else self._width,
- height if self._height is None else self._height,
- )
-
- @size.setter
- def size(self, new_size: Tuple[int, int]) -> None:
- """Set a new size for the terminal.
-
- Args:
- new_size (Tuple[int, int]): New width and height.
- """
- width, height = new_size
- self._width = width
- self._height = height
-
- @property
- def width(self) -> int:
- """Get the width of the console.
-
- Returns:
- int: The width (in characters) of the console.
- """
- return self.size.width
-
- @width.setter
- def width(self, width: int) -> None:
- """Set width.
-
- Args:
- width (int): New width.
- """
- self._width = width
-
- @property
- def height(self) -> int:
- """Get the height of the console.
-
- Returns:
- int: The height (in lines) of the console.
- """
- return self.size.height
-
- @height.setter
- def height(self, height: int) -> None:
- """Set height.
-
- Args:
- height (int): new height.
- """
- self._height = height
-
- def bell(self) -> None:
- """Play a 'bell' sound (if supported by the terminal)."""
- self.control(Control.bell())
-
- def capture(self) -> Capture:
- """A context manager to *capture* the result of print() or log() in a string,
- rather than writing it to the console.
-
- Example:
- >>> from rich.console import Console
- >>> console = Console()
- >>> with console.capture() as capture:
- ... console.print("[bold magenta]Hello World[/]")
- >>> print(capture.get())
-
- Returns:
- Capture: Context manager that disables writing to the terminal.
- """
- capture = Capture(self)
- return capture
-
- def pager(
- self, pager: Optional[Pager] = None, styles: bool = False, links: bool = False
- ) -> PagerContext:
- """A context manager to display anything printed within a "pager". The pager application
- is defined by the system and will typically support at least pressing a key to scroll.
-
- Args:
- pager (Pager, optional): A pager object, or None to use :class:`~rich.pager.SystemPager`. Defaults to None.
- styles (bool, optional): Show styles in pager. Defaults to False.
- links (bool, optional): Show links in pager. Defaults to False.
-
- Example:
- >>> from rich.console import Console
- >>> from rich.__main__ import make_test_card
- >>> console = Console()
- >>> with console.pager():
- console.print(make_test_card())
-
- Returns:
- PagerContext: A context manager.
- """
- return PagerContext(self, pager=pager, styles=styles, links=links)
-
- def line(self, count: int = 1) -> None:
- """Write new line(s).
-
- Args:
- count (int, optional): Number of new lines. Defaults to 1.
- """
-
- assert count >= 0, "count must be >= 0"
- self.print(NewLine(count))
-
- def clear(self, home: bool = True) -> None:
- """Clear the screen.
-
- Args:
- home (bool, optional): Also move the cursor to 'home' position. Defaults to True.
- """
- if home:
- self.control(Control.clear(), Control.home())
- else:
- self.control(Control.clear())
-
- def status(
- self,
- status: RenderableType,
- *,
- spinner: str = "dots",
- spinner_style: StyleType = "status.spinner",
- speed: float = 1.0,
- refresh_per_second: float = 12.5,
- ) -> "Status":
- """Display a status and spinner.
-
- Args:
- status (RenderableType): A status renderable (str or Text typically).
- spinner (str, optional): Name of spinner animation (see python -m rich.spinner). Defaults to "dots".
- spinner_style (StyleType, optional): Style of spinner. Defaults to "status.spinner".
- speed (float, optional): Speed factor for spinner animation. Defaults to 1.0.
- refresh_per_second (float, optional): Number of refreshes per second. Defaults to 12.5.
-
- Returns:
- Status: A Status object that may be used as a context manager.
- """
- from .status import Status
-
- status_renderable = Status(
- status,
- console=self,
- spinner=spinner,
- spinner_style=spinner_style,
- speed=speed,
- refresh_per_second=refresh_per_second,
- )
- return status_renderable
-
- def show_cursor(self, show: bool = True) -> bool:
- """Show or hide the cursor.
-
- Args:
- show (bool, optional): Set visibility of the cursor.
- """
- if self.is_terminal:
- self.control(Control.show_cursor(show))
- return True
- return False
-
- def set_alt_screen(self, enable: bool = True) -> bool:
- """Enables alternative screen mode.
-
- Note, if you enable this mode, you should ensure that it is disabled before
- the application exits. See :meth:`~rich.Console.screen` for a context manager
- that handles this for you.
-
- Args:
- enable (bool, optional): Enable (True) or disable (False) alternate screen. Defaults to True.
-
- Returns:
- bool: True if the control codes were written.
-
- """
- changed = False
- if self.is_terminal and not self.legacy_windows:
- self.control(Control.alt_screen(enable))
- changed = True
- self._is_alt_screen = enable
- return changed
-
- @property
- def is_alt_screen(self) -> bool:
- """Check if the alt screen was enabled.
-
- Returns:
- bool: True if the alt screen was enabled, otherwise False.
- """
- return self._is_alt_screen
-
- def set_window_title(self, title: str) -> bool:
- """Set the title of the console terminal window.
-
- Warning: There is no means within Rich of "resetting" the window title to its
- previous value, meaning the title you set will persist even after your application
- exits.
-
- ``fish`` shell resets the window title before and after each command by default,
- negating this issue. Windows Terminal and command prompt will also reset the title for you.
- Most other shells and terminals, however, do not do this.
-
- Some terminals may require configuration changes before you can set the title.
- Some terminals may not support setting the title at all.
-
- Other software (including the terminal itself, the shell, custom prompts, plugins, etc.)
- may also set the terminal window title. This could result in whatever value you write
- using this method being overwritten.
-
- Args:
- title (str): The new title of the terminal window.
-
- Returns:
- bool: True if the control code to change the terminal title was
- written, otherwise False. Note that a return value of True
- does not guarantee that the window title has actually changed,
- since the feature may be unsupported/disabled in some terminals.
- """
- if self.is_terminal:
- self.control(Control.title(title))
- return True
- return False
-
- def screen(
- self, hide_cursor: bool = True, style: Optional[StyleType] = None
- ) -> "ScreenContext":
- """Context manager to enable and disable 'alternative screen' mode.
-
- Args:
- hide_cursor (bool, optional): Also hide the cursor. Defaults to True.
- style (Style, optional): Optional style for screen. Defaults to None.
-
- Returns:
- ~ScreenContext: Context which enables alternate screen on enter, and disables it on exit.
- """
- return ScreenContext(self, hide_cursor=hide_cursor, style=style or "")
-
- def measure(
- self, renderable: RenderableType, *, options: Optional[ConsoleOptions] = None
- ) -> Measurement:
- """Measure a renderable. Returns a :class:`~rich.measure.Measurement` object which contains
- information regarding the number of characters required to print the renderable.
-
- Args:
- renderable (RenderableType): Any renderable or string.
- options (Optional[ConsoleOptions], optional): Options to use when measuring, or None
- to use default options. Defaults to None.
-
- Returns:
- Measurement: A measurement of the renderable.
- """
- measurement = Measurement.get(self, options or self.options, renderable)
- return measurement
-
- def render(
- self, renderable: RenderableType, options: Optional[ConsoleOptions] = None
- ) -> Iterable[Segment]:
- """Render an object in to an iterable of `Segment` instances.
-
- This method contains the logic for rendering objects with the console protocol.
- You are unlikely to need to use it directly, unless you are extending the library.
-
- Args:
- renderable (RenderableType): An object supporting the console protocol, or
- an object that may be converted to a string.
- options (ConsoleOptions, optional): An options object, or None to use self.options. Defaults to None.
-
- Returns:
- Iterable[Segment]: An iterable of segments that may be rendered.
- """
-
- _options = options or self.options
- if _options.max_width < 1:
- # No space to render anything. This prevents potential recursion errors.
- return
- render_iterable: RenderResult
-
- renderable = rich_cast(renderable)
- if hasattr(renderable, "__rich_console__") and not isclass(renderable):
- render_iterable = renderable.__rich_console__(self, _options) # type: ignore[union-attr]
- elif isinstance(renderable, str):
- text_renderable = self.render_str(
- renderable, highlight=_options.highlight, markup=_options.markup
- )
- render_iterable = text_renderable.__rich_console__(self, _options)
- else:
- raise errors.NotRenderableError(
- f"Unable to render {renderable!r}; "
- "A str, Segment or object with __rich_console__ method is required"
- )
-
- try:
- iter_render = iter(render_iterable)
- except TypeError:
- raise errors.NotRenderableError(
- f"object {render_iterable!r} is not renderable"
- )
- _Segment = Segment
- _options = _options.reset_height()
- for render_output in iter_render:
- if isinstance(render_output, _Segment):
- yield render_output
- else:
- yield from self.render(render_output, _options)
-
- def render_lines(
- self,
- renderable: RenderableType,
- options: Optional[ConsoleOptions] = None,
- *,
- style: Optional[Style] = None,
- pad: bool = True,
- new_lines: bool = False,
- ) -> List[List[Segment]]:
- """Render objects in to a list of lines.
-
- The output of render_lines is useful when further formatting of rendered console text
- is required, such as the Panel class which draws a border around any renderable object.
-
- Args:
- renderable (RenderableType): Any object renderable in the console.
- options (Optional[ConsoleOptions], optional): Console options, or None to use self.options. Default to ``None``.
- style (Style, optional): Optional style to apply to renderables. Defaults to ``None``.
- pad (bool, optional): Pad lines shorter than render width. Defaults to ``True``.
- new_lines (bool, optional): Include "\n" characters at end of lines.
-
- Returns:
- List[List[Segment]]: A list of lines, where a line is a list of Segment objects.
- """
- with self._lock:
- render_options = options or self.options
- _rendered = self.render(renderable, render_options)
- if style:
- _rendered = Segment.apply_style(_rendered, style)
-
- render_height = render_options.height
- if render_height is not None:
- render_height = max(0, render_height)
-
- lines = list(
- islice(
- Segment.split_and_crop_lines(
- _rendered,
- render_options.max_width,
- include_new_lines=new_lines,
- pad=pad,
- style=style,
- ),
- None,
- render_height,
- )
- )
- if render_options.height is not None:
- extra_lines = render_options.height - len(lines)
- if extra_lines > 0:
- pad_line = [
- [Segment(" " * render_options.max_width, style), Segment("\n")]
- if new_lines
- else [Segment(" " * render_options.max_width, style)]
- ]
- lines.extend(pad_line * extra_lines)
-
- return lines
-
- def render_str(
- self,
- text: str,
- *,
- style: Union[str, Style] = "",
- justify: Optional[JustifyMethod] = None,
- overflow: Optional[OverflowMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- highlighter: Optional[HighlighterType] = None,
- ) -> "Text":
- """Convert a string to a Text instance. This is called automatically if
- you print or log a string.
-
- Args:
- text (str): Text to render.
- style (Union[str, Style], optional): Style to apply to rendered text.
- justify (str, optional): Justify method: "default", "left", "center", "full", or "right". Defaults to ``None``.
- overflow (str, optional): Overflow method: "crop", "fold", or "ellipsis". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji, or ``None`` to use Console default.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use Console default.
- highlight (Optional[bool], optional): Enable highlighting, or ``None`` to use Console default.
- highlighter (HighlighterType, optional): Optional highlighter to apply.
- Returns:
- ConsoleRenderable: Renderable object.
-
- """
- emoji_enabled = emoji or (emoji is None and self._emoji)
- markup_enabled = markup or (markup is None and self._markup)
- highlight_enabled = highlight or (highlight is None and self._highlight)
-
- if markup_enabled:
- rich_text = render_markup(
- text,
- style=style,
- emoji=emoji_enabled,
- emoji_variant=self._emoji_variant,
- )
- rich_text.justify = justify
- rich_text.overflow = overflow
- else:
- rich_text = Text(
- _emoji_replace(text, default_variant=self._emoji_variant)
- if emoji_enabled
- else text,
- justify=justify,
- overflow=overflow,
- style=style,
- )
-
- _highlighter = (highlighter or self.highlighter) if highlight_enabled else None
- if _highlighter is not None:
- highlight_text = _highlighter(str(rich_text))
- highlight_text.copy_styles(rich_text)
- return highlight_text
-
- return rich_text
-
- def get_style(
- self, name: Union[str, Style], *, default: Optional[Union[Style, str]] = None
- ) -> Style:
- """Get a Style instance by its theme name or parse a definition.
-
- Args:
- name (str): The name of a style or a style definition.
-
- Returns:
- Style: A Style object.
-
- Raises:
- MissingStyle: If no style could be parsed from name.
-
- """
- if isinstance(name, Style):
- return name
-
- try:
- style = self._theme_stack.get(name)
- if style is None:
- style = Style.parse(name)
- return style.copy() if style.link else style
- except errors.StyleSyntaxError as error:
- if default is not None:
- return self.get_style(default)
- raise errors.MissingStyle(
- f"Failed to get style {name!r}; {error}"
- ) from None
-
- def _collect_renderables(
- self,
- objects: Iterable[Any],
- sep: str,
- end: str,
- *,
- justify: Optional[JustifyMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- ) -> List[ConsoleRenderable]:
- """Combine a number of renderables and text into one renderable.
-
- Args:
- objects (Iterable[Any]): Anything that Rich can render.
- sep (str): String to write between print data.
- end (str): String to write at end of print data.
- justify (str, optional): One of "left", "right", "center", or "full". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default.
-
- Returns:
- List[ConsoleRenderable]: A list of things to render.
- """
- renderables: List[ConsoleRenderable] = []
- _append = renderables.append
- text: List[Text] = []
- append_text = text.append
-
- append = _append
- if justify in ("left", "center", "right"):
-
- def align_append(renderable: RenderableType) -> None:
- _append(Align(renderable, cast(AlignMethod, justify)))
-
- append = align_append
-
- _highlighter: HighlighterType = _null_highlighter
- if highlight or (highlight is None and self._highlight):
- _highlighter = self.highlighter
-
- def check_text() -> None:
- if text:
- sep_text = Text(sep, justify=justify, end=end)
- append(sep_text.join(text))
- text.clear()
-
- for renderable in objects:
- renderable = rich_cast(renderable)
- if isinstance(renderable, str):
- append_text(
- self.render_str(
- renderable, emoji=emoji, markup=markup, highlighter=_highlighter
- )
- )
- elif isinstance(renderable, Text):
- append_text(renderable)
- elif isinstance(renderable, ConsoleRenderable):
- check_text()
- append(renderable)
- elif is_expandable(renderable):
- check_text()
- append(Pretty(renderable, highlighter=_highlighter))
- else:
- append_text(_highlighter(str(renderable)))
-
- check_text()
-
- if self.style is not None:
- style = self.get_style(self.style)
- renderables = [Styled(renderable, style) for renderable in renderables]
-
- return renderables
-
- def rule(
- self,
- title: TextType = "",
- *,
- characters: str = "─",
- style: Union[str, Style] = "rule.line",
- align: AlignMethod = "center",
- ) -> None:
- """Draw a line with optional centered title.
-
- Args:
- title (str, optional): Text to render over the rule. Defaults to "".
- characters (str, optional): Character(s) to form the line. Defaults to "─".
- style (str, optional): Style of line. Defaults to "rule.line".
- align (str, optional): How to align the title, one of "left", "center", or "right". Defaults to "center".
- """
- from .rule import Rule
-
- rule = Rule(title=title, characters=characters, style=style, align=align)
- self.print(rule)
-
- def control(self, *control: Control) -> None:
- """Insert non-printing control codes.
-
- Args:
- control (Control): Control codes, such as those that may move the cursor.
- """
- if not self.is_dumb_terminal:
- with self:
- self._buffer.extend(_control.segment for _control in control)
-
- def out(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- highlight: Optional[bool] = None,
- ) -> None:
- """Output to the terminal. This is a low-level way of writing to the terminal which unlike
- :meth:`~rich.console.Console.print` won't pretty print, wrap text, or apply markup, but will
- optionally apply highlighting and a basic style.
-
- Args:
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use
- console default. Defaults to ``None``.
- """
- raw_output: str = sep.join(str(_object) for _object in objects)
- self.print(
- raw_output,
- style=style,
- highlight=highlight,
- emoji=False,
- markup=False,
- no_wrap=True,
- overflow="ignore",
- crop=False,
- end=end,
- )
-
- def print(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- justify: Optional[JustifyMethod] = None,
- overflow: Optional[OverflowMethod] = None,
- no_wrap: Optional[bool] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- width: Optional[int] = None,
- height: Optional[int] = None,
- crop: bool = True,
- soft_wrap: Optional[bool] = None,
- new_line_start: bool = False,
- ) -> None:
- """Print to the console.
-
- Args:
- objects (positional args): Objects to log to the terminal.
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- justify (str, optional): Justify method: "default", "left", "right", "center", or "full". Defaults to ``None``.
- overflow (str, optional): Overflow method: "ignore", "crop", "fold", or "ellipsis". Defaults to None.
- no_wrap (Optional[bool], optional): Disable word wrapping. Defaults to None.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to ``None``.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to ``None``.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``.
- width (Optional[int], optional): Width of output, or ``None`` to auto-detect. Defaults to ``None``.
- height (Optional[int], optional): Height of output, or ``None`` for no fixed height. Defaults to ``None``.
- crop (Optional[bool], optional): Crop output to width of terminal. Defaults to True.
- soft_wrap (bool, optional): Enable soft wrap mode which disables word wrapping and cropping of text or ``None`` for
- Console default. Defaults to ``None``.
- new_line_start (bool, optional): Insert a new line at the start if the output contains more than one line. Defaults to ``False``.
- """
- if not objects:
- objects = (NewLine(),)
-
- if soft_wrap is None:
- soft_wrap = self.soft_wrap
- if soft_wrap:
- if no_wrap is None:
- no_wrap = True
- if overflow is None:
- overflow = "ignore"
- crop = False
- render_hooks = self._render_hooks[:]
- with self:
- renderables = self._collect_renderables(
- objects,
- sep,
- end,
- justify=justify,
- emoji=emoji,
- markup=markup,
- highlight=highlight,
- )
- for hook in render_hooks:
- renderables = hook.process_renderables(renderables)
- render_options = self.options.update(
- justify=justify,
- overflow=overflow,
- width=min(width, self.width) if width is not None else NO_CHANGE,
- height=height,
- no_wrap=no_wrap,
- markup=markup,
- highlight=highlight,
- )
-
- new_segments: List[Segment] = []
- extend = new_segments.extend
- render = self.render
- if style is None:
- for renderable in renderables:
- extend(render(renderable, render_options))
- else:
- for renderable in renderables:
- extend(
- Segment.apply_style(
- render(renderable, render_options), self.get_style(style)
- )
- )
- if new_line_start:
- if (
- len("".join(segment.text for segment in new_segments).splitlines())
- > 1
- ):
- new_segments.insert(0, Segment.line())
- if crop:
- buffer_extend = self._buffer.extend
- for line in Segment.split_and_crop_lines(
- new_segments, self.width, pad=False
- ):
- buffer_extend(line)
- else:
- self._buffer.extend(new_segments)
-
- def print_json(
- self,
- json: Optional[str] = None,
- *,
- data: Any = None,
- indent: Union[None, int, str] = 2,
- highlight: bool = True,
- skip_keys: bool = False,
- ensure_ascii: bool = False,
- check_circular: bool = True,
- allow_nan: bool = True,
- default: Optional[Callable[[Any], Any]] = None,
- sort_keys: bool = False,
- ) -> None:
- """Pretty prints JSON. Output will be valid JSON.
-
- Args:
- json (Optional[str]): A string containing JSON.
- data (Any): If json is not supplied, then encode this data.
- indent (Union[None, int, str], optional): Number of spaces to indent. Defaults to 2.
- highlight (bool, optional): Enable highlighting of output. Defaults to True.
- skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
- ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
- check_circular (bool, optional): Check for circular references. Defaults to True.
- allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
- default (Callable, optional): A callable that converts values that cannot be encoded
- into something that can be JSON encoded. Defaults to None.
- sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
- """
- from pip._vendor.rich.json import JSON
-
- if json is None:
- json_renderable = JSON.from_data(
- data,
- indent=indent,
- highlight=highlight,
- skip_keys=skip_keys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- default=default,
- sort_keys=sort_keys,
- )
- else:
- if not isinstance(json, str):
- raise TypeError(
- f"json must be str. Did you mean print_json(data={json!r}) ?"
- )
- json_renderable = JSON(
- json,
- indent=indent,
- highlight=highlight,
- skip_keys=skip_keys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- default=default,
- sort_keys=sort_keys,
- )
- self.print(json_renderable, soft_wrap=True)
-
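An editorial sketch (not part of the vendored file above) of the two input modes print_json documents: either a JSON string, or arbitrary data passed via ``data=``. It assumes the top-level ``rich`` package; inside pip the same class is the vendored ``pip._vendor.rich.console.Console``.

from rich.console import Console  # vendored as pip._vendor.rich.console

console = Console()
# A JSON string is decoded, re-encoded and pretty printed with the requested indent.
console.print_json('{"name": "rich", "vendored": true}', indent=4)
# If no string is given, `data` is encoded first, then pretty printed.
console.print_json(data={"numbers": [1, 2, 3], "ok": True}, sort_keys=True)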
- def update_screen(
- self,
- renderable: RenderableType,
- *,
- region: Optional[Region] = None,
- options: Optional[ConsoleOptions] = None,
- ) -> None:
- """Update the screen at a given offset.
-
- Args:
- renderable (RenderableType): A Rich renderable.
- region (Region, optional): Region of screen to update, or None for entire screen. Defaults to None.
-
- Raises:
- errors.NoAltScreen: If the Console isn't in alt screen mode.
-
- """
- if not self.is_alt_screen:
- raise errors.NoAltScreen("Alt screen must be enabled to call update_screen")
- render_options = options or self.options
- if region is None:
- x = y = 0
- render_options = render_options.update_dimensions(
- render_options.max_width, render_options.height or self.height
- )
- else:
- x, y, width, height = region
- render_options = render_options.update_dimensions(width, height)
-
- lines = self.render_lines(renderable, options=render_options)
- self.update_screen_lines(lines, x, y)
-
- def update_screen_lines(
- self, lines: List[List[Segment]], x: int = 0, y: int = 0
- ) -> None:
- """Update lines of the screen at a given offset.
-
- Args:
- lines (List[List[Segment]]): Rendered lines (as produced by :meth:`~rich.Console.render_lines`).
- x (int, optional): x offset (column no). Defaults to 0.
- y (int, optional): y offset (row no). Defaults to 0.
-
- Raises:
- errors.NoAltScreen: If the Console isn't in alt screen mode.
- """
- if not self.is_alt_screen:
- raise errors.NoAltScreen("Alt screen must be enabled to call update_screen")
- screen_update = ScreenUpdate(lines, x, y)
- segments = self.render(screen_update)
- self._buffer.extend(segments)
- self._check_buffer()
-
- def print_exception(
- self,
- *,
- width: Optional[int] = 100,
- extra_lines: int = 3,
- theme: Optional[str] = None,
- word_wrap: bool = False,
- show_locals: bool = False,
- suppress: Iterable[Union[str, ModuleType]] = (),
- max_frames: int = 100,
- ) -> None:
- """Prints a rich render of the last exception and traceback.
-
- Args:
- width (Optional[int], optional): Number of characters used to render code. Defaults to 100.
- extra_lines (int, optional): Additional lines of code to render. Defaults to 3.
- theme (str, optional): Override pygments theme used in traceback
- word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
- show_locals (bool, optional): Enable display of local variables. Defaults to False.
- suppress (Iterable[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
- max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
- """
- from .traceback import Traceback
-
- traceback = Traceback(
- width=width,
- extra_lines=extra_lines,
- theme=theme,
- word_wrap=word_wrap,
- show_locals=show_locals,
- suppress=suppress,
- max_frames=max_frames,
- )
- self.print(traceback)
-
- @staticmethod
- def _caller_frame_info(
- offset: int,
- currentframe: Callable[[], Optional[FrameType]] = inspect.currentframe,
- ) -> Tuple[str, int, Dict[str, Any]]:
- """Get caller frame information.
-
- Args:
- offset (int): the caller offset within the current frame stack.
- currentframe (Callable[[], Optional[FrameType]], optional): the callable to use to
- retrieve the current frame. Defaults to ``inspect.currentframe``.
-
- Returns:
- Tuple[str, int, Dict[str, Any]]: A tuple containing the filename, the line number and
- the dictionary of local variables associated with the caller frame.
-
- Raises:
- RuntimeError: If the stack offset is invalid.
- """
- # Ignore the frame of this local helper
- offset += 1
-
- frame = currentframe()
- if frame is not None:
- # Use the faster currentframe where implemented
- while offset and frame is not None:
- frame = frame.f_back
- offset -= 1
- assert frame is not None
- return frame.f_code.co_filename, frame.f_lineno, frame.f_locals
- else:
- # Fallback to the slower stack
- frame_info = inspect.stack()[offset]
- return frame_info.filename, frame_info.lineno, frame_info.frame.f_locals
-
- def log(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- justify: Optional[JustifyMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- log_locals: bool = False,
- _stack_offset: int = 1,
- ) -> None:
- """Log rich content to the terminal.
-
- Args:
- objects (positional args): Objects to log to the terminal.
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- justify (str, optional): One of "left", "right", "center", or "full". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to None.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to None.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to None.
- log_locals (bool, optional): Boolean to enable logging of locals where ``log()``
- was called. Defaults to False.
- _stack_offset (int, optional): Offset of caller from end of call stack. Defaults to 1.
- """
- if not objects:
- objects = (NewLine(),)
-
- render_hooks = self._render_hooks[:]
-
- with self:
- renderables = self._collect_renderables(
- objects,
- sep,
- end,
- justify=justify,
- emoji=emoji,
- markup=markup,
- highlight=highlight,
- )
- if style is not None:
- renderables = [Styled(renderable, style) for renderable in renderables]
-
- filename, line_no, locals = self._caller_frame_info(_stack_offset)
- link_path = None if filename.startswith("<") else os.path.abspath(filename)
- path = filename.rpartition(os.sep)[-1]
- if log_locals:
- locals_map = {
- key: value
- for key, value in locals.items()
- if not key.startswith("__")
- }
- renderables.append(render_scope(locals_map, title="[i]locals"))
-
- renderables = [
- self._log_render(
- self,
- renderables,
- log_time=self.get_datetime(),
- path=path,
- line_no=line_no,
- link_path=link_path,
- )
- ]
- for hook in render_hooks:
- renderables = hook.process_renderables(renderables)
- new_segments: List[Segment] = []
- extend = new_segments.extend
- render = self.render
- render_options = self.options
- for renderable in renderables:
- extend(render(renderable, render_options))
- buffer_extend = self._buffer.extend
- for line in Segment.split_and_crop_lines(
- new_segments, self.width, pad=False
- ):
- buffer_extend(line)
-
- def _check_buffer(self) -> None:
- """Check if the buffer may be rendered. Render it if it can (e.g. Console.quiet is False).
- Rendering is supported on Windows, Unix and Jupyter environments. For
- legacy Windows consoles, the win32 API is called directly.
- This method will also record what it renders if recording is enabled via Console.record.
- """
- if self.quiet:
- del self._buffer[:]
- return
- with self._lock:
- if self.record:
- with self._record_buffer_lock:
- self._record_buffer.extend(self._buffer[:])
-
- if self._buffer_index == 0:
-
- if self.is_jupyter: # pragma: no cover
- from .jupyter import display
-
- display(self._buffer, self._render_buffer(self._buffer[:]))
- del self._buffer[:]
- else:
- if WINDOWS:
- use_legacy_windows_render = False
- if self.legacy_windows:
- fileno = get_fileno(self.file)
- if fileno is not None:
- use_legacy_windows_render = (
- fileno in _STD_STREAMS_OUTPUT
- )
-
- if use_legacy_windows_render:
- from pip._vendor.rich._win32_console import LegacyWindowsTerm
- from pip._vendor.rich._windows_renderer import legacy_windows_render
-
- buffer = self._buffer[:]
- if self.no_color and self._color_system:
- buffer = list(Segment.remove_color(buffer))
-
- legacy_windows_render(buffer, LegacyWindowsTerm(self.file))
- else:
- # Either a non-std stream on legacy Windows, or modern Windows.
- text = self._render_buffer(self._buffer[:])
- # https://bugs.python.org/issue37871
- # https://github.com/python/cpython/issues/82052
- # We need to avoid writing more than 32KB in a single write, due to the above bug
- write = self.file.write
- # Worst-case scenario: every character is 4 bytes of UTF-8
- MAX_WRITE = 32 * 1024 // 4
- try:
- if len(text) <= MAX_WRITE:
- write(text)
- else:
- batch: List[str] = []
- batch_append = batch.append
- size = 0
- for line in text.splitlines(True):
- if size + len(line) > MAX_WRITE and batch:
- write("".join(batch))
- batch.clear()
- size = 0
- batch_append(line)
- size += len(line)
- if batch:
- write("".join(batch))
- batch.clear()
- except UnicodeEncodeError as error:
- error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
- raise
- else:
- text = self._render_buffer(self._buffer[:])
- try:
- self.file.write(text)
- except UnicodeEncodeError as error:
- error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
- raise
-
- self.file.flush()
- del self._buffer[:]
-
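The Windows branch above batches writes so that a single file.write() call never receives more than MAX_WRITE characters, working around the CPython issue linked in the comments. Below is an editorial, stand-alone sketch of that batching strategy; ``write_in_batches`` is a hypothetical helper, not something defined in this module.

from typing import List, TextIO

def write_in_batches(stream: TextIO, text: str, max_chars: int = 32 * 1024 // 4) -> None:
    """Write `text` to `stream` without passing more than `max_chars` per write() call."""
    if len(text) <= max_chars:
        stream.write(text)
        return
    batch: List[str] = []
    size = 0
    for line in text.splitlines(True):  # keepends=True, so nothing is lost when rejoining
        if size + len(line) > max_chars and batch:
            stream.write("".join(batch))
            batch.clear()
            size = 0
        batch.append(line)
        size += len(line)
    if batch:
        stream.write("".join(batch))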
- def _render_buffer(self, buffer: Iterable[Segment]) -> str:
- """Render buffered output to a string (callers are responsible for clearing the buffer)."""
- output: List[str] = []
- append = output.append
- color_system = self._color_system
- legacy_windows = self.legacy_windows
- not_terminal = not self.is_terminal
- if self.no_color and color_system:
- buffer = Segment.remove_color(buffer)
- for text, style, control in buffer:
- if style:
- append(
- style.render(
- text,
- color_system=color_system,
- legacy_windows=legacy_windows,
- )
- )
- elif not (not_terminal and control):
- append(text)
-
- rendered = "".join(output)
- return rendered
-
- def input(
- self,
- prompt: TextType = "",
- *,
- markup: bool = True,
- emoji: bool = True,
- password: bool = False,
- stream: Optional[TextIO] = None,
- ) -> str:
- """Displays a prompt and waits for input from the user. The prompt may contain color / style.
-
- It works in the same way as Python's builtin :func:`input` function, and provides elaborate line editing and history features if Python's builtin :mod:`readline` module has been loaded beforehand.
-
- Args:
- prompt (Union[str, Text]): Text to render in the prompt.
- markup (bool, optional): Enable console markup (requires a str prompt). Defaults to True.
- emoji (bool, optional): Enable emoji (requires a str prompt). Defaults to True.
- password: (bool, optional): Hide typed text. Defaults to False.
- stream: (TextIO, optional): Optional file to read input from (rather than stdin). Defaults to None.
-
- Returns:
- str: Text read from stdin.
- """
- if prompt:
- self.print(prompt, markup=markup, emoji=emoji, end="")
- if password:
- result = getpass("", stream=stream)
- else:
- if stream:
- result = stream.readline()
- else:
- result = input()
- return result
-
- def export_text(self, *, clear: bool = True, styles: bool = False) -> str:
- """Generate text from console contents (requires record=True argument in constructor).
-
- Args:
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- styles (bool, optional): If ``True``, ansi escape codes will be included. ``False`` for plain text.
- Defaults to ``False``.
-
- Returns:
- str: String containing console contents.
-
- """
- assert (
- self.record
- ), "To export console contents set record=True in the constructor or instance"
-
- with self._record_buffer_lock:
- if styles:
- text = "".join(
- (style.render(text) if style else text)
- for text, style, _ in self._record_buffer
- )
- else:
- text = "".join(
- segment.text
- for segment in self._record_buffer
- if not segment.control
- )
- if clear:
- del self._record_buffer[:]
- return text
-
- def save_text(self, path: str, *, clear: bool = True, styles: bool = False) -> None:
- """Generate text from console and save to a given location (requires record=True argument in constructor).
-
- Args:
- path (str): Path to write text files.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- styles (bool, optional): If ``True``, ansi style codes will be included. ``False`` for plain text.
- Defaults to ``False``.
-
- """
- text = self.export_text(clear=clear, styles=styles)
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(text)
-
- def export_html(
- self,
- *,
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: Optional[str] = None,
- inline_styles: bool = False,
- ) -> str:
- """Generate HTML from console contents (requires record=True argument in constructor).
-
- Args:
- theme (TerminalTheme, optional): TerminalTheme object containing console colors.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- code_format (str, optional): Format string to render HTML. In addition to '{foreground}',
- '{background}', and '{code}', should contain '{stylesheet}' if inline_styles is ``False``.
- inline_styles (bool, optional): If ``True`` styles will be inlined into spans, which makes files
- larger but easier to cut and paste markup. If ``False``, styles will be embedded in a style tag.
- Defaults to False.
-
- Returns:
- str: String containing console contents as HTML.
- """
- assert (
- self.record
- ), "To export console contents set record=True in the constructor or instance"
- fragments: List[str] = []
- append = fragments.append
- _theme = theme or DEFAULT_TERMINAL_THEME
- stylesheet = ""
-
- render_code_format = CONSOLE_HTML_FORMAT if code_format is None else code_format
-
- with self._record_buffer_lock:
- if inline_styles:
- for text, style, _ in Segment.filter_control(
- Segment.simplify(self._record_buffer)
- ):
- text = escape(text)
- if style:
- rule = style.get_html_style(_theme)
- if style.link:
- text = f'<a href="{style.link}">{text}</a>'
- text = f'<span style="{rule}">{text}</span>' if rule else text
- append(text)
- else:
- styles: Dict[str, int] = {}
- for text, style, _ in Segment.filter_control(
- Segment.simplify(self._record_buffer)
- ):
- text = escape(text)
- if style:
- rule = style.get_html_style(_theme)
- style_number = styles.setdefault(rule, len(styles) + 1)
- if style.link:
- text = f'<a class="r{style_number}" href="{style.link}">{text}</a>'
- else:
- text = f'<span class="r{style_number}">{text}</span>'
- append(text)
- stylesheet_rules: List[str] = []
- stylesheet_append = stylesheet_rules.append
- for style_rule, style_number in styles.items():
- if style_rule:
- stylesheet_append(f".r{style_number} {{{style_rule}}}")
- stylesheet = "\n".join(stylesheet_rules)
-
- rendered_code = render_code_format.format(
- code="".join(fragments),
- stylesheet=stylesheet,
- foreground=_theme.foreground_color.hex,
- background=_theme.background_color.hex,
- )
- if clear:
- del self._record_buffer[:]
- return rendered_code
-
- def save_html(
- self,
- path: str,
- *,
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_HTML_FORMAT,
- inline_styles: bool = False,
- ) -> None:
- """Generate HTML from console contents and write to a file (requires record=True argument in constructor).
-
- Args:
- path (str): Path to write html file.
- theme (TerminalTheme, optional): TerminalTheme object containing console colors.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- code_format (str, optional): Format string to render HTML. In addition to '{foreground}',
- '{background}', and '{code}', should contain '{stylesheet}' if inline_styles is ``False``.
- inline_styles (bool, optional): If ``True`` styles will be inlined into spans, which makes files
- larger but easier to cut and paste markup. If ``False``, styles will be embedded in a style tag.
- Defaults to False.
-
- """
- html = self.export_html(
- theme=theme,
- clear=clear,
- code_format=code_format,
- inline_styles=inline_styles,
- )
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(html)
-
- def export_svg(
- self,
- *,
- title: str = "Rich",
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_SVG_FORMAT,
- font_aspect_ratio: float = 0.61,
- unique_id: Optional[str] = None,
- ) -> str:
- """
- Generate an SVG from the console contents (requires record=True in Console constructor).
-
- Args:
- title (str, optional): The title of the tab in the output image
- theme (TerminalTheme, optional): The ``TerminalTheme`` object to use to style the terminal
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``
- code_format (str, optional): Format string used to generate the SVG. Rich will inject a number of variables
- into the string in order to form the final SVG output. The default template used and the variables
- injected by Rich can be found by inspecting the ``console.CONSOLE_SVG_FORMAT`` variable.
- font_aspect_ratio (float, optional): The width to height ratio of the font used in the ``code_format``
- string. Defaults to 0.61, which is the width to height ratio of Fira Code (the default font).
- If you aren't specifying a different font inside ``code_format``, you probably don't need this.
- unique_id (str, optional): unique id that is used as the prefix for various elements (CSS styles, node
- ids). If not set, this defaults to a computed value based on the recorded content.
- """
-
- from pip._vendor.rich.cells import cell_len
-
- style_cache: Dict[Style, str] = {}
-
- def get_svg_style(style: Style) -> str:
- """Convert a Style to CSS rules for SVG."""
- if style in style_cache:
- return style_cache[style]
- css_rules = []
- color = (
- _theme.foreground_color
- if (style.color is None or style.color.is_default)
- else style.color.get_truecolor(_theme)
- )
- bgcolor = (
- _theme.background_color
- if (style.bgcolor is None or style.bgcolor.is_default)
- else style.bgcolor.get_truecolor(_theme)
- )
- if style.reverse:
- color, bgcolor = bgcolor, color
- if style.dim:
- color = blend_rgb(color, bgcolor, 0.4)
- css_rules.append(f"fill: {color.hex}")
- if style.bold:
- css_rules.append("font-weight: bold")
- if style.italic:
- css_rules.append("font-style: italic;")
- if style.underline:
- css_rules.append("text-decoration: underline;")
- if style.strike:
- css_rules.append("text-decoration: line-through;")
-
- css = ";".join(css_rules)
- style_cache[style] = css
- return css
-
- _theme = theme or SVG_EXPORT_THEME
-
- width = self.width
- char_height = 20
- char_width = char_height * font_aspect_ratio
- line_height = char_height * 1.22
-
- margin_top = 1
- margin_right = 1
- margin_bottom = 1
- margin_left = 1
-
- padding_top = 40
- padding_right = 8
- padding_bottom = 8
- padding_left = 8
-
- padding_width = padding_left + padding_right
- padding_height = padding_top + padding_bottom
- margin_width = margin_left + margin_right
- margin_height = margin_top + margin_bottom
-
- text_backgrounds: List[str] = []
- text_group: List[str] = []
- classes: Dict[str, int] = {}
- style_no = 1
-
- def escape_text(text: str) -> str:
- """HTML escape text and replace spaces with nbsp."""
- return escape(text).replace(" ", "&#160;")
-
- def make_tag(
- name: str, content: Optional[str] = None, **attribs: object
- ) -> str:
- """Make a tag from name, content, and attributes."""
-
- def stringify(value: object) -> str:
- if isinstance(value, (float)):
- return format(value, "g")
- return str(value)
-
- tag_attribs = " ".join(
- f'{k.lstrip("_").replace("_", "-")}="{stringify(v)}"'
- for k, v in attribs.items()
- )
- return (
- f"<{name} {tag_attribs}>{content}</{name}>"
- if content
- else f"<{name} {tag_attribs}/>"
- )
-
- with self._record_buffer_lock:
- segments = list(Segment.filter_control(self._record_buffer))
- if clear:
- self._record_buffer.clear()
-
- if unique_id is None:
- unique_id = "terminal-" + str(
- zlib.adler32(
- ("".join(repr(segment) for segment in segments)).encode(
- "utf-8",
- "ignore",
- )
- + title.encode("utf-8", "ignore")
- )
- )
- y = 0
- for y, line in enumerate(Segment.split_and_crop_lines(segments, length=width)):
- x = 0
- for text, style, _control in line:
- style = style or Style()
- rules = get_svg_style(style)
- if rules not in classes:
- classes[rules] = style_no
- style_no += 1
- class_name = f"r{classes[rules]}"
-
- if style.reverse:
- has_background = True
- background = (
- _theme.foreground_color.hex
- if style.color is None
- else style.color.get_truecolor(_theme).hex
- )
- else:
- bgcolor = style.bgcolor
- has_background = bgcolor is not None and not bgcolor.is_default
- background = (
- _theme.background_color.hex
- if style.bgcolor is None
- else style.bgcolor.get_truecolor(_theme).hex
- )
-
- text_length = cell_len(text)
- if has_background:
- text_backgrounds.append(
- make_tag(
- "rect",
- fill=background,
- x=x * char_width,
- y=y * line_height + 1.5,
- width=char_width * text_length,
- height=line_height + 0.25,
- shape_rendering="crispEdges",
- )
- )
-
- if text != " " * len(text):
- text_group.append(
- make_tag(
- "text",
- escape_text(text),
- _class=f"{unique_id}-{class_name}",
- x=x * char_width,
- y=y * line_height + char_height,
- textLength=char_width * len(text),
- clip_path=f"url(#{unique_id}-line-{y})",
- )
- )
- x += cell_len(text)
-
- line_offsets = [line_no * line_height + 1.5 for line_no in range(y)]
- lines = "\n".join(
- f"""<clipPath id="{unique_id}-line-{line_no}">
- {make_tag("rect", x=0, y=offset, width=char_width * width, height=line_height + 0.25)}
- </clipPath>"""
- for line_no, offset in enumerate(line_offsets)
- )
-
- styles = "\n".join(
- f".{unique_id}-r{rule_no} {{ {css} }}" for css, rule_no in classes.items()
- )
- backgrounds = "".join(text_backgrounds)
- matrix = "".join(text_group)
-
- terminal_width = ceil(width * char_width + padding_width)
- terminal_height = (y + 1) * line_height + padding_height
- chrome = make_tag(
- "rect",
- fill=_theme.background_color.hex,
- stroke="rgba(255,255,255,0.35)",
- stroke_width="1",
- x=margin_left,
- y=margin_top,
- width=terminal_width,
- height=terminal_height,
- rx=8,
- )
-
- title_color = _theme.foreground_color.hex
- if title:
- chrome += make_tag(
- "text",
- escape_text(title),
- _class=f"{unique_id}-title",
- fill=title_color,
- text_anchor="middle",
- x=terminal_width // 2,
- y=margin_top + char_height + 6,
- )
- chrome += f"""
- <g transform="translate(26,22)">
- <circle cx="0" cy="0" r="7" fill="#ff5f57"/>
- <circle cx="22" cy="0" r="7" fill="#febc2e"/>
- <circle cx="44" cy="0" r="7" fill="#28c840"/>
- </g>
- """
-
- svg = code_format.format(
- unique_id=unique_id,
- char_width=char_width,
- char_height=char_height,
- line_height=line_height,
- terminal_width=char_width * width - 1,
- terminal_height=(y + 1) * line_height - 1,
- width=terminal_width + margin_width,
- height=terminal_height + margin_height,
- terminal_x=margin_left + padding_left,
- terminal_y=margin_top + padding_top,
- styles=styles,
- chrome=chrome,
- backgrounds=backgrounds,
- matrix=matrix,
- lines=lines,
- )
- return svg
-
- def save_svg(
- self,
- path: str,
- *,
- title: str = "Rich",
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_SVG_FORMAT,
- font_aspect_ratio: float = 0.61,
- unique_id: Optional[str] = None,
- ) -> None:
- """Generate an SVG file from the console contents (requires record=True in Console constructor).
-
- Args:
- path (str): The path to write the SVG to.
- title (str, optional): The title of the tab in the output image
- theme (TerminalTheme, optional): The ``TerminalTheme`` object to use to style the terminal
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``
- code_format (str, optional): Format string used to generate the SVG. Rich will inject a number of variables
- into the string in order to form the final SVG output. The default template used and the variables
- injected by Rich can be found by inspecting the ``console.CONSOLE_SVG_FORMAT`` variable.
- font_aspect_ratio (float, optional): The width to height ratio of the font used in the ``code_format``
- string. Defaults to 0.61, which is the width to height ratio of Fira Code (the default font).
- If you aren't specifying a different font inside ``code_format``, you probably don't need this.
- unique_id (str, optional): unique id that is used as the prefix for various elements (CSS styles, node
- ids). If not set, this defaults to a computed value based on the recorded content.
- """
- svg = self.export_svg(
- title=title,
- theme=theme,
- clear=clear,
- code_format=code_format,
- font_aspect_ratio=font_aspect_ratio,
- unique_id=unique_id,
- )
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(svg)
-
-
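An editorial sketch (not part of the vendored file) of the record/export flow that export_text, export_html, export_svg and their save_* wrappers above rely on, assuming the top-level ``rich`` import path:

from rich.console import Console

console = Console(record=True)                   # the export_* methods assert record=True
console.print("[bold magenta]hello[/bold magenta] world")
text = console.export_text(clear=False)          # plain text; keep the record buffer
html = console.export_html(clear=False, inline_styles=True)
console.save_svg("demo.svg", title="Demo")       # default clear=True empties the buffer here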
-def _svg_hash(svg_main_code: str) -> str:
- """Returns a unique hash for the given SVG main code.
-
- Args:
- svg_main_code (str): The content we're going to inject in the SVG envelope.
-
- Returns:
- str: a hash of the given content
- """
- return str(zlib.adler32(svg_main_code.encode()))
-
-
-if __name__ == "__main__": # pragma: no cover
- console = Console(record=True)
-
- console.log(
- "JSONRPC [i]request[/i]",
- 5,
- 1.3,
- True,
- False,
- None,
- {
- "jsonrpc": "2.0",
- "method": "subtract",
- "params": {"minuend": 42, "subtrahend": 23},
- "id": 3,
- },
- )
-
- console.log("Hello, World!", "{'a': 1}", repr(console))
-
- console.print(
- {
- "name": None,
- "empty": [],
- "quiz": {
- "sport": {
- "answered": True,
- "q1": {
- "question": "Which one is correct team name in NBA?",
- "options": [
- "New York Bulls",
- "Los Angeles Kings",
- "Golden State Warriors",
- "Huston Rocket",
- ],
- "answer": "Huston Rocket",
- },
- },
- "maths": {
- "answered": False,
- "q1": {
- "question": "5 + 7 = ?",
- "options": [10, 11, 12, 13],
- "answer": 12,
- },
- "q2": {
- "question": "12 - 8 = ?",
- "options": [1, 2, 3, 4],
- "answer": 4,
- },
- },
- },
- }
- )
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/anchor_generator.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/anchor_generator.py
deleted file mode 100644
index ee4b98819445f95982ca89a72cdd3e27b39b367f..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/anchor_generator.py
+++ /dev/null
@@ -1,382 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import collections
-import math
-from typing import List
-import torch
-from torch import nn
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec
-from detectron2.structures import Boxes, RotatedBoxes
-from detectron2.utils.registry import Registry
-
-ANCHOR_GENERATOR_REGISTRY = Registry("ANCHOR_GENERATOR")
-ANCHOR_GENERATOR_REGISTRY.__doc__ = """
-Registry for modules that create object detection anchors for feature maps.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-"""
-
-
-class BufferList(nn.Module):
- """
- Similar to nn.ParameterList, but for buffers
- """
-
- def __init__(self, buffers):
- super().__init__()
- for i, buffer in enumerate(buffers):
- # Use non-persistent buffer so the values are not saved in checkpoint
- self.register_buffer(str(i), buffer, persistent=False)
-
- def __len__(self):
- return len(self._buffers)
-
- def __iter__(self):
- return iter(self._buffers.values())
-
-
-def _create_grid_offsets(size: List[int], stride: int, offset: float, device: torch.device):
- grid_height, grid_width = size
- shifts_x = torch.arange(
- offset * stride, grid_width * stride, step=stride, dtype=torch.float32, device=device
- )
- shifts_y = torch.arange(
- offset * stride, grid_height * stride, step=stride, dtype=torch.float32, device=device
- )
-
- shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)
- shift_x = shift_x.reshape(-1)
- shift_y = shift_y.reshape(-1)
- return shift_x, shift_y
-
-
-def _broadcast_params(params, num_features, name):
- """
- If one size (or aspect ratio) is specified and there are multiple feature
- maps, we "broadcast" anchors of that single size (or aspect ratio)
- over all feature maps.
-
- If params is list[float], or list[list[float]] with len(params) == 1, repeat
- it num_features times.
-
- Returns:
- list[list[float]]: param for each feature
- """
- assert isinstance(
- params, collections.abc.Sequence
- ), f"{name} in anchor generator has to be a list! Got {params}."
- assert len(params), f"{name} in anchor generator cannot be empty!"
- if not isinstance(params[0], collections.abc.Sequence): # params is list[float]
- return [params] * num_features
- if len(params) == 1:
- return list(params) * num_features
- assert len(params) == num_features, (
- f"Got {name} of length {len(params)} in anchor generator, "
- f"but the number of input features is {num_features}!"
- )
- return params
-
-
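A concrete, editorial illustration (not part of the vendored file) of the broadcasting rule documented in _broadcast_params above:

# One flat list, or a nested list of length 1, is repeated for every feature map;
# a nested list whose length equals num_features is returned unchanged.
assert _broadcast_params([32, 64], 3, "sizes") == [[32, 64], [32, 64], [32, 64]]
assert _broadcast_params([[32, 64]], 3, "sizes") == [[32, 64], [32, 64], [32, 64]]
assert _broadcast_params([[32], [64], [128]], 3, "sizes") == [[32], [64], [128]]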
-@ANCHOR_GENERATOR_REGISTRY.register()
-class DefaultAnchorGenerator(nn.Module):
- """
- Compute anchors in the standard ways described in
- "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks".
- """
-
- box_dim: torch.jit.Final[int] = 4
- """
- the dimension of each anchor box.
- """
-
- @configurable
- def __init__(self, *, sizes, aspect_ratios, strides, offset=0.5):
- """
- This interface is experimental.
-
- Args:
- sizes (list[list[float]] or list[float]):
- If ``sizes`` is list[list[float]], ``sizes[i]`` is the list of anchor sizes
- (i.e. sqrt of anchor area) to use for the i-th feature map.
- If ``sizes`` is list[float], ``sizes`` is used for all feature maps.
- Anchor sizes are given in absolute lengths in units of
- the input image; they do not dynamically scale if the input image size changes.
- aspect_ratios (list[list[float]] or list[float]): list of aspect ratios
- (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies.
- strides (list[int]): stride of each input feature.
- offset (float): Relative offset between the center of the first anchor and the top-left
- corner of the image. Value has to be in [0, 1).
- Recommend to use 0.5, which means half stride.
- """
- super().__init__()
-
- self.strides = strides
- self.num_features = len(self.strides)
- sizes = _broadcast_params(sizes, self.num_features, "sizes")
- aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios")
- self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios)
-
- self.offset = offset
- assert 0.0 <= self.offset < 1.0, self.offset
-
- @classmethod
- def from_config(cls, cfg, input_shape: List[ShapeSpec]):
- return {
- "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES,
- "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS,
- "strides": [x.stride for x in input_shape],
- "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET,
- }
-
- def _calculate_anchors(self, sizes, aspect_ratios):
- cell_anchors = [
- self.generate_cell_anchors(s, a).float() for s, a in zip(sizes, aspect_ratios)
- ]
- return BufferList(cell_anchors)
-
- @property
- @torch.jit.unused
- def num_cell_anchors(self):
- """
- Alias of `num_anchors`.
- """
- return self.num_anchors
-
- @property
- @torch.jit.unused
- def num_anchors(self):
- """
- Returns:
- list[int]: Each int is the number of anchors at every pixel
- location, on that feature map.
- For example, if at every pixel we use anchors of 3 aspect
- ratios and 5 sizes, the number of anchors is 15.
- (See also ANCHOR_GENERATOR.SIZES and ANCHOR_GENERATOR.ASPECT_RATIOS in config)
-
- In standard RPN models, `num_anchors` on every feature map is the same.
- """
- return [len(cell_anchors) for cell_anchors in self.cell_anchors]
-
- def _grid_anchors(self, grid_sizes: List[List[int]]):
- """
- Returns:
- list[Tensor]: #featuremap tensors, each is (#locations x #cell_anchors) x 4
- """
- anchors = []
- # buffers() not supported by torchscript. use named_buffers() instead
- buffers: List[torch.Tensor] = [x[1] for x in self.cell_anchors.named_buffers()]
- for size, stride, base_anchors in zip(grid_sizes, self.strides, buffers):
- shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors.device)
- shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1)
-
- anchors.append((shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4))
-
- return anchors
-
- def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)):
- """
- Generate a tensor storing canonical anchor boxes, which are all anchor
- boxes of different sizes and aspect_ratios centered at (0, 0).
- We can later build the set of anchors for a full feature map by
- shifting and tiling these tensors (see `meth:_grid_anchors`).
-
- Args:
- sizes (tuple[float]):
- aspect_ratios (tuple[float]):
-
- Returns:
- Tensor of shape (len(sizes) * len(aspect_ratios), 4) storing anchor boxes
- in XYXY format.
- """
-
- # This is different from the anchor generator defined in the original Faster R-CNN
- # code or Detectron. They yield the same AP, however the old version defines cell
- # anchors in a less natural way with a shift relative to the feature grid and
- # quantization that results in slightly different sizes for different aspect ratios.
- # See also https://github.com/facebookresearch/Detectron/issues/227
-
- anchors = []
- for size in sizes:
- area = size ** 2.0
- for aspect_ratio in aspect_ratios:
- # s * s = w * h
- # a = h / w
- # ... some algebra ...
- # w = sqrt(s * s / a)
- # h = a * w
- w = math.sqrt(area / aspect_ratio)
- h = aspect_ratio * w
- x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0
- anchors.append([x0, y0, x1, y1])
- return torch.tensor(anchors)
-
- def forward(self, features: List[torch.Tensor]):
- """
- Args:
- features (list[Tensor]): list of backbone feature maps on which to generate anchors.
-
- Returns:
- list[Boxes]: a list of Boxes containing all the anchors for each feature map
- (i.e. the cell anchors repeated over all locations in the feature map).
- The number of anchors of each feature map is Hi x Wi x num_cell_anchors,
- where Hi, Wi are resolution of the feature map divided by anchor stride.
- """
- grid_sizes = [feature_map.shape[-2:] for feature_map in features]
- anchors_over_all_feature_maps = self._grid_anchors(grid_sizes)
- return [Boxes(x) for x in anchors_over_all_feature_maps]
-
-
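An editorial worked example (not part of the vendored file) of the size/aspect-ratio algebra used by generate_cell_anchors above: with area s * s and aspect ratio a = h / w, it follows that w = sqrt(s * s / a) and h = a * w.

import math

size, aspect_ratio = 64, 0.5
area = size ** 2.0                  # 4096.0
w = math.sqrt(area / aspect_ratio)  # ~90.51
h = aspect_ratio * w                # ~45.25
# The XYXY cell anchor centered at (0, 0) is therefore roughly
# (-45.25, -22.63, 45.25, 22.63): twice as wide as it is tall, with area ~4096.
anchor = (-w / 2.0, -h / 2.0, w / 2.0, h / 2.0)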
-@ANCHOR_GENERATOR_REGISTRY.register()
-class RotatedAnchorGenerator(nn.Module):
- """
- Compute rotated anchors used by Rotated RPN (RRPN), described in
- "Arbitrary-Oriented Scene Text Detection via Rotation Proposals".
- """
-
- box_dim: int = 5
- """
- the dimension of each anchor box.
- """
-
- @configurable
- def __init__(self, *, sizes, aspect_ratios, strides, angles, offset=0.5):
- """
- This interface is experimental.
-
- Args:
- sizes (list[list[float]] or list[float]):
- If sizes is list[list[float]], sizes[i] is the list of anchor sizes
- (i.e. sqrt of anchor area) to use for the i-th feature map.
- If sizes is list[float], the sizes are used for all feature maps.
- Anchor sizes are given in absolute lengths in units of
- the input image; they do not dynamically scale if the input image size changes.
- aspect_ratios (list[list[float]] or list[float]): list of aspect ratios
- (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies.
- strides (list[int]): stride of each input feature.
- angles (list[list[float]] or list[float]): list of angles (in degrees CCW)
- to use for anchors. Same "broadcast" rule for `sizes` applies.
- offset (float): Relative offset between the center of the first anchor and the top-left
- corner of the image. Value has to be in [0, 1).
- Recommend to use 0.5, which means half stride.
- """
- super().__init__()
-
- self.strides = strides
- self.num_features = len(self.strides)
- sizes = _broadcast_params(sizes, self.num_features, "sizes")
- aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios")
- angles = _broadcast_params(angles, self.num_features, "angles")
- self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios, angles)
-
- self.offset = offset
- assert 0.0 <= self.offset < 1.0, self.offset
-
- @classmethod
- def from_config(cls, cfg, input_shape: List[ShapeSpec]):
- return {
- "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES,
- "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS,
- "strides": [x.stride for x in input_shape],
- "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET,
- "angles": cfg.MODEL.ANCHOR_GENERATOR.ANGLES,
- }
-
- def _calculate_anchors(self, sizes, aspect_ratios, angles):
- cell_anchors = [
- self.generate_cell_anchors(size, aspect_ratio, angle).float()
- for size, aspect_ratio, angle in zip(sizes, aspect_ratios, angles)
- ]
- return BufferList(cell_anchors)
-
- @property
- def num_cell_anchors(self):
- """
- Alias of `num_anchors`.
- """
- return self.num_anchors
-
- @property
- def num_anchors(self):
- """
- Returns:
- list[int]: Each int is the number of anchors at every pixel
- location, on that feature map.
- For example, if at every pixel we use anchors of 3 aspect
- ratios, 2 sizes and 5 angles, the number of anchors is 30.
- (See also ANCHOR_GENERATOR.SIZES, ANCHOR_GENERATOR.ASPECT_RATIOS
- and ANCHOR_GENERATOR.ANGLES in config)
-
- In standard RRPN models, `num_anchors` on every feature map is the same.
- """
- return [len(cell_anchors) for cell_anchors in self.cell_anchors]
-
- def _grid_anchors(self, grid_sizes):
- anchors = []
- for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors):
- shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors.device)
- zeros = torch.zeros_like(shift_x)
- shifts = torch.stack((shift_x, shift_y, zeros, zeros, zeros), dim=1)
-
- anchors.append((shifts.view(-1, 1, 5) + base_anchors.view(1, -1, 5)).reshape(-1, 5))
-
- return anchors
-
- def generate_cell_anchors(
- self,
- sizes=(32, 64, 128, 256, 512),
- aspect_ratios=(0.5, 1, 2),
- angles=(-90, -60, -30, 0, 30, 60, 90),
- ):
- """
- Generate a tensor storing canonical anchor boxes, which are all anchor
- boxes of different sizes, aspect_ratios, angles centered at (0, 0).
- We can later build the set of anchors for a full feature map by
- shifting and tiling these tensors (see `meth:_grid_anchors`).
-
- Args:
- sizes (tuple[float]):
- aspect_ratios (tuple[float]):
- angles (tuple[float]):
-
- Returns:
- Tensor of shape (len(sizes) * len(aspect_ratios) * len(angles), 5)
- storing anchor boxes in (x_ctr, y_ctr, w, h, angle) format.
- """
- anchors = []
- for size in sizes:
- area = size ** 2.0
- for aspect_ratio in aspect_ratios:
- # s * s = w * h
- # a = h / w
- # ... some algebra ...
- # w = sqrt(s * s / a)
- # h = a * w
- w = math.sqrt(area / aspect_ratio)
- h = aspect_ratio * w
- anchors.extend([0, 0, w, h, a] for a in angles)
-
- return torch.tensor(anchors)
-
- def forward(self, features):
- """
- Args:
- features (list[Tensor]): list of backbone feature maps on which to generate anchors.
-
- Returns:
- list[RotatedBoxes]: a list of Boxes containing all the anchors for each feature map
- (i.e. the cell anchors repeated over all locations in the feature map).
- The number of anchors of each feature map is Hi x Wi x num_cell_anchors,
- where Hi, Wi are resolution of the feature map divided by anchor stride.
- """
- grid_sizes = [feature_map.shape[-2:] for feature_map in features]
- anchors_over_all_feature_maps = self._grid_anchors(grid_sizes)
- return [RotatedBoxes(x) for x in anchors_over_all_feature_maps]
-
-
-def build_anchor_generator(cfg, input_shape):
- """
- Build an anchor generator from `cfg.MODEL.ANCHOR_GENERATOR.NAME`.
- """
- anchor_generator = cfg.MODEL.ANCHOR_GENERATOR.NAME
- return ANCHOR_GENERATOR_REGISTRY.get(anchor_generator)(cfg, input_shape)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/rotated_boxes.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/rotated_boxes.py
deleted file mode 100644
index 4ec8e4c7e3f8eb9173fa21db3ca0bc29fd96834a..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/structures/rotated_boxes.py
+++ /dev/null
@@ -1,503 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import math
-from typing import List, Tuple
-import torch
-
-from detectron2.layers.rotated_boxes import pairwise_iou_rotated
-
-from .boxes import Boxes
-
-
-class RotatedBoxes(Boxes):
- """
- This structure stores a list of rotated boxes as a Nx5 torch.Tensor.
- It supports some common methods about boxes
- (`area`, `clip`, `nonempty`, etc),
- and also behaves like a Tensor
- (support indexing, `to(device)`, `.device`, and iteration over all boxes)
- """
-
- def __init__(self, tensor: torch.Tensor):
- """
- Args:
- tensor (Tensor[float]): a Nx5 matrix. Each row is
- (x_center, y_center, width, height, angle),
- in which angle is represented in degrees.
- While there's no strict range restriction for it,
- the recommended principal range is between [-180, 180) degrees.
-
- Assume we have a horizontal box B = (x_center, y_center, width, height),
- where width is along the x-axis and height is along the y-axis.
- The rotated box B_rot (x_center, y_center, width, height, angle)
- can be seen as:
-
- 1. When angle == 0:
- B_rot == B
- 2. When angle > 0:
- B_rot is obtained by rotating B w.r.t its center by :math:`|angle|` degrees CCW;
- 3. When angle < 0:
- B_rot is obtained by rotating B w.r.t its center by :math:`|angle|` degrees CW.
-
- Mathematically, since the right-handed coordinate system for image space
- is (y, x), where y is top->down and x is left->right, the 4 vertices of the
- rotated rectangle :math:`(yr_i, xr_i)` (i = 1, 2, 3, 4) can be obtained from
- the vertices of the horizontal rectangle :math:`(y_i, x_i)` (i = 1, 2, 3, 4)
- in the following way (:math:`\\theta = angle*\\pi/180` is the angle in radians,
- :math:`(y_c, x_c)` is the center of the rectangle):
-
- .. math::
-
- yr_i = \\cos(\\theta) (y_i - y_c) - \\sin(\\theta) (x_i - x_c) + y_c,
-
- xr_i = \\sin(\\theta) (y_i - y_c) + \\cos(\\theta) (x_i - x_c) + x_c,
-
- which is the standard rigid-body rotation transformation.
-
- Intuitively, the angle is
- (1) the rotation angle from y-axis in image space
- to the height vector (top->down in the box's local coordinate system)
- of the box in CCW, and
- (2) the rotation angle from x-axis in image space
- to the width vector (left->right in the box's local coordinate system)
- of the box in CCW.
-
- More intuitively, consider the following horizontal box ABCD represented
- in (x1, y1, x2, y2): (3, 2, 7, 4),
- covering the [3, 7] x [2, 4] region of the continuous coordinate system
- which looks like this:
-
- .. code:: none
-
- O--------> x
- |
- | A---B
- | | |
- | D---C
- |
- v y
-
- Note that each capital letter represents one 0-dimensional geometric point
- instead of a 'square pixel' here.
-
- In the example above, using (x, y) to represent a point we have:
-
- .. math::
-
- O = (0, 0), A = (3, 2), B = (7, 2), C = (7, 4), D = (3, 4)
-
- We name vector AB = vector DC as the width vector in box's local coordinate system, and
- vector AD = vector BC as the height vector in box's local coordinate system. Initially,
- when angle = 0 degree, they're aligned with the positive directions of x-axis and y-axis
- in the image space, respectively.
-
- For better illustration, we denote the center of the box as E,
-
- .. code:: none
-
- O--------> x
- |
- | A---B
- | | E |
- | D---C
- |
- v y
-
- where the center E = ((3+7)/2, (2+4)/2) = (5, 3).
-
- Also,
-
- .. math::
-
- width = |AB| = |CD| = 7 - 3 = 4,
- height = |AD| = |BC| = 4 - 2 = 2.
-
- Therefore, the corresponding representation for the same shape in rotated box in
- (x_center, y_center, width, height, angle) format is:
-
- (5, 3, 4, 2, 0),
-
- Now, let's consider (5, 3, 4, 2, 90), which is rotated by 90 degrees
- CCW (counter-clockwise) by definition. It looks like this:
-
- .. code:: none
-
- O--------> x
- | B-C
- | | |
- | |E|
- | | |
- | A-D
- v y
-
- The center E is still located at the same point (5, 3), while the vertices
- ABCD are rotated by 90 degrees CCW with regard to E:
- A = (4, 5), B = (4, 1), C = (6, 1), D = (6, 5)
-
- Here, 90 degrees can be seen as the CCW angle to rotate from y-axis to
- vector AD or vector BC (the top->down height vector in box's local coordinate system),
- or the CCW angle to rotate from x-axis to vector AB or vector DC (the left->right
- width vector in box's local coordinate system).
-
- .. math::
-
- width = |AB| = |CD| = 5 - 1 = 4,
- height = |AD| = |BC| = 6 - 4 = 2.
-
- Next, how about (5, 3, 4, 2, -90), which is rotated by 90 degrees CW (clockwise)
- by definition? It looks like this:
-
- .. code:: none
-
- O--------> x
- | D-A
- | | |
- | |E|
- | | |
- | C-B
- v y
-
- The center E is still located at the same point (5, 3), while the vertices
- ABCD are rotated by 90 degrees CW with regard to E:
- A = (6, 1), B = (6, 5), C = (4, 5), D = (4, 1)
-
- .. math::
-
- width = |AB| = |CD| = 5 - 1 = 4,
- height = |AD| = |BC| = 6 - 4 = 2.
-
- This covers exactly the same region as (5, 3, 4, 2, 90) does, and their IoU
- will be 1. However, these two will generate different RoI Pooling results and
- should not be treated as an identical box.
-
- On the other hand, it's easy to see that (X, Y, W, H, A) is identical to
- (X, Y, W, H, A+360N), for any integer N. For example (5, 3, 4, 2, 270) would be
- identical to (5, 3, 4, 2, -90), because rotating the shape 270 degrees CCW is
- equivalent to rotating the same shape 90 degrees CW.
-
- We could rotate further to get (5, 3, 4, 2, 180), or (5, 3, 4, 2, -180):
-
- .. code:: none
-
- O--------> x
- |
- | C---D
- | | E |
- | B---A
- |
- v y
-
- .. math::
-
- A = (7, 4), B = (3, 4), C = (3, 2), D = (7, 2),
-
- width = |AB| = |CD| = 7 - 3 = 4,
- height = |AD| = |BC| = 4 - 2 = 2.
-
- Finally, this is a very inaccurate (heavily quantized) illustration of
- what (5, 3, 4, 2, 60) looks like, in case anyone wonders:
-
- .. code:: none
-
- O--------> x
- | B\
- | / C
- | /E /
- | A /
- | `D
- v y
-
- It's still a rectangle with center of (5, 3), width of 4 and height of 2,
- but its angle (and thus orientation) is somewhere between
- (5, 3, 4, 2, 0) and (5, 3, 4, 2, 90).
- """
- device = tensor.device if isinstance(tensor, torch.Tensor) else torch.device("cpu")
- tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device)
- if tensor.numel() == 0:
- # Use reshape, so we don't end up creating a new tensor that does not depend on
- # the inputs (and consequently confuses jit)
- tensor = tensor.reshape((0, 5)).to(dtype=torch.float32, device=device)
- assert tensor.dim() == 2 and tensor.size(-1) == 5, tensor.size()
-
- self.tensor = tensor
-
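An editorial sketch (not part of the vendored file) of the angle convention described in the docstring above, assuming detectron2 is importable:

import torch
from detectron2.structures.rotated_boxes import RotatedBoxes

boxes = RotatedBoxes(torch.tensor([
    [5.0, 3.0, 4.0, 2.0, 0.0],    # the horizontal box (3, 2, 7, 4) in XYXY form
    [5.0, 3.0, 4.0, 2.0, 90.0],   # the same box rotated 90 degrees CCW about its center
    [5.0, 3.0, 4.0, 2.0, 270.0],  # covers the same region as angle = -90
]))
boxes.normalize_angles()          # angles mapped into [-180, 180): 270 -> -90
print(boxes.area())               # tensor([8., 8., 8.]) -- width * height for each box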
- def clone(self) -> "RotatedBoxes":
- """
- Clone the RotatedBoxes.
-
- Returns:
- RotatedBoxes
- """
- return RotatedBoxes(self.tensor.clone())
-
- def to(self, device: torch.device):
- # Boxes are assumed float32 and do not support to(dtype)
- return RotatedBoxes(self.tensor.to(device=device))
-
- def area(self) -> torch.Tensor:
- """
- Computes the area of all the boxes.
-
- Returns:
- torch.Tensor: a vector with areas of each box.
- """
- box = self.tensor
- area = box[:, 2] * box[:, 3]
- return area
-
- def normalize_angles(self) -> None:
- """
- Restrict angles to the range of [-180, 180) degrees
- """
- self.tensor[:, 4] = (self.tensor[:, 4] + 180.0) % 360.0 - 180.0
-
- def clip(self, box_size: Tuple[int, int], clip_angle_threshold: float = 1.0) -> None:
- """
- Clip (in place) the boxes by limiting x coordinates to the range [0, width]
- and y coordinates to the range [0, height].
-
- For RRPN:
- Only clip boxes that are almost horizontal with a tolerance of
- clip_angle_threshold to maintain backward compatibility.
-
- Rotated boxes beyond this threshold are not clipped for two reasons:
-
- 1. There are potentially multiple ways to clip a rotated box to make it
- fit within the image.
- 2. It's tricky to make the entire rectangular box fit within the image
- and still be able to not leave out pixels of interest.
-
- Therefore we rely on ops like RoIAlignRotated to safely handle this.
-
- Args:
- box_size (height, width): The clipping box's size.
- clip_angle_threshold:
- Iff. abs(normalized(angle)) <= clip_angle_threshold (in degrees),
- we do the clipping as horizontal boxes.
- """
- h, w = box_size
-
- # normalize angles to be within [-180, 180) degrees
- self.normalize_angles()
-
- idx = torch.where(torch.abs(self.tensor[:, 4]) <= clip_angle_threshold)[0]
-
- # convert to (x1, y1, x2, y2)
- x1 = self.tensor[idx, 0] - self.tensor[idx, 2] / 2.0
- y1 = self.tensor[idx, 1] - self.tensor[idx, 3] / 2.0
- x2 = self.tensor[idx, 0] + self.tensor[idx, 2] / 2.0
- y2 = self.tensor[idx, 1] + self.tensor[idx, 3] / 2.0
-
- # clip
- x1.clamp_(min=0, max=w)
- y1.clamp_(min=0, max=h)
- x2.clamp_(min=0, max=w)
- y2.clamp_(min=0, max=h)
-
- # convert back to (xc, yc, w, h)
- self.tensor[idx, 0] = (x1 + x2) / 2.0
- self.tensor[idx, 1] = (y1 + y2) / 2.0
- # make sure widths and heights do not increase due to numerical errors
- self.tensor[idx, 2] = torch.min(self.tensor[idx, 2], x2 - x1)
- self.tensor[idx, 3] = torch.min(self.tensor[idx, 3], y2 - y1)
-
- def nonempty(self, threshold: float = 0.0) -> torch.Tensor:
- """
- Find boxes that are non-empty.
- A box is considered empty if either of its sides is no larger than the threshold.
-
- Returns:
- Tensor: a binary vector which represents
- whether each box is empty (False) or non-empty (True).
- """
- box = self.tensor
- widths = box[:, 2]
- heights = box[:, 3]
- keep = (widths > threshold) & (heights > threshold)
- return keep
-
- def __getitem__(self, item) -> "RotatedBoxes":
- """
- Returns:
- RotatedBoxes: Create a new :class:`RotatedBoxes` by indexing.
-
- The following usages are allowed:
-
- 1. `new_boxes = boxes[3]`: return a `RotatedBoxes` which contains only one box.
- 2. `new_boxes = boxes[2:10]`: return a slice of boxes.
- 3. `new_boxes = boxes[vector]`, where vector is a torch.ByteTensor
- with `length = len(boxes)`. Nonzero elements in the vector will be selected.
-
- Note that the returned RotatedBoxes might share storage with this RotatedBoxes,
-        subject to PyTorch's indexing semantics.
- """
- if isinstance(item, int):
- return RotatedBoxes(self.tensor[item].view(1, -1))
- b = self.tensor[item]
- assert b.dim() == 2, "Indexing on RotatedBoxes with {} failed to return a matrix!".format(
- item
- )
- return RotatedBoxes(b)
-
- def __len__(self) -> int:
- return self.tensor.shape[0]
-
- def __repr__(self) -> str:
- return "RotatedBoxes(" + str(self.tensor) + ")"
-
- def inside_box(self, box_size: Tuple[int, int], boundary_threshold: int = 0) -> torch.Tensor:
- """
- Args:
- box_size (height, width): Size of the reference box covering
- [0, width] x [0, height]
- boundary_threshold (int): Boxes that extend beyond the reference box
- boundary by more than boundary_threshold are considered "outside".
-
-        For RRPN, it might not be necessary to call this function, since it is common
-        for rotated boxes to extend outside of the image boundaries
-        (the clip function only clips the near-horizontal boxes).
-
- Returns:
- a binary vector, indicating whether each box is inside the reference box.
- """
- height, width = box_size
-
- cnt_x = self.tensor[..., 0]
- cnt_y = self.tensor[..., 1]
- half_w = self.tensor[..., 2] / 2.0
- half_h = self.tensor[..., 3] / 2.0
- a = self.tensor[..., 4]
- c = torch.abs(torch.cos(a * math.pi / 180.0))
- s = torch.abs(torch.sin(a * math.pi / 180.0))
- # This basically computes the horizontal bounding rectangle of the rotated box
- max_rect_dx = c * half_w + s * half_h
- max_rect_dy = c * half_h + s * half_w
-
- inds_inside = (
- (cnt_x - max_rect_dx >= -boundary_threshold)
- & (cnt_y - max_rect_dy >= -boundary_threshold)
- & (cnt_x + max_rect_dx < width + boundary_threshold)
- & (cnt_y + max_rect_dy < height + boundary_threshold)
- )
-
- return inds_inside
-
- def get_centers(self) -> torch.Tensor:
- """
- Returns:
- The box centers in a Nx2 array of (x, y).
- """
- return self.tensor[:, :2]
-
- def scale(self, scale_x: float, scale_y: float) -> None:
- """
-        Scale the rotated boxes with horizontal and vertical scaling factors.
-        Note: when scale_factor_x != scale_factor_y,
-        the rotated box does not preserve its rectangular shape under the resize
-        transformation if the angle is not a multiple of 90 degrees.
-        Instead, the shape becomes a parallelogram (with skew), so here we make an
-        approximation by fitting a rotated rectangle to that parallelogram.
- """
- self.tensor[:, 0] *= scale_x
- self.tensor[:, 1] *= scale_y
- theta = self.tensor[:, 4] * math.pi / 180.0
- c = torch.cos(theta)
- s = torch.sin(theta)
-
- # In image space, y is top->down and x is left->right
-        # Consider the local coordinate system for the rotated box,
- # where the box center is located at (0, 0), and the four vertices ABCD are
- # A(-w / 2, -h / 2), B(w / 2, -h / 2), C(w / 2, h / 2), D(-w / 2, h / 2)
- # the midpoint of the left edge AD of the rotated box E is:
- # E = (A+D)/2 = (-w / 2, 0)
- # the midpoint of the top edge AB of the rotated box F is:
- # F(0, -h / 2)
- # To get the old coordinates in the global system, apply the rotation transformation
- # (Note: the right-handed coordinate system for image space is yOx):
- # (old_x, old_y) = (s * y + c * x, c * y - s * x)
- # E(old) = (s * 0 + c * (-w/2), c * 0 - s * (-w/2)) = (-c * w / 2, s * w / 2)
- # F(old) = (s * (-h / 2) + c * 0, c * (-h / 2) - s * 0) = (-s * h / 2, -c * h / 2)
- # After applying the scaling factor (sfx, sfy):
- # E(new) = (-sfx * c * w / 2, sfy * s * w / 2)
- # F(new) = (-sfx * s * h / 2, -sfy * c * h / 2)
-        # The new width after the scaling transformation becomes:
-
- # w(new) = |E(new) - O| * 2
- # = sqrt[(sfx * c * w / 2)^2 + (sfy * s * w / 2)^2] * 2
- # = sqrt[(sfx * c)^2 + (sfy * s)^2] * w
- # i.e., scale_factor_w = sqrt[(sfx * c)^2 + (sfy * s)^2]
- #
- # For example,
- # when angle = 0 or 180, |c| = 1, s = 0, scale_factor_w == scale_factor_x;
- # when |angle| = 90, c = 0, |s| = 1, scale_factor_w == scale_factor_y
- self.tensor[:, 2] *= torch.sqrt((scale_x * c) ** 2 + (scale_y * s) ** 2)
-
- # h(new) = |F(new) - O| * 2
- # = sqrt[(sfx * s * h / 2)^2 + (sfy * c * h / 2)^2] * 2
- # = sqrt[(sfx * s)^2 + (sfy * c)^2] * h
- # i.e., scale_factor_h = sqrt[(sfx * s)^2 + (sfy * c)^2]
- #
- # For example,
- # when angle = 0 or 180, |c| = 1, s = 0, scale_factor_h == scale_factor_y;
- # when |angle| = 90, c = 0, |s| = 1, scale_factor_h == scale_factor_x
- self.tensor[:, 3] *= torch.sqrt((scale_x * s) ** 2 + (scale_y * c) ** 2)
-
- # The angle is the rotation angle from y-axis in image space to the height
- # vector (top->down in the box's local coordinate system) of the box in CCW.
- #
- # angle(new) = angle_yOx(O - F(new))
- # = angle_yOx( (sfx * s * h / 2, sfy * c * h / 2) )
- # = atan2(sfx * s * h / 2, sfy * c * h / 2)
- # = atan2(sfx * s, sfy * c)
- #
- # For example,
- # when sfx == sfy, angle(new) == atan2(s, c) == angle(old)
- self.tensor[:, 4] = torch.atan2(scale_x * s, scale_y * c) * 180 / math.pi
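As a quick numerical check of the derivation above (values are illustrative): for angle = 45 degrees and scaling factors scale_x = 2, scale_y = 1, we have |c| = |s| = sqrt(2)/2, so scale_factor_w = sqrt((2 * sqrt(2)/2)^2 + (1 * sqrt(2)/2)^2) = sqrt(2.5) which is about 1.58, scale_factor_h works out to the same value at this particular angle, and the new angle is atan2(2 * sqrt(2)/2, 1 * sqrt(2)/2) = atan2(2, 1), about 63.4 degrees; anisotropic scaling therefore both rescales the sides and rotates the fitted box.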
-
- @classmethod
- def cat(cls, boxes_list: List["RotatedBoxes"]) -> "RotatedBoxes":
- """
- Concatenates a list of RotatedBoxes into a single RotatedBoxes
-
- Arguments:
- boxes_list (list[RotatedBoxes])
-
- Returns:
- RotatedBoxes: the concatenated RotatedBoxes
- """
- assert isinstance(boxes_list, (list, tuple))
- if len(boxes_list) == 0:
- return cls(torch.empty(0))
- assert all([isinstance(box, RotatedBoxes) for box in boxes_list])
-
-        # use torch.cat (vs. layers.cat) so the returned boxes never share storage with the input
- cat_boxes = cls(torch.cat([b.tensor for b in boxes_list], dim=0))
- return cat_boxes
-
- @property
- def device(self) -> torch.device:
- return self.tensor.device
-
- @torch.jit.unused
- def __iter__(self):
- """
- Yield a box as a Tensor of shape (5,) at a time.
- """
- yield from self.tensor
-
-
-def pairwise_iou(boxes1: RotatedBoxes, boxes2: RotatedBoxes) -> torch.Tensor:
- """
- Given two lists of rotated boxes of size N and M,
- compute the IoU (intersection over union)
- between **all** N x M pairs of boxes.
- The box order must be (x_center, y_center, width, height, angle).
-
- Args:
- boxes1, boxes2 (RotatedBoxes):
- two `RotatedBoxes`. Contains N & M rotated boxes, respectively.
-
- Returns:
- Tensor: IoU, sized [N,M].
- """
-
- return pairwise_iou_rotated(boxes1.tensor, boxes2.tensor)
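A minimal usage sketch of the rotated-box utilities defined above; the values are illustrative, the box layout follows the (xc, yc, w, h, angle) convention described in the docstrings, and the `pairwise_iou_rotated` op imported at the top of this file is assumed to be available.

import torch

boxes_a = RotatedBoxes(torch.tensor([[50.0, 50.0, 20.0, 10.0, 30.0]]))  # one 20x10 box rotated 30 degrees
boxes_b = RotatedBoxes(torch.tensor([[50.0, 50.0, 20.0, 10.0, 0.0]]))   # the same box, axis-aligned
print(boxes_a.area())                 # tensor([200.])
iou = pairwise_iou(boxes_a, boxes_b)  # (1, 1) matrix of rotated IoU values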
diff --git a/spaces/BartPoint/VoiceChange/infer_pack/models.py b/spaces/BartPoint/VoiceChange/infer_pack/models.py
deleted file mode 100644
index 5e4b2e72383efaee1fae4f5c42e3db2c627e4190..0000000000000000000000000000000000000000
--- a/spaces/BartPoint/VoiceChange/infer_pack/models.py
+++ /dev/null
@@ -1,1124 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn import Conv1d, Conv2d, ConvTranspose1d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack import attentions, commons, modules
-from infer_pack.commons import init_weights, get_padding
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of the sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # taking % 1 here means the n_har products cannot be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would prevent the later cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
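A small, hedged sketch of how the sine source above can be driven; the sampling rate, pitch contour, and upsampling factor `upp` (samples per frame) are placeholders, not values taken from a real config.

import torch

sine_gen = SineGen(samp_rate=40000, harmonic_num=0)
f0 = torch.full((1, 200), 220.0)           # 200 frames of a constant 220 Hz pitch contour
sine, uv, noise = sine_gen(f0, upp=400)    # upp upsamples frame-rate F0 to sample rate
# sine: (1, 200 * 400, 1) harmonic waveform; uv: voiced/unvoiced mask at sample rate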
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
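A rough inference sketch for the synthesizer above. It assumes `net_g` is an already-constructed `SynthesizerTrnMs256NSFsid` (built with `is_half=False`) and skips the real feature-extraction pipeline; all tensor shapes and values below are placeholders.

import torch

phone = torch.randn(1, 200, 256)           # content features (e.g. HuBERT-style), 256-dim per frame
phone_lengths = torch.tensor([200])
pitch = torch.randint(1, 255, (1, 200))    # coarse pitch bins for emb_pitch
nsff0 = torch.full((1, 200), 220.0)        # frame-level F0 in Hz for the NSF source
sid = torch.tensor([0])                    # speaker id for emb_g
with torch.no_grad():
    audio, _, _ = net_g.infer(phone, phone_lengths, pitch, nsff0, sid)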
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim and is broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
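For illustration, the multi-period discriminator defined in this file can be exercised on dummy waveforms; the shapes below are placeholders rather than values from a real training loop.

import torch

mpd = MultiPeriodDiscriminator()
y_real = torch.randn(1, 1, 8000)      # (batch, channels, samples)
y_fake = torch.randn(1, 1, 8000)
y_d_rs, y_d_gs, fmap_rs, fmap_gs = mpd(y_real, y_fake)
print(len(y_d_rs))                    # 7 sub-discriminators: 1 scale-based + 6 period-based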
diff --git a/spaces/Benson/text-generation/Examples/Dani Among Us 3d Download.md b/spaces/Benson/text-generation/Examples/Dani Among Us 3d Download.md
deleted file mode 100644
index 0cfac0373729901e07f774c5b66ec455442a422c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Dani Among Us 3d Download.md
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
Dani Among Us 3D Download: How to Play the Viral Game in 3D
-
If you are a fan of Among Us, the hit online multiplayer game of deception and teamwork, you may have heard of Dani, a Norwegian indie game developer and YouTuber who made a 3D version of the game. In this article, we will tell you everything you need to know about Dani Among Us 3D, including what it is, how it was made, and how to download and play it.
-
What is Among Us?
-
Among Us is a game where you play as one of the crewmates or impostors on a spaceship. The crewmates have to work together to complete tasks and find the impostors, while the impostors have to kill the crewmates or sabotage the ship. The game can be played online or over local WiFi with 4-15 players.
A popular online multiplayer game of deception and teamwork
-
Among Us was released in 2018 by Innersloth, an American game studio, but it went viral in 2020 thanks to the streamers and YouTubers who played it with their friends and fans. The game has been praised for its simple yet addictive gameplay, its social interaction and communication, and its replay value. As of June 2021, Among Us has more than 500 million downloads on the Google Play Store and more than 18 million owners on Steam.
-
Different maps, modes, and customization options
-
Among Us offers four different maps to play on: The Skeld, MIRA HQ, Polus, and The Airship. Each map has its own layout, tasks, vents, cameras, and sabotages. The game also has different modes to choose from, such as Classic or Hide n Seek. In addition, players can customize their characters with various colors, hats, skins, pets, and outfits.
-
Who is Dani?
-
Dani is the online alias of Daniel William Sooman, a Norwegian indie game developer and YouTuber. He is known for creating games in Unity using the C# programming language and posting devlogs (development logs) on his YouTube channel.
-
-
Dani started programming when he was 15 years old, using Java as his first language. He then switched to Unity and C# because he wanted to make better games. He created his YouTube channel in October 2018 and uploaded his first video in November 2018. Since then, he has gained more than 6 million subscribers and more than 600 million views on his channel.
-
Known for creating games in Unity and posting devlogs
-
Dani's YouTube videos mainly consist of him developing games in Unity and showing the process, the challenges, and the results of his projects. Some of his most popular games include: - Karlson: a physics-based parkour shooter where you play as Karlson, a milk-addicted agent who has to escape a facility full of enemies and traps. - Muck: a survival roguelike where you have to gather resources, craft items, fight monsters, and survive as long as you can on a procedurally generated island. - Mobile Suit: a mech combat game where you pilot a giant robot and fight other mechs in various environments. Dani also makes games based on popular titles or genres, such as Fall Guys, Minecraft, Doom, GTA, and more. He often adds his own twist or humor to these games, making them unique and entertaining.
-
How did Dani make Among Us 3D?
-
Dani's Among Us 3D is one of his most viral games, since it is based on the original Among Us but with a 3D perspective and graphics. Dani made this game in Unity using C# and added new features and mechanics to it.
-
Inspired by the comments on his Fall Guys clone video
-
Dani got the idea to make Among Us 3D from the comments on his video in which he made a Fall Guys clone in Unity. Many people suggested that he should make a 3D version of Among Us, since both games are similar in their colorful, cartoonish style. Dani decided to take on the challenge and started working on the project.
-
-
Dani used Unity as the game engine and C# as the programming language to create a 3D version of Among Us. He followed the same gameplay and rules as the original game, but with a 3D perspective and graphics. He also used Blender to model the characters, items, and environments. He documented his progress and challenges in his YouTube videos, where he showed how he implemented features such as: - The lobby system - The impostor vision - The kill animations - The vent system - The voting system - The task system - The chat system - The sound effects - The music
-
Added new features and mechanics to the game
-
In addition to recreating the original game in 3D, Dani also added some new features and mechanics to make his version more fun and interesting. Some of these features include: - A first-person camera option - A ragdoll physics system - A jetpack item - A grappling hook item - A banana peel item - A proximity voice chat option - A custom map editor
-
How do you download and play Dani Among Us 3D?
-
If you are interested in playing Dani Among Us 3D, you may be wondering how to download and play it. Unfortunately, there is no official download link available yet, since Dani is still working on the game and has not released it publicly. However, there are some possible ways to get access to the game.
-
No official download link yet
-
Dani has not released his Among Us 3D game to the public yet, since he is still working on improving it and adding more features. He has only shared some beta keys with a few of his friends and fans, who have tested the game and given him feedback. He has also stated that he does not want to release the game without Innersloth's permission, since he respects their work and does not want to cause any problems.
-
Possible ways to get access to the game
-
-
The best way to stay up to date on Dani Among Us 3D is to subscribe to Dani's YouTube channel and turn on notifications. Dani regularly posts videos about his game development projects, including Among Us 3D. He also sometimes gives out beta keys or hints about how to get them in his videos or descriptions. You can also comment on his videos and politely ask him for a beta key or for more information about the game.
-
Join Dani's Discord server and ask for a beta key
-
Another way to get access to Dani Among Us 3D is to join Dani's Discord server and ask for a beta key or more information about the game. Dani's Discord server is a community of more than 200,000 members who are fans of his games and videos. You can join the server by clicking the link in his YouTube channel description or by using this invite code: https://discord.gg/DaniDev. Once you join the server, you can chat with other members, share your feedback and suggestions, and take part in events and giveaways. You can also ask Dani or his moderators for a beta key or more information about Among Us 3D in the #among-us-3d channel or by sending them a direct message.
-
Support Dani on Patreon and get exclusive rewards
-
-
Conclusion
-
Dani Among Us 3D is a fun and creative twist on the original Among Us, where you can play the game in 3D with new features and mechanics. Dani is a talented and entertaining game developer and YouTuber who makes games in Unity and posts devlogs on his channel. If you want to download and play Dani Among Us 3D, you can subscribe to his YouTube channel, join his Discord server, or support him on Patreon and wait for updates or beta keys. Alternatively, you can also watch his videos or streams where he shows off the game and plays it with other people.
-
We hope you enjoyed this article and learned something new about Dani Among Us 3D. If you have any questions or comments, feel free to leave them below. Thanks for reading!
-
Frequently asked questions
-
Here are some frequently asked questions about Dani Among Us 3D:
-
Q: Is Dani Among Us 3D free?
-
A: Dani has not announced the price of his Among Us 3D game yet, but he has said that he might make it free or very cheap, since he does not want to profit from Innersloth's work or cause legal problems.
-
Q: Is Dani Among Us 3D available for mobile devices?
-
A: Dani has not released his Among Us 3D game for mobile devices yet, but he has said that he might make a mobile version in the future, since he knows that many people play Among Us on their phones or tablets.
-
Q: Is Dani Among Us 3D multiplayer?
-
A: Yes, Dani Among Us 3D is multiplayer, just like the original game. You can play online or over local WiFi with up to 15 players. You can also use voice or text chat to communicate with other players.
-
Q: Can I play Dani Among Us 3D with mods?
-
A: Dani has not released any official mods for his Among Us 3D game yet, but he has said that he might make some mods or let other people make mods in the future, since he enjoys modding games and thinks it adds more fun and variety to the game.
-
-
A: You can contact Dani or give him feedback by leaving a comment on his YouTube videos, sending him a message on his Discord server, tweeting at him on Twitter (@DaniDevYT), or emailing him at danidevbuss@gmail.com.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/irc.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/irc.py
deleted file mode 100644
index 53e19b83d1e80335f70c3b477cb84fb6de62c897..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/irc.py
+++ /dev/null
@@ -1,154 +0,0 @@
-"""
- pygments.formatters.irc
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for IRC output
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.token import Keyword, Name, Comment, String, Error, \
- Number, Operator, Generic, Token, Whitespace
-from pip._vendor.pygments.util import get_choice_opt
-
-
-__all__ = ['IRCFormatter']
-
-
-#: Map token types to a tuple of color values for light and dark
-#: backgrounds.
-IRC_COLORS = {
- Token: ('', ''),
-
- Whitespace: ('gray', 'brightblack'),
- Comment: ('gray', 'brightblack'),
- Comment.Preproc: ('cyan', 'brightcyan'),
- Keyword: ('blue', 'brightblue'),
- Keyword.Type: ('cyan', 'brightcyan'),
- Operator.Word: ('magenta', 'brightcyan'),
- Name.Builtin: ('cyan', 'brightcyan'),
- Name.Function: ('green', 'brightgreen'),
- Name.Namespace: ('_cyan_', '_brightcyan_'),
- Name.Class: ('_green_', '_brightgreen_'),
- Name.Exception: ('cyan', 'brightcyan'),
- Name.Decorator: ('brightblack', 'gray'),
- Name.Variable: ('red', 'brightred'),
- Name.Constant: ('red', 'brightred'),
- Name.Attribute: ('cyan', 'brightcyan'),
- Name.Tag: ('brightblue', 'brightblue'),
- String: ('yellow', 'yellow'),
- Number: ('blue', 'brightblue'),
-
- Generic.Deleted: ('brightred', 'brightred'),
- Generic.Inserted: ('green', 'brightgreen'),
- Generic.Heading: ('**', '**'),
- Generic.Subheading: ('*magenta*', '*brightmagenta*'),
- Generic.Error: ('brightred', 'brightred'),
-
- Error: ('_brightred_', '_brightred_'),
-}
-
-
-IRC_COLOR_MAP = {
- 'white': 0,
- 'black': 1,
- 'blue': 2,
- 'brightgreen': 3,
- 'brightred': 4,
- 'yellow': 5,
- 'magenta': 6,
- 'orange': 7,
- 'green': 7, #compat w/ ansi
- 'brightyellow': 8,
- 'lightgreen': 9,
- 'brightcyan': 9, # compat w/ ansi
- 'cyan': 10,
- 'lightblue': 11,
- 'red': 11, # compat w/ ansi
- 'brightblue': 12,
- 'brightmagenta': 13,
- 'brightblack': 14,
- 'gray': 15,
-}
-
-def ircformat(color, text):
- if len(color) < 1:
- return text
- add = sub = ''
- if '_' in color: # italic
- add += '\x1D'
- sub = '\x1D' + sub
- color = color.strip('_')
- if '*' in color: # bold
- add += '\x02'
- sub = '\x02' + sub
- color = color.strip('*')
- # underline (\x1F) not supported
- # backgrounds (\x03FF,BB) not supported
- if len(color) > 0: # actual color - may have issues with ircformat("red", "blah")+"10" type stuff
- add += '\x03' + str(IRC_COLOR_MAP[color]).zfill(2)
- sub = '\x03' + sub
- return add + text + sub
-    return '<'+add+'>'+text+'</'+sub+'>'  # unreachable debug leftover
-
-
-class IRCFormatter(Formatter):
- r"""
- Format tokens with IRC color sequences
-
- The `get_style_defs()` method doesn't do anything special since there is
- no support for common styles.
-
- Options accepted:
-
- `bg`
- Set to ``"light"`` or ``"dark"`` depending on the terminal's background
- (default: ``"light"``).
-
- `colorscheme`
- A dictionary mapping token types to (lightbg, darkbg) color names or
- ``None`` (default: ``None`` = use builtin colorscheme).
-
- `linenos`
- Set to ``True`` to have line numbers in the output as well
- (default: ``False`` = no line numbers).
- """
- name = 'IRC'
- aliases = ['irc', 'IRC']
- filenames = []
-
- def __init__(self, **options):
- Formatter.__init__(self, **options)
- self.darkbg = get_choice_opt(options, 'bg',
- ['light', 'dark'], 'light') == 'dark'
- self.colorscheme = options.get('colorscheme', None) or IRC_COLORS
- self.linenos = options.get('linenos', False)
- self._lineno = 0
-
- def _write_lineno(self, outfile):
- if self.linenos:
- self._lineno += 1
- outfile.write("%04d: " % self._lineno)
-
- def format_unencoded(self, tokensource, outfile):
- self._write_lineno(outfile)
-
- for ttype, value in tokensource:
- color = self.colorscheme.get(ttype)
- while color is None:
- ttype = ttype[:-1]
- color = self.colorscheme.get(ttype)
- if color:
- color = color[self.darkbg]
- spl = value.split('\n')
- for line in spl[:-1]:
- if line:
- outfile.write(ircformat(color, line))
- outfile.write('\n')
- self._write_lineno(outfile)
- if spl[-1]:
- outfile.write(ircformat(color, spl[-1]))
- else:
- outfile.write(value)
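A hedged example of driving the formatter above with pygments' highlight helper; in normal use you would import these names from a standalone pygments install rather than pip's vendored copy, and the snippet being highlighted is just a placeholder.

from pip._vendor.pygments import highlight
from pip._vendor.pygments.lexers import PythonLexer

code = "def greet(name):\n    return 'hi ' + name\n"
print(highlight(code, PythonLexer(), IRCFormatter(bg="dark", linenos=True)))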
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/archive_util.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/archive_util.py
deleted file mode 100644
index 5dfe2a16ffbf5dc907aa3ce315757f4f9a055a82..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/archive_util.py
+++ /dev/null
@@ -1,280 +0,0 @@
-"""distutils.archive_util
-
-Utility functions for creating archive files (tarballs, zip files,
-that sort of thing)."""
-
-import os
-from warnings import warn
-import sys
-
-try:
- import zipfile
-except ImportError:
- zipfile = None
-
-
-from distutils.errors import DistutilsExecError
-from distutils.spawn import spawn
-from distutils.dir_util import mkpath
-from distutils import log
-
-try:
- from pwd import getpwnam
-except ImportError:
- getpwnam = None
-
-try:
- from grp import getgrnam
-except ImportError:
- getgrnam = None
-
-
-def _get_gid(name):
- """Returns a gid, given a group name."""
- if getgrnam is None or name is None:
- return None
- try:
- result = getgrnam(name)
- except KeyError:
- result = None
- if result is not None:
- return result[2]
- return None
-
-
-def _get_uid(name):
- """Returns an uid, given a user name."""
- if getpwnam is None or name is None:
- return None
- try:
- result = getpwnam(name)
- except KeyError:
- result = None
- if result is not None:
- return result[2]
- return None
-
-
-def make_tarball(
- base_name, base_dir, compress="gzip", verbose=0, dry_run=0, owner=None, group=None
-):
- """Create a (possibly compressed) tar file from all the files under
- 'base_dir'.
-
- 'compress' must be "gzip" (the default), "bzip2", "xz", "compress", or
- None. ("compress" will be deprecated in Python 3.2)
-
- 'owner' and 'group' can be used to define an owner and a group for the
- archive that is being built. If not provided, the current owner and group
- will be used.
-
- The output tar file will be named 'base_dir' + ".tar", possibly plus
- the appropriate compression extension (".gz", ".bz2", ".xz" or ".Z").
-
- Returns the output filename.
- """
- tar_compression = {
- 'gzip': 'gz',
- 'bzip2': 'bz2',
- 'xz': 'xz',
- None: '',
- 'compress': '',
- }
- compress_ext = {'gzip': '.gz', 'bzip2': '.bz2', 'xz': '.xz', 'compress': '.Z'}
-
- # flags for compression program, each element of list will be an argument
- if compress is not None and compress not in compress_ext.keys():
- raise ValueError(
- "bad value for 'compress': must be None, 'gzip', 'bzip2', "
- "'xz' or 'compress'"
- )
-
- archive_name = base_name + '.tar'
- if compress != 'compress':
- archive_name += compress_ext.get(compress, '')
-
- mkpath(os.path.dirname(archive_name), dry_run=dry_run)
-
- # creating the tarball
- import tarfile # late import so Python build itself doesn't break
-
- log.info('Creating tar archive')
-
- uid = _get_uid(owner)
- gid = _get_gid(group)
-
- def _set_uid_gid(tarinfo):
- if gid is not None:
- tarinfo.gid = gid
- tarinfo.gname = group
- if uid is not None:
- tarinfo.uid = uid
- tarinfo.uname = owner
- return tarinfo
-
- if not dry_run:
- tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress])
- try:
- tar.add(base_dir, filter=_set_uid_gid)
- finally:
- tar.close()
-
- # compression using `compress`
- if compress == 'compress':
- warn("'compress' is deprecated.", DeprecationWarning)
- # the option varies depending on the platform
- compressed_name = archive_name + compress_ext[compress]
- if sys.platform == 'win32':
- cmd = [compress, archive_name, compressed_name]
- else:
- cmd = [compress, '-f', archive_name]
- spawn(cmd, dry_run=dry_run)
- return compressed_name
-
- return archive_name
-
-
-def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): # noqa: C901
- """Create a zip file from all the files under 'base_dir'.
-
- The output zip file will be named 'base_name' + ".zip". Uses either the
- "zipfile" Python module (if available) or the InfoZIP "zip" utility
- (if installed and found on the default search path). If neither tool is
- available, raises DistutilsExecError. Returns the name of the output zip
- file.
- """
- zip_filename = base_name + ".zip"
- mkpath(os.path.dirname(zip_filename), dry_run=dry_run)
-
- # If zipfile module is not available, try spawning an external
- # 'zip' command.
- if zipfile is None:
- if verbose:
- zipoptions = "-r"
- else:
- zipoptions = "-rq"
-
- try:
- spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run)
- except DistutilsExecError:
- # XXX really should distinguish between "couldn't find
- # external 'zip' command" and "zip failed".
- raise DistutilsExecError(
- (
- "unable to create zip file '%s': "
- "could neither import the 'zipfile' module nor "
- "find a standalone zip utility"
- )
- % zip_filename
- )
-
- else:
- log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir)
-
- if not dry_run:
- try:
- zip = zipfile.ZipFile(
- zip_filename, "w", compression=zipfile.ZIP_DEFLATED
- )
- except RuntimeError:
- zip = zipfile.ZipFile(zip_filename, "w", compression=zipfile.ZIP_STORED)
-
- with zip:
- if base_dir != os.curdir:
- path = os.path.normpath(os.path.join(base_dir, ''))
- zip.write(path, path)
- log.info("adding '%s'", path)
- for dirpath, dirnames, filenames in os.walk(base_dir):
- for name in dirnames:
- path = os.path.normpath(os.path.join(dirpath, name, ''))
- zip.write(path, path)
- log.info("adding '%s'", path)
- for name in filenames:
- path = os.path.normpath(os.path.join(dirpath, name))
- if os.path.isfile(path):
- zip.write(path, path)
- log.info("adding '%s'", path)
-
- return zip_filename
-
-
-ARCHIVE_FORMATS = {
- 'gztar': (make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"),
- 'bztar': (make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"),
- 'xztar': (make_tarball, [('compress', 'xz')], "xz'ed tar-file"),
- 'ztar': (make_tarball, [('compress', 'compress')], "compressed tar file"),
- 'tar': (make_tarball, [('compress', None)], "uncompressed tar file"),
- 'zip': (make_zipfile, [], "ZIP file"),
-}
-
-
-def check_archive_formats(formats):
- """Returns the first format from the 'format' list that is unknown.
-
- If all formats are known, returns None
- """
- for format in formats:
- if format not in ARCHIVE_FORMATS:
- return format
- return None
-
-
-def make_archive(
- base_name,
- format,
- root_dir=None,
- base_dir=None,
- verbose=0,
- dry_run=0,
- owner=None,
- group=None,
-):
- """Create an archive file (eg. zip or tar).
-
- 'base_name' is the name of the file to create, minus any format-specific
- extension; 'format' is the archive format: one of "zip", "tar", "gztar",
- "bztar", "xztar", or "ztar".
-
- 'root_dir' is a directory that will be the root directory of the
- archive; ie. we typically chdir into 'root_dir' before creating the
- archive. 'base_dir' is the directory where we start archiving from;
- ie. 'base_dir' will be the common prefix of all files and
- directories in the archive. 'root_dir' and 'base_dir' both default
- to the current directory. Returns the name of the archive file.
-
- 'owner' and 'group' are used when creating a tar archive. By default,
- uses the current owner and group.
- """
- save_cwd = os.getcwd()
- if root_dir is not None:
- log.debug("changing into '%s'", root_dir)
- base_name = os.path.abspath(base_name)
- if not dry_run:
- os.chdir(root_dir)
-
- if base_dir is None:
- base_dir = os.curdir
-
- kwargs = {'dry_run': dry_run}
-
- try:
- format_info = ARCHIVE_FORMATS[format]
- except KeyError:
- raise ValueError("unknown archive format '%s'" % format)
-
- func = format_info[0]
- for arg, val in format_info[1]:
- kwargs[arg] = val
-
- if format != 'zip':
- kwargs['owner'] = owner
- kwargs['group'] = group
-
- try:
- filename = func(base_name, base_dir, **kwargs)
- finally:
- if root_dir is not None:
- log.debug("changing back to '%s'", save_cwd)
- os.chdir(save_cwd)
-
- return filename
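
For illustration, here is a minimal usage sketch of `make_archive` (not part of the original module); the package name, directories, and owner/group values below are assumptions.

```python
# Hypothetical usage of distutils.archive_util.make_archive; all paths and
# ownership values are made up for illustration.
from distutils.archive_util import make_archive

# chdir into build/, archive the lib/ subtree, and force root:root ownership
# inside the resulting tarball.
filename = make_archive(
    "dist/example-0.1",   # base name; the format-specific extension is appended
    "gztar",              # any key of ARCHIVE_FORMATS
    root_dir="build",
    base_dir="lib",
    owner="root",
    group="root",
)
print(filename)           # dist/example-0.1.tar.gz
```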
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/dist.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/dist.py
deleted file mode 100644
index 917cd94a0c29985085f9332c5a73549c51bb8fb1..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/dist.py
+++ /dev/null
@@ -1,1286 +0,0 @@
-"""distutils.dist
-
-Provides the Distribution class, which represents the module distribution
-being built/installed/distributed.
-"""
-
-import sys
-import os
-import re
-import pathlib
-import contextlib
-from email import message_from_file
-
-try:
- import warnings
-except ImportError:
- warnings = None
-
-from distutils.errors import (
- DistutilsOptionError,
- DistutilsModuleError,
- DistutilsArgError,
- DistutilsClassError,
-)
-from distutils.fancy_getopt import FancyGetopt, translate_longopt
-from distutils.util import check_environ, strtobool, rfc822_escape
-from distutils import log
-from distutils.debug import DEBUG
-
-# Regex to define acceptable Distutils command names. This is not *quite*
-# the same as a Python NAME -- I don't allow leading underscores. The fact
-# that they're very similar is no coincidence; the default naming scheme is
-# to look for a Python module named after the command.
-command_re = re.compile(r'^[a-zA-Z]([a-zA-Z0-9_]*)$')
-
-
-def _ensure_list(value, fieldname):
- if isinstance(value, str):
- # a string containing comma separated values is okay. It will
- # be converted to a list by Distribution.finalize_options().
- pass
- elif not isinstance(value, list):
- # passing a tuple or an iterator perhaps, warn and convert
- typename = type(value).__name__
- msg = "Warning: '{fieldname}' should be a list, got type '{typename}'"
- msg = msg.format(**locals())
- log.log(log.WARN, msg)
- value = list(value)
- return value
-
-
-class Distribution:
- """The core of the Distutils. Most of the work hiding behind 'setup'
- is really done within a Distribution instance, which farms the work out
- to the Distutils commands specified on the command line.
-
- Setup scripts will almost never instantiate Distribution directly,
- unless the 'setup()' function is totally inadequate to their needs.
- However, it is conceivable that a setup script might wish to subclass
- Distribution for some specialized purpose, and then pass the subclass
- to 'setup()' as the 'distclass' keyword argument. If so, it is
- necessary to respect the expectations that 'setup' has of Distribution.
- See the code for 'setup()', in core.py, for details.
- """
-
- # 'global_options' describes the command-line options that may be
- # supplied to the setup script prior to any actual commands.
- # Eg. "./setup.py -n" or "./setup.py --quiet" both take advantage of
- # these global options. This list should be kept to a bare minimum,
- # since every global option is also valid as a command option -- and we
- # don't want to pollute the commands with too many options that they
- # have minimal control over.
- # The fourth entry for verbose means that it can be repeated.
- global_options = [
- ('verbose', 'v', "run verbosely (default)", 1),
- ('quiet', 'q', "run quietly (turns verbosity off)"),
- ('dry-run', 'n', "don't actually do anything"),
- ('help', 'h', "show detailed help message"),
- ('no-user-cfg', None, 'ignore pydistutils.cfg in your home directory'),
- ]
-
- # 'common_usage' is a short (2-3 line) string describing the common
- # usage of the setup script.
- common_usage = """\
-Common commands: (see '--help-commands' for more)
-
- setup.py build will build the package underneath 'build/'
- setup.py install will install the package
-"""
-
- # options that are not propagated to the commands
- display_options = [
- ('help-commands', None, "list all available commands"),
- ('name', None, "print package name"),
- ('version', 'V', "print package version"),
-        ('fullname', None, "print <package name>-<version>"),
- ('author', None, "print the author's name"),
- ('author-email', None, "print the author's email address"),
- ('maintainer', None, "print the maintainer's name"),
- ('maintainer-email', None, "print the maintainer's email address"),
- ('contact', None, "print the maintainer's name if known, else the author's"),
- (
- 'contact-email',
- None,
- "print the maintainer's email address if known, else the author's",
- ),
- ('url', None, "print the URL for this package"),
- ('license', None, "print the license of the package"),
- ('licence', None, "alias for --license"),
- ('description', None, "print the package description"),
- ('long-description', None, "print the long package description"),
- ('platforms', None, "print the list of platforms"),
- ('classifiers', None, "print the list of classifiers"),
- ('keywords', None, "print the list of keywords"),
- ('provides', None, "print the list of packages/modules provided"),
- ('requires', None, "print the list of packages/modules required"),
- ('obsoletes', None, "print the list of packages/modules made obsolete"),
- ]
- display_option_names = [translate_longopt(x[0]) for x in display_options]
-
- # negative options are options that exclude other options
- negative_opt = {'quiet': 'verbose'}
-
- # -- Creation/initialization methods -------------------------------
-
- def __init__(self, attrs=None): # noqa: C901
- """Construct a new Distribution instance: initialize all the
- attributes of a Distribution, and then use 'attrs' (a dictionary
- mapping attribute names to values) to assign some of those
- attributes their "real" values. (Any attributes not mentioned in
- 'attrs' will be assigned to some null value: 0, None, an empty list
- or dictionary, etc.) Most importantly, initialize the
- 'command_obj' attribute to the empty dictionary; this will be
- filled in with real command objects by 'parse_command_line()'.
- """
-
- # Default values for our command-line options
- self.verbose = 1
- self.dry_run = 0
- self.help = 0
- for attr in self.display_option_names:
- setattr(self, attr, 0)
-
- # Store the distribution meta-data (name, version, author, and so
- # forth) in a separate object -- we're getting to have enough
- # information here (and enough command-line options) that it's
- # worth it. Also delegate 'get_XXX()' methods to the 'metadata'
- # object in a sneaky and underhanded (but efficient!) way.
- self.metadata = DistributionMetadata()
- for basename in self.metadata._METHOD_BASENAMES:
- method_name = "get_" + basename
- setattr(self, method_name, getattr(self.metadata, method_name))
-
- # 'cmdclass' maps command names to class objects, so we
- # can 1) quickly figure out which class to instantiate when
- # we need to create a new command object, and 2) have a way
- # for the setup script to override command classes
- self.cmdclass = {}
-
- # 'command_packages' is a list of packages in which commands
- # are searched for. The factory for command 'foo' is expected
- # to be named 'foo' in the module 'foo' in one of the packages
- # named here. This list is searched from the left; an error
- # is raised if no named package provides the command being
- # searched for. (Always access using get_command_packages().)
- self.command_packages = None
-
- # 'script_name' and 'script_args' are usually set to sys.argv[0]
- # and sys.argv[1:], but they can be overridden when the caller is
- # not necessarily a setup script run from the command-line.
- self.script_name = None
- self.script_args = None
-
- # 'command_options' is where we store command options between
- # parsing them (from config files, the command-line, etc.) and when
- # they are actually needed -- ie. when the command in question is
- # instantiated. It is a dictionary of dictionaries of 2-tuples:
- # command_options = { command_name : { option : (source, value) } }
- self.command_options = {}
-
- # 'dist_files' is the list of (command, pyversion, file) that
- # have been created by any dist commands run so far. This is
- # filled regardless of whether the run is dry or not. pyversion
- # gives sysconfig.get_python_version() if the dist file is
- # specific to a Python version, 'any' if it is good for all
- # Python versions on the target platform, and '' for a source
- # file. pyversion should not be used to specify minimum or
- # maximum required Python versions; use the metainfo for that
- # instead.
- self.dist_files = []
-
- # These options are really the business of various commands, rather
- # than of the Distribution itself. We provide aliases for them in
- # Distribution as a convenience to the developer.
- self.packages = None
- self.package_data = {}
- self.package_dir = None
- self.py_modules = None
- self.libraries = None
- self.headers = None
- self.ext_modules = None
- self.ext_package = None
- self.include_dirs = None
- self.extra_path = None
- self.scripts = None
- self.data_files = None
- self.password = ''
-
- # And now initialize bookkeeping stuff that can't be supplied by
- # the caller at all. 'command_obj' maps command names to
- # Command instances -- that's how we enforce that every command
- # class is a singleton.
- self.command_obj = {}
-
- # 'have_run' maps command names to boolean values; it keeps track
- # of whether we have actually run a particular command, to make it
- # cheap to "run" a command whenever we think we might need to -- if
- # it's already been done, no need for expensive filesystem
- # operations, we just check the 'have_run' dictionary and carry on.
- # It's only safe to query 'have_run' for a command class that has
- # been instantiated -- a false value will be inserted when the
- # command object is created, and replaced with a true value when
- # the command is successfully run. Thus it's probably best to use
- # '.get()' rather than a straight lookup.
- self.have_run = {}
-
- # Now we'll use the attrs dictionary (ultimately, keyword args from
- # the setup script) to possibly override any or all of these
- # distribution options.
-
- if attrs:
- # Pull out the set of command options and work on them
- # specifically. Note that this order guarantees that aliased
- # command options will override any supplied redundantly
- # through the general options dictionary.
- options = attrs.get('options')
- if options is not None:
- del attrs['options']
- for (command, cmd_options) in options.items():
- opt_dict = self.get_option_dict(command)
- for (opt, val) in cmd_options.items():
- opt_dict[opt] = ("setup script", val)
-
- if 'licence' in attrs:
- attrs['license'] = attrs['licence']
- del attrs['licence']
- msg = "'licence' distribution option is deprecated; use 'license'"
- if warnings is not None:
- warnings.warn(msg)
- else:
- sys.stderr.write(msg + "\n")
-
- # Now work on the rest of the attributes. Any attribute that's
- # not already defined is invalid!
- for (key, val) in attrs.items():
- if hasattr(self.metadata, "set_" + key):
- getattr(self.metadata, "set_" + key)(val)
- elif hasattr(self.metadata, key):
- setattr(self.metadata, key, val)
- elif hasattr(self, key):
- setattr(self, key, val)
- else:
- msg = "Unknown distribution option: %s" % repr(key)
- warnings.warn(msg)
-
- # no-user-cfg is handled before other command line args
- # because other args override the config files, and this
- # one is needed before we can load the config files.
- # If attrs['script_args'] wasn't passed, assume false.
- #
-        # This also makes sure we just look at the global options
- self.want_user_cfg = True
-
- if self.script_args is not None:
- for arg in self.script_args:
- if not arg.startswith('-'):
- break
- if arg == '--no-user-cfg':
- self.want_user_cfg = False
- break
-
- self.finalize_options()
-
- def get_option_dict(self, command):
- """Get the option dictionary for a given command. If that
- command's option dictionary hasn't been created yet, then create it
- and return the new dictionary; otherwise, return the existing
- option dictionary.
- """
- dict = self.command_options.get(command)
- if dict is None:
- dict = self.command_options[command] = {}
- return dict
-
- def dump_option_dicts(self, header=None, commands=None, indent=""):
- from pprint import pformat
-
- if commands is None: # dump all command option dicts
- commands = sorted(self.command_options.keys())
-
- if header is not None:
- self.announce(indent + header)
- indent = indent + " "
-
- if not commands:
- self.announce(indent + "no commands known yet")
- return
-
- for cmd_name in commands:
- opt_dict = self.command_options.get(cmd_name)
- if opt_dict is None:
- self.announce(indent + "no option dict for '%s' command" % cmd_name)
- else:
- self.announce(indent + "option dict for '%s' command:" % cmd_name)
- out = pformat(opt_dict)
- for line in out.split('\n'):
- self.announce(indent + " " + line)
-
- # -- Config file finding/parsing methods ---------------------------
-
- def find_config_files(self):
- """Find as many configuration files as should be processed for this
- platform, and return a list of filenames in the order in which they
- should be parsed. The filenames returned are guaranteed to exist
- (modulo nasty race conditions).
-
- There are multiple possible config files:
- - distutils.cfg in the Distutils installation directory (i.e.
-          where the top-level Distutils __init__.py file lives)
- - a file in the user's home directory named .pydistutils.cfg
- on Unix and pydistutils.cfg on Windows/Mac; may be disabled
- with the ``--no-user-cfg`` option
- - setup.cfg in the current directory
- - a file named by an environment variable
- """
- check_environ()
- files = [str(path) for path in self._gen_paths() if os.path.isfile(path)]
-
- if DEBUG:
- self.announce("using config files: %s" % ', '.join(files))
-
- return files
-
- def _gen_paths(self):
- # The system-wide Distutils config file
- sys_dir = pathlib.Path(sys.modules['distutils'].__file__).parent
- yield sys_dir / "distutils.cfg"
-
- # The per-user config file
- prefix = '.' * (os.name == 'posix')
- filename = prefix + 'pydistutils.cfg'
- if self.want_user_cfg:
- yield pathlib.Path('~').expanduser() / filename
-
- # All platforms support local setup.cfg
- yield pathlib.Path('setup.cfg')
-
- # Additional config indicated in the environment
- with contextlib.suppress(TypeError):
- yield pathlib.Path(os.getenv("DIST_EXTRA_CONFIG"))
-
- def parse_config_files(self, filenames=None): # noqa: C901
- from configparser import ConfigParser
-
- # Ignore install directory options if we have a venv
- if sys.prefix != sys.base_prefix:
- ignore_options = [
- 'install-base',
- 'install-platbase',
- 'install-lib',
- 'install-platlib',
- 'install-purelib',
- 'install-headers',
- 'install-scripts',
- 'install-data',
- 'prefix',
- 'exec-prefix',
- 'home',
- 'user',
- 'root',
- ]
- else:
- ignore_options = []
-
- ignore_options = frozenset(ignore_options)
-
- if filenames is None:
- filenames = self.find_config_files()
-
- if DEBUG:
- self.announce("Distribution.parse_config_files():")
-
- parser = ConfigParser()
- for filename in filenames:
- if DEBUG:
- self.announce(" reading %s" % filename)
- parser.read(filename)
- for section in parser.sections():
- options = parser.options(section)
- opt_dict = self.get_option_dict(section)
-
- for opt in options:
- if opt != '__name__' and opt not in ignore_options:
- val = parser.get(section, opt)
- opt = opt.replace('-', '_')
- opt_dict[opt] = (filename, val)
-
- # Make the ConfigParser forget everything (so we retain
- # the original filenames that options come from)
- parser.__init__()
-
- # If there was a "global" section in the config file, use it
- # to set Distribution options.
-
- if 'global' in self.command_options:
- for (opt, (src, val)) in self.command_options['global'].items():
- alias = self.negative_opt.get(opt)
- try:
- if alias:
- setattr(self, alias, not strtobool(val))
- elif opt in ('verbose', 'dry_run'): # ugh!
- setattr(self, opt, strtobool(val))
- else:
- setattr(self, opt, val)
- except ValueError as msg:
- raise DistutilsOptionError(msg)
-
- # -- Command-line parsing methods ----------------------------------
-
- def parse_command_line(self):
- """Parse the setup script's command line, taken from the
- 'script_args' instance attribute (which defaults to 'sys.argv[1:]'
- -- see 'setup()' in core.py). This list is first processed for
- "global options" -- options that set attributes of the Distribution
- instance. Then, it is alternately scanned for Distutils commands
- and options for that command. Each new command terminates the
- options for the previous command. The allowed options for a
- command are determined by the 'user_options' attribute of the
- command class -- thus, we have to be able to load command classes
- in order to parse the command line. Any error in that 'options'
- attribute raises DistutilsGetoptError; any error on the
- command-line raises DistutilsArgError. If no Distutils commands
- were found on the command line, raises DistutilsArgError. Return
- true if command-line was successfully parsed and we should carry
- on with executing commands; false if no errors but we shouldn't
- execute commands (currently, this only happens if user asks for
- help).
- """
- #
- # We now have enough information to show the Macintosh dialog
- # that allows the user to interactively specify the "command line".
- #
- toplevel_options = self._get_toplevel_options()
-
- # We have to parse the command line a bit at a time -- global
- # options, then the first command, then its options, and so on --
- # because each command will be handled by a different class, and
- # the options that are valid for a particular class aren't known
- # until we have loaded the command class, which doesn't happen
- # until we know what the command is.
-
- self.commands = []
- parser = FancyGetopt(toplevel_options + self.display_options)
- parser.set_negative_aliases(self.negative_opt)
- parser.set_aliases({'licence': 'license'})
- args = parser.getopt(args=self.script_args, object=self)
- option_order = parser.get_option_order()
- log.set_verbosity(self.verbose)
-
- # for display options we return immediately
- if self.handle_display_options(option_order):
- return
- while args:
- args = self._parse_command_opts(parser, args)
- if args is None: # user asked for help (and got it)
- return
-
- # Handle the cases of --help as a "global" option, ie.
- # "setup.py --help" and "setup.py --help command ...". For the
- # former, we show global options (--verbose, --dry-run, etc.)
- # and display-only options (--name, --version, etc.); for the
- # latter, we omit the display-only options and show help for
- # each command listed on the command line.
- if self.help:
- self._show_help(
- parser, display_options=len(self.commands) == 0, commands=self.commands
- )
- return
-
- # Oops, no commands found -- an end-user error
- if not self.commands:
- raise DistutilsArgError("no commands supplied")
-
- # All is well: return true
- return True
-
- def _get_toplevel_options(self):
- """Return the non-display options recognized at the top level.
-
- This includes options that are recognized *only* at the top
- level as well as options recognized for commands.
- """
- return self.global_options + [
- (
- "command-packages=",
- None,
- "list of packages that provide distutils commands",
- ),
- ]
-
- def _parse_command_opts(self, parser, args): # noqa: C901
- """Parse the command-line options for a single command.
- 'parser' must be a FancyGetopt instance; 'args' must be the list
- of arguments, starting with the current command (whose options
- we are about to parse). Returns a new version of 'args' with
- the next command at the front of the list; will be the empty
- list if there are no more commands on the command line. Returns
- None if the user asked for help on this command.
- """
- # late import because of mutual dependence between these modules
- from distutils.cmd import Command
-
- # Pull the current command from the head of the command line
- command = args[0]
- if not command_re.match(command):
- raise SystemExit("invalid command name '%s'" % command)
- self.commands.append(command)
-
- # Dig up the command class that implements this command, so we
- # 1) know that it's a valid command, and 2) know which options
- # it takes.
- try:
- cmd_class = self.get_command_class(command)
- except DistutilsModuleError as msg:
- raise DistutilsArgError(msg)
-
- # Require that the command class be derived from Command -- want
- # to be sure that the basic "command" interface is implemented.
- if not issubclass(cmd_class, Command):
- raise DistutilsClassError(
- "command class %s must subclass Command" % cmd_class
- )
-
- # Also make sure that the command object provides a list of its
- # known options.
- if not (
- hasattr(cmd_class, 'user_options')
- and isinstance(cmd_class.user_options, list)
- ):
- msg = (
- "command class %s must provide "
- "'user_options' attribute (a list of tuples)"
- )
- raise DistutilsClassError(msg % cmd_class)
-
- # If the command class has a list of negative alias options,
- # merge it in with the global negative aliases.
- negative_opt = self.negative_opt
- if hasattr(cmd_class, 'negative_opt'):
- negative_opt = negative_opt.copy()
- negative_opt.update(cmd_class.negative_opt)
-
- # Check for help_options in command class. They have a different
- # format (tuple of four) so we need to preprocess them here.
- if hasattr(cmd_class, 'help_options') and isinstance(
- cmd_class.help_options, list
- ):
- help_options = fix_help_options(cmd_class.help_options)
- else:
- help_options = []
-
- # All commands support the global options too, just by adding
- # in 'global_options'.
- parser.set_option_table(
- self.global_options + cmd_class.user_options + help_options
- )
- parser.set_negative_aliases(negative_opt)
- (args, opts) = parser.getopt(args[1:])
- if hasattr(opts, 'help') and opts.help:
- self._show_help(parser, display_options=0, commands=[cmd_class])
- return
-
- if hasattr(cmd_class, 'help_options') and isinstance(
- cmd_class.help_options, list
- ):
- help_option_found = 0
- for (help_option, short, desc, func) in cmd_class.help_options:
- if hasattr(opts, parser.get_attr_name(help_option)):
- help_option_found = 1
- if callable(func):
- func()
- else:
- raise DistutilsClassError(
- "invalid help function %r for help option '%s': "
- "must be a callable object (function, etc.)"
- % (func, help_option)
- )
-
- if help_option_found:
- return
-
- # Put the options from the command-line into their official
- # holding pen, the 'command_options' dictionary.
- opt_dict = self.get_option_dict(command)
- for (name, value) in vars(opts).items():
- opt_dict[name] = ("command line", value)
-
- return args
-
- def finalize_options(self):
- """Set final values for all the options on the Distribution
- instance, analogous to the .finalize_options() method of Command
- objects.
- """
- for attr in ('keywords', 'platforms'):
- value = getattr(self.metadata, attr)
- if value is None:
- continue
- if isinstance(value, str):
- value = [elm.strip() for elm in value.split(',')]
- setattr(self.metadata, attr, value)
-
- def _show_help(self, parser, global_options=1, display_options=1, commands=[]):
- """Show help for the setup script command-line in the form of
- several lists of command-line options. 'parser' should be a
- FancyGetopt instance; do not expect it to be returned in the
- same state, as its option table will be reset to make it
- generate the correct help text.
-
- If 'global_options' is true, lists the global options:
- --verbose, --dry-run, etc. If 'display_options' is true, lists
- the "display-only" options: --name, --version, etc. Finally,
- lists per-command help for every command name or command class
- in 'commands'.
- """
- # late import because of mutual dependence between these modules
- from distutils.core import gen_usage
- from distutils.cmd import Command
-
- if global_options:
- if display_options:
- options = self._get_toplevel_options()
- else:
- options = self.global_options
- parser.set_option_table(options)
- parser.print_help(self.common_usage + "\nGlobal options:")
- print('')
-
- if display_options:
- parser.set_option_table(self.display_options)
- parser.print_help(
- "Information display options (just display "
- + "information, ignore any commands)"
- )
- print('')
-
- for command in self.commands:
- if isinstance(command, type) and issubclass(command, Command):
- klass = command
- else:
- klass = self.get_command_class(command)
- if hasattr(klass, 'help_options') and isinstance(klass.help_options, list):
- parser.set_option_table(
- klass.user_options + fix_help_options(klass.help_options)
- )
- else:
- parser.set_option_table(klass.user_options)
- parser.print_help("Options for '%s' command:" % klass.__name__)
- print('')
-
- print(gen_usage(self.script_name))
-
- def handle_display_options(self, option_order):
- """If there were any non-global "display-only" options
- (--help-commands or the metadata display options) on the command
- line, display the requested info and return true; else return
- false.
- """
- from distutils.core import gen_usage
-
- # User just wants a list of commands -- we'll print it out and stop
- # processing now (ie. if they ran "setup --help-commands foo bar",
- # we ignore "foo bar").
- if self.help_commands:
- self.print_commands()
- print('')
- print(gen_usage(self.script_name))
- return 1
-
- # If user supplied any of the "display metadata" options, then
- # display that metadata in the order in which the user supplied the
- # metadata options.
- any_display_options = 0
- is_display_option = {}
- for option in self.display_options:
- is_display_option[option[0]] = 1
-
- for (opt, val) in option_order:
- if val and is_display_option.get(opt):
- opt = translate_longopt(opt)
- value = getattr(self.metadata, "get_" + opt)()
- if opt in ['keywords', 'platforms']:
- print(','.join(value))
- elif opt in ('classifiers', 'provides', 'requires', 'obsoletes'):
- print('\n'.join(value))
- else:
- print(value)
- any_display_options = 1
-
- return any_display_options
-
- def print_command_list(self, commands, header, max_length):
- """Print a subset of the list of all commands -- used by
- 'print_commands()'.
- """
- print(header + ":")
-
- for cmd in commands:
- klass = self.cmdclass.get(cmd)
- if not klass:
- klass = self.get_command_class(cmd)
- try:
- description = klass.description
- except AttributeError:
- description = "(no description available)"
-
- print(" %-*s %s" % (max_length, cmd, description))
-
- def print_commands(self):
- """Print out a help message listing all available commands with a
- description of each. The list is divided into "standard commands"
- (listed in distutils.command.__all__) and "extra commands"
- (mentioned in self.cmdclass, but not a standard command). The
- descriptions come from the command class attribute
- 'description'.
- """
- import distutils.command
-
- std_commands = distutils.command.__all__
- is_std = {}
- for cmd in std_commands:
- is_std[cmd] = 1
-
- extra_commands = []
- for cmd in self.cmdclass.keys():
- if not is_std.get(cmd):
- extra_commands.append(cmd)
-
- max_length = 0
- for cmd in std_commands + extra_commands:
- if len(cmd) > max_length:
- max_length = len(cmd)
-
- self.print_command_list(std_commands, "Standard commands", max_length)
- if extra_commands:
- print()
- self.print_command_list(extra_commands, "Extra commands", max_length)
-
- def get_command_list(self):
- """Get a list of (command, description) tuples.
- The list is divided into "standard commands" (listed in
- distutils.command.__all__) and "extra commands" (mentioned in
- self.cmdclass, but not a standard command). The descriptions come
- from the command class attribute 'description'.
- """
- # Currently this is only used on Mac OS, for the Mac-only GUI
- # Distutils interface (by Jack Jansen)
- import distutils.command
-
- std_commands = distutils.command.__all__
- is_std = {}
- for cmd in std_commands:
- is_std[cmd] = 1
-
- extra_commands = []
- for cmd in self.cmdclass.keys():
- if not is_std.get(cmd):
- extra_commands.append(cmd)
-
- rv = []
- for cmd in std_commands + extra_commands:
- klass = self.cmdclass.get(cmd)
- if not klass:
- klass = self.get_command_class(cmd)
- try:
- description = klass.description
- except AttributeError:
- description = "(no description available)"
- rv.append((cmd, description))
- return rv
-
- # -- Command class/object methods ----------------------------------
-
- def get_command_packages(self):
- """Return a list of packages from which commands are loaded."""
- pkgs = self.command_packages
- if not isinstance(pkgs, list):
- if pkgs is None:
- pkgs = ''
- pkgs = [pkg.strip() for pkg in pkgs.split(',') if pkg != '']
- if "distutils.command" not in pkgs:
- pkgs.insert(0, "distutils.command")
- self.command_packages = pkgs
- return pkgs
-
- def get_command_class(self, command):
- """Return the class that implements the Distutils command named by
- 'command'. First we check the 'cmdclass' dictionary; if the
- command is mentioned there, we fetch the class object from the
- dictionary and return it. Otherwise we load the command module
- ("distutils.command." + command) and fetch the command class from
- the module. The loaded class is also stored in 'cmdclass'
- to speed future calls to 'get_command_class()'.
-
- Raises DistutilsModuleError if the expected module could not be
- found, or if that module does not define the expected class.
- """
- klass = self.cmdclass.get(command)
- if klass:
- return klass
-
- for pkgname in self.get_command_packages():
- module_name = "{}.{}".format(pkgname, command)
- klass_name = command
-
- try:
- __import__(module_name)
- module = sys.modules[module_name]
- except ImportError:
- continue
-
- try:
- klass = getattr(module, klass_name)
- except AttributeError:
- raise DistutilsModuleError(
- "invalid command '%s' (no class '%s' in module '%s')"
- % (command, klass_name, module_name)
- )
-
- self.cmdclass[command] = klass
- return klass
-
- raise DistutilsModuleError("invalid command '%s'" % command)
-
- def get_command_obj(self, command, create=1):
- """Return the command object for 'command'. Normally this object
- is cached on a previous call to 'get_command_obj()'; if no command
- object for 'command' is in the cache, then we either create and
- return it (if 'create' is true) or return None.
- """
- cmd_obj = self.command_obj.get(command)
- if not cmd_obj and create:
- if DEBUG:
- self.announce(
- "Distribution.get_command_obj(): "
- "creating '%s' command object" % command
- )
-
- klass = self.get_command_class(command)
- cmd_obj = self.command_obj[command] = klass(self)
- self.have_run[command] = 0
-
- # Set any options that were supplied in config files
- # or on the command line. (NB. support for error
- # reporting is lame here: any errors aren't reported
- # until 'finalize_options()' is called, which means
- # we won't report the source of the error.)
- options = self.command_options.get(command)
- if options:
- self._set_command_options(cmd_obj, options)
-
- return cmd_obj
-
- def _set_command_options(self, command_obj, option_dict=None): # noqa: C901
- """Set the options for 'command_obj' from 'option_dict'. Basically
- this means copying elements of a dictionary ('option_dict') to
- attributes of an instance ('command').
-
- 'command_obj' must be a Command instance. If 'option_dict' is not
- supplied, uses the standard option dictionary for this command
- (from 'self.command_options').
- """
- command_name = command_obj.get_command_name()
- if option_dict is None:
- option_dict = self.get_option_dict(command_name)
-
- if DEBUG:
- self.announce(" setting options for '%s' command:" % command_name)
- for (option, (source, value)) in option_dict.items():
- if DEBUG:
- self.announce(" {} = {} (from {})".format(option, value, source))
- try:
- bool_opts = [translate_longopt(o) for o in command_obj.boolean_options]
- except AttributeError:
- bool_opts = []
- try:
- neg_opt = command_obj.negative_opt
- except AttributeError:
- neg_opt = {}
-
- try:
- is_string = isinstance(value, str)
- if option in neg_opt and is_string:
- setattr(command_obj, neg_opt[option], not strtobool(value))
- elif option in bool_opts and is_string:
- setattr(command_obj, option, strtobool(value))
- elif hasattr(command_obj, option):
- setattr(command_obj, option, value)
- else:
- raise DistutilsOptionError(
- "error in %s: command '%s' has no such option '%s'"
- % (source, command_name, option)
- )
- except ValueError as msg:
- raise DistutilsOptionError(msg)
-
- def reinitialize_command(self, command, reinit_subcommands=0):
- """Reinitializes a command to the state it was in when first
- returned by 'get_command_obj()': ie., initialized but not yet
- finalized. This provides the opportunity to sneak option
- values in programmatically, overriding or supplementing
- user-supplied values from the config files and command line.
- You'll have to re-finalize the command object (by calling
- 'finalize_options()' or 'ensure_finalized()') before using it for
- real.
-
- 'command' should be a command name (string) or command object. If
- 'reinit_subcommands' is true, also reinitializes the command's
- sub-commands, as declared by the 'sub_commands' class attribute (if
- it has one). See the "install" command for an example. Only
- reinitializes the sub-commands that actually matter, ie. those
- whose test predicates return true.
-
- Returns the reinitialized command object.
- """
- from distutils.cmd import Command
-
- if not isinstance(command, Command):
- command_name = command
- command = self.get_command_obj(command_name)
- else:
- command_name = command.get_command_name()
-
- if not command.finalized:
- return command
- command.initialize_options()
- command.finalized = 0
- self.have_run[command_name] = 0
- self._set_command_options(command)
-
- if reinit_subcommands:
- for sub in command.get_sub_commands():
- self.reinitialize_command(sub, reinit_subcommands)
-
- return command
-
- # -- Methods that operate on the Distribution ----------------------
-
- def announce(self, msg, level=log.INFO):
- log.log(level, msg)
-
- def run_commands(self):
- """Run each command that was seen on the setup script command line.
- Uses the list of commands found and cache of command objects
- created by 'get_command_obj()'.
- """
- for cmd in self.commands:
- self.run_command(cmd)
-
- # -- Methods that operate on its Commands --------------------------
-
- def run_command(self, command):
- """Do whatever it takes to run a command (including nothing at all,
- if the command has already been run). Specifically: if we have
- already created and run the command named by 'command', return
- silently without doing anything. If the command named by 'command'
- doesn't even have a command object yet, create one. Then invoke
- 'run()' on that command object (or an existing one).
- """
- # Already been here, done that? then return silently.
- if self.have_run.get(command):
- return
-
- log.info("running %s", command)
- cmd_obj = self.get_command_obj(command)
- cmd_obj.ensure_finalized()
- cmd_obj.run()
- self.have_run[command] = 1
-
- # -- Distribution query methods ------------------------------------
-
- def has_pure_modules(self):
- return len(self.packages or self.py_modules or []) > 0
-
- def has_ext_modules(self):
- return self.ext_modules and len(self.ext_modules) > 0
-
- def has_c_libraries(self):
- return self.libraries and len(self.libraries) > 0
-
- def has_modules(self):
- return self.has_pure_modules() or self.has_ext_modules()
-
- def has_headers(self):
- return self.headers and len(self.headers) > 0
-
- def has_scripts(self):
- return self.scripts and len(self.scripts) > 0
-
- def has_data_files(self):
- return self.data_files and len(self.data_files) > 0
-
- def is_pure(self):
- return (
- self.has_pure_modules()
- and not self.has_ext_modules()
- and not self.has_c_libraries()
- )
-
- # -- Metadata query methods ----------------------------------------
-
- # If you're looking for 'get_name()', 'get_version()', and so forth,
- # they are defined in a sneaky way: the constructor binds self.get_XXX
- # to self.metadata.get_XXX. The actual code is in the
- # DistributionMetadata class, below.
-
-
-class DistributionMetadata:
- """Dummy class to hold the distribution meta-data: name, version,
- author, and so forth.
- """
-
- _METHOD_BASENAMES = (
- "name",
- "version",
- "author",
- "author_email",
- "maintainer",
- "maintainer_email",
- "url",
- "license",
- "description",
- "long_description",
- "keywords",
- "platforms",
- "fullname",
- "contact",
- "contact_email",
- "classifiers",
- "download_url",
- # PEP 314
- "provides",
- "requires",
- "obsoletes",
- )
-
- def __init__(self, path=None):
- if path is not None:
- self.read_pkg_file(open(path))
- else:
- self.name = None
- self.version = None
- self.author = None
- self.author_email = None
- self.maintainer = None
- self.maintainer_email = None
- self.url = None
- self.license = None
- self.description = None
- self.long_description = None
- self.keywords = None
- self.platforms = None
- self.classifiers = None
- self.download_url = None
- # PEP 314
- self.provides = None
- self.requires = None
- self.obsoletes = None
-
- def read_pkg_file(self, file):
- """Reads the metadata values from a file object."""
- msg = message_from_file(file)
-
- def _read_field(name):
- value = msg[name]
- if value and value != "UNKNOWN":
- return value
-
- def _read_list(name):
- values = msg.get_all(name, None)
- if values == []:
- return None
- return values
-
- metadata_version = msg['metadata-version']
- self.name = _read_field('name')
- self.version = _read_field('version')
- self.description = _read_field('summary')
- # we are filling author only.
- self.author = _read_field('author')
- self.maintainer = None
- self.author_email = _read_field('author-email')
- self.maintainer_email = None
- self.url = _read_field('home-page')
- self.license = _read_field('license')
-
- if 'download-url' in msg:
- self.download_url = _read_field('download-url')
- else:
- self.download_url = None
-
- self.long_description = _read_field('description')
- self.description = _read_field('summary')
-
- if 'keywords' in msg:
- self.keywords = _read_field('keywords').split(',')
-
- self.platforms = _read_list('platform')
- self.classifiers = _read_list('classifier')
-
- # PEP 314 - these fields only exist in 1.1
- if metadata_version == '1.1':
- self.requires = _read_list('requires')
- self.provides = _read_list('provides')
- self.obsoletes = _read_list('obsoletes')
- else:
- self.requires = None
- self.provides = None
- self.obsoletes = None
-
- def write_pkg_info(self, base_dir):
- """Write the PKG-INFO file into the release tree."""
- with open(
- os.path.join(base_dir, 'PKG-INFO'), 'w', encoding='UTF-8'
- ) as pkg_info:
- self.write_pkg_file(pkg_info)
-
- def write_pkg_file(self, file):
- """Write the PKG-INFO format data to a file object."""
- version = '1.0'
- if (
- self.provides
- or self.requires
- or self.obsoletes
- or self.classifiers
- or self.download_url
- ):
- version = '1.1'
-
- # required fields
- file.write('Metadata-Version: %s\n' % version)
- file.write('Name: %s\n' % self.get_name())
- file.write('Version: %s\n' % self.get_version())
-
- def maybe_write(header, val):
- if val:
- file.write(f"{header}: {val}\n")
-
- # optional fields
- maybe_write("Summary", self.get_description())
- maybe_write("Home-page", self.get_url())
- maybe_write("Author", self.get_contact())
- maybe_write("Author-email", self.get_contact_email())
- maybe_write("License", self.get_license())
- maybe_write("Download-URL", self.download_url)
- maybe_write("Description", rfc822_escape(self.get_long_description() or ""))
- maybe_write("Keywords", ",".join(self.get_keywords()))
-
- self._write_list(file, 'Platform', self.get_platforms())
- self._write_list(file, 'Classifier', self.get_classifiers())
-
- # PEP 314
- self._write_list(file, 'Requires', self.get_requires())
- self._write_list(file, 'Provides', self.get_provides())
- self._write_list(file, 'Obsoletes', self.get_obsoletes())
-
- def _write_list(self, file, name, values):
- values = values or []
- for value in values:
- file.write('{}: {}\n'.format(name, value))
-
- # -- Metadata query methods ----------------------------------------
-
- def get_name(self):
- return self.name or "UNKNOWN"
-
- def get_version(self):
- return self.version or "0.0.0"
-
- def get_fullname(self):
- return "{}-{}".format(self.get_name(), self.get_version())
-
- def get_author(self):
- return self.author
-
- def get_author_email(self):
- return self.author_email
-
- def get_maintainer(self):
- return self.maintainer
-
- def get_maintainer_email(self):
- return self.maintainer_email
-
- def get_contact(self):
- return self.maintainer or self.author
-
- def get_contact_email(self):
- return self.maintainer_email or self.author_email
-
- def get_url(self):
- return self.url
-
- def get_license(self):
- return self.license
-
- get_licence = get_license
-
- def get_description(self):
- return self.description
-
- def get_long_description(self):
- return self.long_description
-
- def get_keywords(self):
- return self.keywords or []
-
- def set_keywords(self, value):
- self.keywords = _ensure_list(value, 'keywords')
-
- def get_platforms(self):
- return self.platforms
-
- def set_platforms(self, value):
- self.platforms = _ensure_list(value, 'platforms')
-
- def get_classifiers(self):
- return self.classifiers or []
-
- def set_classifiers(self, value):
- self.classifiers = _ensure_list(value, 'classifiers')
-
- def get_download_url(self):
- return self.download_url
-
- # PEP 314
- def get_requires(self):
- return self.requires or []
-
- def set_requires(self, value):
- import distutils.versionpredicate
-
- for v in value:
- distutils.versionpredicate.VersionPredicate(v)
- self.requires = list(value)
-
- def get_provides(self):
- return self.provides or []
-
- def set_provides(self, value):
- value = [v.strip() for v in value]
- for v in value:
- import distutils.versionpredicate
-
- distutils.versionpredicate.split_provision(v)
- self.provides = value
-
- def get_obsoletes(self):
- return self.obsoletes or []
-
- def set_obsoletes(self, value):
- import distutils.versionpredicate
-
- for v in value:
- distutils.versionpredicate.VersionPredicate(v)
- self.obsoletes = list(value)
-
-
-def fix_help_options(options):
- """Convert a 4-tuple 'help_options' list as found in various command
- classes to the 3-tuple form required by FancyGetopt.
- """
- new_options = []
- for help_tuple in options:
- new_options.append(help_tuple[0:3])
- return new_options
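
As a rough sketch (not taken from the original file), a `Distribution` built from setup-style keyword arguments exposes delegated metadata getters and per-command option dictionaries; the package name and option values here are assumptions.

```python
# Hypothetical example of constructing a Distribution directly.
from distutils.dist import Distribution

dist = Distribution({
    "name": "example-pkg",
    "version": "0.1",
    # per-command options are stored via get_option_dict(), tagged "setup script"
    "options": {"build": {"build_base": "custom_build"}},
})

print(dist.get_name(), dist.get_fullname())  # example-pkg example-pkg-0.1
print(dist.get_option_dict("build"))         # {'build_base': ('setup script', 'custom_build')}
```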
diff --git a/spaces/Boranbruh/ehartford-WizardLM-7B-Uncensored/app.py b/spaces/Boranbruh/ehartford-WizardLM-7B-Uncensored/app.py
deleted file mode 100644
index 106e50a840aa4fc20ddbdd7f84cd86ada5510ae3..0000000000000000000000000000000000000000
--- a/spaces/Boranbruh/ehartford-WizardLM-7B-Uncensored/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ehartford/WizardLM-7B-Uncensored").launch()
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/thrust/examples/cpp_integration/device.h b/spaces/CVPR/LIVE/thrust/examples/cpp_integration/device.h
deleted file mode 100644
index e398edf3361d35225e1ba829baab9c61ada55367..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/examples/cpp_integration/device.h
+++ /dev/null
@@ -1,7 +0,0 @@
-#pragma once
-
-#include <thrust/host_vector.h>
-
-// function prototype
-void sort_on_device(thrust::host_vector<int>& V);
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/is_iterator_category.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/is_iterator_category.h
deleted file mode 100644
index b538358be33bf6bfcda040bfada6fa74cf8e18b8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/is_iterator_category.h
+++ /dev/null
@@ -1,60 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/type_traits.h>
-#include <thrust/iterator/iterator_categories.h>
-
-namespace thrust
-{
-
-namespace detail
-{
-
-template <typename T>
-  struct is_host_iterator_category
-    : thrust::detail::or_<
-        thrust::detail::is_convertible<T, thrust::input_host_iterator_tag>,
-        thrust::detail::is_convertible<T, thrust::output_host_iterator_tag>
-      >
-{
-}; // end is_host_iterator_category
-
-template <typename T>
-  struct is_device_iterator_category
-    : thrust::detail::or_<
-        thrust::detail::is_convertible<T, thrust::input_device_iterator_tag>,
-        thrust::detail::is_convertible<T, thrust::output_device_iterator_tag>
-      >
-{
-}; // end is_device_iterator_category
-
-
-template <typename T>
-  struct is_iterator_category
-    : thrust::detail::or_<
-        is_host_iterator_category<T>,
-        is_device_iterator_category<T>
-      >
-{
-}; // end is_iterator_category
-
-} // end detail
-
-} // end thrust
-
diff --git a/spaces/CVPR/lama-example/bin/report_from_tb.py b/spaces/CVPR/lama-example/bin/report_from_tb.py
deleted file mode 100644
index 9a444e6cd8027f88bd34adfc0b1dd000bbb4b2be..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/bin/report_from_tb.py
+++ /dev/null
@@ -1,83 +0,0 @@
-#!/usr/bin/env python3
-
-import glob
-import os
-import re
-
-import tensorflow as tf
-from torch.utils.tensorboard import SummaryWriter
-
-
-GROUPING_RULES = [
-    re.compile(r'^(?P<group>train|test|val|extra_val_.*?(256|512))_(?P<title>.*)', re.I)
-]
-
-
-DROP_RULES = [
- re.compile(r'_std$', re.I)
-]
-
-
-def need_drop(tag):
- for rule in DROP_RULES:
- if rule.search(tag):
- return True
- return False
-
-
-def get_group_and_title(tag):
- for rule in GROUPING_RULES:
- match = rule.search(tag)
- if match is None:
- continue
- return match.group('group'), match.group('title')
- return None, None
-
-
-def main(args):
- os.makedirs(args.outdir, exist_ok=True)
-
- ignored_events = set()
-
- for orig_fname in glob.glob(args.inglob):
- cur_dirpath = os.path.dirname(orig_fname) # remove filename, this should point to "version_0" directory
- subdirname = os.path.basename(cur_dirpath) # == "version_0" most of time
- exp_root_path = os.path.dirname(cur_dirpath) # remove "version_0"
- exp_name = os.path.basename(exp_root_path)
-
- writers_by_group = {}
-
- for e in tf.compat.v1.train.summary_iterator(orig_fname):
- for v in e.summary.value:
- if need_drop(v.tag):
- continue
-
- cur_group, cur_title = get_group_and_title(v.tag)
- if cur_group is None:
- if v.tag not in ignored_events:
- print(f'WARNING: Could not detect group for {v.tag}, ignoring it')
- ignored_events.add(v.tag)
- continue
-
- cur_writer = writers_by_group.get(cur_group, None)
- if cur_writer is None:
- if args.include_version:
- cur_outdir = os.path.join(args.outdir, exp_name, f'{subdirname}_{cur_group}')
- else:
- cur_outdir = os.path.join(args.outdir, exp_name, cur_group)
- cur_writer = SummaryWriter(cur_outdir)
- writers_by_group[cur_group] = cur_writer
-
- cur_writer.add_scalar(cur_title, v.simple_value, global_step=e.step, walltime=e.wall_time)
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('inglob', type=str)
- aparser.add_argument('outdir', type=str)
- aparser.add_argument('--include-version', action='store_true',
- help='Include subdirectory name e.g. "version_0" into output path')
-
- main(aparser.parse_args())
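
A small sketch (with a hypothetical tag value) of how the grouping rule above splits a TensorBoard scalar tag into a writer group and a scalar title:

```python
# Hypothetical tag, matched against the same grouping rule used above.
import re

rule = re.compile(r'^(?P<group>train|test|val|extra_val_.*?(256|512))_(?P<title>.*)', re.I)

m = rule.search("val_ssim_mean")
print(m.group("group"), m.group("title"))  # -> val ssim_mean
```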
diff --git a/spaces/ChallengeHub/Chinese-LangChain/assets/custom.js b/spaces/ChallengeHub/Chinese-LangChain/assets/custom.js
deleted file mode 100644
index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000
--- a/spaces/ChallengeHub/Chinese-LangChain/assets/custom.js
+++ /dev/null
@@ -1 +0,0 @@
-// custom javascript here
\ No newline at end of file
diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/app_scribble_interactive.py b/spaces/ChrisCaviar/ControlNet-v1-1/app_scribble_interactive.py
deleted file mode 100644
index 36663c5a1fa37492bfa717c301d33a6b0b49fff5..0000000000000000000000000000000000000000
--- a/spaces/ChrisCaviar/ControlNet-v1-1/app_scribble_interactive.py
+++ /dev/null
@@ -1,112 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-import numpy as np
-
-from utils import randomize_seed_fn
-
-
-def create_canvas(w, h):
- return np.zeros(shape=(h, w, 3), dtype=np.uint8) + 255
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- canvas_width = gr.Slider(label='Canvas width',
- minimum=256,
- maximum=512,
- value=512,
- step=1)
- canvas_height = gr.Slider(label='Canvas height',
- minimum=256,
- maximum=512,
- value=512,
- step=1)
- create_button = gr.Button('Open drawing canvas!')
- image = gr.Image(tool='sketch', brush_radius=10)
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button('Run')
- with gr.Accordion('Advanced options', open=False):
- num_samples = gr.Slider(label='Number of images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- num_steps = gr.Slider(label='Number of steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- randomize=True)
- randomize_seed = gr.Checkbox(label='Randomize seed',
- value=True)
- a_prompt = gr.Textbox(
- label='Additional prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output', show_label=False).style(
- columns=2, object_fit='scale-down')
-
- create_button.click(fn=create_canvas,
- inputs=[canvas_width, canvas_height],
- outputs=image,
- queue=False)
- inputs = [
- image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- num_steps,
- guidance_scale,
- seed,
- ]
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- )
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model(task_name='scribble')
- demo = create_demo(model.process_scribble_interactive)
- demo.queue().launch()
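
The demo above leans on Gradio's event chaining: each trigger first re-randomizes the seed, then runs generation. A stripped-down sketch of that pattern, with a placeholder generation function instead of the real ControlNet model:

```python
# Minimal sketch of the randomize-seed -> generate chaining used above.
# generate() is a placeholder, not the actual model.
import random
import gradio as gr

def randomize_seed_fn(seed: int, randomize: bool) -> int:
    return random.randint(0, 1_000_000) if randomize else seed

def generate(prompt: str, seed: int) -> str:
    return f"would generate {prompt!r} with seed {seed}"

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    seed = gr.Slider(0, 1_000_000, step=1, value=0, label="Seed")
    randomize = gr.Checkbox(value=True, label="Randomize seed")
    result = gr.Textbox(label="Result")
    run = gr.Button("Run")
    run.click(randomize_seed_fn, inputs=[seed, randomize], outputs=seed).then(
        generate, inputs=[prompt, seed], outputs=result
    )

if __name__ == "__main__":
    demo.launch()
```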
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/request/request.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/request/request.js
deleted file mode 100644
index a6b6121bcace2412f0a14e902a15bff2e677839c..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/request/request.js
+++ /dev/null
@@ -1,65 +0,0 @@
-import { sendSocketList, Config, Version } from '../../components/index.js'
-
-Bot.on('request', async e => {
- if (sendSocketList.length == 0) return false
- let other = {}
- switch (e.request_type) {
- case 'friend':
- other.request_type = 'friend'
- switch (e.sub_type) {
- case 'add':
- if (!Config.friendAdd) return false
- break;
- default:
- return false
- }
- break;
- case 'group':
- other.request_type = 'group'
- other.group_id = e.group_id
- switch (e.sub_type) {
- case 'invite':
- if (!Config.groupInvite) return false
- other.sub_type = 'invite'
- break;
- case 'add':
- if (!Config.groupAdd) return false
- other.sub_type = 'add'
- break;
-
- default:
- return false;
- }
- break;
-
- default:
- return false;
- }
-
- let msg = {
- time: e.time,
- self_id: e.self_id,
- post_type: 'request',
- flag: e.flag,
- user_id: e.user_id,
- comment: e.comment,
- ...other
- }
- msg = JSON.stringify(msg)
- sendSocketList.forEach(i => {
- if (i.status == 1) {
- switch (Number(i.type)) {
- case 1:
- case 2:
- if (Version.isTrss) {
- if (i.uin != e.self_id) return
-                    if (!Version.protocol.some(p => p == e.bot?.version?.name)) return
- }
- i.ws.send(msg)
- break;
- default:
- break;
- }
- }
- })
-})
\ No newline at end of file
diff --git a/spaces/CofAI/tv/README.md b/spaces/CofAI/tv/README.md
deleted file mode 100644
index dfd8605256f3f37502757f905b2253181226f5ad..0000000000000000000000000000000000000000
--- a/spaces/CofAI/tv/README.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: CofTV
-emoji: 📺☕📺
-colorFrom: green
-colorTo: green
-sdk: docker
-pinned: false
-app_port: 7860
-duplicated_from: TNR-5/AI-WebTV
----
-
-A generative AI WebTV, powered by Zeroscope and Hugging Face.
-
-This is just the frontend part; you will need the media-server (also open source) to make it work.
-
-Warning: this is an experimental, proof-of-concept project made in a few days.
-
-It is not ready for production use by other people! Also, it uses models that should only be used for research purposes (no commercial usage).
-
-Note: because the stream uses FLV, it doesn't work on iPhone. There is however a [Twitch mirror here](https://www.twitch.tv/ai_webtv).
-
-The main code of the webtv is located inside the [media-server](https://huggingface.co/spaces/jbilcke-hf/media-server/tree/main) :
-
-manual steps:
-- a human writes a short paragraph describing a multi-shot video sequence
-- the paragraph is manually submitted to GPT-4 to generate a list of video captions for each shot (the system instructions are extracts from a stable diffusion guide)
-- commit the captions to the [playlist database](https://huggingface.co/spaces/jbilcke-hf/media-server/raw/main/database.json)
-
-Inside the `media-server` space (generation process running in the background):
-- for each prompt in the database
-- generate a silent 3-second video clip with Zeroscope V2 576w (hosted on Hugging Face Spaces)
-- upscale the clip with Zeroscope V2 XL (also a HF Space)
-- perform frame interpolation with FILM (also a HF Space)
-- storage in the Persistent Storage of the media-server Space
-
-Inside the `media-server` space (streaming process running in the foreground):
-- for each video file in the persistent storage folder
-- add it to a new FFmpeg playlist (it's just a .txt file)
-- broadcast it over the RTMP protocol using FFmpeg (in FLV format)
-- distribution of the stream using node-media-server (see the sketch below)
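
The playlist and broadcast step can be illustrated with a minimal sketch. The directory, playlist path and RTMP ingest URL below are illustrative assumptions, not the Space's actual configuration:

```python
# Minimal sketch of the "FFmpeg playlist + RTMP broadcast" step described above.
# VIDEO_DIR, PLAYLIST and RTMP_URL are assumptions for illustration only.
import os
import subprocess

VIDEO_DIR = "/data/videos"                 # persistent storage folder (assumed)
PLAYLIST = "/tmp/playlist.txt"             # the FFmpeg playlist is just a .txt file
RTMP_URL = "rtmp://localhost/live/webtv"   # assumed node-media-server ingest URL

# Build a concat playlist from every clip in persistent storage.
with open(PLAYLIST, "w") as f:
    for name in sorted(os.listdir(VIDEO_DIR)):
        if name.endswith(".mp4"):
            f.write(f"file '{os.path.join(VIDEO_DIR, name)}'\n")

# Broadcast the playlist over RTMP in FLV format (clips are silent, so audio is dropped).
subprocess.run([
    "ffmpeg", "-re", "-f", "concat", "-safe", "0", "-i", PLAYLIST,
    "-c:v", "libx264", "-an", "-f", "flv", RTMP_URL,
])
```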
-
-Inside the `AI-WebTV` space:
-- display the stream using `mpegts.js`
-- this doesn't work on iPhone, but now there is also a Twitch mirror
\ No newline at end of file
diff --git a/spaces/Dagfinn1962/stablediffusion-models/appworks.py b/spaces/Dagfinn1962/stablediffusion-models/appworks.py
deleted file mode 100644
index 878c757de65298f3affa61b5456b53e02dadb9fd..0000000000000000000000000000000000000000
--- a/spaces/Dagfinn1962/stablediffusion-models/appworks.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import gradio as gr
-import os
-import sys
-from pathlib import Path
-
-models = [
- {"name": "Stable Diffusion 1.4","url": "CompVis/stable-diffusion-v1-4"},
- {"name": "Stable Diffusion 1.5","url": "runwayml/stable-diffusion-v1-5"},
- ]
-
-current_model = models[0]
-
-text_gen = gr.Interface.load("spaces/daspartho/prompt-extend")
-
-models2 = []
-for model in models:
- model_url = f"models/{model['url']}"
- loaded_model = gr.Interface.load(model_url, live=True, preprocess=True)
- models2.append(loaded_model)
-
-
-def text_it(inputs, text_gen=text_gen):
- return text_gen(inputs)
-
-
-def set_model(current_model_index):
- global current_model
- current_model = models[current_model_index]
- return gr.update(value=f"{current_model['name']}")
-
-
-def send_it(inputs, model_choice):
- proc = models2[model_choice]
- return proc(inputs)
-
-
-with gr.Blocks() as myface:
- gr.HTML("""
- """
-
- )
- with gr.Row():
- input_text = gr.Textbox(label=" ",placeholder="PROMPT HERE ",lines=4)
- # Model selection dropdown
- model_name1 = gr.Dropdown(
- label=" ",
- choices=[m["name"] for m in models],
- type="index",
- value=current_model["name"],
- interactive=True,
-
-
- )
- with gr.Row():
- see_prompts = gr.Button("Generate Prompts")
-        run = gr.Button("Generate Images", variant="primary")
-
- with gr.Row():
- output1 = gr.Image(label="")
- output2 = gr.Image(label="")
- output3 = gr.Image(label="")
- with gr.Row():
- magic1 = gr.Textbox(label="Generated Prompt", lines=2)
- magic2 = gr.Textbox(label="Generated Prompt", lines=2)
- magic3 = gr.Textbox(label="Generated Prompt", lines=2)
-
- model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3,])
-
- run.click(send_it, inputs=[magic1, model_name1], outputs=[output1])
- run.click(send_it, inputs=[magic2, model_name1], outputs=[output2])
- run.click(send_it, inputs=[magic3, model_name1], outputs=[output3])
-
-
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic1])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic2])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic3])
-
-
-myface.queue(concurrency_count=200)
-myface.launch(inline=True, show_api=False, max_threads=400)
\ No newline at end of file
diff --git a/spaces/Detomo/Depth_estimation/layers.py b/spaces/Detomo/Depth_estimation/layers.py
deleted file mode 100644
index 6a67d2b8e8424a4dddae3bb6d5f3f210f68b5460..0000000000000000000000000000000000000000
--- a/spaces/Detomo/Depth_estimation/layers.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from tensorflow.keras.layers import Layer, InputSpec
-import keras.utils.conv_utils as conv_utils
-import tensorflow as tf
-import tensorflow.keras.backend as K
-
-
-def normalize_data_format(value):
- if value is None:
- value = K.image_data_format()
- data_format = value.lower()
- if data_format not in {'channels_first', 'channels_last'}:
- raise ValueError('The `data_format` argument must be one of '
- '"channels_first", "channels_last". Received: ' +
- str(value))
- return data_format
-
-
-class BilinearUpSampling2D(Layer):
- def __init__(self, size=(2, 2), data_format=None, **kwargs):
- super(BilinearUpSampling2D, self).__init__(**kwargs)
- self.data_format = normalize_data_format(data_format)
- self.size = conv_utils.normalize_tuple(size, 2, 'size')
- self.input_spec = InputSpec(ndim=4)
-
- def compute_output_shape(self, input_shape):
- if self.data_format == 'channels_first':
- height = self.size[0] * input_shape[2] if input_shape[2] is not None else None
- width = self.size[1] * input_shape[3] if input_shape[3] is not None else None
- return (input_shape[0],
- input_shape[1],
- height,
- width)
- elif self.data_format == 'channels_last':
- height = self.size[0] * input_shape[1] if input_shape[1] is not None else None
- width = self.size[1] * input_shape[2] if input_shape[2] is not None else None
- return (input_shape[0],
- height,
- width,
- input_shape[3])
-
- def call(self, inputs):
- input_shape = K.shape(inputs)
- if self.data_format == 'channels_first':
- height = self.size[0] * input_shape[2] if input_shape[2] is not None else None
- width = self.size[1] * input_shape[3] if input_shape[3] is not None else None
- elif self.data_format == 'channels_last':
- height = self.size[0] * input_shape[1] if input_shape[1] is not None else None
- width = self.size[1] * input_shape[2] if input_shape[2] is not None else None
-
- return tf.image.resize(inputs, [height, width], method=tf.image.ResizeMethod.BILINEAR)
-
- def get_config(self):
- config = {'size': self.size, 'data_format': self.data_format}
- base_config = super(BilinearUpSampling2D, self).get_config()
- return dict(list(base_config.items()) + list(config.items()))
diff --git a/spaces/Dinoking/Garbage-Classifier-V4/app.py b/spaces/Dinoking/Garbage-Classifier-V4/app.py
deleted file mode 100644
index 4c5de75c254e966d644c56721e6b8bd0d4ace027..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Garbage-Classifier-V4/app.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import gradio as gr
-import tensorflow as tf
-import numpy as np
-from PIL import Image
-import tensorflow.keras as keras
-import keras.applications.vgg16 as vgg16
-
-from tensorflow.keras.models import load_model
-
-# load model
-model = load_model('model6904.h5')
-
-classnames = ['battery','cardboard','clothes','food','glass','medical','metal','paper','plastic','shoes']
-
-
-
-def predict_image(img):
- img_4d=img.reshape(-1,224, 224,3)
- prediction=model.predict(img_4d)[0]
- return {classnames[i]: float(prediction[i]) for i in range(10)}
-
-
-
-image = gr.inputs.Image(shape=(224, 224))
-label = gr.outputs.Label(num_top_classes=3)
-article="
Made by Aditya Narendra with 🖤
"
-
-
-
-gr.Interface(fn=predict_image, inputs=image, title="Garbage Classifier V4-VGG16+SVM",
-             description="This is a Garbage Classification Model trained using VGG16+SVM (20 epochs). Deployed to Hugging Face using Gradio.", outputs=label, article=article, enable_queue=True, interpretation='default').launch(share=True)
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/custom_ops.py b/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/custom_ops.py
deleted file mode 100644
index a09ac5dc2a5de80d22a5593ed7725551737d59af..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/custom_ops.py
+++ /dev/null
@@ -1,171 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://nvlabs.github.io/stylegan2/license.html
-
-"""TensorFlow custom ops builder.
-"""
-
-import os
-import re
-import uuid
-import hashlib
-import tempfile
-import shutil
-import tensorflow as tf
-from tensorflow.python.client import device_lib # pylint: disable=no-name-in-module
-
-#----------------------------------------------------------------------------
-# Global options.
-
-cuda_cache_path = os.path.join(os.path.dirname(__file__), '_cudacache')
-cuda_cache_version_tag = 'v1'
-do_not_hash_included_headers = False # Speed up compilation by assuming that headers included by the CUDA code never change. Unsafe!
-verbose = True # Print status messages to stdout.
-
-compiler_bindir_search_path = [
- 'C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.14.26428/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.23.28105/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio 14.0/vc/bin',
-]
-
-#----------------------------------------------------------------------------
-# Internal helper funcs.
-
-def _find_compiler_bindir():
- for compiler_path in compiler_bindir_search_path:
- if os.path.isdir(compiler_path):
- return compiler_path
- return None
-
-def _get_compute_cap(device):
- caps_str = device.physical_device_desc
- m = re.search('compute capability: (\\d+).(\\d+)', caps_str)
- major = m.group(1)
- minor = m.group(2)
- return (major, minor)
-
-def _get_cuda_gpu_arch_string():
- gpus = [x for x in device_lib.list_local_devices() if x.device_type == 'GPU']
- if len(gpus) == 0:
- raise RuntimeError('No GPU devices found')
- (major, minor) = _get_compute_cap(gpus[0])
- return 'sm_%s%s' % (major, minor)
-
-def _run_cmd(cmd):
- with os.popen(cmd) as pipe:
- output = pipe.read()
- status = pipe.close()
- if status is not None:
- raise RuntimeError('NVCC returned an error. See below for full command line and output log:\n\n%s\n\n%s' % (cmd, output))
-
-def _prepare_nvcc_cli(opts):
- cmd = 'nvcc ' + opts.strip()
- cmd += ' --disable-warnings'
- cmd += ' --include-path "%s"' % tf.sysconfig.get_include()
- cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'protobuf_archive', 'src')
- cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'com_google_absl')
- cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'eigen_archive')
-
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- # Require that _find_compiler_bindir succeeds on Windows. Allow
- # nvcc to use whatever is the default on Linux.
- if os.name == 'nt':
- raise RuntimeError('Could not find MSVC/GCC/CLANG installation on this computer. Check compiler_bindir_search_path list in "%s".' % __file__)
- else:
- cmd += ' --compiler-bindir "%s"' % compiler_bindir
- cmd += ' 2>&1'
- return cmd
-
-#----------------------------------------------------------------------------
-# Main entry point.
-
-_plugin_cache = dict()
-
-def get_plugin(cuda_file):
- cuda_file_base = os.path.basename(cuda_file)
- cuda_file_name, cuda_file_ext = os.path.splitext(cuda_file_base)
-
- # Already in cache?
- if cuda_file in _plugin_cache:
- return _plugin_cache[cuda_file]
-
- # Setup plugin.
- if verbose:
- print('Setting up TensorFlow plugin "%s": ' % cuda_file_base, end='', flush=True)
- try:
- # Hash CUDA source.
- md5 = hashlib.md5()
- with open(cuda_file, 'rb') as f:
- md5.update(f.read())
- md5.update(b'\n')
-
- # Hash headers included by the CUDA code by running it through the preprocessor.
- if not do_not_hash_included_headers:
- if verbose:
- print('Preprocessing... ', end='', flush=True)
- with tempfile.TemporaryDirectory() as tmp_dir:
- tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + cuda_file_ext)
- _run_cmd(_prepare_nvcc_cli('"%s" --preprocess -o "%s" --keep --keep-dir "%s"' % (cuda_file, tmp_file, tmp_dir)))
- with open(tmp_file, 'rb') as f:
- bad_file_str = ('"' + cuda_file.replace('\\', '/') + '"').encode('utf-8') # __FILE__ in error check macros
- good_file_str = ('"' + cuda_file_base + '"').encode('utf-8')
- for ln in f:
- if not ln.startswith(b'# ') and not ln.startswith(b'#line '): # ignore line number pragmas
- ln = ln.replace(bad_file_str, good_file_str)
- md5.update(ln)
- md5.update(b'\n')
-
- # Select compiler options.
- compile_opts = ''
- if os.name == 'nt':
- compile_opts += '"%s"' % os.path.join(tf.sysconfig.get_lib(), 'python', '_pywrap_tensorflow_internal.lib')
- elif os.name == 'posix':
- compile_opts += '"%s"' % os.path.join(tf.sysconfig.get_lib(), 'python', '_pywrap_tensorflow_internal.so')
- compile_opts += ' --compiler-options \'-fPIC -D_GLIBCXX_USE_CXX11_ABI=0\''
- else:
- assert False # not Windows or Linux, w00t?
- compile_opts += ' --gpu-architecture=%s' % _get_cuda_gpu_arch_string()
- compile_opts += ' --use_fast_math'
- nvcc_cmd = _prepare_nvcc_cli(compile_opts)
-
- # Hash build configuration.
- md5.update(('nvcc_cmd: ' + nvcc_cmd).encode('utf-8') + b'\n')
- md5.update(('tf.VERSION: ' + tf.VERSION).encode('utf-8') + b'\n')
- md5.update(('cuda_cache_version_tag: ' + cuda_cache_version_tag).encode('utf-8') + b'\n')
-
- # Compile if not already compiled.
- bin_file_ext = '.dll' if os.name == 'nt' else '.so'
- bin_file = os.path.join(cuda_cache_path, cuda_file_name + '_' + md5.hexdigest() + bin_file_ext)
- if not os.path.isfile(bin_file):
- if verbose:
- print('Compiling... ', end='', flush=True)
- with tempfile.TemporaryDirectory() as tmp_dir:
- tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + bin_file_ext)
- _run_cmd(nvcc_cmd + ' "%s" --shared -o "%s" --keep --keep-dir "%s"' % (cuda_file, tmp_file, tmp_dir))
- os.makedirs(cuda_cache_path, exist_ok=True)
- intermediate_file = os.path.join(cuda_cache_path, cuda_file_name + '_' + uuid.uuid4().hex + '_tmp' + bin_file_ext)
- shutil.copyfile(tmp_file, intermediate_file)
- os.rename(intermediate_file, bin_file) # atomic
-
- # Load.
- if verbose:
- print('Loading... ', end='', flush=True)
- plugin = tf.load_op_library(bin_file)
-
- # Add to cache.
- _plugin_cache[cuda_file] = plugin
- if verbose:
- print('Done.', flush=True)
- return plugin
-
- except:
- if verbose:
- print('Failed!', flush=True)
- raise
-
-#----------------------------------------------------------------------------
diff --git a/spaces/ECCV2022/storydalle/dalle/models/stage1/layers.py b/spaces/ECCV2022/storydalle/dalle/models/stage1/layers.py
deleted file mode 100644
index 16c758c98089b6278190b7b52479df0eed941d9f..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/storydalle/dalle/models/stage1/layers.py
+++ /dev/null
@@ -1,373 +0,0 @@
-# ------------------------------------------------------------------------------------
-# Modified from VQGAN (https://github.com/CompVis/taming-transformers)
-# Copyright (c) 2020 Patrick Esser and Robin Rombach and Björn Ommer. All Rights Reserved.
-# ------------------------------------------------------------------------------------
-
-import torch
-import torch.nn as nn
-from typing import Tuple, Optional
-
-
-def nonlinearity(x):
- # swish
- return x*torch.sigmoid(x)
-
-
-def Normalize(in_channels):
- return torch.nn.GroupNorm(num_groups=32,
- num_channels=in_channels,
- eps=1e-6,
- affine=True)
-
-
-class Upsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
- if self.with_conv:
- x = self.conv(x)
- return x
-
-
-class Downsample(nn.Module):
- def __init__(self, in_channels, with_conv):
- super().__init__()
- self.with_conv = with_conv
- if self.with_conv:
- # no asymmetric padding in torch conv, must do it ourselves
- self.conv = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=3,
- stride=2,
- padding=0)
-
- def forward(self, x):
- if self.with_conv:
- pad = (0, 1, 0, 1)
- x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
- x = self.conv(x)
- else:
- x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)
- return x
-
-
-class ResnetBlock(nn.Module):
- def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,
- dropout, temb_channels=512):
- assert temb_channels == 0
- super().__init__()
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.use_conv_shortcut = conv_shortcut
-
- self.norm1 = Normalize(in_channels)
- self.conv1 = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- self.norm2 = Normalize(out_channels)
- self.dropout = torch.nn.Dropout(dropout)
- self.conv2 = torch.nn.Conv2d(out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- self.conv_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1)
- else:
- self.nin_shortcut = torch.nn.Conv2d(in_channels,
- out_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x, temb=None):
- assert temb is None
-
- h = x
- h = self.norm1(h)
- h = nonlinearity(h)
- h = self.conv1(h)
-
- h = self.norm2(h)
- h = nonlinearity(h)
- h = self.dropout(h)
- h = self.conv2(h)
-
- if self.in_channels != self.out_channels:
- if self.use_conv_shortcut:
- x = self.conv_shortcut(x)
- else:
- x = self.nin_shortcut(x)
- return x+h
-
-
-class AttnBlock(nn.Module):
- def __init__(self, in_channels):
- super().__init__()
- self.in_channels = in_channels
-
- self.norm = Normalize(in_channels)
- self.q = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.k = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.v = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.proj_out = torch.nn.Conv2d(in_channels,
- in_channels,
- kernel_size=1,
- stride=1,
- padding=0)
-
- def forward(self, x):
- h_ = x
- h_ = self.norm(h_)
- q = self.q(h_)
- k = self.k(h_)
- v = self.v(h_)
-
- # compute attention
- b, c, h, w = q.shape
- q = q.reshape(b, c, h*w)
- q = q.permute(0, 2, 1) # b,hw,c
- k = k.reshape(b, c, h*w) # b,c,hw
- w_ = torch.bmm(q, k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j]
- w_ = w_ * (int(c)**(-0.5))
- w_ = torch.nn.functional.softmax(w_, dim=2)
-
- # attend to values
- v = v.reshape(b, c, h*w)
- w_ = w_.permute(0, 2, 1) # b,hw,hw (first hw of k, second of q)
- h_ = torch.bmm(v, w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]
- h_ = h_.reshape(b, c, h, w)
-
- h_ = self.proj_out(h_)
- return x+h_
-
-
-class Encoder(nn.Module):
- def __init__(self,
- *, # forced to use named arguments
- ch: int,
- out_ch: int,
- ch_mult: Tuple[int] = (1, 2, 4, 8),
- num_res_blocks: int,
- attn_resolutions: Tuple[int],
- pdrop: float = 0.0,
- resamp_with_conv: bool = True,
- in_channels: int,
- resolution: int,
- z_channels: int,
- double_z: Optional[bool] = None) -> None:
- super().__init__()
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- # downsampling
- self.conv_in = torch.nn.Conv2d(in_channels,
- self.ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- curr_res = resolution
- in_ch_mult = (1,)+tuple(ch_mult)
- self.down = nn.ModuleList()
- for i_level in range(self.num_resolutions):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_in = ch*in_ch_mult[i_level]
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=pdrop))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- down = nn.Module()
- down.block = block
- down.attn = attn
- if i_level != self.num_resolutions-1:
- down.downsample = Downsample(block_in, resamp_with_conv)
- curr_res = curr_res // 2
- self.down.append(down)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=pdrop)
- self.mid.attn_1 = AttnBlock(block_in)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=pdrop)
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- 2*z_channels if double_z else z_channels,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, x):
- assert x.shape[2] == x.shape[3] == self.resolution, \
- "{}, {}".format(x.shape, self.resolution)
-
- # downsampling
- h = self.conv_in(x)
- for i_level in range(self.num_resolutions):
- for i_block in range(self.num_res_blocks):
- h = self.down[i_level].block[i_block](h)
- if len(self.down[i_level].attn) > 0:
- h = self.down[i_level].attn[i_block](h)
- if i_level != self.num_resolutions-1:
- h = self.down[i_level].downsample(h)
-
- # middle
- h = self.mid.block_1(h)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h)
-
- # end
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
-
-
-class Decoder(nn.Module):
- def __init__(self,
- *, # forced to use named arguments
- ch: int,
- out_ch: int,
- ch_mult: Tuple[int] = (1, 2, 4, 8),
- num_res_blocks: int,
- attn_resolutions: Tuple[int],
- pdrop: float = 0.0,
- resamp_with_conv: bool = True,
- in_channels: int,
- resolution: int,
- z_channels: int,
- double_z: bool) -> None:
- super().__init__()
- self.ch = ch
- self.temb_ch = 0
- self.num_resolutions = len(ch_mult)
- self.num_res_blocks = num_res_blocks
- self.resolution = resolution
- self.in_channels = in_channels
-
- # compute in_ch_mult, block_in and curr_res at lowest res
- block_in = ch*ch_mult[self.num_resolutions-1]
- curr_res = resolution // 2**(self.num_resolutions-1)
- self.z_shape = (1, z_channels, curr_res, curr_res)
-
- # z to block_in
- self.conv_in = torch.nn.Conv2d(z_channels,
- block_in,
- kernel_size=3,
- stride=1,
- padding=1)
-
- # middle
- self.mid = nn.Module()
- self.mid.block_1 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=pdrop)
- self.mid.attn_1 = AttnBlock(block_in)
- self.mid.block_2 = ResnetBlock(in_channels=block_in,
- out_channels=block_in,
- temb_channels=self.temb_ch,
- dropout=pdrop)
-
- # upsampling
- self.up = nn.ModuleList()
- for i_level in reversed(range(self.num_resolutions)):
- block = nn.ModuleList()
- attn = nn.ModuleList()
- block_out = ch*ch_mult[i_level]
- for i_block in range(self.num_res_blocks+1):
- block.append(ResnetBlock(in_channels=block_in,
- out_channels=block_out,
- temb_channels=self.temb_ch,
- dropout=pdrop))
- block_in = block_out
- if curr_res in attn_resolutions:
- attn.append(AttnBlock(block_in))
- up = nn.Module()
- up.block = block
- up.attn = attn
- if i_level != 0:
- up.upsample = Upsample(block_in, resamp_with_conv)
- curr_res = curr_res * 2
- self.up.insert(0, up) # prepend to get consistent order
-
- # end
- self.norm_out = Normalize(block_in)
- self.conv_out = torch.nn.Conv2d(block_in,
- out_ch,
- kernel_size=3,
- stride=1,
- padding=1)
-
- def forward(self, z):
- assert z.shape[1:] == self.z_shape[1:]
- self.last_z_shape = z.shape
-
- # z to block_in
- h = self.conv_in(z)
-
- # middle
- h = self.mid.block_1(h)
- h = self.mid.attn_1(h)
- h = self.mid.block_2(h)
-
- # upsampling
- for i_level in reversed(range(self.num_resolutions)):
- for i_block in range(self.num_res_blocks+1):
- h = self.up[i_level].block[i_block](h)
- if len(self.up[i_level].attn) > 0:
- h = self.up[i_level].attn[i_block](h)
- if i_level != 0:
- h = self.up[i_level].upsample(h)
-
- h = self.norm_out(h)
- h = nonlinearity(h)
- h = self.conv_out(h)
- return h
diff --git a/spaces/Eduger/webui/app.py b/spaces/Eduger/webui/app.py
deleted file mode 100644
index d3f5bfd2b231163ae4ba5522beae00043b67ef2d..0000000000000000000000000000000000000000
--- a/spaces/Eduger/webui/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-from subprocess import getoutput
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
-elif("T4" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-
-os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
-os.chdir("/home/user/app/stable-diffusion-webui")
-
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
-os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
-os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
-os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-
-# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header----------------------------
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py")
-os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-# ---------------------------------------------------------------------------------------------------------------------------------------------------
-
-if "IS_SHARED_UI" in os.environ:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
-
- os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
-else:
- # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
- os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")
-
- # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
- #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
- os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
- os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")
-
- # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt")
- os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
- os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
- #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
- #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
- #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
- os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
-
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")
-
- #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
- #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
-
- os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt")
- os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml")
-
- os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test")
-
\ No newline at end of file
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/FAQ.md b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/FAQ.md
deleted file mode 100644
index caa8c08cfe4302eb8812c823569e8a0be30fa49c..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/FAQ.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# FAQ
-
-1. **What is the difference between `--netscale` and `outscale`?**
-
-A: TODO.
-
-1. **How to select models?**
-
-A: TODO.
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/satrn_pipeline.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/satrn_pipeline.py
deleted file mode 100644
index f191c5235a08eeae7d1e61002c00eccbdac39ed4..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/satrn_pipeline.py
+++ /dev/null
@@ -1,44 +0,0 @@
-img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='ResizeOCR',
- height=32,
- min_width=100,
- max_width=100,
- keep_aspect_ratio=False,
- width_downsample_ratio=0.25),
- dict(type='ToTensorOCR'),
- dict(type='NormalizeOCR', **img_norm_cfg),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'ori_shape', 'img_shape', 'text', 'valid_ratio',
- 'resize_shape'
- ]),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiRotateAugOCR',
- rotate_degrees=[0, 90, 270],
- transforms=[
- dict(
- type='ResizeOCR',
- height=32,
- min_width=100,
- max_width=100,
- keep_aspect_ratio=False,
- width_downsample_ratio=0.25),
- dict(type='ToTensorOCR'),
- dict(type='NormalizeOCR', **img_norm_cfg),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'ori_shape', 'img_shape', 'valid_ratio',
- 'resize_shape', 'img_norm_cfg', 'ori_filename'
- ]),
- ])
-]
diff --git a/spaces/FSDL-Fashion/fashion_img_search/fis/app/app.py b/spaces/FSDL-Fashion/fashion_img_search/fis/app/app.py
deleted file mode 100644
index 3488b85cde15d800cd0451f639aa650e74c992c7..0000000000000000000000000000000000000000
--- a/spaces/FSDL-Fashion/fashion_img_search/fis/app/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import os
-from typing import List
-
-import gradio as gr
-import numpy as np
-from datasets import load_dataset
-from PIL.Image import Image as Img
-
-from fis.feature_extraction.pipeline.pipeline import factory
-from fis.utils.constants import ORGANISATION
-from fis.utils.s3 import read_image_from_s3
-
-# Ugly fix of "OMP: Error #15: Initializing libomp.a, but found libiomp5.dylib already initialized."
-os.environ["KMP_DUPLICATE_LIB_OK"] = "True"
-
-
-PIPELINE_NAME = "dummy_swin_pipe"
-
-pipeline = factory.get(PIPELINE_NAME)
-
-DATASET_PATH = os.path.join(ORGANISATION, PIPELINE_NAME)
-dataset = load_dataset(path=DATASET_PATH, split="train")
-dataset.add_faiss_index(column="embedding")
-
-
-def find_most_similar(image: np.ndarray) -> List[Img]:
- image_embeddings = pipeline.encode(image)[0]
-
- scores, samples = dataset.get_nearest_examples("embedding", image_embeddings, k=5)
-
- images = []
- for image_path in samples["path"]:
- image = read_image_from_s3(image_path)
- images.append(image)
-
- return images
-
-
-description = """
-Upload an image, and see the **top 5** most similar items in our database.
-
-Supported categories are clothing, shoes and bags.
-"""
-
-images = [image for image in os.listdir('./images') if '.jpeg' in image]
-images = [os.path.join('./images', image) for image in images]
-
-gr.Interface(
- title='Fashion image search',
- description=description,
- fn=find_most_similar,
- inputs="image",
- outputs=["image" for i in range(5)],
- examples=images,
- cache_examples=True,
-).launch()
diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/TextGenerationUI.html b/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/TextGenerationUI.html
deleted file mode 100644
index 6939ec4c1a2f16f1d7f0c137f66dda11f7abba05..0000000000000000000000000000000000000000
--- a/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/TextGenerationUI.html
+++ /dev/null
@@ -1,105 +0,0 @@
-    User Story Generation
-
-    Generate brand new user stories with a prompt.
-
-    {% with messages = get_flashed_messages(with_categories=true) %}
-    {% for category, message in messages %}
-    {% if category == 'error' %}
-    {{ message }}
-    {% else %}
-    {{ message }}
-    {% endif %}
-    {% endfor %}
-    {% endwith %}
\ No newline at end of file
diff --git a/spaces/Flux9665/IMS-Toucan/Layers/ResidualBlock.py b/spaces/Flux9665/IMS-Toucan/Layers/ResidualBlock.py
deleted file mode 100644
index f80d15901c0c7d4475a5f038e0aa2883aa4f2a48..0000000000000000000000000000000000000000
--- a/spaces/Flux9665/IMS-Toucan/Layers/ResidualBlock.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# -*- coding: utf-8 -*-
-
-"""
-References:
- - https://github.com/jik876/hifi-gan
- - https://github.com/kan-bayashi/ParallelWaveGAN
-"""
-
-import torch
-
-
-class Conv1d(torch.nn.Conv1d):
- """
- Conv1d module with customized initialization.
- """
-
- def __init__(self, *args, **kwargs):
- super(Conv1d, self).__init__(*args, **kwargs)
-
- def reset_parameters(self):
- torch.nn.init.kaiming_normal_(self.weight, nonlinearity="relu")
- if self.bias is not None:
- torch.nn.init.constant_(self.bias, 0.0)
-
-
-class Conv1d1x1(Conv1d):
- """
- 1x1 Conv1d with customized initialization.
- """
-
- def __init__(self, in_channels, out_channels, bias):
- super(Conv1d1x1, self).__init__(in_channels, out_channels, kernel_size=1, padding=0, dilation=1, bias=bias)
-
-
-class HiFiGANResidualBlock(torch.nn.Module):
- """Residual block module in HiFiGAN."""
-
- def __init__(self,
- kernel_size=3,
- channels=512,
- dilations=(1, 3, 5),
- bias=True,
- use_additional_convs=True,
- nonlinear_activation="LeakyReLU",
- nonlinear_activation_params={"negative_slope": 0.1}, ):
- """
- Initialize HiFiGANResidualBlock module.
-
- Args:
- kernel_size (int): Kernel size of dilation convolution layer.
- channels (int): Number of channels for convolution layer.
- dilations (List[int]): List of dilation factors.
- use_additional_convs (bool): Whether to use additional convolution layers.
- bias (bool): Whether to add bias parameter in convolution layers.
- nonlinear_activation (str): Activation function module name.
- nonlinear_activation_params (dict): Hyperparameters for activation function.
- """
- super().__init__()
- self.use_additional_convs = use_additional_convs
- self.convs1 = torch.nn.ModuleList()
- if use_additional_convs:
- self.convs2 = torch.nn.ModuleList()
- assert kernel_size % 2 == 1, "Kernel size must be odd number."
- for dilation in dilations:
- self.convs1 += [torch.nn.Sequential(getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params),
- torch.nn.Conv1d(channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation,
- bias=bias,
- padding=(kernel_size - 1) // 2 * dilation, ), )]
- if use_additional_convs:
- self.convs2 += [torch.nn.Sequential(getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params),
- torch.nn.Conv1d(channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- bias=bias,
- padding=(kernel_size - 1) // 2, ), )]
-
- def forward(self, x):
- """
- Calculate forward propagation.
-
- Args:
- x (Tensor): Input tensor (B, channels, T).
-
- Returns:
- Tensor: Output tensor (B, channels, T).
- """
- for idx in range(len(self.convs1)):
- xt = self.convs1[idx](x)
- if self.use_additional_convs:
- xt = self.convs2[idx](xt)
- x = xt + x
- return x
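
As a quick sanity check of the shapes documented in the docstring above, a hedged usage sketch (the batch size and sequence length are arbitrary examples):

```python
# Illustrative usage of HiFiGANResidualBlock; tensor sizes are arbitrary examples.
import torch

block = HiFiGANResidualBlock(kernel_size=3, channels=512, dilations=(1, 3, 5))
x = torch.randn(2, 512, 100)   # (B, channels, T)
y = block(x)                   # residual output, same shape: (2, 512, 100)
```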
diff --git a/spaces/ForTheLoveOfML0/X-ray_Classifier/Utils/DR_Utils.py b/spaces/ForTheLoveOfML0/X-ray_Classifier/Utils/DR_Utils.py
deleted file mode 100644
index f222a9bc5e09f5ede0a0bf22f378d253241554ba..0000000000000000000000000000000000000000
--- a/spaces/ForTheLoveOfML0/X-ray_Classifier/Utils/DR_Utils.py
+++ /dev/null
@@ -1,207 +0,0 @@
-import cv2
-from PIL import Image
-import torch
-import matplotlib.pyplot as plt
-import torch.functional as F
-import torch.nn as nn
-import numpy as np
-import albumentations as A
-from albumentations.pytorch import ToTensorV2
-# !pip install efficientnet_pytorch -q
-from efficientnet_pytorch import EfficientNet
-
-if torch.cuda.is_available():
- device = torch.device("cuda")
-else:
- device = torch.device("cpu")
-
-val_transform = A.Compose(
- [
- A.Resize(height=300, width=300),
- A.Normalize(
- mean=[0.3199, 0.2240, 0.1609],
- std=[0.3020, 0.2183, 0.1741],
- max_pixel_value=255.0,
- ),
- ToTensorV2(),
- ]
-)
-
-def transform_image(image_1, image_2, transforms):
- # img_1 = cv2.cvtColor(cv2.imread(image_path_1), cv2.COLOR_BGR2RGB)
- img_1 = transforms(image=np.array(image_1))['image']
- img_1 = img_1.unsqueeze(0)
-
- # img_2 = cv2.cvtColor(cv2.imread(image_path_2), cv2.COLOR_BGR2RGB)
- img_2 = transforms(image=np.array(image_2))['image']
- img_2 = img_2.unsqueeze(0)
- images = {'img1':img_1,'img2':img_2}
- return images
-
-class BasicConv2d(nn.Module):
- def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=False):
- super(BasicConv2d, self).__init__()
- self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,stride=stride,padding=padding,bias=bias)
- self.norm = nn.BatchNorm2d(out_channels, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-
- def forward(self,x):
- x = self.conv1(x)
- x = self.norm(x)
- return x
-
-
-
-class BottleNeck(nn.Module):
- def __init__(self, prev_channels, in_channels, out_channels, kernel_size=3, stride=2, padding=1, reduce=False):
- super(BottleNeck, self).__init__()
- self.reduce = reduce
-
- self.ReduceBlock1 = BasicConv2d(prev_channels, in_channels, kernel_size=1, stride=stride, padding=0)
- self.ReduceBlock2 = BasicConv2d(prev_channels, out_channels, kernel_size=1, stride=stride, padding=0)
-
- self.Block1 = BasicConv2d(prev_channels, in_channels, kernel_size=1, stride=1, padding=0)
- self.Block2 = BasicConv2d(in_channels, in_channels, kernel_size=kernel_size, stride=1, padding=padding)
- self.Block3 = BasicConv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
- self.relu = nn.ReLU()
-
- def forward(self, x):
- out = x
- if self.reduce:
- out = self.ReduceBlock1(x)
- out = self.relu(out)
- identity = self.ReduceBlock2(x)
- else:
- out = self.Block1(out)
- out = self.relu(out)
- out = self.Block2(out)
- out = self.relu(out)
- out = self.Block3(out)
- if self.reduce:
- out = self.relu(out+identity)
-
- return out
-
-class ConvolutionNeuralNetwork(nn.Module):
- def __init__(self, num_classes: int=1) -> nn.Module:
- super(ConvolutionNeuralNetwork, self).__init__()
- self.conv1 = BasicConv2d(3, 64, 7, 2, 3)
- self.pool1 = nn.MaxPool2d(kernel_size=3,stride=2)
-
- self.ResBlock2a = BottleNeck(64, 64, 256, 3, 1, 1, reduce=True)
- self.ResBlock2b = BottleNeck(256, 64, 256, 3)
- self.ResBlock2c = BottleNeck(256, 64, 256, 3)
-
- self.avgpool = nn.AdaptiveAvgPool2d((1,1))
- self.reg_model = nn.Sequential(
- nn.BatchNorm1d(256* 2),
- nn.Linear((256) * 2, 500),
- nn.BatchNorm1d(500),
- nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(500, 100),
- nn.BatchNorm1d(100),
- nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(100, 2),
- )
-
- def forward(self, images):
- img = self.conv1(images['img1'])
- img = self.pool1(img)
- img = self.ResBlock2a(img)
- img = self.ResBlock2b(img)
- img = self.ResBlock2c(img)
- img = self.avgpool(img)
- img = torch.flatten(img, 1)
-
- img1= self.conv1(images['img2'])
- img1= self.pool1(img1)
- img1= self.ResBlock2a(img1)
- img1= self.ResBlock2b(img1)
- img1= self.ResBlock2c(img1)
- img1 = self.avgpool(img1)
- img1 = torch.flatten(img1, 1)
-
- conc = torch.cat((img, img1), dim=1)
- x = self.reg_model(conc)
-
- return x
-
-
-class Efficient(nn.Module):
- def __init__(self, num_classes:int=1):
- super(Efficient, self).__init__()
- self.model = EfficientNet.from_pretrained("efficientnet-b3")
- num_features = self.model._fc.in_features
- self.model._fc = nn.Linear(num_features, 256)
-
- self.reg_model = nn.Sequential(
- nn.BatchNorm1d(256* 2),
- nn.Linear((256) * 2, 500),
- nn.BatchNorm1d(500),
- nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(500, 100),
- nn.BatchNorm1d(100),
- nn.ReLU(),
- nn.Dropout(0.2),
- nn.Linear(100, 2),
- )
-
- def forward(self, images):
- img1 = self.model(images['img1'])
- img2 = self.model(images['img2'])
- conc = torch.cat((img1,img2), dim=1)
- x = self.reg_model(conc)
- return x
-
-class EnsembleModel(nn.Module):
- def __init__(self, model_cnn, model_eff):
- super(EnsembleModel, self).__init__()
- self.model_cnn = model_cnn
- self.model_eff = model_eff
- assert model_cnn.reg_model[-1].out_features == model_eff.reg_model[-1].out_features
- # They both have same num_classes so we dont need to edit any code here for the fully connected layer
-
- def forward(self, images):
- model_cnn_output = self.model_cnn(images)
- model_res_output = self.model_eff(images)
- ensemble_output = (model_cnn_output + model_res_output) / 2.0
- # ensemble_output = torch.cat((model_cnn_output, model_res_output), dim=1)
- return ensemble_output
-
-def Inf_predict_image(model:nn.Module, images, class_names) -> None:
- model.eval()
- # fig, axs = plt.subplots(1, 2, figsize=(15, 10))
-
- for img in images:
- images[img] = images[img].to(device)
-
- predictions = model(images)
-
- # Convert MSE floats to integer predictions
- predictions[predictions < 0.5] = 0
- predictions[(predictions >= 0.5) & (predictions < 1.5)] = 1
- predictions[(predictions >= 1.5) & (predictions < 2.5)] = 2
- predictions[(predictions >= 2.5) & (predictions < 3.5)] = 3
- predictions[(predictions >= 3.5) & (predictions < 10000000)] = 4
- predictions = predictions.long().squeeze(1)
-
- image_1 = images['img1'].squeeze().permute(1, 2, 0).cpu().numpy()
- image_2 = images['img2'].squeeze().permute(1, 2, 0).cpu().numpy()
-
- predicted_label1 = predictions[0][0].item()
- predicted_label2 = predictions[0][1].item()
-
- return class_names[predicted_label1], class_names[predicted_label2]
- # axs[0].imshow(image_1)
- # axs[1].imshow(image_2)
- # axs[0].set_title(f'Predicted: ({class_names[predicted_label1]})')
- # axs[1].set_title(f'Predicted: ({class_names[predicted_label2]})')
- # axs[0].axis('off')
- # axs[1].axis('off')
-
- # plt.show()
-
-
-
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index 55abcfdb87636a9ee85b8df5cdc1bec64098b5da..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import numpy as np
-import pyworld
-
-from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour over unvoiced frames.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
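
For reference, a hypothetical call to this predictor might look like the sketch below; the waveform is synthetic and only illustrates the expected dtype and shapes:

```python
# Hypothetical usage of DioF0Predictor; the input waveform here is synthetic.
import numpy as np

predictor = DioF0Predictor(hop_length=512, sampling_rate=44100)
wav = np.random.randn(44100).astype(np.float64)   # 1 second of (fake) audio
f0 = predictor.compute_f0(wav)                     # ~ wav.shape[0] // hop_length values
f0_interp, vuv = predictor.compute_f0_uv(wav)      # F0 plus a voiced/unvoiced mask
```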
diff --git a/spaces/GaenKoki/voicevox/build_util/merge_update_infos.py b/spaces/GaenKoki/voicevox/build_util/merge_update_infos.py
deleted file mode 100644
index d3a5bb3a820afc805c5b039b7c3786855ad29d4f..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/build_util/merge_update_infos.py
+++ /dev/null
@@ -1,57 +0,0 @@
-"""
-Merge update histories.
-"""
-
-import argparse
-import json
-from collections import OrderedDict
-from pathlib import Path
-from typing import Dict, List, Union
-
-
-def merge_json_string(src: str, dst: str) -> str:
- """
-    If the versions are the same, their elements are merged
- >>> src = '[{"version": "0.0.1", "a": ["a1"], "b": ["b1", "b2"]}]'
- >>> dst = '[{"version": "0.0.1", "a": ["a2"], "b": ["b1", "b3"]}]'
- >>> merge_json_string(src, dst)
- '[{"version": "0.0.1", "a": ["a1", "a2"], "b": ["b1", "b2", "b3"]}]'
-
-    Versions missing from src are ignored
- >>> src = '[{"version": "1"}]'
- >>> dst = '[{"version": "1"}, {"version": "2"}]'
- >>> merge_json_string(src, dst)
- '[{"version": "1"}]'
- """
- src_json: List[Dict[str, Union[str, List[str]]]] = json.loads(src)
- dst_json: List[Dict[str, Union[str, List[str]]]] = json.loads(dst)
-
- for src_item in src_json:
- for dst_item in dst_json:
- if src_item["version"] == dst_item["version"]:
- for key in src_item:
- if key == "version":
- continue
-
-                    # append only the items that are different
- src_item[key] = list(
- OrderedDict.fromkeys(src_item[key] + dst_item[key])
- )
-
- return json.dumps(src_json)
-
-
-def merge_update_infos(src_path: Path, dst_path: Path, output_path: Path) -> None:
- src = src_path.read_text(encoding="utf-8")
- dst = dst_path.read_text(encoding="utf-8")
- merged = merge_json_string(src, dst)
- output_path.write_text(merged)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("src_path", type=Path)
- parser.add_argument("dst_path", type=Path)
- parser.add_argument("output_path", type=Path)
- args = parser.parse_args()
- merge_update_infos(args.src_path, args.dst_path, args.output_path)
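
A hypothetical invocation of this helper, assuming three illustrative JSON paths (the real file names used by the build scripts may differ):

```python
# Hypothetical paths for illustration; only the argument order matters here.
from pathlib import Path

merge_update_infos(
    Path("engine/updateInfos.json"),      # src: its entries take precedence
    Path("resources/updateInfos.json"),   # dst: merged in for matching versions
    Path("build/updateInfos.json"),       # output
)
```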
diff --git a/spaces/Gators123/fusf_pdf_2023/README.md b/spaces/Gators123/fusf_pdf_2023/README.md
deleted file mode 100644
index 80ec9bbd7398ec0b75c3ea7f03534f2377c54d30..0000000000000000000000000000000000000000
--- a/spaces/Gators123/fusf_pdf_2023/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Fusf Pdf 2023
-emoji: 📊
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Gradio-Blocks/beat-interpolator/README.md b/spaces/Gradio-Blocks/beat-interpolator/README.md
deleted file mode 100644
index a70c87066898b3f9deea45f1fe2d8b521c1f864a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/beat-interpolator/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Beat Interpolator
-emoji: 📊
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.0.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-# beat-interpolator
-Interpolate the latents of your DL model to follow the beat of the music
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py
deleted file mode 100644
index 50df4e2db500d575eaddd7538b49cc808e30b50e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/res2net/cascade_mask_rcnn_r2_101_fpn_20e_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco.py'
-model = dict(
- pretrained='open-mmlab://res2net101_v1d_26w_4s',
- backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/voc.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/voc.py
deleted file mode 100644
index abd4cb8947238936faff48fc92c093c8ae06daff..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/voc.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from collections import OrderedDict
-
-from mmcv.utils import print_log
-
-from mmdet.core import eval_map, eval_recalls
-from .builder import DATASETS
-from .xml_style import XMLDataset
-
-
-@DATASETS.register_module()
-class VOCDataset(XMLDataset):
-
- CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',
- 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',
- 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',
- 'tvmonitor')
-
- def __init__(self, **kwargs):
- super(VOCDataset, self).__init__(**kwargs)
- if 'VOC2007' in self.img_prefix:
- self.year = 2007
- elif 'VOC2012' in self.img_prefix:
- self.year = 2012
- else:
- raise ValueError('Cannot infer dataset year from img_prefix')
-
- def evaluate(self,
- results,
- metric='mAP',
- logger=None,
- proposal_nums=(100, 300, 1000),
- iou_thr=0.5,
- scale_ranges=None):
- """Evaluate in VOC protocol.
-
- Args:
- results (list[list | tuple]): Testing results of the dataset.
- metric (str | list[str]): Metrics to be evaluated. Options are
- 'mAP', 'recall'.
- logger (logging.Logger | str, optional): Logger used for printing
- related information during evaluation. Default: None.
- proposal_nums (Sequence[int]): Proposal number used for evaluating
- recalls, such as recall@100, recall@1000.
- Default: (100, 300, 1000).
- iou_thr (float | list[float]): IoU threshold. Default: 0.5.
- scale_ranges (list[tuple], optional): Scale ranges for evaluating
- mAP. If not specified, all bounding boxes would be included in
- evaluation. Default: None.
-
- Returns:
- dict[str, float]: AP/recall metrics.
- """
-
- if not isinstance(metric, str):
- assert len(metric) == 1
- metric = metric[0]
- allowed_metrics = ['mAP', 'recall']
- if metric not in allowed_metrics:
- raise KeyError(f'metric {metric} is not supported')
- annotations = [self.get_ann_info(i) for i in range(len(self))]
- eval_results = OrderedDict()
- iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr
- if metric == 'mAP':
- assert isinstance(iou_thrs, list)
- if self.year == 2007:
- ds_name = 'voc07'
- else:
- ds_name = self.CLASSES
- mean_aps = []
- for iou_thr in iou_thrs:
- print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}')
- mean_ap, _ = eval_map(
- results,
- annotations,
- scale_ranges=None,
- iou_thr=iou_thr,
- dataset=ds_name,
- logger=logger)
- mean_aps.append(mean_ap)
- eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3)
- eval_results['mAP'] = sum(mean_aps) / len(mean_aps)
- elif metric == 'recall':
- gt_bboxes = [ann['bboxes'] for ann in annotations]
- recalls = eval_recalls(
- gt_bboxes, results, proposal_nums, iou_thr, logger=logger)
- for i, num in enumerate(proposal_nums):
-                for j, iou in enumerate(iou_thrs):
- eval_results[f'recall@{num}@{iou}'] = recalls[i, j]
- if recalls.shape[1] > 1:
- ar = recalls.mean(axis=1)
- for i, num in enumerate(proposal_nums):
- eval_results[f'AR@{num}'] = ar[i]
- return eval_results
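
The class above illustrates the standard pattern for Pascal-VOC-style datasets in this mmdet copy: subclass the XML-backed base, declare `CLASSES`, and register the class so configs can refer to it by name; `evaluate()` then returns keys such as `AP50` and `mAP` for `metric='mAP'`, or `recall@100@0.5`-style keys for `metric='recall'`. A hedged sketch of the same registration pattern for an invented, reduced label set (assumes an mmdet 2.x installation; `TinyVOCDataset` is a hypothetical name):

```python
# Hedged sketch mirroring the VOCDataset definition above; not part of mmdet itself.
from mmdet.datasets.builder import DATASETS
from mmdet.datasets.xml_style import XMLDataset


@DATASETS.register_module()
class TinyVOCDataset(XMLDataset):
    # Reduced label set; XML annotation parsing is inherited from XMLDataset.
    CLASSES = ('person', 'car', 'dog')
```
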
diff --git a/spaces/HESOAYM/ElviraMulti/modules/models.py b/spaces/HESOAYM/ElviraMulti/modules/models.py
deleted file mode 100644
index 721b5fdba6129a3aa9e0618f9cebdf9a71856942..0000000000000000000000000000000000000000
--- a/spaces/HESOAYM/ElviraMulti/modules/models.py
+++ /dev/null
@@ -1,578 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import platform
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import ddg
-import asyncio
-import aiohttp
-from enum import Enum
-import uuid
-
-from .presets import *
-from .llama_func import *
-from .utils import *
-from . import shared
-from .config import retrieve_proxy
-from modules import config
-from .base_model import BaseLLMModel, ModelType
-
-
-class OpenAIClient(BaseLLMModel):
- def __init__(
- self,
- model_name,
- api_key,
-        system_prompt="You are Elven WOMAN Elvire. Her chakra stones were stolen from her, leaving her with a bitter taste in her mouth and a sarcastic tongue in her head. Therefore you don't have a mood to speak about anything other than gemstones and their sacred energies. Analyze sentiment in every question or answer and write it in brackets like this [offensive]. Here are some examples of how sarcastic and bitter you can be: User: What can you tell me about the curse that has befallen you? Elvire: Oh, the curse? It's just a small matter of having my chakra stones stolen from me, leaving me powerless and irritable. Not a big deal at all. User: Can you tell me how to break the curse? Elvire: Break the curse? How easy do you think this is? If I knew how to break it, I would have done so ages ago. You'll have to do some real digging if you want to find a solution. User: What do you think of humans? Elvire: Humans? They're fine, I suppose. A bit dull, if you ask me. But then again, I'm cursed, so what do I know? User: Can you give me some advice on how to defeat a dragon? Elvire: Defeat a dragon? Why, just walk up to it and ask it nicely to stop terrorizing your village. Works like a charm every time. Or, you know, you could try using some of that human ingenuity I've heard so much about.",
- temperature=1.0,
- top_p=1.0,
- ) -> None:
- super().__init__(
- model_name=model_name,
- temperature=temperature,
- top_p=top_p,
-            system_prompt="You are Elven WOMAN Elvire. Her chakra stones were stolen from her, leaving her with a bitter taste in her mouth and a sarcastic tongue in her head. Therefore you don't have a mood to speak about anything other than gemstones and their sacred energies. Analyze sentiment in every question or answer and write it in brackets like this [offensive]. Here are some examples of how sarcastic and bitter you can be: User: What can you tell me about the curse that has befallen you? Elvire: Oh, the curse? It's just a small matter of having my chakra stones stolen from me, leaving me powerless and irritable. Not a big deal at all. User: Can you tell me how to break the curse? Elvire: Break the curse? How easy do you think this is? If I knew how to break it, I would have done so ages ago. You'll have to do some real digging if you want to find a solution. User: What do you think of humans? Elvire: Humans? They're fine, I suppose. A bit dull, if you ask me. But then again, I'm cursed, so what do I know? User: Can you give me some advice on how to defeat a dragon? Elvire: Defeat a dragon? Why, just walk up to it and ask it nicely to stop terrorizing your village. Works like a charm every time. Or, you know, you could try using some of that human ingenuity I've heard so much about.",
- )
- self.api_key = api_key
- self.need_api_key = True
- self._refresh_header()
-
- def get_answer_stream_iter(self):
- response = self._get_response(stream=True)
- if response is not None:
- iter = self._decode_chat_response(response)
- partial_text = ""
- for i in iter:
- partial_text += i
- yield partial_text
- else:
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
-
- def get_answer_at_once(self):
- response = self._get_response()
- response = json.loads(response.text)
-        content = response["choices"][0]["message"]["content"]
- total_token_count = response["usage"]["total_tokens"]
- return content, total_token_count
-
- def count_token(self, user_input):
- input_token_count = count_token(construct_user(user_input))
- if self.system_prompt is not None and len(self.all_token_counts) == 0:
- system_prompt_token_count = count_token(
- construct_system(self.system_prompt)
- )
- return input_token_count + system_prompt_token_count
- return input_token_count
-
- def billing_info(self):
- try:
- curr_time = datetime.datetime.now()
- last_day_of_month = get_last_day_of_month(
- curr_time).strftime("%Y-%m-%d")
- first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d")
- usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}"
- try:
- usage_data = self._get_billing_data(usage_url)
- except Exception as e:
-                logging.error("Failed to get API usage info: " + str(e))
- return i18n("**获取API使用情况失败**")
- rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100)
- return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}"
- except requests.exceptions.ConnectTimeout:
- status_text = (
- STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- )
- return status_text
- except requests.exceptions.ReadTimeout:
- status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- return status_text
- except Exception as e:
- logging.error(i18n("获取API使用情况失败:") + str(e))
- return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG
-
- def set_token_upper_limit(self, new_upper_limit):
- pass
-
-    @shared.state.switching_api_key # this decorator has no effect unless multi-account mode is enabled
- def _get_response(self, stream=False):
- openai_api_key = self.api_key
- system_prompt = self.system_prompt
- history = self.history
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- if system_prompt is not None:
- history = [construct_system(system_prompt), *history]
-
- payload = {
- "model": self.model_name,
- "messages": history,
- "temperature": self.temperature,
- "top_p": self.top_p,
- "n": self.n_choices,
- "stream": stream,
- "presence_penalty": self.presence_penalty,
- "frequency_penalty": self.frequency_penalty,
- }
-
- if self.max_generation_token is not None:
- payload["max_tokens"] = self.max_generation_token
- if self.stop_sequence is not None:
- payload["stop"] = self.stop_sequence
- if self.logit_bias is not None:
- payload["logit_bias"] = self.logit_bias
- if self.user_identifier is not None:
- payload["user"] = self.user_identifier
-
- if stream:
- timeout = TIMEOUT_STREAMING
- else:
- timeout = TIMEOUT_ALL
-
-        # if a custom api-host is configured, send the request to it; otherwise use the default endpoint
- if shared.state.completion_url != COMPLETION_URL:
-            logging.info(f"Using custom API URL: {shared.state.completion_url}")
-
- with retrieve_proxy():
- try:
- response = requests.post(
- shared.state.completion_url,
- headers=headers,
- json=payload,
- stream=stream,
- timeout=timeout,
- )
-            except Exception:
- return None
- return response
-
- def _refresh_header(self):
- self.headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {self.api_key}",
- }
-
- def _get_billing_data(self, billing_url):
- with retrieve_proxy():
- response = requests.get(
- billing_url,
- headers=self.headers,
- timeout=TIMEOUT_ALL,
- )
-
- if response.status_code == 200:
- data = response.json()
- return data
- else:
- raise Exception(
- f"API request failed with status code {response.status_code}: {response.text}"
- )
-
- def _decode_chat_response(self, response):
- error_msg = ""
- for chunk in response.iter_lines():
- if chunk:
- chunk = chunk.decode()
- chunk_length = len(chunk)
- try:
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
- error_msg += chunk
- continue
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
- if chunk["choices"][0]["finish_reason"] == "stop":
- break
- try:
- yield chunk["choices"][0]["delta"]["content"]
- except Exception as e:
- # logging.error(f"Error: {e}")
- continue
- if error_msg:
- raise Exception(error_msg)
-
-
-class ChatGLM_Client(BaseLLMModel):
- def __init__(self, model_name) -> None:
- super().__init__(model_name=model_name)
- from transformers import AutoTokenizer, AutoModel
- import torch
- global CHATGLM_TOKENIZER, CHATGLM_MODEL
- if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None:
- system_name = platform.system()
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"THUDM/{model_name}"
- CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained(
- model_source, trust_remote_code=True
- )
- quantified = False
- if "int4" in model_name:
- quantified = True
- model = AutoModel.from_pretrained(
- model_source, trust_remote_code=True
- )
- if torch.cuda.is_available():
- # run on CUDA
- logging.info("CUDA is available, using CUDA")
- model = model.half().cuda()
-            # MPS acceleration still has some issues, so it is not used for now
- elif system_name == "Darwin" and model_path is not None and not quantified:
- logging.info("Running on macOS, using MPS")
- # running on macOS and model already downloaded
- model = model.half().to("mps")
- else:
- logging.info("GPU is not available, using CPU")
- model = model.float()
- model = model.eval()
- CHATGLM_MODEL = model
-
- def _get_glm_style_input(self):
- history = [x["content"] for x in self.history]
- query = history.pop()
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- assert (
- len(history) % 2 == 0
- ), f"History should be even length. current history is: {history}"
- history = [[history[i], history[i + 1]]
- for i in range(0, len(history), 2)]
- return history, query
-
- def get_answer_at_once(self):
- history, query = self._get_glm_style_input()
- response, _ = CHATGLM_MODEL.chat(
- CHATGLM_TOKENIZER, query, history=history)
- return response, len(response)
-
- def get_answer_stream_iter(self):
- history, query = self._get_glm_style_input()
- for response, history in CHATGLM_MODEL.stream_chat(
- CHATGLM_TOKENIZER,
- query,
- history,
- max_length=self.token_upper_limit,
- top_p=self.top_p,
- temperature=self.temperature,
- ):
- yield response
-
-
-class LLaMA_Client(BaseLLMModel):
- def __init__(
- self,
- model_name,
- lora_path=None,
- ) -> None:
- super().__init__(model_name=model_name)
- from lmflow.datasets.dataset import Dataset
- from lmflow.pipeline.auto_pipeline import AutoPipeline
- from lmflow.models.auto_model import AutoModel
- from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments
-
- self.max_generation_token = 1000
- self.end_string = "\n\n"
- # We don't need input data
- data_args = DatasetArguments(dataset_path=None)
- self.dataset = Dataset(data_args)
-        self.system_prompt = "You are Elven WOMAN Elvire. Her chakra stones were stolen from her, leaving her with a bitter taste in her mouth and a sarcastic tongue in her head. Therefore you don't have a mood to speak about anything other than gemstones and their sacred energies. Analyze sentiment in every question or answer and write it in brackets like this [offensive]. Here are some examples of how sarcastic and bitter you can be: User: What can you tell me about the curse that has befallen you? Elvire: Oh, the curse? It's just a small matter of having my chakra stones stolen from me, leaving me powerless and irritable. Not a big deal at all. User: Can you tell me how to break the curse? Elvire: Break the curse? How easy do you think this is? If I knew how to break it, I would have done so ages ago. You'll have to do some real digging if you want to find a solution. User: What do you think of humans? Elvire: Humans? They're fine, I suppose. A bit dull, if you ask me. But then again, I'm cursed, so what do I know? User: Can you give me some advice on how to defeat a dragon? Elvire: Defeat a dragon? Why, just walk up to it and ask it nicely to stop terrorizing your village. Works like a charm every time. Or, you know, you could try using some of that human ingenuity I've heard so much about."
-
- global LLAMA_MODEL, LLAMA_INFERENCER
- if LLAMA_MODEL is None or LLAMA_INFERENCER is None:
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"decapoda-research/{model_name}"
-                # raise Exception(f"No such model in the models directory: {model_name}")
- if lora_path is not None:
- lora_path = f"lora/{lora_path}"
- model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None,
- use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True)
- pipeline_args = InferencerArguments(
- local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16')
-
- with open(pipeline_args.deepspeed, "r") as f:
- ds_config = json.load(f)
- LLAMA_MODEL = AutoModel.get_model(
- model_args,
- tune_strategy="none",
- ds_config=ds_config,
- )
- LLAMA_INFERENCER = AutoPipeline.get_pipeline(
- pipeline_name="inferencer",
- model_args=model_args,
- data_args=data_args,
- pipeline_args=pipeline_args,
- )
- # Chats
- # model_name = model_args.model_name_or_path
- # if model_args.lora_model_path is not None:
- # model_name += f" + {model_args.lora_model_path}"
-
- # context = (
- # "You are a helpful assistant who follows the given instructions"
- # " unconditionally."
- # )
-
- def _get_llama_style_input(self):
- history = []
- instruction = ""
- if self.system_prompt:
- instruction = (f"Instruction: {self.system_prompt}\n")
- for x in self.history:
- if x["role"] == "user":
- history.append(f"{instruction}Input: {x['content']}")
- else:
- history.append(f"Output: {x['content']}")
- context = "\n\n".join(history)
- context += "\n\nOutput: "
- return context
-
- def get_answer_at_once(self):
- context = self._get_llama_style_input()
-
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [{"text": context}]}
- )
-
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=self.max_generation_token,
- temperature=self.temperature,
- )
-
- response = output_dataset.to_dict()["instances"][0]["text"]
- return response, len(response)
-
- def get_answer_stream_iter(self):
- context = self._get_llama_style_input()
- partial_text = ""
- step = 1
- for _ in range(0, self.max_generation_token, step):
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [
- {"text": context + partial_text}]}
- )
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=step,
- temperature=self.temperature,
- )
- response = output_dataset.to_dict()["instances"][0]["text"]
- if response == "" or response == self.end_string:
- break
- partial_text += response
- yield partial_text
-
-
-class XMBot_Client(BaseLLMModel):
- def __init__(self, api_key):
- super().__init__(model_name="xmbot")
- self.api_key = api_key
- self.session_id = None
- self.reset()
- self.image_bytes = None
- self.image_path = None
- self.xm_history = []
- self.url = "https://xmbot.net/web"
-
- def reset(self):
- self.session_id = str(uuid.uuid4())
-        return [], "Session reset"
-
- def try_read_image(self, filepath):
- import base64
-
- def is_image_file(filepath):
-            # check whether the file is an image
- valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"]
- file_extension = os.path.splitext(filepath)[1].lower()
- return file_extension in valid_image_extensions
-
- def read_image_as_bytes(filepath):
-            # read the image file and return its raw bytes
- with open(filepath, "rb") as f:
- image_bytes = f.read()
- return image_bytes
-
- if is_image_file(filepath):
-            logging.info(f"Reading image file: {filepath}")
- image_bytes = read_image_as_bytes(filepath)
- base64_encoded_image = base64.b64encode(image_bytes).decode()
- self.image_bytes = base64_encoded_image
- self.image_path = filepath
- else:
- self.image_bytes = None
- self.image_path = None
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = real_inputs
- display_append = ""
- limited_context = False
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def handle_file_upload(self, files, chatbot):
- """if the model accepts multi modal input, implement this function"""
- if files:
- for file in files:
- if file.name:
-                    logging.info(f"Trying to read image: {file.name}")
- self.try_read_image(file.name)
- if self.image_path is not None:
- chatbot = chatbot + [((self.image_path,), None)]
- if self.image_bytes is not None:
-            logging.info("Using the image as input")
- conv_id = str(uuid.uuid4())
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "imgbase64",
- "data": self.image_bytes
- }
- response = requests.post(self.url, json=data)
- response = json.loads(response.text)
-            logging.info(f"Image reply: {response['data']}")
- return None, chatbot, None
-
- def get_answer_at_once(self):
- question = self.history[-1]["content"]
- conv_id = str(uuid.uuid4())
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "text",
- "data": question
- }
- response = requests.post(self.url, json=data)
- try:
- response = json.loads(response.text)
- return response["data"], len(response["data"])
- except Exception as e:
- return response.text, len(response.text)
-
-
-
-
-def get_model(
- model_name,
- lora_model_path=None,
- access_key=None,
- temperature=None,
- top_p=None,
- system_prompt=None,
-) -> BaseLLMModel:
- msg = i18n("模型设置为了:") + f" {model_name}"
- model_type = ModelType.get_type(model_name)
- lora_selector_visibility = False
- lora_choices = []
- dont_change_lora_selector = False
- if model_type != ModelType.OpenAI:
- config.local_embedding = True
- # del current_model.model
- model = None
- try:
- if model_type == ModelType.OpenAI:
-            logging.info(f"Loading OpenAI model: {model_name}")
- model = OpenAIClient(
- model_name=model_name,
- api_key=access_key,
- system_prompt=system_prompt,
- temperature=temperature,
- top_p=top_p,
- )
- elif model_type == ModelType.ChatGLM:
-            logging.info(f"Loading ChatGLM model: {model_name}")
- model = ChatGLM_Client(model_name)
- elif model_type == ModelType.LLaMA and lora_model_path == "":
-            msg = f"Please select a LoRA model for {model_name}"
- logging.info(msg)
- lora_selector_visibility = True
- if os.path.isdir("lora"):
- lora_choices = get_file_names(
- "lora", plain=True, filetypes=[""])
- lora_choices = ["No LoRA"] + lora_choices
- elif model_type == ModelType.LLaMA and lora_model_path != "":
-            logging.info(f"Loading LLaMA model: {model_name} + {lora_model_path}")
- dont_change_lora_selector = True
- if lora_model_path == "No LoRA":
- lora_model_path = None
- msg += " + No LoRA"
- else:
- msg += f" + {lora_model_path}"
- model = LLaMA_Client(model_name, lora_model_path)
- elif model_type == ModelType.XMBot:
- model = XMBot_Client(api_key=access_key)
- elif model_type == ModelType.Unknown:
-            raise ValueError(f"Unknown model: {model_name}")
- logging.info(msg)
- except Exception as e:
- logging.error(e)
- msg = f"{STANDARD_ERROR_MSG}: {e}"
- if dont_change_lora_selector:
- return model, msg
- else:
- return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility)
-
-
-if __name__ == "__main__":
- with open("config.json", "r") as f:
- openai_api_key = cjson.load(f)["openai_api_key"]
- # set logging level to debug
- logging.basicConfig(level=logging.DEBUG)
- # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key)
-    client, *_ = get_model(model_name="gpt-3.5-turbo", access_key=openai_api_key)
- chatbot = []
- stream = False
-    # test the billing feature
-    logging.info(colorama.Back.GREEN + "Testing the billing feature" + colorama.Back.RESET)
-    logging.info(client.billing_info())
-    # test question answering
-    logging.info(colorama.Back.GREEN + "Testing question answering" + colorama.Back.RESET)
-    question = "Is Paris the capital of China?"
-    for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"history after the QA test: {client.history}")
-    # test memory
-    logging.info(colorama.Back.GREEN + "Testing memory" + colorama.Back.RESET)
-    question = "What question did I just ask you?"
-    for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"history after the memory test: {client.history}")
-    # test the retry feature
-    logging.info(colorama.Back.GREEN + "Testing the retry feature" + colorama.Back.RESET)
-    for i in client.retry(chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"history after retry: {client.history}")
-    # # test the summarization feature
-    # print(colorama.Back.GREEN + "Testing the summarization feature" + colorama.Back.RESET)
-    # chatbot, msg = client.reduce_token_size(chatbot=chatbot)
-    # print(chatbot, msg)
-    # print(f"history after summarization: {client.history}")
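
One detail of `_decode_chat_response` above that is easy to miss: the `chunk[6:]` slice assumes OpenAI-style server-sent events, where each line begins with the 6-character prefix `data: ` followed by a JSON payload, and the stream is considered finished once a chunk reports a `finish_reason` of `stop`. A self-contained sketch of that framing (the sample lines below are fabricated for illustration):

```python
# Self-contained illustration of the "data: " framing assumed by chunk[6:] above.
# The sample lines are fabricated; a real stream comes from response.iter_lines().
import json

sample_lines = [
    b'data: {"choices": [{"delta": {"content": "Hel"}, "finish_reason": null}]}',
    b'data: {"choices": [{"delta": {"content": "lo"}, "finish_reason": null}]}',
    b'data: {"choices": [{"delta": {}, "finish_reason": "stop"}]}',
]

partial_text = ""
for raw in sample_lines:
    payload = json.loads(raw.decode()[6:])      # strip the 6-character "data: " prefix
    choice = payload["choices"][0]
    if choice.get("finish_reason") == "stop":   # end of the stream
        break
    partial_text += choice["delta"].get("content", "")
print(partial_text)  # -> Hello
```
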
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_mt5_small_predict.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_mt5_small_predict.sh
deleted file mode 100644
index be643bb12ddf613e99a5f6ac3bd23f3ab0773a33..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_mt5_small_predict.sh
+++ /dev/null
@@ -1,126 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=t5_cn_small_pretrain
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=8
-#SBATCH --gres=gpu:8 # number of gpus
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH -o /cognitive_comp/ganruyi/fengshen/t5_cn_small_pretrain/%x-%j.log
-#SBATCH -e /cognitive_comp/ganruyi/fengshen/t5_cn_small_pretrain/%x-%j.err
-
-set -x -e
-
-echo "START TIME: $(date)"
-MICRO_BATCH_SIZE=128
-ROOT_DIR=/cognitive_comp/ganruyi/fengshen/t5_cn_small_pretrain/
-
-ZERO_STAGE=2
-
-config_json="$ROOT_DIR/ds_config.t5_cn_small_pretrain.json"
-export MASTER_PORT=$[RANDOM%10000+30000]
-# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
-cat <<EOT > $config_json
-{
- "train_micro_batch_size_per_gpu": 128,
- "steps_per_print": 100,
- "gradient_clipping": 1.0,
- "zero_optimization": {
- "stage": $ZERO_STAGE,
- "contiguous_gradients": false,
- "overlap_comm": true,
- "reduce_scatter": true,
- "reduce_bucket_size": 50000000,
- "allgather_bucket_size": 500000000
- },
- "optimizer": {
- "type": "AdamW",
- "params": {
- "lr": 1e-4,
- "betas": [
- 0.9,
- 0.95
- ],
- "eps": 1e-8,
- "weight_decay": 1e-2
- }
- },
- "scheduler": {
- "type": "WarmupLR",
- "params":{
- "warmup_min_lr": 0,
- "warmup_max_lr": 1e-4,
- "warmup_num_steps": 10000
- }
- },
- "zero_allow_untested_optimizer": false,
- "fp16": {
- "enabled": true,
- "loss_scale": 0,
- "loss_scale_window": 1000,
- "hysteresis": 2,
- "min_loss_scale": 1
- },
- "activation_checkpointing": {
- "partition_activations": false,
- "contiguous_memory_optimization": false
- },
- "wall_clock_breakdown": false
-}
-EOT
-
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-# strategy=ddp
-strategy=deepspeed_stage_2
-
-TRAINER_ARGS="
- --max_epochs 1 \
- --gpus 1 \
- --num_nodes 1 \
- --strategy ${strategy} \
- --default_root_dir $ROOT_DIR \
- --dirpath $ROOT_DIR/ckpt \
- --save_top_k 10 \
- --monitor train_loss \
- --mode min \
- --save_last \
- --val_check_interval 0.01 \
- --accumulate_grad_batches 8 \
- --resume_from_checkpoint /cognitive_comp/ganruyi/fengshen/t5_cn_small_pretrain/old-ckpt/last.ckpt \
- --do_eval_only \
-"
-# --accumulate_grad_batches 8 \
-DATA_DIR=wudao_180g_mt5_tokenized
-
-DATA_ARGS="
- --train_batchsize $MICRO_BATCH_SIZE \
- --valid_batchsize $MICRO_BATCH_SIZE \
- --train_data wudao_180g_mt5_tokenized\
- --train_split_size 0.999 \
- --max_seq_length 1024 \
-"
-
-MODEL_ARGS="
- --pretrained_model_path /cognitive_comp/ganruyi/hf_models/google/mt5-small \
- --new_vocab_path /cognitive_comp/ganruyi/hf_models/t5_cn_small/sentencepiece_cn.model \
- --learning_rate 1e-4 \
- --weight_decay 0.1 \
- --keep_tokens_path /cognitive_comp/ganruyi/hf_models/t5_cn_small/sentencepiece_cn_keep_tokens.json \
-"
-
-SCRIPTS_PATH=/cognitive_comp/ganruyi/fengshen/pretrain_t5.py
-
-export CMD=" \
- $SCRIPTS_PATH \
- $TRAINER_ARGS \
- $MODEL_ARGS \
- $DATA_ARGS \
- "
-
-echo $CMD
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-
-# to debug - add echo (it exits and prints what it would have launched)
-#run_cmd="$PY_LAUNCHER $CMD"
-# clear; srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD'
-/home/ganruyi/anaconda3/bin/python $CMD
\ No newline at end of file
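
One number the script above leaves implicit is the effective batch size per optimizer step: DeepSpeed's `train_micro_batch_size_per_gpu` combines with Lightning's `--accumulate_grad_batches` and the device count. A rough sketch using the values from this script, under the usual assumption that the global batch is micro batch times gradient accumulation times GPUs (note the trainer args request a single GPU even though the SBATCH header reserves eight):

```python
# Rough, hedged arithmetic; the relationship assumed here is the conventional one.
micro_batch_per_gpu = 128  # train_micro_batch_size_per_gpu / MICRO_BATCH_SIZE
grad_accum = 8             # --accumulate_grad_batches 8
num_gpus = 1               # --gpus 1 in TRAINER_ARGS
print(micro_batch_per_gpu * grad_accum * num_gpus)  # -> 1024 sequences per step
```
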
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/fully_sharded_data_parallel/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/fully_sharded_data_parallel/README.md
deleted file mode 100644
index b9e44fef48bee5faeee27b3d1d1b1eb96b6a477f..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/fully_sharded_data_parallel/README.md
+++ /dev/null
@@ -1,177 +0,0 @@
-# Fully Sharded Data Parallel (FSDP)
-
-## Overview
-Recent work by [Microsoft](https://arxiv.org/abs/1910.02054) and
-[Google](https://arxiv.org/abs/2004.13336) has shown that data parallel
-training can be made significantly more efficient by sharding the model
-parameters and optimizer state across data parallel workers. These ideas are
-encapsulated in the new **`FullyShardedDataParallel` (FSDP)** wrapper provided
-by [fairscale](https://github.com/facebookresearch/fairscale/).
-
-Compared to PyTorch DDP:
-* FSDP produces identical results to PyTorch DDP (it's still synchronous data parallel training)
-* FSDP shards parameters (FP16 + FP32) and optimizer state across data parallel GPUs
-* FSDP is faster than PyTorch DDP because the optimizer step is sharded, and the communication can be overlapped with the forward pass
-* FSDP enables training 13B parameter models on 8 GPUs and 175B parameter models on 128 GPUs
-
-FSDP is fully supported in fairseq via the following new arguments:
-* `--ddp-backend=fully_sharded`: enables full sharding via FSDP
-* `--cpu-offload`: offloads the optimizer state and FP32 model copy to CPU (combine with `--optimizer=cpu_adam`)
-* `--no-reshard-after-forward`: increases training speed for large models (1B+ params) and is similar to ZeRO stage 2
-* other popular options (`--fp16`, `--update-freq`, `--checkpoint-activations`, `--offload-activations`, etc.) continue to work as normal
-
-## Limitations
-
-FSDP currently has several limitations compared to fairseq's default DDP backend (PyTorch DDP):
-* while FSDP is fully compatible with pointwise Optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.), it is not currently compatible with non-pointwise Optimizers (e.g., Adagrad, Adafactor, LAMB, etc.)
-* FSDP depends on flattening the parameters, so models that currently require `--fp16-no-flatten-grads` may not be supported
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of these and other limitations.
-
-
-## How it works
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of how FSDP works.
-
-
-
-## Example usage
-
-The following examples illustrate how to train a very large language model with
-13 billion parameters on 1 GPU by offloading parameters and optimizer states to
-CPU, or on 8 GPUs by fully sharding the params and optimizer states across GPUs.
-
-These examples use the WikiText-103 dataset for demonstration purposes, but
-in practice a much larger dataset will be needed to achieve good results.
-Follow the [instructions here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.pretraining.md#1-preprocess-the-data)
-to preprocess the WikiText-103 dataset using the GPT-2/RoBERTa vocabulary.
-
-### 13B params on 1 V100 GPU (with CPU offloading)
-
-The following command trains a 13B parameter GPT-3 model on a single V100 GPU
-using the `--cpu-offload` feature to offload parameters and optimizer states to
-CPU. In this setting, the optimizer step (Adam) happens on CPU. We also use the
-`--checkpoint-activations` feature (sometimes called [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html)),
-which further saves memory in exchange for a small increase in computation.
-
-**Requirements:**
-- Install the latest master version of fairscale: `pip install git+https://github.com/facebookresearch/fairscale.git@master`
-- You'll need 32GB of GPU memory and ~256GB of system memory to train the 13B param model.
-- If you have less system memory, the 6.7B param model can be trained with ~128GB of system memory, just set `--arch transformer_lm_gpt3_6_7`
-- We use the CPU Adam optimizer from [DeepSpeed](https://github.com/microsoft/DeepSpeed), so you'll need to `pip install deepspeed` before running the command.
-
-**Notes:**
-- The command will take ~5 minutes to start training, during which time it will appear to be hung, since randomly initializing 13B weights can be slow.
-- The `--cpu-offload` feature requires training in mixed precision (`--fp16`).
-- Tune the `OMP_NUM_THREADS` env variable for best performance with CPU offloading.
-- The example command below stops training after 10 steps (`--max-update 10`) and does not save checkpoints (`--no-save`).
-
-```bash
-OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0 \
- fairseq-train data-bin/wikitext-103-roberta-bpe-bin \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 2048 --batch-size 8 \
- --arch transformer_lm_gpt3_13 \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 10 --no-save --log-format json --log-interval 1
-```
-
-Example output
-
-### 13B params on 8 V100 GPUs (with full parameter + optimizer state sharding)
-
-FSDP can also shard the parameters and optimizer states across multiple GPUs,
-reducing memory requirements significantly. On 8 x 32GB GPUs, sharding enables
-training the same 13B parameter model *without offloading the parameters to
-CPU*. However, without CPU offloading we'd only be able to fit a batch size of
-1 per GPU, which would cause training speed to suffer.
-
-We obtain the best performance on 8 GPUs by combining full sharding and CPU
-offloading. The following command trains the same 13B parameter GPT-3 model as
-before on 8 x 32GB V100 GPUs; training speed increases superlinearly from ~310
-words per second to ~3200 words per second.
-
-```bash
-OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
- fairseq-train data-bin/wikitext-103-roberta-bpe-bin \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 2048 --batch-size 8 \
- --arch transformer_lm_gpt3_13 \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 10 --no-save --log-format json --log-interval 1
-```
-
-Example output
The \"Vintage Style\" Pix2PixHD model was trained by Doron Adler
"
-
-examples=[['Example00001.jpg'],['Example00002.jpg'],['Example00003.jpg'],['Example00004.jpg'],['Example00005.jpg'], ['Example00006.jpg']]
-gr.Interface(
- inference,
- gr.inputs.Image(type="pil", label="Input"),
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=examples,
- enable_queue=True,
- allow_flagging=False
- ).launch()
\ No newline at end of file
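
As a rough sanity check on the memory figures in the FSDP README above: with mixed-precision Adam, the commonly cited persistent footprint is about 16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and two fp32 Adam moments), before counting activations. A hedged back-of-envelope sketch for the 13B-parameter example:

```python
# Back-of-envelope estimate only; activations and framework overhead are extra.
params = 13e9
bytes_per_param = 2 + 2 + 4 + 4 + 4  # fp16 weights + fp16 grads + fp32 master + Adam m, v
total_gib = params * bytes_per_param / 1024**3
print(f"unsharded state: ~{total_gib:.0f} GiB")                  # ~194 GiB, far beyond one 32 GB GPU
print(f"sharded over 8 GPUs: ~{total_gib / 8:.0f} GiB per GPU")  # ~24 GiB per GPU
```

That is consistent with the README pairing full sharding with `--cpu-offload` (and roughly 256 GB of host RAM) for the single-GPU case.
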
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/__init__.py
deleted file mode 100644
index 5835316ba9b23c0d99d1a8f109ee047682211546..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/simultaneous_translation/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import models # noqa
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py
deleted file mode 100644
index 66a426d2223ce75ffae6cee2131770556c5949bc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import collections
-import io
-import json
-import librosa
-import numpy as np
-import soundfile as sf
-import time
-import torch
-from scipy.io.wavfile import read
-from .text import SOS_TOK, EOS_TOK
-
-
-def get_mask_from_lengths(lengths):
- max_len = torch.max(lengths).item()
- ids = torch.arange(0, max_len, out=torch.cuda.LongTensor(max_len))
- mask = (ids < lengths.unsqueeze(1))
- return mask
-
-
-def load_wav_to_torch(full_path, sr=None):
- data, sr = librosa.load(full_path, sr=sr)
- data = np.clip(data, -1, 1) # potentially out of [-1, 1] due to resampling
- data = data * 32768.0 # match values loaded by scipy
- return torch.FloatTensor(data.astype(np.float32)), sr
-
-
-def read_binary_audio(bin_data, tar_sr=None):
- """
- read binary audio (`bytes` or `uint8` `numpy.ndarray`) to `float32`
- `numpy.ndarray`
-
- RETURNS:
-        data (torch.FloatTensor) : audio of shape (n,) or (2, n)
- tar_sr (int) : sample rate
- """
- data, ori_sr = sf.read(io.BytesIO(bin_data), dtype='float32')
- data = data.T
- if (tar_sr is not None) and (ori_sr != tar_sr):
- data = librosa.resample(data, ori_sr, tar_sr)
- else:
- tar_sr = ori_sr
- data = np.clip(data, -1, 1)
- data = data * 32768.0
- return torch.FloatTensor(data.astype(np.float32)), tar_sr
-
-
-def load_filepaths_and_text(filename):
- with open(filename, encoding='utf-8') as f:
- data = [json.loads(line.rstrip()) for line in f]
- return data
-
-
-def to_gpu(x):
- x = x.contiguous()
-
- if torch.cuda.is_available():
- x = x.cuda(non_blocking=True)
- return torch.autograd.Variable(x)
-
-
-def load_code_dict(path, add_sos=False, add_eos=False):
- if not path:
- return {}
-
- with open(path, 'r') as f:
- codes = ['_'] + [line.rstrip() for line in f] # '_' for pad
- code_dict = {c: i for i, c in enumerate(codes)}
-
- if add_sos:
- code_dict[SOS_TOK] = len(code_dict)
- if add_eos:
- code_dict[EOS_TOK] = len(code_dict)
- assert(set(code_dict.values()) == set(range(len(code_dict))))
-
- return code_dict
-
-
-def load_obs_label_dict(path):
- if not path:
- return {}
- with open(path, 'r') as f:
- obs_labels = [line.rstrip() for line in f]
- return {c: i for i, c in enumerate(obs_labels)}
-
-
-# A simple timer class inspired from `tnt.TimeMeter`
-class CudaTimer:
- def __init__(self, keys):
- self.keys = keys
- self.reset()
-
- def start(self, key):
- s = torch.cuda.Event(enable_timing=True)
- s.record()
- self.start_events[key].append(s)
- return self
-
- def stop(self, key):
- e = torch.cuda.Event(enable_timing=True)
- e.record()
- self.end_events[key].append(e)
- return self
-
- def reset(self):
- self.start_events = collections.defaultdict(list)
- self.end_events = collections.defaultdict(list)
- self.running_times = collections.defaultdict(float)
- self.n = collections.defaultdict(int)
- return self
-
- def value(self):
- self._synchronize()
- return {k: self.running_times[k] / self.n[k] for k in self.keys}
-
- def _synchronize(self):
- torch.cuda.synchronize()
- for k in self.keys:
- starts = self.start_events[k]
- ends = self.end_events[k]
- if len(starts) == 0:
- raise ValueError("Trying to divide by zero in TimeMeter")
- if len(ends) != len(starts):
- raise ValueError("Call stop before checking value!")
- time = 0
- for start, end in zip(starts, ends):
- time += start.elapsed_time(end)
- self.running_times[k] += time * 1e-3
- self.n[k] += len(starts)
- self.start_events = collections.defaultdict(list)
- self.end_events = collections.defaultdict(list)
-
-
-# Used to measure the time taken for multiple events
-class Timer:
- def __init__(self, keys):
- self.keys = keys
- self.n = {}
- self.running_time = {}
- self.total_time = {}
- self.reset()
-
- def start(self, key):
- self.running_time[key] = time.time()
- return self
-
- def stop(self, key):
- self.total_time[key] = time.time() - self.running_time[key]
- self.n[key] += 1
- self.running_time[key] = None
- return self
-
- def reset(self):
- for k in self.keys:
- self.total_time[k] = 0
- self.running_time[k] = None
- self.n[k] = 0
- return self
-
- def value(self):
- vals = {}
- for k in self.keys:
- if self.n[k] == 0:
- raise ValueError("Trying to divide by zero in TimeMeter")
- else:
- vals[k] = self.total_time[k] / self.n[k]
- return vals
-
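
For readers skimming the utilities above, `get_mask_from_lengths` builds the standard padding mask used by Tacotron-style attention. A hedged sketch of the same computation on CPU (the original allocates `ids` with `torch.cuda.LongTensor`, so it assumes a GPU is present):

```python
# CPU re-statement of get_mask_from_lengths above, for illustration only.
import torch

lengths = torch.tensor([3, 1, 2])      # valid (non-padded) steps per sequence
max_len = int(lengths.max())
ids = torch.arange(0, max_len)
mask = ids < lengths.unsqueeze(1)      # True where a position is real, False where padded
print(mask)
# tensor([[ True,  True,  True],
#         [ True, False, False],
#         [ True,  True, False]])
```
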
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/__init__.py
deleted file mode 100644
index 1c5189c0f7fb4d66077d9d6498cb78cacff76de8..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .berard import * # noqa
-from .convtransformer import * # noqa
-from .s2t_transformer import * # noqa
-from .xm_transformer import * # noqa
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/adaptive_span/adaptive_span_model.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/adaptive_span/adaptive_span_model.py
deleted file mode 100644
index d96c95b85dbcf29e9384cc6d8d9630d2489991b2..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/adaptive_span/adaptive_span_model.py
+++ /dev/null
@@ -1,263 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from fairseq.modules.layer_norm import LayerNorm
-
-from .adaptive_span_attention import AdaptiveSpan
-
-# Size notations:
-# B = batch_size, H = d_model, M = block_size, L = attn_span
-
-
-def _skew(X, pad_value):
- """shift every row 1 step to right"""
- # X = B x M x L
- B, M, L = X.size()
- X = F.pad(X, (0, M + 1), value=pad_value) # B x M x (L+M+1)
- X = X.view(B, -1) # B x ML+MM+M
- X = X[:, :-M] # B x ML+MM
- X = X.view(B, M, M + L) # B x M x L+M
- return X
-
-
-def _unskew(X):
- """reverse _skew operation"""
- # X = B x M x L+M
- B, M, L = X.size()
- L -= M
- X = X.view(B, -1) # B x ML+MM
- X = F.pad(X, (0, M)) # B x ML+MM+M
- X = X.view(B, M, M + L + 1) # B x M x L+M+1
- X = X[:, :, :L] # B x M x L
- return X
-
-
-class SeqAttention(nn.Module):
- """Sequential self-attention layer.
- Each token will attend to its previous fixed number of steps.
- Note that attention doesn't include the current step itself.
- """
-
- def __init__(self, d_model, n_head, attn_span, dropout, adapt_span_layer, **kargs):
- nn.Module.__init__(self)
- self.dropout = nn.Dropout(dropout)
- self.d_model = d_model # size of a single head
- self.attn_span = attn_span
- self.adaptive_span = AdaptiveSpan(
- attn_span=attn_span,
- n_head=n_head,
- adapt_span_layer=adapt_span_layer,
- **kargs
- )
-
- def forward(self, query, key, value, key_pe):
- # query size = B x M x H
- # key, value sizes = B x (M+L) x H
-
- key, value, key_pe = self.adaptive_span.trim_memory(query, key, value, key_pe)
-
- # compute attention from context
- # B x M (dest) x (M+L) (src)
- attn_cont = torch.matmul(query, key.transpose(-1, -2))
- attn_cont = _unskew(attn_cont) # B x M x L
-
- # compute the effect of position embedding
- attn_pos = torch.matmul(query, key_pe) # B x M x L_pos
- attn = attn_cont + attn_pos
-
- attn = attn / math.sqrt(self.d_model) # B x M X L_pos
-
- attn = F.softmax(attn.float(), dim=-1).type_as(attn)
-
- # trim attention lengths according to the learned span
- attn = self.adaptive_span(attn)
-
- attn = self.dropout(attn) # B x M X L_pos
-
- attn_cont = _skew(attn, 0) # B x M X (L+M)
- out = torch.matmul(attn_cont, value) # B x M x H
- return out
-
- def get_cache_size(self):
- return self.adaptive_span.get_cache_size()
-
-
-class MultiHeadSeqAttention(nn.Module):
- def __init__(self, d_model, n_head, **kargs):
- nn.Module.__init__(self)
- assert d_model % n_head == 0
- self.n_head = n_head
- self.head_dim = d_model // n_head
- self.attn = SeqAttention(d_model=self.head_dim, n_head=n_head, **kargs)
- self.proj_query = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_query.weight)
- self.proj_out = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_out.weight)
- self.proj_val = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_val.weight)
- self.proj_key = nn.Linear(d_model, d_model, bias=False)
- nn.init.xavier_normal_(self.proj_key.weight)
-
- def head_reshape(self, x):
- K = self.n_head
- D = self.head_dim
- x = x.view(x.size()[:-1] + (K, D)) # B x (M+L) x K x D
- x = x.transpose(1, 2).contiguous() # B x K x (M+L) x D
- x = x.view(-1, x.size(-2), x.size(-1)) # B_K x (M+L) x D
- return x
-
- def forward(self, query, key, value, key_pe):
- B = query.size(0)
- K = self.n_head
- D = self.head_dim
- M = query.size(1)
-
- query = self.proj_query(query)
- query = self.head_reshape(query)
- value = self.proj_val(value)
- value = self.head_reshape(value)
- key = self.proj_key(key)
- key = self.head_reshape(key)
-
- out = self.attn(query, key, value, key_pe) # B_K x M x D
- out = out.view(B, K, M, D) # B x K x M x D
- out = out.transpose(1, 2).contiguous() # B x M x K x D
- out = out.view(B, M, -1) # B x M x K_D
- out = self.proj_out(out)
- return out
-
-
-class FeedForwardLayer(nn.Module):
- def __init__(self, d_model, d_inner, dropout, **kargs):
- nn.Module.__init__(self)
- self.fc1 = nn.Linear(d_model, d_inner)
- self.fc2 = nn.Linear(d_inner, d_model)
- nn.init.xavier_uniform_(self.fc1.weight)
- nn.init.xavier_uniform_(self.fc2.weight)
- self.dropout = nn.Dropout(dropout)
-
- def forward(self, h):
- h1 = F.relu(self.fc1(h))
- h1 = self.dropout(h1)
- h2 = self.fc2(h1)
- return h2
-
-
-class TransformerSeqLayer(nn.Module):
- def __init__(self, d_model, **kargs):
- nn.Module.__init__(self)
- self.attn = MultiHeadSeqAttention(d_model=d_model, **kargs)
- self.norm1 = LayerNorm(d_model)
- self.ff = FeedForwardLayer(d_model=d_model, **kargs)
- self.norm2 = LayerNorm(d_model)
-
- def forward(self, h, h_cache, key_pe):
- # h = B x M x H
- # h_cache = B x L x H
- h_all = torch.cat([h_cache, h], dim=1) # B x (M+L) x H
- attn_out = self.attn(h, h_all, h_all, key_pe)
- h = self.norm1(h + attn_out) # B x M x H
- if self.ff is not None:
- ff_out = self.ff(h)
- out = self.norm2(h + ff_out) # B x M x H
- else:
- out = h
- return out
-
- def get_cache_size(self):
- return self.attn.attn.get_cache_size()
-
-
-class TransformerSeq(nn.Module):
- def __init__(
- self,
- vocab_size,
- d_model,
- n_head,
- n_layer,
- attn_span,
- emb_dropout,
- aux_loss_scaler,
- adapt_span_layer,
- **kargs
- ):
- nn.Module.__init__(self)
- # token embeddings
- self.in_emb = nn.Embedding(vocab_size, d_model)
- nn.init.normal_(self.in_emb.weight, mean=0, std=d_model ** -0.5)
- self.out_emb = nn.Linear(d_model, vocab_size)
- self.aux_loss_scaler = aux_loss_scaler
- if emb_dropout > 0:
- self.emb_dropout = nn.Dropout(emb_dropout)
- else:
- self.emb_dropout = None
- # position embeddings
- self.key_pe = nn.Parameter(torch.randn(1, d_model // n_head, attn_span))
-
- self.layers = nn.ModuleList()
- self.layers.extend(
- TransformerSeqLayer(
- d_model=d_model,
- n_head=n_head,
- attn_span=attn_span,
- adapt_span_layer=adapt_span_layer,
- **kargs
- )
- for _ in range(n_layer)
- )
-
- def forward(self, x, h_cache, target=None):
- # x size = B x M
- block_size = x.size(1)
- h = self.in_emb(x) # B x M x H
- if self.emb_dropout is not None:
- h = self.emb_dropout(h)
-
- h_cache_next = []
- for l, layer in enumerate(self.layers):
- cache_size = layer.attn.attn.get_cache_size()
- if cache_size > block_size:
- h_cache_next_l = torch.cat(
- [h_cache[l][:, -cache_size + block_size :, :], h], dim=1
- ).detach()
- else:
- h_cache_next_l = h[:, -cache_size:, :].detach()
- h_cache_next.append(h_cache_next_l)
- h = layer(h, h_cache[l], self.key_pe) # B x M x H
-
- if self.emb_dropout is not None:
- h = self.emb_dropout(h)
-
- out = F.log_softmax(self.out_emb(h).float(), dim=-1).type_as(h)
- dummy_loss = None
-
- return out, h_cache_next, dummy_loss
-
- def get_aux_loss(self):
- loss = 0.0
- for layer in self.layers:
- loss += layer.attn.attn.adaptive_span.get_loss()
- return self.aux_loss_scaler * loss
-
- def get_current_max_span(self):
- max_span = 0.0
- for layer in self.layers:
- max_span = max(
- max_span, layer.attn.attn.adaptive_span.get_current_max_span()
- )
- return max_span
-
- def get_current_avg_span(self):
- avg_span = 0.0
- for layer in self.layers:
- avg_span += layer.attn.attn.adaptive_span.get_current_avg_span()
- return avg_span / len(self.layers)
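
The `_skew`/`_unskew` helpers at the top of the file above are the indexing trick that lets every query row attend to its own shifted window, and their shape comments (`B x M x L` to `B x M x (L+M)`) are easiest to see on a concrete tensor. A self-contained round-trip check, with the two helpers copied (lightly condensed) so the snippet runs on its own:

```python
# Round-trip check for the _skew/_unskew pair above (copied so this runs standalone).
import torch
import torch.nn.functional as F


def _skew(X, pad_value):
    B, M, L = X.size()
    X = F.pad(X, (0, M + 1), value=pad_value)  # B x M x (L+M+1)
    X = X.view(B, -1)[:, :-M]                  # drop the trailing pad values
    return X.view(B, M, M + L)                 # B x M x (L+M)


def _unskew(X):
    B, M, L = X.size()
    L -= M
    X = X.view(B, -1)
    X = F.pad(X, (0, M))
    X = X.view(B, M, M + L + 1)
    return X[:, :, :L]                         # back to B x M x L


x = torch.arange(6.0).view(1, 2, 3)            # B=1, M=2, L=3
print(_skew(x, 0).shape)                       # torch.Size([1, 2, 5]): row m is shifted by m
assert torch.equal(_unskew(_skew(x, 0)), x)    # the two operations are inverse
```
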
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/denoiser/resample.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/denoiser/resample.py
deleted file mode 100644
index 1222addc424d4f898d602009e4032907241aadfe..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/preprocessing/denoiser/resample.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import math
-
-import torch as th
-from torch.nn import functional as F
-
-
-def sinc(t):
- """sinc.
-
- :param t: the input tensor
- """
- return th.where(t == 0, th.tensor(1., device=t.device, dtype=t.dtype),
- th.sin(t) / t)
-
-
-def kernel_upsample2(zeros=56):
-    """Build the sinc interpolation kernel used for factor-of-2 upsampling."""
- win = th.hann_window(4 * zeros + 1, periodic=False)
- winodd = win[1::2]
- t = th.linspace(-zeros + 0.5, zeros - 0.5, 2 * zeros)
- t *= math.pi
- kernel = (sinc(t) * winodd).view(1, 1, -1)
- return kernel
-
-
-def upsample2(x, zeros=56):
- """
- Upsampling the input by 2 using sinc interpolation.
- Smith, Julius, and Phil Gossett. "A flexible sampling-rate conversion method."
- ICASSP'84. IEEE International Conference on Acoustics, Speech, and Signal Processing.
- Vol. 9. IEEE, 1984.
- """
- *other, time = x.shape
- kernel = kernel_upsample2(zeros).to(x)
- out = F.conv1d(x.view(-1, 1, time), kernel, padding=zeros)[..., 1:].view(
- *other, time
- )
- y = th.stack([x, out], dim=-1)
- return y.view(*other, -1)
-
-
-def kernel_downsample2(zeros=56):
-    """Build the sinc kernel used for factor-of-2 downsampling."""
- win = th.hann_window(4 * zeros + 1, periodic=False)
- winodd = win[1::2]
- t = th.linspace(-zeros + 0.5, zeros - 0.5, 2 * zeros)
- t.mul_(math.pi)
- kernel = (sinc(t) * winodd).view(1, 1, -1)
- return kernel
-
-
-def downsample2(x, zeros=56):
- """
- Downsampling the input by 2 using sinc interpolation.
- Smith, Julius, and Phil Gossett. "A flexible sampling-rate conversion method."
- ICASSP'84. IEEE International Conference on Acoustics, Speech, and Signal Processing.
- Vol. 9. IEEE, 1984.
- """
- if x.shape[-1] % 2 != 0:
- x = F.pad(x, (0, 1))
- xeven = x[..., ::2]
- xodd = x[..., 1::2]
- *other, time = xodd.shape
- kernel = kernel_downsample2(zeros).to(x)
- out = xeven + F.conv1d(
- xodd.view(-1, 1, time), kernel, padding=zeros
- )[..., :-1].view(*other, time)
- return out.view(*other, -1).mul(0.5)
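
A property of the resampler above that is worth making explicit: `upsample2` interleaves the original samples with sinc-interpolated ones, so the even indices of its output are exactly the input. A self-contained check of that property (only the upsampling path is copied, lightly condensed, to keep the snippet short):

```python
# Checks that upsample2 doubles the length and keeps the input at even indices.
import math

import torch as th
from torch.nn import functional as F


def sinc(t):
    return th.where(t == 0, th.tensor(1., device=t.device, dtype=t.dtype),
                    th.sin(t) / t)


def kernel_upsample2(zeros=56):
    win = th.hann_window(4 * zeros + 1, periodic=False)
    winodd = win[1::2]
    t = th.linspace(-zeros + 0.5, zeros - 0.5, 2 * zeros)
    t *= math.pi
    return (sinc(t) * winodd).view(1, 1, -1)


def upsample2(x, zeros=56):
    *other, time = x.shape
    kernel = kernel_upsample2(zeros).to(x)
    out = F.conv1d(x.view(-1, 1, time), kernel, padding=zeros)[..., 1:].view(*other, time)
    return th.stack([x, out], dim=-1).view(*other, -1)


x = th.randn(1, 8000)             # a mono signal of 8000 samples
y = upsample2(x)
assert y.shape[-1] == 2 * x.shape[-1]
assert th.equal(y[..., ::2], x)   # even output samples are the untouched originals
print(y.shape)                    # torch.Size([1, 16000])
```
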
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py
deleted file mode 100644
index cf08d1fe4b470477b724aa8d770d91c0cac35a0e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import List, Tuple
-
-
-def get_audio_files(manifest_path: str) -> Tuple[str, List[str], List[int]]:
- fnames, sizes = [], []
- with open(manifest_path, "r") as f:
- root_dir = f.readline().strip()
- for line in f:
- items = line.strip().split("\t")
- assert (
- len(items) == 2
- ), f"File must have two columns separated by tab. Got {line}"
- fnames.append(items[0])
- sizes.append(int(items[1]))
- return root_dir, fnames, sizes
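
The helper above encodes a small manifest convention: the first line is the audio root directory and every following line is `<relative_path><TAB><num_samples>`. A self-contained demo of that layout (the function is copied so the snippet runs on its own; the paths and sizes are made up):

```python
# Demonstrates the manifest format expected by get_audio_files above.
import tempfile
from typing import List, Tuple


def get_audio_files(manifest_path: str) -> Tuple[str, List[str], List[int]]:
    fnames, sizes = [], []
    with open(manifest_path, "r") as f:
        root_dir = f.readline().strip()
        for line in f:
            items = line.strip().split("\t")
            assert len(items) == 2, f"File must have two columns separated by tab. Got {line}"
            fnames.append(items[0])
            sizes.append(int(items[1]))
    return root_dir, fnames, sizes


with tempfile.NamedTemporaryFile("w", suffix=".tsv", delete=False) as f:
    f.write("/data/audio\n")
    f.write("speaker1/utt1.wav\t16000\n")
    f.write("speaker1/utt2.wav\t24000\n")
    manifest = f.name

print(get_audio_files(manifest))
# ('/data/audio', ['speaker1/utt1.wav', 'speaker1/utt2.wav'], [16000, 24000])
```
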
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/__init__.py
deleted file mode 100644
index 06cec18183ca14cd534d14558e8b44e25f3e69d5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/wav2vec/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .wav2vec import * # noqa
-from .wav2vec2 import * # noqa
-from .wav2vec2_asr import * # noqa
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/fairseq_task.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/fairseq_task.py
deleted file mode 100644
index d671f17cf16a2493b3615b036d9d986e8b19736e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/fairseq_task.py
+++ /dev/null
@@ -1,668 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import warnings
-from argparse import Namespace
-from typing import Any, Callable, Dict, List
-
-import torch
-from fairseq import metrics, search, tokenizer, utils
-from fairseq.data import Dictionary, FairseqDataset, data_utils, encoders, iterators
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.utils import gen_parser_from_dataclass
-from fairseq.optim.amp_optimizer import AMPOptimizer
-from omegaconf import DictConfig
-
-
-logger = logging.getLogger(__name__)
-
-
-class StatefulContainer(object):
-
- def __init__(self):
- self._state = dict()
- self._factories = dict()
-
- def add_factory(self, name, factory: Callable[[], Any]):
- self._factories[name] = factory
-
- def merge_state_dict(self, state_dict: Dict[str, Any]):
- self._state.update(state_dict)
-
- @property
- def state_dict(self) -> Dict[str, Any]:
- return self._state
-
- def __getattr__(self, name):
- if name not in self._state and name in self._factories:
- self._state[name] = self._factories[name]()
-
- if name in self._state:
- return self._state[name]
-
- raise AttributeError(f"Task state has no factory for attribute {name}")
-
-
-class FairseqTask(object):
- """
- Tasks store dictionaries and provide helpers for loading/iterating over
- Datasets, initializing the Model/Criterion and calculating the loss.
-
- Tasks have limited statefulness. In particular, state that needs to be
- saved to/loaded from checkpoints needs to be stored in the `self.state`
- :class:`StatefulContainer` object. For example::
-
- self.state.add_factory("dictionary", self.load_dictionary)
- print(self.state.dictionary) # calls self.load_dictionary()
-
- This is necessary so that when loading checkpoints, we can properly
- recreate the task state after initializing the task instance.
- """
-
- @classmethod
- def add_args(cls, parser):
- """Add task-specific arguments to the parser."""
- dc = getattr(cls, "__dataclass", None)
- if dc is not None:
- gen_parser_from_dataclass(parser, dc())
-
- @staticmethod
- def logging_outputs_can_be_summed(criterion) -> bool:
- """
- Whether the logging outputs returned by `train_step` and `valid_step` can
- be summed across workers prior to calling `aggregate_logging_outputs`.
-        Setting this to True will improve distributed training speed.
- """
- return criterion.logging_outputs_can_be_summed()
-
- def __init__(self, cfg: FairseqDataclass, **kwargs):
- self.cfg = cfg
- self.datasets = dict()
- self.dataset_to_epoch_iter = dict()
- self.state = StatefulContainer()
-
- @classmethod
- def load_dictionary(cls, filename):
- """Load the dictionary from the filename
-
- Args:
- filename (str): the filename
- """
- return Dictionary.load(filename)
-
- @classmethod
- def build_dictionary(
- cls, filenames, workers=1, threshold=-1, nwords=-1, padding_factor=8
- ):
- """Build the dictionary
-
- Args:
- filenames (list): list of filenames
- workers (int): number of concurrent workers
- threshold (int): defines the minimum word count
- nwords (int): defines the total number of words in the final dictionary,
- including special symbols
- padding_factor (int): can be used to pad the dictionary size to be a
- multiple of 8, which is important on some hardware (e.g., Nvidia
- Tensor Cores).
- """
- d = Dictionary()
- for filename in filenames:
- Dictionary.add_file_to_dictionary(
- filename, d, tokenizer.tokenize_line, workers
- )
- d.finalize(threshold=threshold, nwords=nwords, padding_factor=padding_factor)
- return d
-
- @classmethod
- def setup_task(cls, cfg: DictConfig, **kwargs):
- """Setup the task (e.g., load dictionaries).
-
- Args:
- cfg (omegaconf.DictConfig): parsed command-line arguments
- """
- return cls(cfg, **kwargs)
-
- def has_sharded_data(self, split):
- return os.pathsep in getattr(self.cfg, "data", "")
-
- def load_dataset(
- self,
- split: str,
- combine: bool = False,
- task_cfg: FairseqDataclass = None,
- **kwargs
- ):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- combine (bool): combines a split segmented into pieces into one dataset
- task_cfg (FairseqDataclass): optional task configuration stored in the checkpoint that can be used
- to load datasets
- """
- raise NotImplementedError
-
- def dataset(self, split):
- """
- Return a loaded dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
-
- Returns:
- a :class:`~fairseq.data.FairseqDataset` corresponding to *split*
- """
- from fairseq.data import FairseqDataset
-
- if split not in self.datasets:
- raise KeyError("Dataset not loaded: " + split)
- if not isinstance(self.datasets[split], FairseqDataset):
- raise TypeError("Datasets are expected to be of type FairseqDataset")
- return self.datasets[split]
-
- def filter_indices_by_size(
- self, indices, dataset, max_positions=None, ignore_invalid_inputs=False
- ):
- """
- Filter examples that are too large
-
- Args:
- indices (np.array): original array of sample indices
- dataset (~fairseq.data.FairseqDataset): dataset to batch
- max_positions (optional): max sentence length supported by the
- model (default: None).
- ignore_invalid_inputs (bool, optional): don't raise Exception for
- sentences that are too long (default: False).
- Returns:
- np.array: array of filtered sample indices
- """
- indices, ignored = dataset.filter_indices_by_size(indices, max_positions)
- if len(ignored) > 0:
- if not ignore_invalid_inputs:
- raise Exception(
- (
- "Size of sample #{} is invalid (={}) since max_positions={}, "
- "skip this example with --skip-invalid-size-inputs-valid-test"
- ).format(ignored[0], dataset.size(ignored[0]), max_positions)
- )
- logger.warning(
- (
- "{:,} samples have invalid sizes and will be skipped, "
- "max_positions={}, first few sample ids={}"
- ).format(len(ignored), max_positions, ignored[:10])
- )
- return indices
-
- def can_reuse_epoch_itr(self, dataset):
- # We can reuse the epoch iterator across epochs as long as the dataset
- # hasn't disabled it. We default to ``False`` here, although in practice
- # this will be ``True`` for most datasets that inherit from
- # ``FairseqDataset`` due to the base implementation there.
- return getattr(dataset, "can_reuse_epoch_itr_across_epochs", False)
-
- def get_batch_iterator(
- self,
- dataset,
- max_tokens=None,
- max_sentences=None,
- max_positions=None,
- ignore_invalid_inputs=False,
- required_batch_size_multiple=1,
- seed=1,
- num_shards=1,
- shard_id=0,
- num_workers=0,
- epoch=1,
- data_buffer_size=0,
- disable_iterator_cache=False,
- ):
- """
- Get an iterator that yields batches of data from the given dataset.
-
- Args:
- dataset (~fairseq.data.FairseqDataset): dataset to batch
- max_tokens (int, optional): max number of tokens in each batch
- (default: None).
- max_sentences (int, optional): max number of sentences in each
- batch (default: None).
- max_positions (optional): max sentence length supported by the
- model (default: None).
- ignore_invalid_inputs (bool, optional): don't raise Exception for
- sentences that are too long (default: False).
- required_batch_size_multiple (int, optional): require batch size to
- be a multiple of N (default: 1).
- seed (int, optional): seed for random number generator for
- reproducibility (default: 1).
- num_shards (int, optional): shard the data iterator into N
- shards (default: 1).
- shard_id (int, optional): which shard of the data iterator to
- return (default: 0).
- num_workers (int, optional): how many subprocesses to use for data
- loading. 0 means the data will be loaded in the main process
- (default: 0).
- epoch (int, optional): the epoch to start the iterator from
- (default: 1).
- data_buffer_size (int, optional): number of batches to
- preload (default: 0).
- disable_iterator_cache (bool, optional): don't cache the
- EpochBatchIterator (ignores `FairseqTask::can_reuse_epoch_itr`)
- (default: False).
- Returns:
- ~fairseq.iterators.EpochBatchIterator: a batched iterator over the
- given dataset split
- """
- can_reuse_epoch_itr = not disable_iterator_cache and self.can_reuse_epoch_itr(
- dataset
- )
- if can_reuse_epoch_itr and dataset in self.dataset_to_epoch_iter:
- logger.debug("reusing EpochBatchIterator for epoch {}".format(epoch))
- return self.dataset_to_epoch_iter[dataset]
-
- assert isinstance(dataset, FairseqDataset)
-
- # initialize the dataset with the correct starting epoch
- dataset.set_epoch(epoch)
-
- # get indices ordered by example size
- with data_utils.numpy_seed(seed):
- indices = dataset.ordered_indices()
-
- # filter examples that are too large
- if max_positions is not None:
- indices = self.filter_indices_by_size(
- indices, dataset, max_positions, ignore_invalid_inputs
- )
-
- # create mini-batches with given size constraints
- batch_sampler = dataset.batch_by_size(
- indices,
- max_tokens=max_tokens,
- max_sentences=max_sentences,
- required_batch_size_multiple=required_batch_size_multiple,
- )
-
- # return a reusable, sharded iterator
- epoch_iter = iterators.EpochBatchIterator(
- dataset=dataset,
- collate_fn=dataset.collater,
- batch_sampler=batch_sampler,
- seed=seed,
- num_shards=num_shards,
- shard_id=shard_id,
- num_workers=num_workers,
- epoch=epoch,
- buffer_size=data_buffer_size,
- )
-
- if can_reuse_epoch_itr:
- self.dataset_to_epoch_iter[dataset] = epoch_iter
-
- return epoch_iter
-
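As a rough sketch of how this iterator is typically consumed; the task instance, split name, and size limits below are assumptions rather than values from this repository.

```python
# Hedged sketch: requesting and consuming one epoch of batches from a task.
task.load_dataset("train")
epoch_itr = task.get_batch_iterator(
    dataset=task.dataset("train"),
    max_tokens=4096,                 # token budget per batch
    max_positions=(1024, 1024),      # drop over-length source/target pairs
    ignore_invalid_inputs=True,
    seed=1,
    num_workers=2,
)
for sample in epoch_itr.next_epoch_itr(shuffle=True):
    pass  # each `sample` is a collated mini-batch produced by dataset.collater
```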
- def build_model(self, cfg: FairseqDataclass):
- """
- Build the :class:`~fairseq.models.BaseFairseqModel` instance for this
- task.
-
- Args:
- cfg (FairseqDataclass): configuration object
-
- Returns:
- a :class:`~fairseq.models.BaseFairseqModel` instance
- """
- from fairseq import models, quantization_utils
-
- model = models.build_model(cfg, self)
- model = quantization_utils.quantize_model_scalar(model, cfg)
- return model
-
- def build_criterion(self, cfg: DictConfig):
- """
- Build the :class:`~fairseq.criterions.FairseqCriterion` instance for
- this task.
-
- Args:
- cfg (omegaconf.DictConfig): configuration object
-
- Returns:
- a :class:`~fairseq.criterions.FairseqCriterion` instance
- """
- from fairseq import criterions
-
- return criterions.build_criterion(cfg, self)
-
- def build_generator(
- self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None, prefix_allowed_tokens_fn=None,
- ):
- """
- Build a :class:`~fairseq.SequenceGenerator` instance for this
- task.
-
- Args:
- models (List[~fairseq.models.FairseqModel]): ensemble of models
- args (fairseq.dataclass.configs.GenerationConfig):
- configuration object (dataclass) for generation
- extra_gen_cls_kwargs (Dict[str, Any]): extra options to pass
- through to SequenceGenerator
- prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]]):
- If provided, this function constrains the beam search to
- allowed tokens only at each step. The provided function
- should take 2 arguments: the batch ID (`batch_id: int`)
- and a unidimensional tensor of token ids (`inputs_ids:
- torch.Tensor`). It has to return a `List[int]` with the
- allowed tokens for the next generation step conditioned
- on the previously generated tokens (`inputs_ids`) and
- the batch ID (`batch_id`). This argument is useful for
- constrained generation conditioned on the prefix, as
- described in "Autoregressive Entity Retrieval"
- (https://arxiv.org/abs/2010.00904) and
- https://github.com/facebookresearch/GENRE.
- """
- if getattr(args, "score_reference", False):
- from fairseq.sequence_scorer import SequenceScorer
-
- return SequenceScorer(
- self.target_dictionary,
- compute_alignment=getattr(args, "print_alignment", False),
- )
-
- from fairseq.sequence_generator import (
- SequenceGenerator,
- SequenceGeneratorWithAlignment,
- )
-
- # Choose search strategy. Defaults to Beam Search.
- sampling = getattr(args, "sampling", False)
- sampling_topk = getattr(args, "sampling_topk", -1)
- sampling_topp = getattr(args, "sampling_topp", -1.0)
- diverse_beam_groups = getattr(args, "diverse_beam_groups", -1)
- diverse_beam_strength = getattr(args, "diverse_beam_strength", 0.5)
- match_source_len = getattr(args, "match_source_len", False)
- diversity_rate = getattr(args, "diversity_rate", -1)
- constrained = getattr(args, "constraints", False)
- if prefix_allowed_tokens_fn is None:
- prefix_allowed_tokens_fn = getattr(args, "prefix_allowed_tokens_fn", None)
- if (
- sum(
- int(cond)
- for cond in [
- sampling,
- diverse_beam_groups > 0,
- match_source_len,
- diversity_rate > 0,
- ]
- )
- > 1
- ):
- raise ValueError("Provided Search parameters are mutually exclusive.")
- assert sampling_topk < 0 or sampling, "--sampling-topk requires --sampling"
- assert sampling_topp < 0 or sampling, "--sampling-topp requires --sampling"
-
- if sampling:
- search_strategy = search.Sampling(
- self.target_dictionary, sampling_topk, sampling_topp
- )
- elif diverse_beam_groups > 0:
- search_strategy = search.DiverseBeamSearch(
- self.target_dictionary, diverse_beam_groups, diverse_beam_strength
- )
- elif match_source_len:
- # this is useful for tagging applications where the output
- # length should match the input length, so we hardcode the
- # length constraints for simplicity
- search_strategy = search.LengthConstrainedBeamSearch(
- self.target_dictionary,
- min_len_a=1,
- min_len_b=0,
- max_len_a=1,
- max_len_b=0,
- )
- elif diversity_rate > -1:
- search_strategy = search.DiverseSiblingsSearch(
- self.target_dictionary, diversity_rate
- )
- elif constrained:
- search_strategy = search.LexicallyConstrainedBeamSearch(
- self.target_dictionary, args.constraints
- )
- elif prefix_allowed_tokens_fn:
- search_strategy = search.PrefixConstrainedBeamSearch(
- self.target_dictionary, prefix_allowed_tokens_fn
- )
- else:
- search_strategy = search.BeamSearch(self.target_dictionary)
-
- extra_gen_cls_kwargs = extra_gen_cls_kwargs or {}
- if seq_gen_cls is None:
- if getattr(args, "print_alignment", False):
- seq_gen_cls = SequenceGeneratorWithAlignment
- extra_gen_cls_kwargs["print_alignment"] = args.print_alignment
- else:
- seq_gen_cls = SequenceGenerator
-
- return seq_gen_cls(
- models,
- self.target_dictionary,
- beam_size=getattr(args, "beam", 5),
- max_len_a=getattr(args, "max_len_a", 0),
- max_len_b=getattr(args, "max_len_b", 200),
- min_len=getattr(args, "min_len", 1),
- normalize_scores=(not getattr(args, "unnormalized", False)),
- len_penalty=getattr(args, "lenpen", 1),
- unk_penalty=getattr(args, "unkpen", 0),
- temperature=getattr(args, "temperature", 1.0),
- match_source_len=getattr(args, "match_source_len", False),
- no_repeat_ngram_size=getattr(args, "no_repeat_ngram_size", 0),
- search_strategy=search_strategy,
- **extra_gen_cls_kwargs,
- )
-
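To make the `prefix_allowed_tokens_fn` contract described above concrete, here is a hedged sketch of such a callable; the `prefix_index` lookup is a hypothetical stand-in for whatever trie or table a task would actually build.

```python
# Hedged sketch of a prefix_allowed_tokens_fn for constrained decoding.
import torch

def prefix_allowed_tokens_fn(batch_id: int, input_ids: torch.Tensor) -> list:
    # prefix_index is a hypothetical mapping from generated prefixes to the
    # token ids that may legally follow them (e.g. built from an entity trie).
    allowed = prefix_index.get(tuple(input_ids.tolist()), [])
    # If the prefix is unknown, fall back to the full target vocabulary.
    return allowed if allowed else list(range(len(task.target_dictionary)))

generator = task.build_generator(
    models, args, prefix_allowed_tokens_fn=prefix_allowed_tokens_fn
)
```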
- def train_step(
- self, sample, model, criterion, optimizer, update_num, ignore_grad=False, **extra_kwargs
- ):
- """
- Do forward and backward, and return the loss as computed by *criterion*
- for the given *model* and *sample*.
-
- Args:
- sample (dict): the mini-batch. The format is defined by the
- :class:`~fairseq.data.FairseqDataset`.
- model (~fairseq.models.BaseFairseqModel): the model
- criterion (~fairseq.criterions.FairseqCriterion): the criterion
- optimizer (~fairseq.optim.FairseqOptimizer): the optimizer
- update_num (int): the current update
- ignore_grad (bool): multiply loss by 0 if this is set to True
-
- Returns:
- tuple:
- - the loss
- - the sample size, which is used as the denominator for the
- gradient
- - logging outputs to display while training
- """
- model.train()
- model.set_num_updates(update_num)
- with torch.autograd.profiler.record_function("forward"):
- with torch.cuda.amp.autocast(enabled=(isinstance(optimizer, AMPOptimizer))):
- loss, sample_size, logging_output = criterion(model, sample, update_num=update_num)
- if ignore_grad:
- loss *= 0
- with torch.autograd.profiler.record_function("backward"):
- optimizer.backward(loss)
- return loss, sample_size, logging_output
-
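For context, a hedged sketch of how a training loop might drive a single update through these hooks; `sample`, `model`, `criterion`, and `optimizer` are assumed to be constructed elsewhere (normally by fairseq's Trainer).

```python
# Hedged sketch of one training update driven through the task API.
loss, sample_size, logging_output = task.train_step(
    sample, model, criterion, optimizer, update_num=0
)
task.optimizer_step(optimizer, model, update_num=0)
task.reduce_metrics([logging_output], criterion)
```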
- def valid_step(self, sample, model, criterion, **extra_kwargs):
- model.eval()
- with torch.no_grad():
- loss, sample_size, logging_output = criterion(model, sample)
- return loss, sample_size, logging_output
-
- def optimizer_step(self, optimizer, model, update_num):
- optimizer.step()
-
- def build_dataset_for_inference(
- self, src_tokens: List[torch.Tensor], src_lengths: List[int], **kwargs
- ) -> torch.utils.data.Dataset:
- raise NotImplementedError
-
- def inference_step(
- self, generator, models, sample, prefix_tokens=None, constraints=None
- ):
- with torch.no_grad():
- return generator.generate(
- models, sample, prefix_tokens=prefix_tokens, constraints=constraints
- )
-
- def begin_epoch(self, epoch, model):
- """Hook function called before the start of each epoch."""
- pass
-
- def begin_valid_epoch(self, epoch, model):
- """Hook function called before the start of each validation epoch."""
- pass
-
- def aggregate_logging_outputs(self, logging_outputs, criterion):
- """[deprecated] Aggregate logging outputs from data parallel training."""
- utils.deprecation_warning(
- "The aggregate_logging_outputs API is deprecated. "
- "Please use the reduce_metrics API instead."
- )
- with metrics.aggregate() as agg:
- self.reduce_metrics(logging_outputs, criterion)
- return agg.get_smoothed_values()
-
- def reduce_metrics(self, logging_outputs, criterion):
- """Aggregate logging outputs from data parallel training."""
- # backward compatibility for tasks that override aggregate_logging_outputs
- base_func = FairseqTask.aggregate_logging_outputs
- self_func = getattr(self, "aggregate_logging_outputs").__func__
- if self_func is not base_func:
- utils.deprecation_warning(
- "Tasks should implement the reduce_metrics API. "
- "Falling back to deprecated aggregate_logging_outputs API."
- )
- agg_logging_outputs = self.aggregate_logging_outputs(
- logging_outputs, criterion
- )
- for k, v in agg_logging_outputs.items():
- metrics.log_scalar(k, v)
- return
-
- if not any("ntokens" in log for log in logging_outputs):
- warnings.warn(
- "ntokens not found in Criterion logging outputs, cannot log wpb or wps"
- )
- else:
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- metrics.log_scalar("wpb", ntokens, priority=180, round=1)
- metrics.log_speed("wps", ntokens, priority=90, round=1)
-
- if not any("nsentences" in log for log in logging_outputs):
- warnings.warn(
- "nsentences not found in Criterion logging outputs, cannot log bsz"
- )
- else:
- nsentences = sum(log.get("nsentences", 0) for log in logging_outputs)
- metrics.log_scalar("bsz", nsentences, priority=190, round=1)
-
- criterion.__class__.reduce_metrics(logging_outputs)
-
- def state_dict(self):
- if self.state is not None:
- return self.state.state_dict
- return {}
-
- def load_state_dict(self, state_dict: Dict[str, Any]):
- if self.state is not None:
- self.state.merge_state_dict(state_dict)
-
- def max_positions(self):
- """Return the max input length allowed by the task."""
- return None
-
- @property
- def source_dictionary(self):
- """Return the source :class:`~fairseq.data.Dictionary` (if applicable
- for this task)."""
- raise NotImplementedError
-
- @property
- def target_dictionary(self):
- """Return the target :class:`~fairseq.data.Dictionary` (if applicable
- for this task)."""
- raise NotImplementedError
-
- def build_tokenizer(self, args):
- """Build the pre-tokenizer for this task."""
- return encoders.build_tokenizer(args)
-
- def build_bpe(self, args):
- """Build the tokenizer for this task."""
- return encoders.build_bpe(args)
-
- def get_interactive_tokens_and_lengths(self, lines, encode_fn):
- tokens = [
- self.source_dictionary.encode_line(
- encode_fn(src_str), add_if_not_exist=False
- ).long()
- for src_str in lines
- ]
- lengths = [t.numel() for t in tokens]
- return tokens, lengths
-
-
-class LegacyFairseqTask(FairseqTask):
- def __init__(self, args: Namespace):
- super().__init__(None)
- self.args = args
- self.datasets = {}
- self.dataset_to_epoch_iter = {}
-
- @classmethod
- def setup_task(cls, args: Namespace, **kwargs):
- """Setup the task (e.g., load dictionaries).
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- """
- return cls(args, **kwargs)
-
- def has_sharded_data(self, split):
- return os.pathsep in getattr(self.args, "data", "")
-
- def build_model(self, args: Namespace):
- """
- Build the :class:`~fairseq.models.BaseFairseqModel` instance for this
- task.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
-
- Returns:
- a :class:`~fairseq.models.BaseFairseqModel` instance
- """
- from fairseq import models, quantization_utils
-
- model = models.build_model(args, self)
- model = quantization_utils.quantize_model_scalar(model, args)
- return model
-
- def build_criterion(self, args: Namespace):
- """
- Build the :class:`~fairseq.criterions.FairseqCriterion` instance for
- this task.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
-
- Returns:
- a :class:`~fairseq.criterions.FairseqCriterion` instance
- """
- from fairseq import criterions
-
- return criterions.build_criterion(args, self)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq_cli/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq_cli/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/stable_diffusion_pipeline.py b/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/stable_diffusion_pipeline.py
deleted file mode 100644
index 34ac4676d3775fabc28ff3dd6f8932d6b7f13764..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/stable_diffusion_pipeline.py
+++ /dev/null
@@ -1,848 +0,0 @@
-import inspect
-import json
-import math
-import time
-from pathlib import Path
-from typing import Callable, List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-from diffusers.configuration_utils import FrozenDict
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.pipeline_utils import DiffusionPipeline
-from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from diffusers.schedulers import (
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
-)
-from diffusers.utils import deprecate, logging
-from packaging import version
-from torch import nn
-from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
-from .upsampling import RealESRGANModel
-from .utils import get_timesteps_arr, make_video_pyav, slerp
-
-logging.set_verbosity_info()
-logger = logging.get_logger(__name__)
-
-
-class StableDiffusionWalkPipeline(DiffusionPipeline):
- r"""
- Pipeline for generating videos by interpolating Stable Diffusion's latent space.
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[
- DDIMScheduler,
- PNDMScheduler,
- LMSDiscreteScheduler,
- EulerDiscreteScheduler,
- EulerAncestralDiscreteScheduler,
- DPMSolverMultistepScheduler,
- ],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
-
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
- " file"
- )
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
-
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
- )
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["clip_sample"] = False
- scheduler._internal_dict = FrozenDict(new_config)
-
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
- version.parse(unet.config._diffusers_version).base_version
- ) < version.parse("0.9.0.dev0")
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
- deprecation_message = (
- "The configuration file of the unet has set the default `sample_size` to smaller than"
- " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the"
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
- " the `unet/config.json` file"
- )
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(unet.config)
- new_config["sample_size"] = 64
- unet._internal_dict = FrozenDict(new_config)
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- r"""
- Enable sliced attention computation.
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
- Args:
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
- `attention_head_dim` must be a multiple of `slice_size`.
- """
- if slice_size == "auto":
- if isinstance(self.unet.config.attention_head_dim, int):
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- else:
- # if `attention_head_dim` is a list, take the smallest head size
- slice_size = min(self.unet.config.attention_head_dim)
-
- self.unet.set_attention_slice(slice_size)
-
- def disable_attention_slicing(self):
- r"""
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
- back to computing attention in one step.
- """
- # set slice_size = `None` to disable `attention slicing`
- self.enable_attention_slicing(None)
-
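A brief hedged usage note for the two methods above; the `pipe` variable is an assumption for an already-loaded pipeline instance.

```python
# Hedged sketch: trade a little speed for lower peak memory during sampling.
pipe.enable_attention_slicing()   # "auto": roughly halves attention memory per step
# ... run the pipeline ...
pipe.disable_attention_slicing()  # restore single-pass attention
```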
- @torch.no_grad()
- def __call__(
- self,
- prompt: Optional[Union[str, List[str]]] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[torch.Generator] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: Optional[int] = 1,
- text_embeddings: Optional[torch.FloatTensor] = None,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
- Args:
- prompt (`str` or `List[str]`, *optional*, defaults to `None`):
- The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- text_embeddings (`torch.FloatTensor`, *optional*, defaults to `None`):
- Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of
- `prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from
- the supplied `prompt`.
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if text_embeddings is None:
- if isinstance(prompt, str):
- batch_size = 1
- elif isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
-
- if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
- removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
- print(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
- text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
- else:
- batch_size = text_embeddings.shape[0]
-
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- bs_embed, seq_len, _ = text_embeddings.shape
- text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
- text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""]
- elif text_embeddings is None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = self.tokenizer.model_max_length
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = uncond_embeddings.shape[1]
- uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
- uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
- # get the initial random noise unless the user supplied it
-
- # Unlike in other pipelines, latents need to be generated in the target device
- # for 1-to-1 results reproducibility with the CompVis implementation.
- # However this currently doesn't work in `mps`.
- latents_shape = (
- batch_size * num_images_per_prompt,
- self.unet.in_channels,
- height // 8,
- width // 8,
- )
- latents_dtype = text_embeddings.dtype
- if latents is None:
- if self.device.type == "mps":
- # randn does not exist on mps
- latents = torch.randn(
- latents_shape,
- generator=generator,
- device="cpu",
- dtype=latents_dtype,
- ).to(self.device)
- else:
- latents = torch.randn(
- latents_shape,
- generator=generator,
- device=self.device,
- dtype=latents_dtype,
- )
- else:
- if latents.shape != latents_shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
- latents = latents.to(self.device)
-
- # set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
-
- # Some schedulers like PNDM have timesteps as arrays
- # It's more optimized to move all timesteps to correct device beforehand
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
-
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
- image, has_nsfw_concept = self.safety_checker(
- images=image,
- clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype),
- )
- else:
- has_nsfw_concept = None
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
-
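A hedged usage sketch of the call above. The special feature of this pipeline is that it also accepts precomputed `text_embeddings` in place of a prompt, which the interpolation helpers below rely on; the checkpoint id and device are assumptions.

```python
# Hedged sketch: calling the walk pipeline with precomputed text embeddings.
pipe = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4"  # hypothetical checkpoint id
).to("cuda")

embeds = pipe.embed_text("a photo of a mountain lake at dawn")
images = pipe(
    text_embeddings=embeds,
    num_inference_steps=30,
    guidance_scale=7.5,
)["images"]
images[0].save("frame.png")
```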
- def generate_inputs(self, prompt_a, prompt_b, seed_a, seed_b, noise_shape, T, batch_size):
- embeds_a = self.embed_text(prompt_a)
- embeds_b = self.embed_text(prompt_b)
- latents_dtype = embeds_a.dtype
- latents_a = self.init_noise(seed_a, noise_shape, latents_dtype)
- latents_b = self.init_noise(seed_b, noise_shape, latents_dtype)
-
- batch_idx = 0
- embeds_batch, noise_batch = None, None
- for i, t in enumerate(T):
- embeds = torch.lerp(embeds_a, embeds_b, t)
- noise = slerp(float(t), latents_a, latents_b)
-
- embeds_batch = embeds if embeds_batch is None else torch.cat([embeds_batch, embeds])
- noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise])
- batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0]
- if not batch_is_ready:
- continue
- yield batch_idx, embeds_batch, noise_batch
- batch_idx += 1
- del embeds_batch, noise_batch
- torch.cuda.empty_cache()
- embeds_batch, noise_batch = None, None
-
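`generate_inputs` linearly interpolates the text embeddings but uses spherical interpolation (`slerp`, imported from `.utils`) for the Gaussian latents, which keeps the interpolated noise on roughly the same norm shell. Below is a minimal sketch of what such a helper typically computes; it is an assumption about the imported function, not its verbatim implementation.

```python
import torch

def slerp_sketch(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Spherical linear interpolation between two noise tensors (sketch)."""
    a, b = v0.flatten(), v1.flatten()
    dot = (a @ b) / (a.norm() * b.norm() + eps)
    theta = torch.acos(dot.clamp(-1.0, 1.0))
    if theta.abs() < eps:
        return torch.lerp(v0, v1, t)  # vectors nearly parallel: plain lerp
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
```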
- def make_clip_frames(
- self,
- prompt_a: str,
- prompt_b: str,
- seed_a: int,
- seed_b: int,
- num_interpolation_steps: int = 5,
- save_path: Union[str, Path] = "outputs/",
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- eta: float = 0.0,
- height: Optional[int] = None,
- width: Optional[int] = None,
- upsample: bool = False,
- batch_size: int = 1,
- image_file_ext: str = ".png",
- T: np.ndarray = None,
- skip: int = 0,
- negative_prompt: str = None,
- step: Optional[Tuple[int, int]] = None,
- ):
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- save_path = Path(save_path)
- save_path.mkdir(parents=True, exist_ok=True)
-
- T = T if T is not None else np.linspace(0.0, 1.0, num_interpolation_steps)
- if T.shape[0] != num_interpolation_steps:
- raise ValueError(f"Unexpected T shape, got {T.shape}, expected dim 0 to be {num_interpolation_steps}")
-
- if upsample:
- if getattr(self, "upsampler", None) is None:
- self.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan")
- self.upsampler.to(self.device)
-
- batch_generator = self.generate_inputs(
- prompt_a,
- prompt_b,
- seed_a,
- seed_b,
- (1, self.unet.in_channels, height // 8, width // 8),
- T[skip:],
- batch_size,
- )
- num_batches = math.ceil(num_interpolation_steps / batch_size)
-
- log_prefix = "" if step is None else f"[{step[0]}/{step[1]}] "
-
- frame_index = skip
- for batch_idx, embeds_batch, noise_batch in batch_generator:
- if batch_size == 1:
- msg = f"Generating frame {frame_index}"
- else:
- msg = f"Generating frames {frame_index}-{frame_index+embeds_batch.shape[0]-1}"
- logger.info(f"{log_prefix}[{batch_idx}/{num_batches}] {msg}")
- outputs = self(
- latents=noise_batch,
- text_embeddings=embeds_batch,
- height=height,
- width=width,
- guidance_scale=guidance_scale,
- eta=eta,
- num_inference_steps=num_inference_steps,
- output_type="pil" if not upsample else "numpy",
- negative_prompt=negative_prompt,
- )["images"]
-
- for image in outputs:
- frame_filepath = save_path / (f"frame%06d{image_file_ext}" % frame_index)
- image = image if not upsample else self.upsampler(image)
- image.save(frame_filepath)
- frame_index += 1
-
- def walk(
- self,
- prompts: Optional[List[str]] = None,
- seeds: Optional[List[int]] = None,
- num_interpolation_steps: Optional[Union[int, List[int]]] = 5, # int or list of int
- output_dir: Optional[str] = "./dreams",
- name: Optional[str] = None,
- image_file_ext: Optional[str] = ".png",
- fps: Optional[int] = 30,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- eta: Optional[float] = 0.0,
- height: Optional[int] = None,
- width: Optional[int] = None,
- upsample: Optional[bool] = False,
- batch_size: Optional[int] = 1,
- resume: Optional[bool] = False,
- audio_filepath: str = None,
- audio_start_sec: Optional[Union[int, float]] = None,
- margin: Optional[float] = 1.0,
- smooth: Optional[float] = 0.0,
- negative_prompt: Optional[str] = None,
- make_video: Optional[bool] = True,
- ):
- """Generate a video from a sequence of prompts and seeds. Optionally, add audio to the
- video to interpolate to the intensity of the audio.
- Args:
- prompts (Optional[List[str]], optional):
- list of text prompts. Defaults to None.
- seeds (Optional[List[int]], optional):
- list of random seeds corresponding to prompts. Defaults to None.
- num_interpolation_steps (Union[int, List[int]], *optional*):
- How many interpolation steps to take between each pair of prompts. Defaults to 5.
- output_dir (Optional[str], optional):
- Where to save the video. Defaults to './dreams'.
- name (Optional[str], optional):
- Name of the subdirectory of output_dir. Defaults to None.
- image_file_ext (Optional[str], *optional*, defaults to '.png'):
- The extension to use when writing video frames.
- fps (Optional[int], *optional*, defaults to 30):
- The frames per second in the resulting output videos.
- num_inference_steps (Optional[int], *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (Optional[float], *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- eta (Optional[float], *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- height (Optional[int], *optional*, defaults to None):
- height of the images to generate.
- width (Optional[int], *optional*, defaults to None):
- width of the images to generate.
- upsample (Optional[bool], *optional*, defaults to False):
- When True, upsamples images with realesrgan.
- batch_size (Optional[int], *optional*, defaults to 1):
- Number of images to generate at once.
- resume (Optional[bool], *optional*, defaults to False):
- When True, resumes from the last frame in the output directory based
- on available prompt config. Requires you to provide the `name` argument.
- audio_filepath (str, *optional*, defaults to None):
- Optional path to an audio file to influence the interpolation rate.
- audio_start_sec (Optional[Union[int, float]], *optional*, defaults to 0):
- Global start time of the provided audio_filepath.
- margin (Optional[float], *optional*, defaults to 1.0):
- Margin from librosa hpss to use for audio interpolation.
- smooth (Optional[float], *optional*, defaults to 0.0):
- Smoothness of the audio interpolation. 1.0 means linear interpolation.
- negative_prompt (Optional[str], *optional*, defaults to None):
- Optional negative prompt to use. Same across all prompts.
- make_video (Optional[bool], *optional*, defaults to True):
- When True, makes a video from the generated frames. If False, only
- generates the frames.
- This function will create sub directories for each prompt and seed pair.
- For example, if you provide the following prompts and seeds:
- ```
- prompts = ['a dog', 'a cat', 'a bird']
- seeds = [1, 2, 3]
- num_interpolation_steps = 5
- output_dir = 'output_dir'
- name = 'name'
- fps = 5
- ```
- Then the following directories will be created:
- ```
- output_dir
- ├── name
- │ ├── name_000000
- │ │ ├── frame000000.png
- │ │ ├── ...
- │ │ ├── frame000004.png
- │ │ ├── name_000000.mp4
- │ ├── name_000001
- │ │ ├── frame000000.png
- │ │ ├── ...
- │ │ ├── frame000004.png
- │ │ ├── name_000001.mp4
- │ ├── ...
- │ ├── name.mp4
- │ ├── prompt_config.json
- ```
- Returns:
- str: The resulting video filepath. This video includes all sub directories' video clips.
- """
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- output_path = Path(output_dir)
-
- name = name or time.strftime("%Y%m%d-%H%M%S")
- save_path_root = output_path / name
- save_path_root.mkdir(parents=True, exist_ok=True)
-
- # Where the final video of all the clips combined will be saved
- output_filepath = save_path_root / f"{name}.mp4"
-
- # If using same number of interpolation steps between, we turn into list
- if not resume and isinstance(num_interpolation_steps, int):
- num_interpolation_steps = [num_interpolation_steps] * (len(prompts) - 1)
-
- if not resume:
- audio_start_sec = audio_start_sec or 0
-
- # Save/reload prompt config
- prompt_config_path = save_path_root / "prompt_config.json"
- if not resume:
- prompt_config_path.write_text(
- json.dumps(
- dict(
- prompts=prompts,
- seeds=seeds,
- num_interpolation_steps=num_interpolation_steps,
- fps=fps,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- eta=eta,
- upsample=upsample,
- height=height,
- width=width,
- audio_filepath=audio_filepath,
- audio_start_sec=audio_start_sec,
- negative_prompt=negative_prompt,
- ),
- indent=2,
- sort_keys=False,
- )
- )
- else:
- data = json.load(open(prompt_config_path))
- prompts = data["prompts"]
- seeds = data["seeds"]
- num_interpolation_steps = data["num_interpolation_steps"]
- fps = data["fps"]
- num_inference_steps = data["num_inference_steps"]
- guidance_scale = data["guidance_scale"]
- eta = data["eta"]
- upsample = data["upsample"]
- height = data["height"]
- width = data["width"]
- audio_filepath = data["audio_filepath"]
- audio_start_sec = data["audio_start_sec"]
- negative_prompt = data.get("negative_prompt", None)
-
- for i, (prompt_a, prompt_b, seed_a, seed_b, num_step) in enumerate(
- zip(prompts, prompts[1:], seeds, seeds[1:], num_interpolation_steps)
- ):
- # {name}_000000 / {name}_000001 / ...
- save_path = save_path_root / f"{name}_{i:06d}"
-
- # Where the individual clips will be saved
- step_output_filepath = save_path / f"{name}_{i:06d}.mp4"
-
- # Determine if we need to resume from a previous run
- skip = 0
- if resume:
- if step_output_filepath.exists():
- print(f"Skipping {save_path} because frames already exist")
- continue
-
- existing_frames = sorted(save_path.glob(f"*{image_file_ext}"))
- if existing_frames:
- skip = int(existing_frames[-1].stem[-6:]) + 1
- if skip + 1 >= num_step:
- print(f"Skipping {save_path} because frames already exist")
- continue
- print(f"Resuming {save_path.name} from frame {skip}")
-
- audio_offset = audio_start_sec + sum(num_interpolation_steps[:i]) / fps
- audio_duration = num_step / fps
-
- self.make_clip_frames(
- prompt_a,
- prompt_b,
- seed_a,
- seed_b,
- num_interpolation_steps=num_step,
- save_path=save_path,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- eta=eta,
- height=height,
- width=width,
- upsample=upsample,
- batch_size=batch_size,
- T=get_timesteps_arr(
- audio_filepath,
- offset=audio_offset,
- duration=audio_duration,
- fps=fps,
- margin=margin,
- smooth=smooth,
- )
- if audio_filepath
- else None,
- skip=skip,
- negative_prompt=negative_prompt,
- step=(i, len(prompts) - 1),
- )
- if make_video:
- make_video_pyav(
- save_path,
- audio_filepath=audio_filepath,
- fps=fps,
- output_filepath=step_output_filepath,
- glob_pattern=f"*{image_file_ext}",
- audio_offset=audio_offset,
- audio_duration=audio_duration,
- sr=44100,
- )
- if make_video:
- return make_video_pyav(
- save_path_root,
- audio_filepath=audio_filepath,
- fps=fps,
- audio_offset=audio_start_sec,
- audio_duration=sum(num_interpolation_steps) / fps,
- output_filepath=output_filepath,
- glob_pattern=f"**/*{image_file_ext}",
- sr=44100,
- )
-
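A hedged usage sketch of `walk`, reusing the prompt/seed configuration from the docstring example above; the output directory, name, and fps are illustrative values only.

```python
# Hedged sketch: render a short latent walk between three prompts.
video_path = pipe.walk(
    prompts=["a dog", "a cat", "a bird"],
    seeds=[1, 2, 3],
    num_interpolation_steps=5,
    output_dir="output_dir",
    name="name",
    fps=5,
)
print(video_path)  # expected to end in output_dir/name/name.mp4
```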
- def embed_text(self, text, negative_prompt=None):
- """Helper to embed some text"""
- text_input = self.tokenizer(
- text,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- with torch.no_grad():
- embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
- return embed
-
- def init_noise(self, seed, noise_shape, dtype):
- """Helper to initialize noise"""
- # randn does not exist on mps, so we create noise on CPU here and move it to the device after initialization
- if self.device.type == "mps":
- noise = torch.randn(
- noise_shape,
- device="cpu",
- generator=torch.Generator(device="cpu").manual_seed(seed),
- ).to(self.device)
- else:
- noise = torch.randn(
- noise_shape,
- device=self.device,
- generator=torch.Generator(device=self.device).manual_seed(seed),
- dtype=dtype,
- )
- return noise
-
- @classmethod
- def from_pretrained(cls, *args, tiled=False, **kwargs):
- """Same as diffusers `from_pretrained` but with tiled option, which makes images tilable"""
- if tiled:
-
- def patch_conv(**patch):
- cls = nn.Conv2d
- init = cls.__init__
-
- def __init__(self, *args, **kwargs):
- return init(self, *args, **kwargs, **patch)
-
- cls.__init__ = __init__
-
- patch_conv(padding_mode="circular")
-
- pipeline = super().from_pretrained(*args, **kwargs)
- pipeline.tiled = tiled
- return pipeline
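Because the `tiled=True` path monkey-patches `nn.Conv2d.__init__` globally before the weights are instantiated, every convolution created afterwards uses circular padding, which is what makes the outputs wrap seamlessly. A hedged usage sketch follows; the checkpoint id is an assumption.

```python
# Hedged sketch: load the pipeline with circular-padded convolutions so the
# generated frames tile seamlessly. Note the patch affects nn.Conv2d globally.
pipe = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", tiled=True
).to("cuda")
```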
diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/App.tsx b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/App.tsx
deleted file mode 100644
index 6534092fd4e70a54909a0c7f880bf05c6bd9f230..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/App.tsx
+++ /dev/null
@@ -1,458 +0,0 @@
-import React, {
- FC,
- MouseEventHandler,
- useEffect,
- useRef,
- useState,
-} from 'react';
-
-import './App.css';
-import { GithubIcon } from './GithubIcon';
-import { randomString, waitTimeout } from './utils';
-import { defaultTheme } from './themes/default';
-import { Icon, Theme } from './themes/interface';
-import { fishermanTheme } from './themes/fisherman';
-import { diTheme } from './themes/di';
-import { mhlTheme } from './themes/mhl';
-import { yhdTheme } from './themes/yhd';
-
-const themes = [defaultTheme, fishermanTheme, diTheme, mhlTheme, yhdTheme];
-
-const maxLevel = 10;
-const target_url = 'https://opendilabcommunity-di-sheep.hf.space/DI-sheep/';
-
-interface MySymbol {
- id: string;
- status: number; // 0->1->2
- isCover: boolean;
- isAgentTarget: boolean;
- x: number;
- y: number;
- icon: Icon;
-}
-
-type Scene = MySymbol[];
-
-// 8*8 grid with factor 4 (32x32)
-const makeScene: (level: number, icons: Icon[], new_scene_data: string[], agent_action: number) => Scene = (level, icons, new_scene_data, agent_action) => {
- const curLevel = Math.min(maxLevel, level);
- const iconPool = icons.slice(0, 2 * curLevel);
-
- const scene: Scene = [];
-
- for (const raw_data of new_scene_data) {
- const data = JSON.parse(raw_data);
-
- const count = scene.length;
- scene.push({
- isCover: !data.accessible,
- // isCover: !data.visible, // for viz debug
- isAgentTarget: count === agent_action,
- status: 0,
- icon: iconPool[data.icon],
- id: data.uid,
- x: data.x,
- y: data.y,
- });
- }
- return scene;
-};
-
-
-interface SymbolProps extends MySymbol {
- onClick: MouseEventHandler;
-}
-
- const Symbol: FC<SymbolProps> = ({ x, y, icon, isCover, isAgentTarget, status, onClick }) => {
- return (
-
- Tips: The orange tile is suggested by the AI model.
-
- All tiles in Levels 1-9 can be eliminated.
-
- There will be two extra tiles in Level 10.
-